KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_X86_SET_MSR_FILTER)
author	Michal Luczaj <mhal@rbox.co>
	Sat, 7 Jan 2023 00:12:52 +0000 (01:12 +0100)
committer	Sean Christopherson <seanjc@google.com>
	Fri, 3 Feb 2023 23:30:17 +0000 (15:30 -0800)
Reduce the time spent holding kvm->lock: unlock the mutex before calling
synchronize_srcu().  There is no need to hold kvm->lock until all vCPUs
have been kicked; KVM only needs to guarantee that all vCPUs will switch
to the new filter before exiting to userspace.
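
The update side thus follows the classic RCU publish-then-synchronize
pattern, with only the pointer swap serialized by the mutex.  As a
minimal, self-contained sketch of that ordering (my_filter, my_lock,
my_srcu, and set_filter are hypothetical names for illustration, not
KVM's actual structures):

  #include <linux/mutex.h>
  #include <linux/slab.h>
  #include <linux/srcu.h>
  #include <linux/types.h>

  /* Hypothetical stand-in for the real filter structure. */
  struct my_filter {
          bool allow_by_default;
  };

  static DEFINE_MUTEX(my_lock);      /* plays the role of kvm->lock */
  DEFINE_STATIC_SRCU(my_srcu);       /* plays the role of kvm->srcu */
  static struct my_filter __rcu *my_filter_ptr;

  static int set_filter(struct my_filter *new_filter)
  {
          struct my_filter *old;

          mutex_lock(&my_lock);
          old = rcu_dereference_protected(my_filter_ptr,
                                          lockdep_is_held(&my_lock));
          /* Publish the new filter while still holding the mutex. */
          rcu_assign_pointer(my_filter_ptr, new_filter);
          /* The swap is complete; no need to hold the mutex across
           * the potentially slow grace period. */
          mutex_unlock(&my_lock);

          /* Wait out readers that may still see the old filter, then
           * free it. */
          synchronize_srcu(&my_srcu);
          kfree(old);
          return 0;
  }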

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-3-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/x86.c

index d7d5bca00294fb1328f5c0af7d3b77ddc5dbd30b..4f8e78d495852dcfe4dd947ab0e61001171a9972 100644 (file)
@@ -6497,12 +6497,12 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
        old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
 
        rcu_assign_pointer(kvm->arch.msr_filter, new_filter);
+       mutex_unlock(&kvm->lock);
        synchronize_srcu(&kvm->srcu);
 
        kvm_free_msr_filter(old_filter);
 
        kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
-       mutex_unlock(&kvm->lock);
 
        return 0;
 }
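
Dropping the mutex early is safe because readers dereference the filter
only inside an SRCU read-side critical section: once synchronize_srcu()
returns, no reader can still observe the old filter, whether or not
kvm->lock is held.  For completeness, the reader side of the
hypothetical sketch above:

  static bool filter_allows(void)
  {
          struct my_filter *f;
          bool allowed = true;
          int idx;

          idx = srcu_read_lock(&my_srcu);
          f = srcu_dereference(my_filter_ptr, &my_srcu);
          if (f)
                  allowed = f->allow_by_default;
          srcu_read_unlock(&my_srcu, idx);
          return allowed;
  }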