From: Michal Luczaj
Date: Sat, 7 Jan 2023 00:12:52 +0000 (+0100)
Subject: KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_X86_SET_MSR_FILTER)
X-Git-Tag: v6.3-rc1~85^2~11^2~4
X-Git-Url: https://git.kernel.dk/?a=commitdiff_plain;h=708f799d22fe6b69d2e6e3e0625c30666d5e798a;p=linux-block.git

KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_X86_SET_MSR_FILTER)

Reduce time spent holding kvm->lock: unlock the mutex before calling
synchronize_srcu().  There is no need to hold kvm->lock until all vCPUs
have been kicked; KVM only needs to guarantee that all vCPUs will switch
to the new filter before exiting to userspace.

Suggested-by: Paolo Bonzini
Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
Link: https://lore.kernel.org/r/20230107001256.2365304-3-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson
---

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7d5bca00294..4f8e78d49585 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6497,12 +6497,12 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
 	old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
 
 	rcu_assign_pointer(kvm->arch.msr_filter, new_filter);
+	mutex_unlock(&kvm->lock);
 	synchronize_srcu(&kvm->srcu);
 
 	kvm_free_msr_filter(old_filter);
 
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
-	mutex_unlock(&kvm->lock);
 
 	return 0;
 }
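
For readers unfamiliar with the pattern, below is a minimal sketch of the update
path as it looks after this change.  It is illustrative only: the wrapper name is
made up, the real code is kvm_vm_ioctl_set_msr_filter() in arch/x86/kvm/x86.c with
additional setup and validation, and struct kvm_x86_msr_filter is assumed here to
be the filter type.

static int set_msr_filter_sketch(struct kvm *kvm,
				 struct kvm_x86_msr_filter *new_filter)
{
	struct kvm_x86_msr_filter *old_filter;

	mutex_lock(&kvm->lock);

	/* kvm->lock serializes writers; readers walk the filter under kvm->srcu. */
	old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
	rcu_assign_pointer(kvm->arch.msr_filter, new_filter);

	/*
	 * The new filter is published, so writer-vs-writer exclusion is no
	 * longer needed.  Drop kvm->lock before the potentially slow SRCU
	 * grace period.
	 */
	mutex_unlock(&kvm->lock);

	/* Wait for in-flight readers of old_filter, then free it. */
	synchronize_srcu(&kvm->srcu);
	kvm_free_msr_filter(old_filter);

	/*
	 * Kick all vCPUs so they pick up the new filter before the ioctl
	 * returns to userspace.
	 */
	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);

	return 0;
}

The ordering is the point: kvm->lock only needs to cover the read-modify-write of
the filter pointer, while the SRCU grace period, the free of old_filter, and the
vCPU kick only concern stale readers and can all run after the unlock, which is
what shrinks the lock hold time.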