KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule
author: David Woodhouse <dwmw@amazon.co.uk>
Wed, 11 Jan 2023 18:06:50 +0000 (18:06 +0000)
committer: Paolo Bonzini <pbonzini@redhat.com>
Wed, 11 Jan 2023 18:32:21 +0000 (13:32 -0500)
Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
vcpu->mutex. But that doesn't actually happen very often; it's only in
some esoteric cases like migration with AMD SEV. This means that lockdep
usually doesn't notice, and doesn't do its job of keeping us honest.

Ensure that lockdep *always* knows about the ordering of these two locks,
by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
is held.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20230111180651.14394-3-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
virt/kvm/kvm_main.c

index 13e88297f999631d1322db5bbc04b51159744578..9c60384b5ae0bacd9bbe1bb417bb5ff8afce2229 100644
@@ -3954,6 +3954,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
        }
 
        mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+       /* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+       mutex_lock(&vcpu->mutex);
+       mutex_unlock(&vcpu->mutex);
+#endif
+
        if (kvm_get_vcpu_by_id(kvm, id)) {
                r = -EEXIST;
                goto unlock_vcpu_destroy;
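
For illustration, here is a minimal sketch of the kind of code path this change is meant to catch. The function below is hypothetical and not part of this patch or of the kernel tree; it only shows the reverse nesting. Once kvm_vm_ioctl_create_vcpu() has primed lockdep with the kvm->lock -> vcpu->mutex ordering, a path like this would produce a "possible circular locking dependency" report as soon as both orderings have been observed, even if no deadlock ever actually occurs.

	/*
	 * Hypothetical example only -- not in-tree code.
	 * Nests the two locks in the opposite order to the one documented
	 * in Documentation/virt/kvm/locking.rst, which lockdep will now
	 * flag unconditionally thanks to the priming above.
	 */
	static int buggy_example_path(struct kvm_vcpu *vcpu)
	{
		int r = 0;

		mutex_lock(&vcpu->mutex);	/* vcpu->mutex taken first ... */
		mutex_lock(&vcpu->kvm->lock);	/* ... then kvm->lock: wrong order */

		/* ... work that appears to need both locks ... */

		mutex_unlock(&vcpu->kvm->lock);
		mutex_unlock(&vcpu->mutex);
		return r;
	}

Before this patch, such a path would only be reported if it ran on a kernel that also exercised one of the rare kvm->lock-then-vcpu->mutex paths (e.g. AMD SEV migration); now the ordering is recorded at vCPU creation time on every configuration with CONFIG_LOCKDEP enabled.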