KVM: x86: Move "entering SMM" tracepoint into kvm_smm_changed()
author		Sean Christopherson <seanjc@google.com>
		Wed, 9 Jun 2021 18:56:16 +0000 (11:56 -0700)
committer	Paolo Bonzini <pbonzini@redhat.com>
		Thu, 17 Jun 2021 17:09:34 +0000 (13:09 -0400)
Invoke the "entering SMM" tracepoint from kvm_smm_changed() instead of
enter_smm(), effectively moving it from before reading vCPU state to
after reading state (but still before writing it to SMRAM!).  The primary
motivation is to consolidate code, but calling the tracepoint from
kvm_smm_changed() also makes its invocation consistent with respect to
SMI and RSM, and with respect to KVM_SET_VCPU_EVENTS (which previously
only invoked the tracepoint when forcing the vCPU out of SMM).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210609185619.992058-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
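
For context (not part of this patch): kvm_enter_smm is declared with
TRACE_EVENT() in arch/x86/kvm/trace.h. The sketch below is reconstructed
from memory and abridged, so treat the exact field layout and format
string as approximate; the tree is authoritative.

	/* Sketch of the tracepoint fired by kvm_smm_changed() below.
	 * One event covers both directions; the "entering" flag says
	 * whether the vCPU is entering or leaving SMM.
	 */
	TRACE_EVENT(kvm_enter_smm,
		TP_PROTO(unsigned int vcpu_id, u64 smbase, bool entering),
		TP_ARGS(vcpu_id, smbase, entering),

		TP_STRUCT__entry(
			__field(unsigned int,	vcpu_id)
			__field(u64,		smbase)
			__field(bool,		entering)
		),

		TP_fast_assign(
			__entry->vcpu_id  = vcpu_id;
			__entry->smbase   = smbase;
			__entry->entering = entering;
		),

		TP_printk("vcpu %u: %s SMM, smbase 0x%llx",
			  __entry->vcpu_id,
			  __entry->entering ? "entering" : "leaving",
			  __entry->smbase)
	);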
arch/x86/kvm/x86.c

index 57efc3a49753b40ab14f0023cf6e3d4f306e5800..389f634a40839806f78df47352e491eaa2c4557d 100644
@@ -7544,14 +7544,13 @@ static int complete_emulated_pio(struct kvm_vcpu *vcpu);
 
 static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
 {
+       trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, entering_smm);
+
        if (entering_smm) {
                vcpu->arch.hflags |= HF_SMM_MASK;
        } else {
                vcpu->arch.hflags &= ~(HF_SMM_MASK | HF_SMM_INSIDE_NMI_MASK);
 
-               /* This is a good place to trace that we are exiting SMM.  */
-               trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, false);
-
                /* Process a latched INIT or SMI, if any.  */
                kvm_make_request(KVM_REQ_EVENT, vcpu);
        }
@@ -9004,7 +9003,6 @@ static void enter_smm(struct kvm_vcpu *vcpu)
        char buf[512];
        u32 cr0;
 
-       trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, true);
        memset(buf, 0, 512);
 #ifdef CONFIG_X86_64
        if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
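
Usage note: with the tracepoint consolidated, both SMM entry and exit can
be observed through the single kvm:kvm_enter_smm event, e.g. with
"trace-cmd record -e kvm:kvm_enter_smm" or by writing 1 to
events/kvm/kvm_enter_smm/enable under the tracefs mount (commonly
/sys/kernel/debug/tracing); the "entering" field in each record
distinguishes the two directions.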