KVM: arm64: Remove ad-hoc CPTR manipulation from kvm_hyp_handle_fpsimd()
author Mark Rutland <mark.rutland@arm.com>
Tue, 17 Jun 2025 13:37:16 +0000 (14:37 +0100)
committer Marc Zyngier <maz@kernel.org>
Thu, 19 Jun 2025 12:06:20 +0000 (13:06 +0100)
The hyp code FPSIMD/SVE/SME trap handling logic has some rather messy
open-coded manipulation of CPTR/CPACR. This is benign for non-nested
guests, but broken for nested guests, as the guest hypervisor's CPTR
configuration is not taken into account.

Consider the case where L0 provides FPSIMD+SVE to an L1 guest
hypervisor, and the L1 guest hypervisor only provides FPSIMD to an L2
guest (with L1 configuring CPTR/CPACR to trap SVE usage from L2). If the
L2 guest triggers an FPSIMD trap to the L0 hypervisor,
kvm_hyp_handle_fpsimd() will see that the vCPU supports FPSIMD+SVE, and
will configure CPTR/CPACR to NOT trap FPSIMD+SVE before returning to the
L2 guest. Consequently the L2 guest would be able to manipulate SVE
state even though the L1 hypervisor had configured CPTR/CPACR to forbid
this.

Clean this up, and fix the nested virt issue by always using
__deactivate_cptr_traps() and __activate_cptr_traps() to manage the CPTR
traps. This removes the need for the ad-hoc fixup in
kvm_hyp_save_fpsimd_host(), and ensures that any guest hypervisor
configuration of CPTR/CPACR is taken into account.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250617133718.4014181-6-mark.rutland@arm.com
Signed-off-by: Marc Zyngier <maz@kernel.org>
arch/arm64/kvm/hyp/include/hyp/switch.h

index 8a77fcccbcf6e56cb6269564128636c0e4aab8c8..2ad57b117385a293341371c95d96af82dde873bd 100644 (file)
@@ -616,11 +616,6 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
         */
        if (system_supports_sve()) {
                __hyp_sve_save_host();
-
-               /* Re-enable SVE traps if not supported for the guest vcpu. */
-               if (!vcpu_has_sve(vcpu))
-                       cpacr_clear_set(CPACR_EL1_ZEN, 0);
-
        } else {
                __fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
        }
@@ -671,10 +666,7 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
        /* Valid trap.  Switch the context: */
 
        /* First disable enough traps to allow us to update the registers */
-       if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
-               cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
-       else
-               cpacr_clear_set(0, CPACR_EL1_FPEN);
+       __deactivate_cptr_traps(vcpu);
        isb();
 
        /* Write out the host state if it's in the registers */
@@ -696,6 +688,13 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 
        *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;
 
+       /*
+        * Re-enable traps necessary for the current state of the guest, e.g.
+        * those enabled by a guest hypervisor. The ERET to the guest will
+        * provide the necessary context synchronization.
+        */
+       __activate_cptr_traps(vcpu);
+
        return true;
 }