KVM: arm64: Ensure TLBI uses correct VMID after changing context
author	Will Deacon <will@kernel.org>
	Wed, 14 Aug 2024 12:34:29 +0000 (13:34 +0100)
committer	Marc Zyngier <maz@kernel.org>
	Thu, 15 Aug 2024 13:05:02 +0000 (14:05 +0100)
When the target context passed to enter_vmid_context() matches the
current running context, the function returns early without manipulating
the registers of the stage-2 MMU. This can result in a stale VMID due to
the lack of an ISB instruction in exit_vmid_context() after writing the
VTTBR when ARM64_WORKAROUND_SPECULATIVE_AT is not enabled.

For example, with pKVM enabled:

// Initially running in host context
enter_vmid_context(guest);
-> __load_stage2(guest); isb // Writes VTCR & VTTBR
exit_vmid_context(guest);
-> __load_stage2(host); // Restores VTCR & VTTBR

enter_vmid_context(host);
-> Returns early as we're already in host context
tlbi vmalls12e1is // !!! Can use the stale VMID as we
// haven't performed context
// synchronisation since restoring
// VTTBR.VMID
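The early return is taken because enter_vmid_context() only switches
the stage-2 registers when the target differs from the running
context. A rough sketch of the host-context check (as it appears in
arch/arm64/kvm/hyp/nvhe/tlb.c; exact details may vary across kernel
versions):

	} else {
		/* We're in host context. */
		if (mmu == host_s2_mmu)
			return;	/* Early return: no ISB on this path */

		cxt->mmu = host_s2_mmu;
	}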

Add an unconditional ISB instruction to exit_vmid_context() after
restoring the VTTBR. The ISB already existed on the
ARM64_WORKAROUND_SPECULATIVE_AT path, so we can simply hoist it onto
the common path; the resulting code is shown after the diff below.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Fuad Tabba <tabba@google.com>
Fixes: 58f3b0fc3b87 ("KVM: arm64: Support TLB invalidation in guest context")
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20240814123429.20457-3-will@kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
arch/arm64/kvm/hyp/nvhe/tlb.c

index ca3c09df8d7c96a3c7e0479cd2d7bbc366e0b97d..48da9ca9763f6eddb60b77372891bfc9eab2ffbb 100644 (file)
@@ -132,10 +132,10 @@ static void exit_vmid_context(struct tlb_inv_context *cxt)
        else
                __load_host_stage2();
 
-       if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-               /* Ensure write of the old VMID */
-               isb();
+       /* Ensure write of the old VMID */
+       isb();
 
+       if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
                if (!(cxt->sctlr & SCTLR_ELx_M)) {
                        write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
                        isb();
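
For reference, the tail of exit_vmid_context() after this change,
reconstructed from the hunk above (the remainder of the workaround
handling is elided):

	else
		__load_host_stage2();

	/* Ensure write of the old VMID */
	isb();

	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
		if (!(cxt->sctlr & SCTLR_ELx_M)) {
			write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
			isb();
		}
		/* ... */
	}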