KVM: arm64: Fix reporting of endianness when the access originates at EL0
author    Marc Zyngier <maz@kernel.org>
          Tue, 12 Oct 2021 11:23:12 +0000 (12:23 +0100)
committer Marc Zyngier <maz@kernel.org>
          Tue, 12 Oct 2021 14:47:25 +0000 (15:47 +0100)
We currently check SCTLR_EL1.EE when computing the endianness of
a faulting guest access. However, the fault could have occurred at
EL0, in which case the right bit to check would be SCTLR_EL1.E0E.

This is pretty unlikely to cause any issue in practice: you'd have
to have a guest running a little-endian EL1 together with a big-endian
EL0 (or the other way around), and have mapped a device into the EL0
page tables.

Good luck with that!
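The fixed logic can be modelled as a small standalone sketch (the
SCTLR_EL1 bit positions below follow the Arm ARM: EE is bit 25, E0E
is bit 24; the helper name and its arguments are illustrative, not
the kernel's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SCTLR_EL1 bit positions per the Arm ARM. */
#define SCTLR_ELx_EE	(UINT64_C(1) << 25)	/* EL1 data endianness   */
#define SCTLR_EL1_E0E	(UINT64_C(1) << 24)	/* EL0 data endianness   */

/*
 * Hypothetical model of the corrected check: select the endianness
 * bit that matches the exception level the access originated at,
 * instead of unconditionally testing EE.
 */
static bool guest_access_is_be(uint64_t sctlr_el1, bool from_el1)
{
	if (from_el1)
		return !!(sctlr_el1 & SCTLR_ELx_EE);

	return !!(sctlr_el1 & SCTLR_EL1_E0E);
}
```

With this split, a BE EL0 access is reported as big-endian even when
EL1 runs little-endian, which the pre-fix code got wrong.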

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20211012112312.1247467-1-maz@kernel.org
arch/arm64/include/asm/kvm_emulate.h

index fd418955e31e68c2d2d7ec2d477d724f3dd34bcc..f4871e47b2d0b121e2bfed24d706758d31d6f310 100644 (file)
@@ -396,7 +396,10 @@ static inline bool kvm_vcpu_is_be(struct kvm_vcpu *vcpu)
        if (vcpu_mode_is_32bit(vcpu))
                return !!(*vcpu_cpsr(vcpu) & PSR_AA32_E_BIT);
 
-       return !!(vcpu_read_sys_reg(vcpu, SCTLR_EL1) & (1 << 25));
+       if (vcpu_mode_priv(vcpu))
+               return !!(vcpu_read_sys_reg(vcpu, SCTLR_EL1) & SCTLR_ELx_EE);
+       else
+               return !!(vcpu_read_sys_reg(vcpu, SCTLR_EL1) & SCTLR_EL1_E0E);
 }
 
 static inline unsigned long vcpu_data_guest_to_host(struct kvm_vcpu *vcpu,