author		Qais Yousef <qais.yousef@arm.com>	2020-10-27 21:51:13 +0000
committer	Marc Zyngier <maz@kernel.org>		2020-10-30 16:06:22 +0000
commit		22f553842b14a1289c088a79a67fb479d3fa2a4e (patch)
tree		1bda703a6c17ac70f6c37f57193f505ede6e874a
parent		d86de40decaa14e6613af1b2783bf4d589d0f38b (diff)
KVM: arm64: Handle Asymmetric AArch32 systems
On a system without uniform support for AArch32 at EL0, it is possible for the guest to force run AArch32 at EL0 and potentially cause an illegal exception if running on a core without AArch32. Add an extra check so that if we catch the guest doing that, then we prevent it from running again by resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.

We try to catch this misbehaviour as early as possible and not rely on an illegal exception occurring to signal the problem. Attempting to run a 32bit app in the guest will produce an error from QEMU if the guest exits while running in AArch32 EL0.

Tested on Juno by instrumenting the host to fake asym aarch32 and instrumenting KVM to make the asymmetry visible to the guest.

[will: Incorporated feedback from Marc]

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201021104611.2744565-2-qais.yousef@arm.com
Link: https://lore.kernel.org/r/20201027215118.27003-2-will@kernel.org
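The recovery path mentioned in the patch (re-issuing KVM_ARM_VCPU_INIT) is a userspace decision. A minimal VMM-side sketch, assuming vm_fd and vcpu_fd are already-open KVM file descriptors and that the failure surfaces to the VMM as an error from KVM_RUN; the error-handling policy shown here is illustrative and not taken from this patch:

/*
 * Hypothetical VMM-side recovery sketch (not from this patch): after
 * KVM_RUN fails because the vcpu was invalidated, userspace can try to
 * recover by re-issuing KVM_ARM_VCPU_INIT, as the in-kernel comment
 * suggests.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <errno.h>

static int reinit_vcpu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	/* Ask KVM which vcpu target this host prefers (fills 'init'). */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -errno;

	/* Re-initialise the vcpu, clearing the "invalid" state. */
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		return -errno;

	return 0;
}

Note that KVM_ARM_VCPU_INIT resets the vcpu's register state, so re-initialising is only useful if the VMM is prepared to restart the guest; as the message above notes, QEMU simply reports an error in this situation.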
 arch/arm64/kvm/arm.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index f56122eedffc..a3b32df1afb0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -808,6 +808,25 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		preempt_enable();
+		/*
+		 * The ARMv8 architecture doesn't give the hypervisor
+		 * a mechanism to prevent a guest from dropping to AArch32 EL0
+		 * if implemented by the CPU. If we spot the guest in such
+		 * state and that we decided it wasn't supposed to do so (like
+		 * with the asymmetric AArch32 case), return to userspace with
+		 * a fatal error.
+		 */
+		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+			/*
+			 * As we have caught the guest red-handed, decide that
+			 * it isn't fit for purpose anymore by making the vcpu
+			 * invalid. The VMM can try and fix it by issuing a
+			 * KVM_ARM_VCPU_INIT if it really wants to.
+			 */
+			vcpu->arch.target = -1;
+			ret = ARM_EXCEPTION_IL;
+		}
+
 		ret = handle_exit(vcpu, ret);
 	}
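For reference, the two predicates used in the new check are pre-existing arm64 helpers. Roughly (paraphrased from the kernel headers around this version, so treat the exact definitions as an approximation rather than the authoritative source):

/* arch/arm64/include/asm/kvm_emulate.h (approximate) */
static inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu)
{
	/* PSTATE.nRW set means the vcpu is executing in an AArch32 mode. */
	return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT);
}

/* arch/arm64/include/asm/cpufeature.h (approximate) */
static inline bool system_supports_32bit_el0(void)
{
	/* System-wide capability: only set when all CPUs implement AArch32 at EL0. */
	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
}

On an asymmetric system the system-wide capability is not set, so a vcpu found with PSTATE.nRW set has dropped to AArch32 despite it not being advertised, which is exactly what the check on the exit path catches.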