author     Marc Zyngier <maz@kernel.org>   2020-07-04 13:30:55 +0100
committer  Marc Zyngier <maz@kernel.org>   2020-07-06 11:47:02 +0100
commit     146f76cc84b787c4eec6ed73ebeec708a06e4ae4
tree       36ee854a0a245d215240b0a8a263c6b91c14f1a7 /arch/arm64
parent     a3f574cd65487cd993f79ab235d70229d9302c1e
KVM: arm64: PMU: Fix per-CPU access in preemptible context
Commit 07da1ffaa137 ("KVM: arm64: Remove host_cpu_context
member from vcpu structure") has, by removing the host CPU
context pointer, exposed that kvm_vcpu_pmu_restore_guest
is called in preemptible contexts:
[ 266.932442] BUG: using smp_processor_id() in preemptible [00000000] code: qemu-system-aar/779
[ 266.939721] caller is debug_smp_processor_id+0x20/0x30
[ 266.944157] CPU: 2 PID: 779 Comm: qemu-system-aar Tainted: G E 5.8.0-rc3-00015-g8d4aa58b2fe3 #1374
[ 266.954268] Hardware name: amlogic w400/w400, BIOS 2020.04 05/22/2020
[ 266.960640] Call trace:
[ 266.963064] dump_backtrace+0x0/0x1e0
[ 266.966679] show_stack+0x20/0x30
[ 266.969959] dump_stack+0xe4/0x154
[ 266.973338] check_preemption_disabled+0xf8/0x108
[ 266.977978] debug_smp_processor_id+0x20/0x30
[ 266.982307] kvm_vcpu_pmu_restore_guest+0x2c/0x68
[ 266.986949] access_pmcr+0xf8/0x128
[ 266.990399] perform_access+0x8c/0x250
[ 266.994108] kvm_handle_sys_reg+0x10c/0x2f8
[ 266.998247] handle_exit+0x78/0x200
[ 267.001697] kvm_arch_vcpu_ioctl_run+0x2ac/0xab8
Note that the bug was always there; it is only the switch to
using percpu accessors that made it obvious.
The fix is to wrap these accesses in a preempt-disabled section,
so that we sample a coherent context on trap from the guest.
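To illustrate the general shape of that fix, here is a minimal sketch of the
preempt-safe per-CPU access pattern; it is not the patch itself (which follows
below), and the struct example_state, consume_state() and
sample_coherent_state() names are hypothetical, used only for illustration:

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Hypothetical per-CPU state, standing in for kvm_host_data. */
struct example_state {
	int counter;
};

static DEFINE_PER_CPU(struct example_state, example_state);

/* Hypothetical consumer of the sampled per-CPU data. */
static void consume_state(struct example_state *st)
{
	st->counter++;
}

static void sample_coherent_state(void)
{
	struct example_state *st;

	preempt_disable();			/* task cannot migrate from here */
	st = this_cpu_ptr(&example_state);	/* pointer is now stable */
	consume_state(st);			/* data belongs to this CPU */
	preempt_enable();			/* migration possible again */
}

The point is that the this_cpu_ptr() fetch and every use of the resulting
pointer must sit inside the same preemption-disabled bracket; otherwise the
task may be migrated and end up reading another CPU's state.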
Fixes: 435e53fb5e21 ("arm64: KVM: Enable VHE support for :G/:H perf event modifiers")
Cc: Andrew Murray <amurray@thegoodpenguin.co.uk>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Diffstat (limited to 'arch/arm64')
-rw-r--r--  arch/arm64/kvm/pmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index b5ae3a5d509e..3c224162b3dd 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -159,7 +159,10 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events)
 }
 
 /*
- * On VHE ensure that only guest events have EL0 counting enabled
+ * On VHE ensure that only guest events have EL0 counting enabled.
+ * This is called from both vcpu_{load,put} and the sysreg handling.
+ * Since the latter is preemptible, special care must be taken to
+ * disable preemption.
  */
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 {
@@ -169,12 +172,14 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 	if (!has_vhe())
 		return;
 
+	preempt_disable();
 	host = this_cpu_ptr(&kvm_host_data);
 	events_guest = host->pmu_events.events_guest;
 	events_host = host->pmu_events.events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_guest);
 	kvm_vcpu_pmu_disable_el0(events_host);
+	preempt_enable();
 }
 
 /*
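For comparison only, the same critical section could be written with the
get_cpu_ptr()/put_cpu_ptr() helpers from <linux/percpu.h>, which fold the
preemption bracket into the pointer fetch. This is a sketch of an alternative
the patch does not use, assuming the surrounding declarations match the file:

void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
{
	struct kvm_host_data *host;
	u32 events_guest, events_host;

	if (!has_vhe())
		return;

	host = get_cpu_ptr(&kvm_host_data);	/* fetch + preempt_disable() */
	events_guest = host->pmu_events.events_guest;
	events_host = host->pmu_events.events_host;

	kvm_vcpu_pmu_enable_el0(events_guest);
	kvm_vcpu_pmu_disable_el0(events_host);
	put_cpu_ptr(&kvm_host_data);		/* preempt_enable() */
}

The explicit preempt_disable()/preempt_enable() pair chosen by the patch
arguably reads more clearly here, since the critical section spans several
statements rather than a single access.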