| author | Paolo Bonzini <pbonzini@redhat.com> | 2019-11-13 15:47:06 +0100 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2019-11-13 15:47:06 +0100 |
| commit | 13fb59276b47db556370bba53b5b55f3849dd8c9 (patch) | |
| tree | efe502b1e3302e2e0930a96830217984e76be3b5 /arch | |
| parent | 8c5bd25bf42effd194d4b0b43895c42b374e620b (diff) | |
| download | linux-13fb59276b47db556370bba53b5b55f3849dd8c9.tar.bz2 | |
kvm: x86: disable shattered huge page recovery for PREEMPT_RT.
If a huge page is recovered (and becomes non-executable) while another
thread is executing it, the resulting contention on mmu_lock can cause
latency spikes. Disabling recovery for PREEMPT_RT kernels fixes this
issue.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
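For context, the ratio set to 0 here controls how aggressively the periodic NX huge page recovery worker rebuilds huge pages that were shattered by the ITLB-multihit mitigation (note the `itlb_multihit_kvm_mitigation` reference in the hunk below). The following is only an illustrative sketch of that idea, not the actual KVM code: `recover_pages_to_zap()` and its argument are made-up names, and the real worker does this work while holding mmu_lock, which is where the latency spikes described above come from.

```c
/*
 * Illustrative sketch (not the actual KVM implementation): a recovery pass
 * zaps roughly 1/ratio of the currently shattered huge pages, so a ratio
 * of 0 means the pass does no work at all, i.e. recovery is disabled.
 */
#include <stdio.h>

static unsigned int nx_huge_pages_recovery_ratio = 60;  /* non-RT default */

/* How many shattered huge pages should one recovery pass zap? */
static unsigned long recover_pages_to_zap(unsigned long shattered_pages)
{
	unsigned int ratio = nx_huge_pages_recovery_ratio;

	/* ratio == 0: skip recovery entirely. */
	return ratio ? shattered_pages / ratio : 0;
}

int main(void)
{
	printf("ratio=60: zap %lu of 6000 shattered pages\n",
	       recover_pages_to_zap(6000));

	/* What this patch makes the built-in default under PREEMPT_RT. */
	nx_huge_pages_recovery_ratio = 0;
	printf("ratio=0:  zap %lu of 6000 shattered pages\n",
	       recover_pages_to_zap(6000));
	return 0;
}
```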
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/kvm/mmu.c | 5 |
1 file changed, 5 insertions, 0 deletions
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fd6012eef9c9..cf718fa23dff 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -51,7 +51,12 @@ extern bool itlb_multihit_kvm_mitigation;
 
 static int __read_mostly nx_huge_pages = -1;
+#ifdef CONFIG_PREEMPT_RT
+/* Recovery can cause latency spikes, disable it for PREEMPT_RT. */
+static uint __read_mostly nx_huge_pages_recovery_ratio = 0;
+#else
 static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
+#endif
 
 static int set_nx_huge_pages(const char *val, const struct kernel_param *kp);
 static int set_nx_huge_pages_recovery_ratio(const char *val, const struct kernel_param *kp);
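As a usage note: `nx_huge_pages_recovery_ratio` is a module parameter (its setter, `set_nx_huge_pages_recovery_ratio`, is visible in the hunk context above), so on kernels without this patch the same effect should be obtainable at runtime by setting the parameter to 0, for example via `kvm.nx_huge_pages_recovery_ratio=0` on the kernel command line or by writing 0 to /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio. The patch itself only changes the built-in default when CONFIG_PREEMPT_RT is enabled.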