author		Sean Christopherson <sean.j.christopherson@intel.com>	2020-09-23 11:37:32 -0700
committer	Paolo Bonzini <pbonzini@redhat.com>			2020-09-28 07:57:41 -0400
commit		5bcaf3e1715f49de8935fc77bf74837287cae77d (patch)
tree		6404796dbf0917bf8b8265ab3fafdc17387dae85
parent		3cf066127e8754f680ecb3b63ad7dd6949e5e54c (diff)
KVM: x86/mmu: Account NX huge page disallowed iff huge page was requested
Condition the accounting of a disallowed huge NX page on the original requested level of the page being greater than the current iterator level. This does two things: accounts the page if and only if a huge page was actually disallowed, and accounts the shadow page if and only if it was the level at which the huge page was disallowed. For the latter case, the previous logic would account all shadow pages used to create the translation for the forced small page, e.g. even PML4, which can't be a huge page on current hardware, would be accounted as having been a disallowed huge page when using 5-level EPT.

The overzealous accounting is purely a performance issue, i.e. the recovery thread will spuriously zap shadow pages, but otherwise the bad behavior is harmless.

Cc: Junaid Shahid <junaids@google.com>
Fixes: b8e8c8303ff28 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200923183735.584-6-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-rw-r--r--	arch/x86/kvm/mmu/mmu.c		3
-rw-r--r--	arch/x86/kvm/mmu/paging_tmpl.h	2
2 files changed, 3 insertions, 2 deletions
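
To make the new condition concrete, here is a minimal userspace sketch (not KVM code): should_account_nx_huge_page() and the PG_LEVEL_* defines below are illustrative stand-ins that mirror the kernel names, and the loop walks the levels the shadow-walk iterator would visit for a fault that asked for a 2M mapping but was forced down to 4K. Only the shadow page installed at the 2M level, where the huge page was actually disallowed, is charged; upper levels such as PML4 are skipped, which is exactly the over-accounting this patch removes.

/*
 * Userspace illustration of the accounting rule: charge a shadow page to
 * the NX huge page recovery list only when the originally requested
 * mapping level (req_level) is at or above the iterator's current level,
 * i.e. only at the level where a huge page was actually disallowed.
 * All names below are simplified stand-ins for the real KVM structures.
 */
#include <stdbool.h>
#include <stdio.h>

#define PG_LEVEL_4K	1
#define PG_LEVEL_2M	2
#define PG_LEVEL_1G	3
#define PG_LEVEL_512G	4	/* PML4 entry; can't be a huge page on current hw */

static bool should_account_nx_huge_page(bool huge_page_disallowed,
					int req_level, int it_level)
{
	/* Pre-patch logic checked huge_page_disallowed alone. */
	return huge_page_disallowed && req_level >= it_level;
}

int main(void)
{
	/* NX workaround active: a 2M mapping was requested but forced to 4K. */
	int req_level = PG_LEVEL_2M;
	int it_level;

	for (it_level = PG_LEVEL_512G; it_level > PG_LEVEL_4K; it_level--)
		printf("it.level %d: %s\n", it_level,
		       should_account_nx_huge_page(true, req_level, it_level) ?
		       "account_huge_nx_page()" :
		       "skip (no huge page possible at this level)");
	return 0;
}
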
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0f35f1a32960..4b8d33657cd3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3383,7 +3383,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 				      it.level - 1, true, ACC_ALL);
 
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (is_tdp && huge_page_disallowed)
+			if (is_tdp && huge_page_disallowed &&
+			    req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ba9af7c06f9d..24dd2e9ad816 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
 					      it.level - 1, true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
-			if (huge_page_disallowed)
+			if (huge_page_disallowed && req_level >= it.level)
 				account_huge_nx_page(vcpu->kvm, sp);
 		}
 	}