author     David Matlack <dmatlack@google.com>    2021-08-04 22:28:40 +0000
committer  Paolo Bonzini <pbonzini@redhat.com>    2021-08-06 07:52:29 -0400
commit     fe22ed827c5b60b895b15c5c3f04e04ac606be38
tree       e53cb7787c35face88d10895d54307d72724bdae
parent     0f22af940dc8ec4f437189096a5f8677995323b0
KVM: Cache the last used slot index per vCPU
The memslot for a given gfn is looked up multiple times during page fault handling. Avoid binary searching for it multiple times by caching the most recently used slot. There is an existing VM-wide last_used_slot but that does not work well for cases where vCPUs are accessing memory in different slots (see performance data below).

Another benefit of caching the most recently used slot (versus looking up the slot once and passing around a pointer) is speeding up memslot lookups *across* faults and during spte prefetching.

To measure the performance of this change I ran dirty_log_perf_test with 64 vCPUs and 64 memslots and measured "Populate memory time" and "Iteration 2 dirty memory time". Tests were run with eptad=N to force dirty logging to use fast_page_fault so its performance could be measured.

Config     | Metric                        | Before | After
---------- | ----------------------------- | ------ | ------
tdp_mmu=Y  | Populate memory time          | 6.76s  | 5.47s
tdp_mmu=Y  | Iteration 2 dirty memory time | 2.83s  | 0.31s
tdp_mmu=N  | Populate memory time          | 20.4s  | 18.7s
tdp_mmu=N  | Iteration 2 dirty memory time | 2.65s  | 0.30s

The "Iteration 2 dirty memory time" results are especially compelling because they are equivalent to running the same test with a single memslot. In other words, fast_page_fault performance no longer scales with the number of memslots.

Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-4-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
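To make the thrashing problem described above concrete, here is a small self-contained C sketch. Everything in it is invented for illustration (toy_slot, lookup(), NSLOTS, the gfn layout, the iteration counts); it is not the KVM code, only the general pattern: check a cached slot index first, fall back to a binary search, and remember where the search landed.

/*
 * Toy model of the caching pattern described above. All names, types and
 * numbers here (toy_slot, lookup(), NSLOTS, the gfn layout) are invented
 * for illustration and are not KVM's actual structures or helpers.
 */
#include <stdio.h>

#define NSLOTS 64

struct toy_slot {
	unsigned long base_gfn;
	unsigned long npages;
};

/* The real memslot array has its own sort order; ascending is fine here. */
static struct toy_slot slots[NSLOTS];
static int fallback_searches;

static int slot_contains(const struct toy_slot *s, unsigned long gfn)
{
	return gfn >= s->base_gfn && gfn < s->base_gfn + s->npages;
}

/* Check a cached index first; fall back to binary search and re-cache. */
static struct toy_slot *lookup(unsigned long gfn, int *cached_index)
{
	int lo = 0, hi = NSLOTS - 1;

	if (slot_contains(&slots[*cached_index], gfn))
		return &slots[*cached_index];

	fallback_searches++;
	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (slot_contains(&slots[mid], gfn)) {
			*cached_index = mid;
			return &slots[mid];
		}
		if (gfn < slots[mid].base_gfn)
			hi = mid - 1;
		else
			lo = mid + 1;
	}
	return NULL;
}

int main(void)
{
	int shared_index = 0;             /* models the VM-wide last_used_slot */
	int per_vcpu_index[2] = { 0, 0 }; /* models vcpu->last_used_slot */
	unsigned long gfn[2];
	int i;

	for (i = 0; i < NSLOTS; i++) {
		slots[i].base_gfn = (unsigned long)i * 0x10000;
		slots[i].npages = 0x10000;
	}
	/* vCPU 0 keeps faulting in slot 3, vCPU 1 in slot 40. */
	gfn[0] = slots[3].base_gfn + 1;
	gfn[1] = slots[40].base_gfn + 1;

	fallback_searches = 0;
	for (i = 0; i < 1000; i++)
		lookup(gfn[i % 2], &shared_index);
	printf("shared cache:   %d binary searches\n", fallback_searches);

	fallback_searches = 0;
	for (i = 0; i < 1000; i++)
		lookup(gfn[i % 2], &per_vcpu_index[i % 2]);
	printf("per-vCPU cache: %d binary searches\n", fallback_searches);
	return 0;
}

With the shared index, essentially every alternating lookup falls back to the binary search; with per-vCPU indices only the first lookup of each vCPU does, which is the effect the "Iteration 2 dirty memory time" numbers above show at scale.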
Diffstat (limited to 'virt')
-rw-r--r--  virt/kvm/kvm_main.c  22
1 file changed, 21 insertions, 1 deletion
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1984c7389787..30d322519253 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -415,6 +415,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 	vcpu->preempted = false;
 	vcpu->ready = false;
 	preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
+	vcpu->last_used_slot = 0;
 }
 
 void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
@@ -2025,7 +2026,26 @@ EXPORT_SYMBOL_GPL(gfn_to_memslot);
 
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	return __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn);
+	struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
+	struct kvm_memory_slot *slot;
+	int slot_index;
+
+	slot = try_get_memslot(slots, vcpu->last_used_slot, gfn);
+	if (slot)
+		return slot;
+
+	/*
+	 * Fall back to searching all memslots. We purposely use
+	 * search_memslots() instead of __gfn_to_memslot() to avoid
+	 * thrashing the VM-wide last_used_index in kvm_memslots.
+	 */
+	slot = search_memslots(slots, gfn, &slot_index);
+	if (slot) {
+		vcpu->last_used_slot = slot_index;
+		return slot;
+	}
+
+	return NULL;
 }
 
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
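For readers following the diff, try_get_memslot() and search_memslots() are not defined in this hunk; from the way they are called here, the first apparently checks whether the slot at a given index covers the gfn, and the second binary-searches the slot array and reports the index of the slot it found so the caller can cache it. The standalone sketch below approximates that division of labor under those assumptions; the toy_* types, the 64-entry array, the descending-by-base_gfn sort and the demo values are all invented for illustration and are not the kernel's definitions.

/*
 * Hedged sketch of what the two helpers used in the hunk above plausibly
 * do, inferred only from their call sites; the real try_get_memslot() and
 * search_memslots() live elsewhere in the kernel and may differ in detail.
 */
#include <stdio.h>

struct toy_memslot {
	unsigned long base_gfn;
	unsigned long npages;
};

struct toy_memslots {
	struct toy_memslot memslots[64];
	int used_slots;
};

/* Return the slot at @index if it actually covers @gfn, else NULL. */
static struct toy_memslot *
toy_try_get_memslot(struct toy_memslots *slots, int index, unsigned long gfn)
{
	struct toy_memslot *slot;

	if (index < 0 || index >= slots->used_slots)
		return NULL;
	slot = &slots->memslots[index];
	if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
		return slot;
	return NULL;
}

/* Binary-search the sorted array and report the index of the hit. */
static struct toy_memslot *
toy_search_memslots(struct toy_memslots *slots, unsigned long gfn, int *index)
{
	int start = 0, end = slots->used_slots;

	while (start < end) {
		int mid = start + (end - start) / 2;
		struct toy_memslot *slot = &slots->memslots[mid];

		if (gfn >= slot->base_gfn + slot->npages)
			end = mid;		/* gfn lies above this slot */
		else if (gfn < slot->base_gfn)
			start = mid + 1;	/* gfn lies below this slot */
		else {
			*index = mid;
			return slot;
		}
	}
	return NULL;
}

int main(void)
{
	struct toy_memslots slots = { .used_slots = 2 };
	unsigned long gfns[] = { 0x5, 0x7 };	/* both land in the low slot */
	int last_used_slot = 0, slot_index = 0, i;
	struct toy_memslot *slot;

	/* Sorted by descending base_gfn, as assumed by toy_search_memslots(). */
	slots.memslots[0] = (struct toy_memslot){ .base_gfn = 0x100, .npages = 0x10 };
	slots.memslots[1] = (struct toy_memslot){ .base_gfn = 0x000, .npages = 0x10 };

	for (i = 0; i < 2; i++) {
		/* Same two-step shape as the patched kvm_vcpu_gfn_to_memslot(). */
		slot = toy_try_get_memslot(&slots, last_used_slot, gfns[i]);
		if (!slot) {
			slot = toy_search_memslots(&slots, gfns[i], &slot_index);
			if (slot)
				last_used_slot = slot_index;
		}
		printf("gfn 0x%lx -> slot base 0x%lx, cached index now %d\n",
		       gfns[i], slot ? slot->base_gfn : 0UL, last_used_slot);
	}
	return 0;
}

In the demo, the first lookup misses the stale cached index 0, falls back to the search and caches index 1; the second lookup for a nearby gfn is then served straight from the cached index, which is the cross-fault benefit the commit message describes.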