author     Keqian Zhu <zhukeqian1@huawei.com>    2020-04-13 20:20:23 +0800
committer  Marc Zyngier <maz@kernel.org>         2020-05-16 15:05:02 +0100
commit     c862626e19efdc26b26481515470b160e8fe52f3 (patch)
tree       e9ef5a63c372c89f85404e6b58fc5683455fd9c8 /arch/arm64/kvm
parent     0529c9021252a58b6d3808da86986a614b900b1b (diff)
download   linux-c862626e19efdc26b26481515470b160e8fe52f3.tar.bz2
KVM: arm64: Support enabling dirty log gradually in small chunks
There is already support for enabling the dirty log gradually in small chunks
for x86, added in commit 3c9bd4006bfc ("KVM: x86: enable dirty log gradually in
small chunks"). This adds the same support for arm64.
x86 still write-protects all huge pages when DIRTY_LOG_INITIALLY_ALL_SET is
enabled. On arm64, however, both huge pages and normal pages can be
write-protected gradually by userspace.
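For context, this mode is opted into entirely from userspace. A minimal
userspace sketch (not part of this patch; vm_fd is assumed to be a VM file
descriptor obtained via KVM_CREATE_VM, and the capability must be enabled
before dirty logging is turned on for the memslots):

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <string.h>

static int enable_dirty_log_initially_set(int vm_fd)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
	/*
	 * Bit 0: userspace clears the dirty log manually.
	 * Bit 1: all pages are initially reported as dirty, so nothing is
	 * write-protected up front when dirty logging is enabled.
	 */
	cap.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
		      KVM_DIRTY_LOG_INITIALLY_SET;

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}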
On a Huawei Kunpeng 920 2.6GHz platform, I ran tests on 128G Linux VMs with
different page sizes. The memory pressure was 127G in each case. The time
taken by memory_global_dirty_log_start in QEMU is listed below:
Page Size    Before    After Optimization
4K           650ms     1.8ms
2M           4ms       1.8ms
1G           2ms       1.8ms
Besides the time reduction, the biggest improvement is that the performance
side effect on the guest after enabling the dirty log (caused by dissolving
huge pages and marking memslots dirty) is minimized.
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200413122023.52583-1-zhukeqian1@huawei.com
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r--   arch/arm64/kvm/mmu.c   12
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 66eb8e3f6e8c..ddf85bf21897 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2277,8 +2277,16 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
	 * allocated dirty_bitmap[], dirty pages will be tracked while the
	 * memory slot is write protected.
	 */
-	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
-		kvm_mmu_wp_memory_region(kvm, mem->slot);
+	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+		/*
+		 * If we're with initial-all-set, we don't need to write
+		 * protect any pages because they're all reported as dirty.
+		 * Huge pages and normal pages will be write protect gradually.
+		 */
+		if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
+			kvm_mmu_wp_memory_region(kvm, mem->slot);
+		}
+	}
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
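For reference, the helper checked in the new code was introduced by the x86
commit cited in the message. A sketch of what it tests, assumed to match the
definition in include/linux/kvm_host.h at the time:

/*
 * True only when userspace enabled manual dirty-log protection together
 * with the initially-set bit via KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2.
 */
static inline bool kvm_dirty_log_manual_protect_and_init_set(struct kvm *kvm)
{
	return !!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET);
}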