author     Yang Yang <yang.yang29@zte.com.cn>  2022-03-22 14:46:33 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-03-22 15:57:09 -0700
commit     4d45c3aff5ebf80d329eba0f90544d20224f612d (patch)
tree       fc2d0043a99fe44608ed8f01da3cfa3e4f7365cb /mm/ksm.c
parent     d8c47cc7bf602ef73384a00869a70148146c1191 (diff)
download   linux-4d45c3aff5ebf80d329eba0f90544d20224f612d.tar.bz2
mm/vmstat: add event for ksm swapping in copy
When what used to be a KSM page is faulted in from swap, and that page had already been swapped in before, the system has to make a copy and leave remerging of the pages to a later pass of ksmd.  That is not good for performance, so it is worth reducing this kind of copy.  There are several ways to do that, for example lowering swappiness or narrowing the madvise(, , MADV_MERGEABLE) range.  Add this event to support such tuning, in the same spirit as the patch "mm, THP, swap: add THP swapping out fallback counting".

Link: https://lkml.kernel.org/r/20220113023839.758845-1-yang.yang29@zte.com.cn
Signed-off-by: Yang Yang <yang.yang29@zte.com.cn>
Reviewed-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Saravanan D <saravanand@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
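For context, here is a minimal userspace sketch of the tuning this counter is meant to guide: mark a region mergeable with madvise(MADV_MERGEABLE) and watch the new counter in /proc/vmstat.  It assumes the event is exported under the name "ksm_swpin_copy" (the vmstat change naming it is not part of this mm/ksm.c-limited view); the helper read_ksm_swpin_copy() and the 64 MiB region size are illustrative only, not part of the patch.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Illustrative only: mark an anonymous region as mergeable by KSM and read
 * back the ksm_swpin_copy counter added by this patch.  A steadily rising
 * count hints that swapped-out KSM pages are being copied on swap-in, so
 * the MADV_MERGEABLE range may be too wide (or swappiness too high). */

static long read_ksm_swpin_copy(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	long value = -1, v;

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", name, &v) == 2) {
		if (strcmp(name, "ksm_swpin_copy") == 0) {
			value = v;
			break;
		}
	}
	fclose(f);
	return value;
}

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB, arbitrary for the example */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	/* Advise KSM to scan and merge identical pages in this range. */
	if (madvise(buf, len, MADV_MERGEABLE))
		perror("madvise(MADV_MERGEABLE)");

	memset(buf, 0x5a, len);		/* fill with identical pages */
	printf("ksm_swpin_copy = %ld\n", read_ksm_swpin_copy());
	return 0;
}
```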
Diffstat (limited to 'mm/ksm.c')
-rw-r--r--  mm/ksm.c  3
1 file changed, 3 insertions, 0 deletions
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..4a7f8614e57d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2595,6 +2595,9 @@ struct page *ksm_might_need_to_copy(struct page *page,
SetPageDirty(new_page);
__SetPageUptodate(new_page);
__SetPageLocked(new_page);
+#ifdef CONFIG_SWAP
+ count_vm_event(KSM_SWPIN_COPY);
+#endif
}
return new_page;
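For completeness, the new event also has to be declared in the vm_event_item enum and given a name in the vmstat text table so it shows up in /proc/vmstat; those hunks fall outside the mm/ksm.c-limited diffstat above.  A rough sketch of what they would look like, guarded by CONFIG_KSM (the exact placement and naming here are assumptions, not the hunks as committed):

```c
/* include/linux/vm_event_item.h: new event (sketch, not shown in the diff above) */
#ifdef CONFIG_KSM
	KSM_SWPIN_COPY,
#endif

/* mm/vmstat.c: matching vmstat_text[] entry exposing the counter */
#ifdef CONFIG_KSM
	"ksm_swpin_copy",
#endif
```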