author    Hugh Dickins <hughd@google.com>    2022-02-14 18:33:17 -0800
committer Matthew Wilcox (Oracle) <willy@infradead.org>    2022-02-17 11:57:06 -0500
commit    c3096e6782b733158bf34f6bbb4567808d4e0740
tree      a28708da7662fc586a0ad8df19d29ccc162ecb12 /mm/internal.h
parent    34b6792380ce4f4b41018351cd67c9c26f4a7a0d
mm/migrate: __unmap_and_move() push good newpage to LRU
Compaction, NUMA page movement, THP collapse/split, and memory failure
do isolate unevictable pages from their "LRU", losing the record of
mlock_count in doing so (isolators are likely to use page->lru for their
own private lists, so mlock_count has to be presumed lost).
That's unfortunate, and we should put in some work to correct that: one
can imagine a function to build up the mlock_count again - but it would
require i_mmap_rwsem for read, so be careful where it's called. Or
page_referenced_one() and try_to_unmap_one() might do that extra work.
But one place that can very easily be improved is page migration's
__unmap_and_move(): a small adjustment to put the successful new page
back on the LRU earlier, so that its mlock_count (if any) is built back
up by remove_migration_ptes().
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>