author		Yang Shi <yang.shi@linux.alibaba.com>		2020-04-01 21:06:23 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-04-02 09:35:28 -0700
commit		9a9b6cce630d14851ab09534b6462258486048cd (patch)
tree		951326bab4333b0a3328e0f5f510de8d7e27b903 /mm/swap.c
parent		1eb6234e52f0cbb87f59c328687127866d57941a (diff)
mm: swap: use smp_mb__after_atomic() to order LRU bit set
A memory barrier is needed after setting the LRU bit, but smp_mb() is too
strong.  Some architectures, e.g. x86, imply a memory barrier with atomic
operations, so replacing it with smp_mb__after_atomic() is better: it is a
no-op on strongly ordered machines and a full memory barrier on the others.
With this change the vm-scalability cases perform better on x86; I saw a
total 6% improvement with this patch and the previous inline fix.

The test data (lru-file-readtwice throughput) against v5.6-rc4:

	mainline	w/ inline fix	w/ both (adding this)
	150MB		154MB		159MB

Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: http://lkml.kernel.org/r/1584500541-46817-2-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
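For reference, a simplified sketch of the per-architecture behaviour the
commit message relies on (condensed from the kernel's barrier headers, not
the verbatim definitions): on x86 every atomic RMW is lock-prefixed and
already fully ordered, so only compiler reordering needs to be suppressed,
while the generic fallback emits a real full barrier.

	/* Simplified sketch, not the verbatim kernel definitions. */
	#ifdef CONFIG_X86
	/* x86 atomic RMW instructions are lock-prefixed and fully
	 * ordered; only compiler reordering must be prevented, so the
	 * barrier is effectively free here. */
	#define smp_mb__after_atomic()	barrier()
	#else
	/* Generic fallback for weakly ordered architectures. */
	#define smp_mb__after_atomic()	smp_mb()
	#endif

This is why the patch only helps on strongly ordered machines: the full
barrier that smp_mb() always emitted simply disappears there.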
Diffstat (limited to 'mm/swap.c')
-rw-r--r--	mm/swap.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index f502a2155e85..a4af8c999963 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -931,7 +931,6 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	VM_BUG_ON_PAGE(PageLRU(page), page);
-	SetPageLRU(page);
 	/*
 	 * Page becomes evictable in two ways:
 	 * 1) Within LRU lock [munlock_vma_page() and __munlock_pagevec()].
@@ -958,7 +957,8 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	 * looking at the same page) and the evictable page will be stranded
 	 * in an unevictable LRU.
 	 */
-	smp_mb();
+	SetPageLRU(page);
+	smp_mb__after_atomic();
 	if (page_evictable(page)) {
 		lru = page_lru(page);
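To make the pairing behind the diff's comment concrete, here is a userspace
analogue of the two racing paths using C11 atomics: an illustrative sketch,
not kernel code, with hypothetical names (lru_bit, mlocked_bit, lru_add_side,
munlock_side) standing in for the page flags and the functions involved.  The
fence after the LRU-bit store plays the role of smp_mb__after_atomic(); the
atomic exchange on the munlock side stands in for the fully ordered
TestClearPageMlocked().  The barriers guarantee at least one side observes
the other's store, which rules out the failure mode the comment describes
(an evictable page stranded on the unevictable LRU).

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the PG_lru and PG_mlocked page flags. */
static atomic_bool lru_bit;
static atomic_bool mlocked_bit;

/* CPU 0: mirrors __pagevec_lru_add_fn() after this patch. */
static void lru_add_side(void)
{
	atomic_store_explicit(&lru_bit, true, memory_order_relaxed);
	/* Plays the role of smp_mb__after_atomic(): the LRU-bit store
	 * must be ordered before the mlocked/evictable check below. */
	atomic_thread_fence(memory_order_seq_cst);
	if (!atomic_load_explicit(&mlocked_bit, memory_order_relaxed)) {
		/* ... put the page on the evictable LRU ... */
	}
}

/* CPU 1: mirrors the munlock side clearing the mlocked bit. */
static void munlock_side(void)
{
	atomic_exchange_explicit(&mlocked_bit, false, memory_order_seq_cst);
	/* In the kernel a fully ordered RMW such as TestClearPageMlocked()
	 * already implies this barrier; C11 needs it spelled out before
	 * the relaxed load below. */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&lru_bit, memory_order_relaxed)) {
		/* ... isolate and move the page to the evictable LRU ... */
	}
}

With both fences in place it is impossible for CPU 0 to see the stale
mlocked bit *and* CPU 1 to miss the LRU bit, so whichever side loses the
race still moves the page to the correct LRU.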