author     Linus Torvalds <torvalds@linux-foundation.org>  2022-10-29 11:45:07 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-11-02 16:10:30 -0700
commit     28154ddc676efa64e8e792389787eb85199d2772
tree       6bf1a0ba9189c8e904a91b71e01ec5e44651c8a3 /arch/s390/include/asm/tlb.h
parent     655d4bdee63563392b0e5fb40f973c6d41658070
mm: delay rmap removal until after TLB flush (mmu_gather-race-fix)
When we remove a page table entry, we are very careful to only free the page after we have flushed the TLB, because other CPUs could still be using the page through stale TLB entries until after the flush.

However, we have removed the rmap entry for that page early, which means that functions like folio_mkclean() would end up not serializing with the page table lock because the page had already been made invisible to rmap.

And that is a problem, because while the TLB entry exists, we could end up with the following situation:

 (a) one CPU could come in and clean it, never seeing our mapping of the page

 (b) another CPU could continue to use the stale and dirty TLB entry and continue to write to said page

resulting in a page that has been dirtied, but then marked clean again, all while another CPU might have dirtied it some more.

End result: possibly lost dirty data.

This commit uses the same old TLB gather array that we use to delay the freeing of the page to also say 'remove from rmap after flush', so that we can keep the rmap entries alive until all TLB entries have been flushed.

It might be worth noting that this means that the page_zap_pte_rmap() is now called outside the page table lock. That was never mutual exclusion (since the same page could be mapped under multiple different page tables), but it does mean that it needs to use the more careful version of dec_lruvec_page_state() that doesn't depend on being called in a non-preemptable context.

NOTE! While the "possibly lost dirty data" sounds catastrophic, for this all to happen you need to have a user thread doing either madvise() with MADV_DONTNEED or a full re-mmap() of the area concurrently with another thread continuing to use said mapping.

So arguably this is about user space doing crazy things, but from a VM consistency standpoint it's better if we track the dirty bit properly even when user space goes off the rails.

Reported-by: Nadav Amit <nadav.amit@gmail.com>
Link: https://lore.kernel.org/all/B88D3073-440A-41C7-95F4-895D3F657EF2@gmail.com/
Cc: Will Deacon <will@kernel.org>
Cc: Aneesh Kumar <aneesh.kumar@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> # s390
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
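To make the mechanism concrete, here is a minimal, hedged sketch of what a zap-side caller could look like under this scheme. It is illustrative only: the generic mm/ side of the series is not part of the diff shown on this page, and only __tlb_remove_page_size(), TLB_ZAP_RMAP and page_zap_pte_rmap() come from the patch; example_zap_one_pte() is a made-up name, and the remaining helpers are ordinary kernel functions.

/*
 * Illustrative sketch only: defer the rmap removal by tagging the
 * gathered page instead of calling page_remove_rmap() under the
 * page table lock. page_zap_pte_rmap() then runs only after the
 * TLB flush, so folio_mkclean() keeps seeing this mapping until
 * the stale TLB entries are gone.
 */
#include <linux/mm.h>
#include <asm/tlb.h>

static void example_zap_one_pte(struct mmu_gather *tlb,
				struct vm_area_struct *vma,
				unsigned long addr, pte_t *pte)
{
	pte_t ptent = ptep_get_and_clear_full(vma->vm_mm, addr, pte,
					      tlb->fullmm);
	struct page *page = vm_normal_page(vma, addr, ptent);

	if (!page)
		return;

	/*
	 * Do NOT drop the rmap entry here; record the page (and the
	 * TLB_ZAP_RMAP tag) in the mmu_gather so the rmap entry stays
	 * alive until the batch has been flushed.
	 */
	__tlb_remove_page_size(tlb, page, PAGE_SIZE, TLB_ZAP_RMAP);
}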
Diffstat (limited to 'arch/s390/include/asm/tlb.h')
-rw-r--r--  arch/s390/include/asm/tlb.h | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 3a5c8fb590e5..0d2c6c0168a3 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -25,7 +25,8 @@
void __tlb_remove_table(void *_table);
static inline void tlb_flush(struct mmu_gather *tlb);
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
- struct page *page, int page_size);
+ struct page *page, int page_size,
+ unsigned int flags);
#define tlb_flush tlb_flush
#define pte_free_tlb pte_free_tlb
@@ -36,14 +37,19 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
#include <asm/tlbflush.h>
#include <asm-generic/tlb.h>
+void page_zap_pte_rmap(struct page *);
+
/*
* Release the page cache reference for a pte removed by
* tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
* has already been freed, so just do free_page_and_swap_cache.
*/
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
- struct page *page, int page_size)
+ struct page *page, int page_size,
+ unsigned int flags)
{
+ if (flags & TLB_ZAP_RMAP)
+ page_zap_pte_rmap(page);
free_page_and_swap_cache(page);
return false;
}
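As the comment above the function notes, on s390 the TLB entry for a page cache page is already gone by the time __tlb_remove_page_size() is called, which is why this hunk can simply drop the rmap entry right before free_page_and_swap_cache(). On architectures that use the generic batching mmu_gather, the tag instead has to be carried with each gathered page until the batch is flushed. The sketch below is a conceptual illustration only: the generic code is not in this diff, and struct example_gather_entry with an explicit flags field is an assumption made for readability, not the real mmu_gather batch layout.

/*
 * Conceptual sketch of the batched (generic mmu_gather) flush side,
 * which is not part of this diff. struct example_gather_entry is
 * hypothetical; it only illustrates the ordering: flush the TLB
 * first, then drop the rmap entry, then release the page and its
 * swap cache reference.
 */
#include <linux/mm.h>
#include <linux/swap.h>

struct example_gather_entry {
	struct page *page;
	unsigned int flags;	/* e.g. TLB_ZAP_RMAP; assumed layout */
};

static void example_flush_gathered_pages(struct example_gather_entry *entries,
					 unsigned int nr)
{
	unsigned int i;

	/* By this point the TLB has been flushed for all gathered pages. */
	for (i = 0; i < nr; i++) {
		if (entries[i].flags & TLB_ZAP_RMAP)
			page_zap_pte_rmap(entries[i].page);
		free_page_and_swap_cache(entries[i].page);
	}
}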