author     Linus Torvalds <torvalds@linux-foundation.org>  2014-04-25 16:05:40 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2014-04-25 16:05:40 -0700
commit     1cf35d47712dd5dc4d62c6ce984f04ac6eab0408
tree       f00857df7a2eec9520c1a950a0f9ae16cdfc4627 /arch/um/include
parent     9a60ee117bbeaf2fb9a02ea80a6bdbc2811ca4d2
mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts
The mmu-gather operation 'tlb_flush_mmu()' has done two things: the
actual TLB flush operation, and the batched freeing of the pages that
the TLB entries pointed at.
This splits the operation into separate phases, so that the forced
batched flushing done by zap_pte_range() can now do the actual TLB flush
while still holding the page table lock, but delay the batched freeing
of all the pages to after the lock has been dropped.
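As a rough illustration (not part of the patch text), a zap_pte_range()-style caller
can now use the two halves like the sketch below; the helpers
tlb_flush_mmu_tlbonly() and tlb_flush_mmu_free() are the ones introduced here,
while zap_range_sketch() and its locking details are simplified stand-ins:

    #include <linux/types.h>
    #include <linux/spinlock.h>
    #include <asm/tlb.h>

    /*
     * Simplified stand-in for the force-flush path in zap_pte_range(); the
     * real code batches pages into 'tlb' while clearing ptes.
     */
    static void zap_range_sketch(struct mmu_gather *tlb, spinlock_t *ptl,
    			     bool force_flush)
    {
    	spin_lock(ptl);
    	/* ... clear ptes and batch the underlying pages into 'tlb' ... */
    	if (force_flush)
    		tlb_flush_mmu_tlbonly(tlb);	/* TLB flush while the page
    						 * table lock is still held */
    	spin_unlock(ptl);

    	if (force_flush)
    		tlb_flush_mmu_free(tlb);	/* batched page freeing waits
    						 * until the lock is dropped */
    }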
This in turn allows us to avoid a race condition between
set_page_dirty() (as called by zap_pte_range() when it finds a dirty
shared memory pte) and page_mkclean(): because we now flush all the
dirty page data from the TLBs while holding the pte lock,
page_mkclean() will be held up walking the (recently cleaned) page
tables until after the TLB entries have been flushed from all CPUs.
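A schematic of that ordering argument, again only a sketch: page_mkclean()'s
pte walk has to take the same pte lock, so by the time it sees the cleaned
page tables, the flush issued under that lock (as in the sketch above) has
already completed. mkclean_side_sketch() is a hypothetical name, not the real
rmap code:

    #include <linux/spinlock.h>

    /*
     * Hypothetical illustration of the page_mkclean() side.  The zap side
     * flushes the TLB via tlb_flush_mmu_tlbonly() before releasing the pte
     * lock, so no CPU can still be dirtying the page through a stale
     * writable TLB entry once this walk gets the lock.
     */
    static void mkclean_side_sketch(spinlock_t *ptl)
    {
    	spin_lock(ptl);		/* held up until the zap side's TLB flush is done */
    	/* ... observe the (recently cleaned) ptes, write-protect the page ... */
    	spin_unlock(ptl);
    }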
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/um/include')
 arch/um/include/asm/tlb.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
index 29b0301c18aa..16eb63fac57d 100644
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -59,13 +59,25 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 			       unsigned long end);
 
 static inline void
+tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
+}
+
+static inline void
+tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	init_tlb_gather(tlb);
+}
+
+static inline void
 tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	if (!tlb->need_flush)
 		return;
 
-	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
-	init_tlb_gather(tlb);
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
 }
 
 /* tlb_finish_mmu