author		Linus Torvalds <torvalds@linux-foundation.org>	2014-04-27 15:08:12 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-04-27 15:08:12 -0700
commit		ac6c9e2bed093c4b60e313674fb7aec4f264c3d4
tree		6eccadffcc04a7b66021ab3bd114cafcb52181c6 /arch/arm
parent		33c0022f0e687b0161a9bb84a5671df932551e3a
parent		1cf35d47712dd5dc4d62c6ce984f04ac6eab0408
download	linux-ac6c9e2bed093c4b60e313674fb7aec4f264c3d4.tar.bz2
Merge branch 'safe-dirty-tlb-flush'
This merges the patch to fix possible loss of dirty bit on munmap() or
madvise(MADV_DONTNEED). If there are concurrent writers on other CPUs that
have the unmapped/unneeded page in their TLBs, their writes to the page
could possibly get lost if a third CPU raced with the TLB flush and did
a page_mkclean() before the page was fully written.
Admittedly, if you munmap() or madvise(MADV_DONTNEED) an area _while_ another
thread is still busy writing to it, you deserve all the lost writes you
could get. But we kernel people hold ourselves to higher quality
standards than "crazy people deserve to lose", because, well, we've seen
people do all kinds of crazy things.
So let's get it right, just because we can, and we don't have to worry
about it.
* safe-dirty-tlb-flush:
mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts
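
The fix hinges on ordering: the TLB flush has to happen before anything can treat the page as clean, while returning the gathered pages to the allocator can safely wait. Below is a minimal sketch of how the two new helpers allow that split in an unmap-style path; the surrounding function, the explicit page-table-lock handling, and the name unmap_range_sketch are illustrative assumptions, not the actual mm/memory.c change merged from this branch.

	/* Sketch only: shows the ordering the split enables, under assumed locking. */
	static void unmap_range_sketch(struct mmu_gather *tlb, spinlock_t *ptl)
	{
		spin_lock(ptl);
		/* ... clear the PTEs, transfer their dirty bits, batch the pages ... */

		/*
		 * Flush stale translations before the page can be observed as
		 * having no writers: otherwise a CPU still writing through an old
		 * TLB entry races with a page_mkclean() on a third CPU and those
		 * writes can be lost (the scenario described in the message above).
		 */
		tlb_flush_mmu_tlbonly(tlb);
		spin_unlock(ptl);

		/* Freeing the gathered pages can happen after the lock is dropped. */
		tlb_flush_mmu_free(tlb);
	}
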
Diffstat (limited to 'arch/arm')
 arch/arm/include/asm/tlb.h | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0d9394..f1a0dace3efe 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -98,15 +98,25 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 	}
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
 	tlb_flush(tlb);
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
 	free_pages_and_swap_cache(tlb->pages, tlb->nr);
 	tlb->nr = 0;
 	if (tlb->pages == tlb->local)
 		__tlb_alloc_page(tlb);
 }
 
+static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
+}
+
 static inline void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	unsigned long start, unsigned long end)
 {
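
For arch code like the ARM header above, the change is purely additive: tlb_flush_mmu() keeps its old behaviour as the composition of the two new helpers, so existing mmu_gather users are unaffected. A rough sketch of that unchanged lifecycle follows; the signatures of tlb_gather_mmu() and tlb_flush_mmu() match the header shown above, while tlb_finish_mmu() is assumed from the generic mmu_gather code of this era rather than from this diff.

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, start, end);
	/* ... unmap the range, batching pages into the gather ... */
	tlb_flush_mmu(&tlb);	/* still: flush the TLBs, then free the batch */
	tlb_finish_mmu(&tlb, start, end);
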