author		Rik van Riel <riel@redhat.com>	2012-11-06 09:55:18 +0000
committer	Mel Gorman <mgorman@suse.de>	2012-12-11 14:28:33 +0000
commit		e4a1cc56e4d728eb87072c71c07581524e5160b1 (patch)
tree		291232b64431eeb2c815adc38b20d66cb3355364 /arch/x86
parent		0f9a921cf9bf3b524feddc484e2b4d070b7ca0d0 (diff)
download	linux-e4a1cc56e4d728eb87072c71c07581524e5160b1.tar.bz2
x86: mm: drop TLB flush from ptep_set_access_flags
Intel has an architectural guarantee that the TLB entry causing a page
fault gets invalidated automatically. This means we should be able to
drop the local TLB invalidation.

Because of the way other areas of the page fault code work, chances are
good that all x86 CPUs do this. However, if someone somewhere has an
x86 CPU that does not invalidate the TLB entry causing a page fault,
this one-liner should be easy to revert.

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/mm/pgtable.c	| 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index be3bb4690887..7353de3d98a7 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -317,7 +317,6 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 	if (changed && dirty) {
 		*ptep = entry;
 		pte_update_defer(vma->vm_mm, address, ptep);
-		__flush_tlb_one(address);
 	}
 
 	return changed;
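
For reference, this is roughly how the function reads once the flush is
dropped. Only the lines inside the hunk above come from this patch; the
signature is taken from the hunk header, while the declaration of
"changed" and the closing brace are an assumption about the surrounding
code, not part of the diff.

int ptep_set_access_flags(struct vm_area_struct *vma,
			  unsigned long address, pte_t *ptep,
			  pte_t entry, int dirty)
{
	/* Assumed surrounding line: write the PTE only if it actually changed. */
	int changed = !pte_same(*ptep, entry);

	if (changed && dirty) {
		*ptep = entry;
		pte_update_defer(vma->vm_mm, address, ptep);
		/*
		 * The __flush_tlb_one(address) call is gone: the TLB entry
		 * that caused the fault is expected to be invalidated by the
		 * CPU itself, so no explicit local flush is needed here.
		 */
	}

	return changed;
}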