author     Rik van Riel <riel@redhat.com>    2016-09-08 21:30:53 -0400
committer  Ingo Molnar <mingo@kernel.org>    2016-09-13 20:31:33 +0200
commit     d59dc7bcfa649ef2128a76b6487b16f4b3f14d23 (patch)
tree       0a1be3351915100a27ac466afcc65f555a71a248 /mm
parent     b95202a3b6bb8715a716dbdb15cdb82bf622260b (diff)
download   linux-d59dc7bcfa649ef2128a76b6487b16f4b3f14d23.tar.bz2
sched/numa, mm: Revert to checking pmd/pte_write instead of VMA flags
Commit 4d9424669946 ("mm: convert p[te|md]_mknonnuma and remaining page table manipulations") changed NUMA balancing from _PAGE_NUMA to using PROT_NONE, and was quickly found to introduce a regression with NUMA grouping.

It was followed up by these commits:

  53da3bc2ba9e ("mm: fix up numa read-only thread grouping logic")
  bea66fbd11af ("mm: numa: group related processes based on VMA flags instead of page table flags")
  b191f9b106ea ("mm: numa: preserve PTE write permissions across a NUMA hinting fault")

The first two of those commits try alternate approaches to NUMA grouping, which apparently do not work as well as looking at the PTE write permissions.

The latter patch preserves the PTE write permissions across a NUMA protection fault. However, it forgets to revert the condition for whether or not to group tasks together back to what it was before v3.19, even though the information is now preserved in the page tables once again.

This patch brings the NUMA grouping heuristic back to what it was before commit 4d9424669946, which the changelogs of subsequent commits suggest worked best.

We have all the information again. We should probably use it.

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: aarcange@redhat.com
Cc: linux-mm@kvack.org
Cc: mgorman@suse.de
Link: http://lkml.kernel.org/r/20160908213053.07c992a9@annuminas.surriel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
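As an illustration of the heuristic being restored in the diff below, here is a minimal userspace sketch, not the kernel implementation: the mock_vma and mock_pte_t types, the numa_group_flags_* helpers, and the flag values are invented stand-ins for the real vm_area_struct, pte_t, pte_write()/pmd_write() and TNF_NO_GROUP. It only shows how the grouping decision keys off the write bit recorded in the page table entry rather than the VMA's VM_WRITE flag.

/* Mock definitions for illustration only; not kernel headers. */
#include <stdbool.h>
#include <stdio.h>

#define TNF_NO_GROUP  0x2          /* mock: suppress NUMA task grouping on this fault */
#define VM_WRITE      0x1          /* mock: VMA permits writes */
#define PTE_WRITE_BIT 0x1          /* mock: write bit preserved in the PTE */

struct mock_vma { unsigned long vm_flags; };
typedef struct { unsigned long bits; } mock_pte_t;

static bool mock_pte_write(mock_pte_t pte) { return pte.bits & PTE_WRITE_BIT; }

/* Heuristic reverted by this patch: look only at the VMA's protection flags. */
static int numa_group_flags_vma(const struct mock_vma *vma, int flags)
{
	if (!(vma->vm_flags & VM_WRITE))
		flags |= TNF_NO_GROUP;
	return flags;
}

/* Restored (pre-v3.19 style) heuristic: trust the write bit kept in the page table entry. */
static int numa_group_flags_pte(mock_pte_t pte, int flags)
{
	if (!mock_pte_write(pte))
		flags |= TNF_NO_GROUP;
	return flags;
}

int main(void)
{
	/* A VMA mapped writable whose PTE currently has the write bit clear:
	 * the VMA-based check would still group the tasks, the PTE-based check does not. */
	struct mock_vma vma = { .vm_flags = VM_WRITE };
	mock_pte_t pte = { .bits = 0 };

	printf("VMA-based flags: %#x\n", numa_group_flags_vma(&vma, 0));
	printf("PTE-based flags: %#x\n", numa_group_flags_pte(pte, 0));
	return 0;
}

In the example the VMA permits writes but the PTE's write bit is clear, so only the PTE-based check sets TNF_NO_GROUP; that per-page information is what the checks in the diff below rely on.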
Diffstat (limited to 'mm')
-rw-r--r--  mm/huge_memory.c  2
-rw-r--r--  mm/memory.c       2
2 files changed, 2 insertions, 2 deletions
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2db2112aa31e..c8bde270f557 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1168,7 +1168,7 @@ int do_huge_pmd_numa_page(struct fault_env *fe, pmd_t pmd)
 	}
 
 	/* See similar comment in do_numa_page for explanation */
-	if (!(vma->vm_flags & VM_WRITE))
+	if (!pmd_write(pmd))
 		flags |= TNF_NO_GROUP;
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index 83be99d9d8a1..558c85270ae2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3398,7 +3398,7 @@ static int do_numa_page(struct fault_env *fe, pte_t pte)
 	 * pte_dirty has unpredictable behaviour between PTE scan updates,
 	 * background writeback, dirty balancing and application behaviour.
 	 */
-	if (!(vma->vm_flags & VM_WRITE))
+	if (!pte_write(pte))
 		flags |= TNF_NO_GROUP;
 
 	/*