From b8aa9d9d95b3b4b60d42ac95f65d33a92527aef3 Mon Sep 17 00:00:00 2001
From: Wei Yang
Date: Thu, 6 Aug 2020 23:23:40 -0700
Subject: mm/mremap: it is sure to have enough space when extent meets requirement

Patch series "mm/mremap: cleanup move_page_tables() a little", v5.

move_page_tables() tries to move the page table by PMD or by PTE.  The
root constraint is that when it moves a PMD, both the old and the new
range must be PMD-aligned.  But the current code calculates the old range
and the new range separately, which leads to some redundant checks and
calculations.

This cleanup consolidates the range check in one place to reduce the
extra range handling.

This patch (of 3):

old_end is passed to these two functions to check whether there is enough
space to do the move, but this check is already done before invoking
them.

These two functions are only invoked when extent meets the requirement,
and there is one check before invoking them:

	if (extent > old_end - old_addr)
		extent = old_end - old_addr;

This implies (old_end - old_addr) won't fail the check in these two
functions.

Signed-off-by: Wei Yang
Signed-off-by: Andrew Morton
Tested-by: Dmitry Osipenko
Acked-by: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Yang Shi
Cc: Thomas Hellstrom (VMware)
Cc: Anshuman Khandual
Cc: Sean Christopherson
Cc: Wei Yang
Cc: Peter Xu
Cc: Aneesh Kumar K.V
Cc: Matthew Wilcox
Cc: Thomas Hellstrom
Link: http://lkml.kernel.org/r/20200710092835.56368-1-richard.weiyang@linux.alibaba.com
Link: http://lkml.kernel.org/r/20200710092835.56368-2-richard.weiyang@linux.alibaba.com
Link: http://lkml.kernel.org/r/20200708095028.41706-1-richard.weiyang@linux.alibaba.com
Link: http://lkml.kernel.org/r/20200708095028.41706-2-richard.weiyang@linux.alibaba.com
Signed-off-by: Linus Torvalds
---
 mm/huge_memory.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

(limited to 'mm/huge_memory.c')

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78c84bee7e29..1e580fdad4d0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1722,17 +1722,14 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 }
 
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd)
+		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	pmd_t pmd;
 	struct mm_struct *mm = vma->vm_mm;
 	bool force_flush = false;
 
-	if ((old_addr & ~HPAGE_PMD_MASK) ||
-	    (new_addr & ~HPAGE_PMD_MASK) ||
-	    old_end - old_addr < HPAGE_PMD_SIZE)
+	if ((old_addr & ~HPAGE_PMD_MASK) || (new_addr & ~HPAGE_PMD_MASK))
 		return false;
 
 	/*
--
cgit v1.2.3

From 349d9fbb0b0a6734bcac9c08c5cc21992da5d2e6 Mon Sep 17 00:00:00 2001
From: Wei Yang
Date: Thu, 6 Aug 2020 23:23:48 -0700
Subject: mm/mremap: start addresses are properly aligned

After the previous cleanup, extent is the minimal step for both the
source and the destination.  This means that when extent is
HPAGE_PMD_SIZE or PMD_SIZE, old_addr and new_addr are properly aligned
as well.

Since these two functions are only invoked in move_page_tables(), it is
safe to remove the checks now.

Signed-off-by: Wei Yang
Signed-off-by: Andrew Morton
Tested-by: Dmitry Osipenko
Acked-by: Kirill A. Shutemov
Cc: Aneesh Kumar K.V
Cc: Anshuman Khandual
Cc: Matthew Wilcox
Cc: Peter Xu
Cc: Sean Christopherson
Cc: Thomas Hellstrom
Cc: Thomas Hellstrom (VMware)
Cc: Vlastimil Babka
Cc: Yang Shi
Link: http://lkml.kernel.org/r/20200708095028.41706-4-richard.weiyang@linux.alibaba.com
Signed-off-by: Linus Torvalds
---
 mm/huge_memory.c | 3 ---
 mm/mremap.c      | 3 ---
 2 files changed, 6 deletions(-)

(limited to 'mm/huge_memory.c')

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e580fdad4d0..462a7dbd6350 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1729,9 +1729,6 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	struct mm_struct *mm = vma->vm_mm;
 	bool force_flush = false;
 
-	if ((old_addr & ~HPAGE_PMD_MASK) || (new_addr & ~HPAGE_PMD_MASK))
-		return false;
-
 	/*
 	 * The destination pmd shouldn't be established, free_pgtables()
 	 * should have release it.
diff --git a/mm/mremap.c b/mm/mremap.c
index c3622bfcf6d9..138abbae4f75 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -199,9 +199,6 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t pmd;
 
-	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK))
-		return false;
-
 	/*
 	 * The destination pmd shouldn't be established, free_pgtables()
 	 * should have released it.
--
cgit v1.2.3
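[Editor's note: the two mremap patches above both rest on the shape of the
extent calculation in the move_page_tables() loop.  The helper below is a
hedged sketch, not kernel code -- the kernel computes this inline in
move_page_tables(), and the name extent_sketch() is invented for
illustration -- but it mirrors the clamping quoted in the first changelog:

	/*
	 * Hypothetical helper, for illustration only: how far may one
	 * iteration of the copy loop advance?  'size' stands in for
	 * PMD_SIZE or HPAGE_PMD_SIZE and must be a power of two.
	 */
	static unsigned long extent_sketch(unsigned long old_addr,
					   unsigned long new_addr,
					   unsigned long old_end,
					   unsigned long size)
	{
		unsigned long next, extent;

		/* Step at most to the next 'size' boundary after old_addr. */
		next = (old_addr + size) & ~(size - 1);
		extent = next - old_addr;
		/* The clamp quoted in the first changelog. */
		if (extent > old_end - old_addr)
			extent = old_end - old_addr;
		/* And at most to the next 'size' boundary after new_addr. */
		next = (new_addr + size) & ~(size - 1);
		if (extent > next - new_addr)
			extent = next - new_addr;
		return extent;
	}

If this returns exactly 'size', then old_end - old_addr >= size, so the
old_end check removed by the first patch could never have failed; and both
boundary distances must equal 'size', which is only possible when old_addr
and new_addr are already 'size'-aligned, so the alignment checks removed
by the second patch could never have failed either.]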
From 42742d9bde2a8e11ec932cb5821f720a40a7c2a9 Mon Sep 17 00:00:00 2001
From: "Alexander A. Klimov"
Date: Thu, 6 Aug 2020 23:26:08 -0700
Subject: mm: thp: replace HTTP links with HTTPS ones

Rationale:
Reduces the attack surface on kernel devs opening the links for MITM,
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `xmlns`:
        For each link, `http://[^# ]*(?:\w|/)`:
          If neither `gnu\.org/license`, nor `mozilla\.org/MPL`:
            If both the HTTP and HTTPS versions
            return 200 OK and serve the same content:
              Replace HTTP with HTTPS.

[akpm@linux-foundation.org: fix amd.com URL, per Vlastimil]

Signed-off-by: Alexander A. Klimov
Signed-off-by: Andrew Morton
Reviewed-by: Andrew Morton
Cc: Vlastimil Babka
Link: http://lkml.kernel.org/r/20200713164345.36088-1-grandmaster@al2klimov.de
Signed-off-by: Linus Torvalds
---
 mm/huge_memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'mm/huge_memory.c')

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 462a7dbd6350..206f52b36ffb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2063,8 +2063,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 * free), userland could trigger a small page size TLB miss on the
 	 * small sized TLB while the hugepage TLB entry is still established in
 	 * the huge TLB. Some CPU doesn't like that.
-	 * See http://support.amd.com/us/Processor_TechDocs/41322.pdf, Erratum
-	 * 383 on page 93. Intel should be safe but is also warns that it's
+	 * See http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf, Erratum
+	 * 383 on page 105. Intel should be safe but is also warns that it's
 	 * only safe if the permission and cache attributes of the two entries
 	 * loaded in the two TLB is identical (which should be the case here).
 	 * But it is generally safer to never allow small and huge TLB entries
--
cgit v1.2.3
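[Editor's note: the "both the HTTP and HTTPS versions return 200 OK and
serve the same content" step in the algorithm above is the only part that
is not pure text processing.  Below is a minimal sketch of that check in
C, assuming libcurl; this is not the author's actual tooling (the
changelog does not include it), and names such as https_ok() and fetch()
are invented for illustration:

	#include <curl/curl.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct body { char *data; size_t len; };

	/* libcurl write callback: append a received chunk to the buffer. */
	static size_t sink(char *ptr, size_t size, size_t nmemb, void *userp)
	{
		struct body *b = userp;
		size_t n = size * nmemb;
		char *p = realloc(b->data, b->len + n);

		if (!p)
			return 0;	/* abort the transfer on OOM */
		memcpy(p + b->len, ptr, n);
		b->data = p;
		b->len += n;
		return n;
	}

	/* Fetch url into *b; return the HTTP status (0 on transport error). */
	static long fetch(const char *url, struct body *b)
	{
		CURL *c = curl_easy_init();
		long code = 0;

		if (!c)
			return 0;
		curl_easy_setopt(c, CURLOPT_URL, url);
		curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, sink);
		curl_easy_setopt(c, CURLOPT_WRITEDATA, b);
		if (curl_easy_perform(c) == CURLE_OK)
			curl_easy_getinfo(c, CURLINFO_RESPONSE_CODE, &code);
		curl_easy_cleanup(c);
		return code;
	}

	/* True iff the https:// twin of http_url is a safe replacement. */
	static bool https_ok(const char *http_url)
	{
		struct body a = { NULL, 0 }, b = { NULL, 0 };
		char https_url[4096];
		bool ok;

		if (strncmp(http_url, "http://", 7) != 0)
			return false;
		snprintf(https_url, sizeof(https_url), "https://%s",
			 http_url + 7);
		ok = fetch(http_url, &a) == 200 &&
		     fetch(https_url, &b) == 200 &&
		     a.len == b.len &&
		     (a.len == 0 || memcmp(a.data, b.data, a.len) == 0);
		free(a.data);
		free(b.data);
		return ok;
	}

Call curl_global_init(CURL_GLOBAL_DEFAULT) once at startup and build with
-lcurl.]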