author		Steven Price <steven.price@arm.com>	2020-02-03 17:36:42 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-02-04 03:05:25 +0000
commit		e47690d756a760579141560ded06ec1020dd85e8 (patch)
tree		b81cb0311e9b565111d512b278b77d02b260e2d4 /mm
parent		f8f0d0b6fa203bfa363d30f34f6fecce9e5cc2f7 (diff)
download	linux-e47690d756a760579141560ded06ec1020dd85e8.tar.bz2
x86: mm: avoid allocating struct mm_struct on the stack
struct mm_struct is quite large (~1664 bytes), so allocating one on the
stack can cause problems because the kernel stack is small.
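
For context, the pre-patch pattern looked roughly like the sketch below.
This is a hedged reconstruction of the x86 code being removed, not
verbatim source; the ptdump state setup is elided and the argument list
is abridged.

	static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
					       bool checkwx, bool dmesg)
	{
		struct pg_state st;	/* ptdump state, setup elided */

		/* An entire ~1664-byte mm_struct on the stack, built only
		 * so that its ->pgd could point at the table to dump. */
		struct mm_struct fake_mm = {
			.pgd = pgd,
		};

		/* old API, as shown by the "-" line in the mm/ptdump.c
		 * hunk below */
		ptdump_walk_pgd(&st.ptdump, &fake_mm);
	}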
Since ptdump_walk_pgd_level_core() was only allocating the structure so
that it could modify the pgd argument, we can instead introduce a pgd
override in struct mm_walk and pass this down the call stack to where it
is needed.
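
The override itself is a single extra pointer in the walker control
structure. A condensed sketch follows, showing only the fields relevant
here, with the layout assumed from the diff below:

	struct mm_walk {
		const struct mm_walk_ops *ops;
		struct mm_struct *mm;
		pgd_t *pgd;	/* new: if non-NULL, walked instead of mm->pgd */
		void *private;
		bool no_vma;
	};

walk_pgd_range() then prefers walk->pgd when it is set and falls back to
pgd_offset(walk->mm, addr) otherwise, as the first mm/pagewalk.c hunk
below shows.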
Since the correct mm_struct is now being passed down, it is no longer
necessary to take the mmap_sem semaphore separately, because
ptdump_walk_pgd() now takes it on the real mm.
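
For callers this means a real mm, with a working mmap_sem, is always
available. A hedged usage sketch; the names init_mm and pgd here are
illustrative, not taken from this patch:

	/* Dump the page tables rooted at an arbitrary pgd: pass the real
	 * mm, whose mmap_sem ptdump_walk_pgd() now takes, plus the pgd
	 * override. No stack-allocated fake mm_struct is needed. */
	ptdump_walk_pgd(&st.ptdump, &init_mm, pgd);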
[steven.price@arm.com: restore missed arm64 changes]
Link: http://lkml.kernel.org/r/20200108145710.34314-1-steven.price@arm.com
Signed-off-by: Steven Price <steven.price@arm.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Zong Li <zong.li@sifive.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/pagewalk.c	| 7
-rw-r--r--	mm/ptdump.c	| 4
2 files changed, 8 insertions, 3 deletions
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 5895ce4f1a85..928df1638c30 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -206,7 +206,10 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
-	pgd = pgd_offset(walk->mm, addr);
+	if (walk->pgd)
+		pgd = walk->pgd + pgd_index(addr);
+	else
+		pgd = pgd_offset(walk->mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd)) {
@@ -436,11 +439,13 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
  */
 int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
 			  unsigned long end, const struct mm_walk_ops *ops,
+			  pgd_t *pgd,
 			  void *private)
 {
 	struct mm_walk walk = {
 		.ops		= ops,
 		.mm		= mm,
+		.pgd		= pgd,
 		.private	= private,
 		.no_vma		= true
 	};
diff --git a/mm/ptdump.c b/mm/ptdump.c
index ad18a9839d6f..26208d0d03b7 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -122,14 +122,14 @@ static const struct mm_walk_ops ptdump_ops = {
 	.pte_hole	= ptdump_hole,
 };
 
-void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm)
+void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
 {
 	const struct ptdump_range *range = st->range;
 
 	down_read(&mm->mmap_sem);
 	while (range->start != range->end) {
 		walk_page_range_novma(mm, range->start, range->end,
-				      &ptdump_ops, st);
+				      &ptdump_ops, pgd, st);
 		range++;
 	}
 	up_read(&mm->mmap_sem);