author     Bob Picco <bob.picco@hp.com>  2007-06-08 13:47:00 -0700
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-06-08 17:23:34 -0700
commit     12710a56cb56e81bd8f457cc2f50c2ebfc0cb390 (patch)
tree       c71714196a6b3cc2f801387016bf2a1dd1c6726c /arch/x86_64/mm
parent     778e9a9c3e7193ea9f434f382947155ffb59c755 (diff)
fix sysrq-m oops
We aren't sampling for holes in memory. Thus we encounter a section hole with an empty section map pointer for SPARSEMEM, and show_mem oopses. This issue has been seen in 2.6.21, current git and current -mm. The patch below is for mainline and -mm. It was boot tested for SPARSEMEM, Andy's current VMEMMAP work in -mm, and DISCONTIGMEM. A slightly different patch will be posted to stable for 2.6.21.

Prior to commit f0a5a58aa812b31fd9f197c4ba48245942364eae, memory_present was called for node_start_pfn to node_end_pfn. This covered the hole(s) with reserved pages and valid sections. Most SPARSEMEM-supporting arches do a pfn_valid check in show_mem before computing the page structure address.

This issue was brought to my attention on IRC by Arnaldo Carvalho de Melo. Thanks to Arnaldo for testing.

Signed-off-by: Bob Picco <bob.picco@hp.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
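For context, a minimal sketch of the pattern the fix applies: walk a node's spanned pfns and skip holes before touching the page structure. This is illustrative only; pgdat, pfn_valid() and pfn_to_page() are the kernel's real helpers, but the surrounding loop is simplified from show_mem() rather than copied from it.

	struct page *page;
	unsigned long i, total = 0, reserved = 0;

	for (i = 0; i < pgdat->node_spanned_pages; ++i) {
		/*
		 * With SPARSEMEM, a pfn inside a memory hole has no
		 * backing section map, so pfn_to_page() must not be
		 * called for it; skip the hole instead of oopsing.
		 */
		if (!pfn_valid(pgdat->node_start_pfn + i))
			continue;
		page = pfn_to_page(pgdat->node_start_pfn + i);
		total++;
		if (PageReserved(page))
			reserved++;
	}

Guarding with pfn_valid() in show_mem matches what most other SPARSEMEM arches already do, and is a smaller change than re-registering the holes via memory_present as was done before commit f0a5a58aa812b31fd9f197c4ba48245942364eae.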
Diffstat (limited to 'arch/x86_64/mm')
-rw-r--r--  arch/x86_64/mm/init.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86_64/mm/init.c b/arch/x86_64/mm/init.c
index 1ad5111aec38..efb6e845114e 100644
--- a/arch/x86_64/mm/init.c
+++ b/arch/x86_64/mm/init.c
@@ -79,6 +79,8 @@ void show_mem(void)
 			if (unlikely(i % MAX_ORDER_NR_PAGES == 0)) {
 				touch_nmi_watchdog();
 			}
+			if (!pfn_valid(pgdat->node_start_pfn + i))
+				continue;
 			page = pfn_to_page(pgdat->node_start_pfn + i);
 			total++;
 			if (PageReserved(page))