path: root/arch/xtensa/mm/init.c
Age | Commit message | Author | Files | Lines
2022-07-17 | xtensa/mm: enable ARCH_HAS_VM_GET_PAGE_PROT | Anshuman Khandual | 1 | -0/+22
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks up a private and static protection_map[] array. Subsequently, all the __SXXX and __PXXX macros, which are no longer needed, can be dropped.

Link: https://lkml.kernel.org/r/20220711070600.2378316-12-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Guo Ren <guoren@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Brian Cain <bcain@quicinc.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Christoph Hellwig <hch@infradead.org> Cc: Christoph Hellwig <hch@lst.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vineet Gupta <vgupta@kernel.org> Cc: WANG Xuerui <kernel@xen0n.name> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
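For orientation, a minimal sketch of the pattern this commit follows. The array entries are abbreviated (the real table covers all 16 VM_READ/VM_WRITE/VM_EXEC/VM_SHARED combinations) and the macro body reflects the generic helper in <linux/pgtable.h>, so treat this as an approximation rather than the exact xtensa diff:

/* Sketch: a private protection_map[] plus the generic accessor. */
static const pgprot_t protection_map[16] = {
        [VM_NONE]               = PAGE_NONE,
        [VM_READ]               = PAGE_READONLY,
        [VM_WRITE]              = PAGE_COPY,
        [VM_WRITE | VM_READ]    = PAGE_COPY,
        /* ... remaining VM_EXEC / VM_SHARED combinations ... */
};

/* DECLARE_VM_GET_PAGE_PROT expands to roughly: */
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
        return protection_map[vm_flags &
                        (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);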
2021-04-30 | mm: move mem_init_print_info() into mm_init() | Kefeng Wang | 1 | -1/+0
mem_init_print_info() is called from mem_init() on each architecture, always with a NULL argument, so make it take no argument and call it once from mm_init() instead.

Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86] Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc] Acked-by: David Hildenbrand <david@redhat.com> Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64] Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm] Acked-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Guo Ren <guoren@kernel.org> Cc: Yoshinori Sato <ysato@users.osdn.me> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Peter Zijlstra" <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
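A rough sketch of the resulting flow (function names are real, the body is abbreviated; this is not the literal diff):

/* init/main.c after this change: the arch's mem_init() releases memory
 * to the buddy allocator, and the core prints the summary exactly once. */
static void __init mm_init(void)
{
        /* ... */
        mem_init();
        mem_init_print_info();  /* was mem_init_print_info(NULL) in every arch */
        /* ... */
}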
2020-12-14 | Merge tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -2/+2
Pull kmap updates from Thomas Gleixner:
 "The new preemptible kmap_local() implementation:

  - Consolidate all kmap_atomic() internals into a generic implementation which builds the base for the kmap_local() API, and make the kmap_atomic() interface wrappers which handle the disabling/enabling of preemption and pagefaults.

  - Switch the storage from per-CPU to per task and provide scheduler support for clearing mappings when scheduling out and restoring them when scheduling back in.

  - Merge the migrate_disable/enable() code, which is also part of the scheduler pull request. This was required to make the kmap_local() interface available which does not disable preemption when a mapping is established. It has to disable migration instead to guarantee that the virtual address of the mapped slot is the same across preemption.

  - Provide better debug facilities: guard pages and enforced utilization of the mapping mechanics on 64bit systems when the architecture allows it.

  - Provide the new kmap_local() API which can now be used to clean up the kmap_atomic() usage sites all over the place. Most of the usage sites do not require the implicit disabling of preemption and pagefaults, so the penalty on 64bit and 32bit non-highmem systems is removed and quite some of the code can be simplified. A wholesale conversion is not possible because some usage depends on the implicit side effects and some need to be cleaned up because they work around these side effects.

    The migrate disable side effect is only effective on highmem systems and when enforced debugging is enabled. On 64bit and 32bit non-highmem systems the overhead is completely avoided"

* tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  ARM: highmem: Fix cache_is_vivt() reference
  x86/crashdump/32: Simplify copy_oldmem_page()
  io-mapping: Provide iomap_local variant
  mm/highmem: Provide kmap_local*
  sched: highmem: Store local kmaps in task struct
  x86: Support kmap_local() forced debugging
  mm/highmem: Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
  mm/highmem: Provide and use CONFIG_DEBUG_KMAP_LOCAL
  microblaze/mm/highmem: Add dropped #ifdef back
  xtensa/mm/highmem: Make generic kmap_atomic() work correctly
  mm/highmem: Take kmap_high_get() properly into account
  highmem: High implementation details and document API
  Documentation/io-mapping: Remove outdated blurb
  io-mapping: Cleanup atomic iomap
  mm/highmem: Remove the old kmap_atomic cruft
  highmem: Get rid of kmap_types.h
  xtensa/mm/highmem: Switch to generic kmap atomic
  sparc/mm/highmem: Switch to generic kmap atomic
  powerpc/mm/highmem: Switch to generic kmap atomic
  nds32/mm/highmem: Switch to generic kmap atomic
  ...
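A hedged sketch of how the new API is meant to be used; kmap_local_page() and kunmap_local() are the real interfaces, while copy_page_local() is a hypothetical example name, not kernel code:

#include <linux/highmem.h>
#include <linux/string.h>

/* Map two highmem pages without disabling preemption or pagefaults;
 * migration is disabled instead, so the mapping address stays stable. */
static void copy_page_local(struct page *dst, struct page *src)
{
        void *vdst = kmap_local_page(dst);
        void *vsrc = kmap_local_page(src);

        memcpy(vdst, vsrc, PAGE_SIZE);

        kunmap_local(vsrc);     /* unmap in reverse (stack) order */
        kunmap_local(vdst);
}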
2020-11-16 | xtensa/mm/highmem: Make generic kmap_atomic() work correctly | Thomas Gleixner | 1 | -2/+2
The conversion to the generic kmap_atomic() implementation missed the fact that xtensa's fixmap works bottom up while all other implementations work top down. There is no real reason why xtensa needs to work that way. Cure it by:

 - Using the generic fix_to_virt()/virt_to_fix() functions which work top down
 - Adjusting the mapping defines
 - Using the generic index calculation for the non cache aliasing case
 - Making the cache colour offset reverse so the effective index is correct

While at it, remove the outdated and misleading comment above the fixmap enum which originates from the initial copy&pasta of this code from i386.

[ Max: Fixed the off by one in the index calculation ]

Fixes: 629ed3f7dad2 ("xtensa/mm/highmem: Switch to generic kmap atomic") Reported-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Max Filippov <jcmvbkbc@gmail.com> Link: https://lore.kernel.org/r/20201116193253.23875-1-jcmvbkbc@gmail.com
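The generic top-down conversion the fix switches to looks like this (these are the generic fixmap helper definitions, shown for orientation):

/* Top-down fixmap: index 0 lives at the top of the fixmap area and
 * higher indices map progressively lower virtual addresses. */
#define __fix_to_virt(x)        (FIXADDR_TOP - ((x) << PAGE_SHIFT))
#define __virt_to_fix(x)        ((FIXADDR_TOP - ((x) & PAGE_MASK)) >> PAGE_SHIFT)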
2020-11-04 | ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations | Ard Biesheuvel | 1 | -2/+2
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") it rounded the beginning of each region upwards and the end of each region downwards. However, after that commit free_highpages() rounds the beginning and end of each region downwards, and we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. Restore the original rounding of the region boundaries to avoid freeing reserved pages.

Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/ Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Co-developed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com>
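The restored rounding, in sketch form (free_highpages_range() is a hypothetical helper used only for illustration; PFN_UP/PFN_DOWN are the real macros):

/* Only pages lying entirely inside the free region may be released. */
unsigned long start = PFN_UP(range_start);      /* round start up  */
unsigned long end = PFN_DOWN(range_end);        /* round end down  */

if (start < end)
        free_highpages_range(start, end);       /* hypothetical helper */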
2020-10-15 | Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -1/+1
Pull dma-mapping updates from Christoph Hellwig:

 - rework the non-coherent DMA allocator
 - move private definitions out of <linux/dma-mapping.h>
 - lower CMA_ALIGNMENT (Paul Cercueil)
 - remove the omap1 dma address translation in favor of the common code
 - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
 - support per-node DMA CMA areas (Barry Song)
 - increase the default seg boundary limit (Nicolin Chen)
 - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
 - various cleanups

* tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
  ARM/ixp4xx: add a missing include of dma-map-ops.h
  dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
  dma-direct: factor out a dma_direct_alloc_from_pool helper
  dma-direct: check for highmem pages in dma_direct_alloc_pages
  dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
  dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
  dma-mapping: move dma-debug.h to kernel/dma/
  dma-mapping: remove <asm/dma-contiguous.h>
  dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
  dma-contiguous: remove dma_contiguous_set_default
  dma-contiguous: remove dev_set_cma_area
  dma-contiguous: remove dma_declare_contiguous
  dma-mapping: split <linux/dma-mapping.h>
  cma: decrease CMA_ALIGNMENT lower limit to 2
  firewire-ohci: use dma_alloc_pages
  dma-iommu: implement ->alloc_noncoherent
  dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
  dma-mapping: add a new dma_alloc_pages API
  dma-mapping: remove dma_cache_sync
  53c700: convert to dma_alloc_noncoherent
  ...
2020-10-13 | arm, xtensa: simplify initialization of high memory pages | Mike Rapoport | 1 | -45/+10
free_highpages() in both arm and xtensa essentially open-codes the for_each_free_mem_range() loop to detect high memory pages that were not reserved and that should be initialized and passed to the buddy allocator. Replace the open-coded implementation with the memblock API to simplify the code.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa] Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa] Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-4-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
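The resulting loop is close to the following sketch (shown with the boundary rounding from the 2020-11-04 fix above already folded in):

static void __init free_highpages(void)
{
        unsigned long max_low = max_low_pfn;
        phys_addr_t range_start, range_end;
        u64 i;

        /* Walk only memory that is present and not memblock-reserved. */
        for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
                                &range_start, &range_end, NULL) {
                unsigned long start = PFN_UP(range_start);
                unsigned long end = PFN_DOWN(range_end);

                if (end <= max_low)     /* entirely in lowmem: skip */
                        continue;
                if (start < max_low)    /* straddles the boundary: clip */
                        start = max_low;

                for (; start < end; start++)
                        free_highmem_page(pfn_to_page(start));
        }
}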
2020-10-06 | dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> | Christoph Hellwig | 1 | -1/+1
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-06-03 | xtensa: simplify detection of memory zone boundaries | Mike Rapoport | 1 | -4/+4
free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200412194859.12663-15-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
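The simplified zone setup amounts to filling a max-PFN array and handing it to free_area_init(); a sketch close to the resulting xtensa code:

static void __init zones_init(void)
{
        /* Only the maximal PFN per zone is needed; no hole bookkeeping. */
        unsigned long max_zone_pfn[MAX_NR_ZONES] = {
                [ZONE_NORMAL] = max_low_pfn,
#ifdef CONFIG_HIGHMEM
                [ZONE_HIGHMEM] = max_pfn,
#endif
        };

        free_area_init(max_zone_pfn);
}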
2019-10-20 | xtensa: use correct symbol for the end of .rodata | Max Filippov | 1 | -2/+2
Use correct symbol for the end of .rodata section when dumping virtual memory layout. This fixes odd rodata size with XIP kernel. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-08-26 | xtensa: remove free_initrd_mem | Mike Rapoport | 1 | -10/+0
The xtensa free_initrd_mem() verifies that the initrd is mapped and then frees its memory using free_reserved_area(). The initrd is considered mapped when its memory was successfully reserved with mem_reserve(). Resetting initrd_start to 0 in case of mem_reserve() failure allows switching to the generic free_initrd_mem() implementation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Message-Id: <1563977432-8376-1-git-send-email-rppt@linux.ibm.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
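In sketch form, assuming mem_reserve() reports failure with a negative return (per the "more traditional" return-code change of 2014-04-02 further down this log):

/* If the initrd cannot be reserved, forget it entirely so the generic
 * free_initrd_mem() never frees an unmapped region. */
if (initrd_start < initrd_end &&
    mem_reserve(__pa(initrd_start), __pa(initrd_end)) < 0)
        initrd_start = initrd_end = 0;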
2019-07-05 | xtensa: One function call less in bootmem_init() | Markus Elfring | 1 | -4/+1
Avoid an extra function call by using a ternary operator instead of a conditional statement when selecting a value to assign. This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Message-Id: <495c9f2e-7880-ee9a-5c61-eee598bb24c2@web.de> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
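The shape of such a rewrite, with placeholder names (not the commit's exact lines):

/* before: f() appears at two call sites, one per branch */
if (cond)
        x = f(a);
else
        x = f(b);

/* after: one call site, the selection moves into the argument */
x = f(cond ? a : b);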
2019-05-14 | init: provide a generic free_initmem implementation | Mike Rapoport | 1 | -5/+0
Patch series "provide a generic free_initmem implementation", v2.

Many architectures implement free_initmem() in exactly the same or a very similar way: they wrap the call to free_initmem_default() with a sometimes different 'poison' parameter. These patches switch those architectures to use a generic implementation that does free_initmem_default(POISON_FREE_INITMEM).

This was inspired by Christoph's patches for free_initrd_mem [1] and I shamelessly copied changelog entries from his patches :)

[1] https://lore.kernel.org/lkml/20190213174621.29297-1-hch@lst.de/

This patch (of 2):

For most architectures free_initmem() is just a wrapper for the same free_initmem_default(-1) call. Provide that as a generic implementation marked __weak.

Link: http://lkml.kernel.org/r/1550515285-17446-2-git-send-email-rppt@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Richard Kuo <rkuo@codeaurora.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
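The generic implementation the series introduces is tiny; __weak lets any architecture keep its own version:

void __weak free_initmem(void)
{
        free_initmem_default(POISON_FREE_INITMEM);
}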
2018-12-17 | xtensa: support memtest | Max Filippov | 1 | -0/+3
Call early_memtest() from bootmem_init() to run the memtest if it's configured and enabled.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
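The hook is a single call at the end of bootmem_init(); a sketch of its likely shape (early_memtest() is a no-op unless CONFIG_MEMTEST is enabled and memtest= is passed on the command line):

early_memtest((phys_addr_t)min_low_pfn << PAGE_SHIFT,
              (phys_addr_t)max_low_pfn << PAGE_SHIFT);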
2018-11-01 | Merge tag 'xtensa-20181101' of git://github.com/jcmvbkbc/linux-xtensa | Linus Torvalds | 1 | -1/+1
Pull Xtensa fixes and cleanups from Max Filippov:

 - use ZONE_NORMAL instead of ZONE_DMA
 - fix Image.elf build error caused by assignment of incorrect address to the .note.Linux section
 - clean up debug and property sections in the vmlinux.lds.S

* tag 'xtensa-20181101' of git://github.com/jcmvbkbc/linux-xtensa:
  xtensa: clean up xtensa-specific property sections
  xtensa: use DWARF_DEBUG in the vmlinux.lds.S
  xtensa: add NOTES section to the linker script
  xtensa: remove ZONE_DMA
2018-10-31 | mm: remove include/linux/bootmem.h | Mike Rapoport | 1 | -1/+1
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below, followed by semi-automated removal of duplicated '#include <linux/memblock.h>' lines:

@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31 | memblock: rename free_all_bootmem to memblock_free_all | Mike Rapoport | 1 | -1/+1
The conversion is done using:

sed -i 's@free_all_bootmem@memblock_free_all@' \
        $(git grep -l free_all_bootmem)

Link: http://lkml.kernel.org/r/1536927045-23536-26-git-send-email-rppt@linux.vnet.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-15 | xtensa: remove ZONE_DMA | Christoph Hellwig | 1 | -1/+1
ZONE_DMA is intended for magic < 32-bit pools (usually ISA DMA), which isn't required on xtensa. Move all the non-highmem memory into ZONE_NORMAL instead to match other architectures. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-02-15 | xtensa: fix high memory/reserved memory collision | Max Filippov | 1 | -7/+63
Xtensa memory initialization code frees high memory pages without checking whether they are in the reserved memory regions or not. That results in invalid value of totalram_pages and duplicate page usage by CMA and highmem. It produces a bunch of BUGs at startup looking like this:

BUG: Bad page state in process swapper pfn:70800
page:be60c000 count:0 mapcount:-127 mapping: (null) index:0x1
flags: 0x80000000()
raw: 80000000 00000000 00000001 ffffff80 00000000 be60c014 be60c014 0000000a
page dumped because: nonzero mapcount
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Tainted: G B 4.16.0-rc1-00015-g7928b2cbe55b-dirty #23
Stack:
  bd839d33 00000000 00000018 ba97b64c a106578c bd839d70 be60c000 00000000
  a1378054 bd86a000 00000003 ba97b64c a1066166 bd839da0 be60c000 ffe00000
  a1066b58 bd839dc0 be504000 00000000 000002f4 bd838000 00000000 0000001e
Call Trace:
  [<a1065734>] bad_page+0xac/0xd0
  [<a106578c>] free_pages_check_bad+0x34/0x4c
  [<a1066166>] __free_pages_ok+0xae/0x14c
  [<a1066b58>] __free_pages+0x30/0x64
  [<a1365de5>] init_cma_reserved_pageblock+0x35/0x44
  [<a13682dc>] cma_init_reserved_areas+0xf4/0x148
  [<a10034b8>] do_one_initcall+0x80/0xf8
  [<a1361c16>] kernel_init_freeable+0xda/0x13c
  [<a125b59d>] kernel_init+0x9/0xd0
  [<a1004304>] ret_from_kernel_thread+0xc/0x18

Only free high memory pages that are not reserved.

Cc: stable@vger.kernel.org Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
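The core idea of the fix, in a hedged sketch (not the commit's exact code, which tracks the reserved regions explicitly while freeing):

#include <linux/memblock.h>

/* A high memory page may be given to the buddy allocator only if it is
 * not covered by a memblock reservation (CMA, initrd, DT reservations). */
static bool __init high_page_is_free(unsigned long pfn)
{
        return !memblock_is_reserved(PFN_PHYS(pfn));
}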
2017-12-17 | xtensa: print kernel sections info in mem_init | Max Filippov | 1 | -2/+17
Output virtual addresses and sizes occupied by the main kernel sections: .text, .rodata, .data, .init and .bss. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16 | xtensa: add support for KASAN | Max Filippov | 1 | -0/+7
Cover kernel addresses above 0x90000000 by the shadow map. Enable HAVE_ARCH_KASAN when MMU is enabled. Provide kasan_early_init that fills shadow map with writable copies of kasan_zero_page. Call kasan_early_init right after mmu initialization in the setup_arch. Provide kasan_init that allocates proper shadow map pages from the memblock and puts these pages into the shadow map for addresses from VMALLOC area to the end of KSEG. Call kasan_init right after memblock initialization. Don't use KASAN for the boot code, MMU and KASAN initialization and page fault handler. Make kernel stack size 4 times larger when KASAN is enabled to avoid stack overflows. GCC 7.3, 8 or newer is required to build the xtensa kernel with KASAN. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-12-16 | xtensa: move fixmap and kmap just above the KSEG | Max Filippov | 1 | -6/+6
The virtual address space between the page table and the VMALLOC region is big enough to host KASAN shadow map and there's enough space between the VMALLOC area and KSEG for the fixmap and kmap. Move fixmap and kmap to the gap between VMALLOC area and KSEG, just above the KSEG. Reorder entries in the kernel memory layout printing code. Drop duplicate PGTABLE_START definition, use XCHAL_PAGE_TABLE_VADDR instead. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-12-15 | xtensa: enable HAVE_DMA_CONTIGUOUS | Max Filippov | 1 | -0/+2
Enable HAVE_DMA_CONTIGUOUS, reserve contiguous memory at bootmem_init, and use dma_alloc_from_contiguous and dma_release_from_contiguous in xtensa_dma_alloc/free. This allows big contiguous DMA buffers to be allocated from a designated area configured in the device tree.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-07-24 | xtensa: support reserved-memory DT node | Max Filippov | 1 | -0/+2
This allows reserving regions of physical memory from the device tree. See Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt for more details. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
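Wiring this up is essentially one early call once the flattened device tree is available; a sketch under that assumption (the surrounding function body is elided):

void __init bootmem_init(void)
{
        /* ... memblock setup ... */
        early_init_fdt_scan_reserved_mem();     /* honor /reserved-memory nodes */
        /* ... */
}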
2016-07-24 | xtensa: drop sysmem and switch to memblock | Max Filippov | 1 | -257/+15
Memblock is the standard kernel boot-time memory tracker/allocator. Use it instead of the custom sysmem allocator. This allows using kmemleak, CMA and device tree memory reservation. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-07-24 | xtensa: add alternative kernel memory layouts | Max Filippov | 1 | -2/+1
MMUv3 is able to support low memory bigger than 128MB. Implement 256MB and 512MB KSEG layouts:

 - add Kconfig selector for KSEG layout;
 - add KSEG base address, size and alignment definitions to arch/xtensa/include/asm/kmem_layout.h;
 - use new definitions in TLB initialization;
 - add build time memory map consistency checks.

See Documentation/xtensa/mmu.txt for the details of the new memory layouts.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-07-24 | xtensa: move kernel mapping addresses into kmem_layout.h | Max Filippov | 1 | -1/+1
Create a header dedicated to memory layout definitions. Include it from places where these definitions are needed. Express vmalloc area address, VIRTUAL_MEMORY_ADDRESS and KERNELOFFSET through KSEG address. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-10-21 | xtensa: nommu: clean up memory map dump | Max Filippov | 1 | -1/+7
The noMMU configuration doesn't use a special area for vmalloc allocations, so don't print it in the memory map. PAGE_OFFSET is fixed to 0 in noMMU; use min_low_pfn and max_low_pfn for the lowmem range display. Make all XCHAL_KSEG_* constants unsigned long for consistency with other addresses.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-10-21 | xtensa: nommu: reserve memory below PLATFORM_DEFAULT_MEM_START | Max Filippov | 1 | -0/+11
Memory accounting code can't handle pages below PLATFORM_DEFAULT_MEM_START. Reserve those pages if they exist. When PLATFORM_DEFAULT_MEM_START is zero, reserve one page at address 0 to make sure that successful memory allocations don't return NULL.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-06-09 | xtensa: fix sysmem reservation at the end of existing block | Max Filippov | 1 | -1/+1
When sysmem reservation occurs exactly at the end of an existing block, that block is deleted, because it is incorrectly included in the range of memblocks to remove. Fix that by skipping such blocks.

Cc: stable@vger.kernel.org Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-06 | xtensa: add HIGHMEM support | Max Filippov | 1 | -14/+31
Introduce fixmap area just below the vmalloc region. Use it for atomic mapping of high memory pages. High memory on cores with cache aliasing is not supported and is still to be implemented. Fail build for such configurations for now. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02 | xtensa: dump sysmem from the bootmem_init | Max Filippov | 1 | -0/+12
Debug dump of physical memory configuration. Useful for inspection of resulting memory map, esp. in the presence of memmap= kernel option. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02 | xtensa: handle memmap kernel option | Max Filippov | 1 | -0/+50
This option is useful for reserving memory regions for secondary cores in AMP configurations. Implement the following memmap variants:

 - memmap=nn[KMG]@ss[KMG]: force usage of a specific region of memory;
 - memmap=nn[KMG]$ss[KMG]: mark specified memory as reserved;
 - memmap=nn[KMG]: set end of memory.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
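A hedged sketch of parsing the three forms with the kernel's standard memparse() helper; the function name is illustrative and the actions are left as comments, so this is the shape of the logic rather than the commit's exact code:

static int __init parse_memmap_one(char *p)     /* illustrative name */
{
        char *oldp = p;
        unsigned long start_at, mem_size;

        mem_size = memparse(p, &p);
        if (p == oldp)
                return -EINVAL;

        switch (*p) {
        case '@':       /* memmap=nn@ss: force use of the region */
                start_at = memparse(p + 1, &p);
                /* add [start_at, start_at + mem_size) to sysmem */
                break;
        case '$':       /* memmap=nn$ss: mark the region reserved */
                start_at = memparse(p + 1, &p);
                /* reserve [start_at, start_at + mem_size) */
                break;
        default:        /* memmap=nn: set the end of memory */
                /* truncate available memory to mem_size bytes */
                break;
        }
        return 0;
}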
2014-04-02 | xtensa: keep sysmem banks ordered in mem_reserve | Max Filippov | 1 | -32/+50
Rewrite mem_reserve so that it keeps bank order. Also make its return code more traditional. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02 | xtensa: keep sysmem banks ordered in add_sysmem_bank | Max Filippov | 1 | -5/+98
Rewrite add_sysmem_bank so that it keeps bank order and merges adjacent/overlapping banks. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-04-02 | xtensa: split bootparam and kernel meminfo | Max Filippov | 1 | -0/+17
Bootparam meminfo is a bootloader ABI, while kernel meminfo is for kernel bookkeeping, so keep them separate. The kernel doesn't care about memory region types, so drop the type field and don't pass it to add_sysmem_bank. Move kernel sysmem structures and prototypes to asm/sysmem.h, and the sysmem variable and add_sysmem_bank to mm/init.c.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-02-21 | xtensa: don't pass high memory to bootmem allocator | Max Filippov | 1 | -4/+9
This fixes panic when booting on machine with more than 128M memory passed from the bootloader. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2013-07-03 | mm/xtensa: prepare for removing num_physpages and simplify mem_init() | Jiang Liu | 1 | -25/+2
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 | mm: concentrate modification of totalram_pages into the mm core | Jiang Liu | 1 | -1/+1
Concentrate the code that modifies totalram_pages into the mm core, so the arch memory initialization code doesn't need to take care of it. With these changes applied, only the following functions from the mm core modify the global variable totalram_pages: free_bootmem_late(), free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count(). With this patch applied, it will be much easier for us to keep totalram_pages and zone->managed_pages consistent.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Acked-by: David Howells <dhowells@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 | mm: enhance free_reserved_area() to support poisoning memory with zero | Jiang Liu | 1 | -2/+2
Address more review comments from the last round of code review:

1) Enhance free_reserved_area() to support poisoning freed memory with pattern '0'. This could be used to get rid of poison_init_mem() on ARM64.
2) A previous patch disabled memory poisoning for initmem on s390 by mistake, so restore the original behavior.
3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 | mm: change signature of free_reserved_area() to fix building warnings | Jiang Liu | 1 | -1/+1
Change the signature of free_reserved_area() according to Russell King's suggestion to fix the following build warnings:

arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
   free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
   ^
In file included from include/linux/mman.h:4:0, from arch/arm/mm/init.c:15:
include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
   extern unsigned long free_reserved_area(unsigned long start, unsigned long end,

mm/page_alloc.c: In function 'free_reserved_area':
>> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
In file included from arch/mips/include/asm/page.h:49:0, from include/linux/mmzone.h:20, from include/linux/gfp.h:4, from include/linux/mm.h:8, from mm/page_alloc.c:18:
arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'

mm/page_alloc.c: In function 'free_area_init_nodes':
mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]

Also address some minor code review comments.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Reported-by: Arnd Bergmann <arnd@arndb.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
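The signature after this change takes pointers and returns the freed page count, which is what removes the casts at the call sites (the name string parameter was still non-const at the time):

/* include/linux/mm.h, as changed by this commit */
extern unsigned long free_reserved_area(void *start, void *end,
                                        int poison, char *s);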
2013-04-29 | mm/xtensa: use common help functions to free reserved pages | Jiang Liu | 1 | -18/+3
Use common help functions to free reserved pages. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-12-18 | xtensa: clean up files to make them code-style compliant | Chris Zankel | 1 | -8/+8
Remove heading and trailing spaces, trim trailing lines, and wrap lines that are longer than 80 characters. Signed-off-by: Chris Zankel <chris@zankel.net>
2012-06-20 | xtensa: use the declarations provided by <asm/sections.h> | Geert Uytterhoeven | 1 | -11/+7
Cleanups:

 - Include <asm/sections.h>,
 - Remove the (different) extern declarations,
 - Remove the no longer needed address-of ('&') operators,
 - Use %p to format pointer differences.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-06-20 | xtensa: replace xtensa-specific _f{data,text} by _s{data,text} | Geert Uytterhoeven | 1 | -3/+3
commit a2d063ac216c161 ("extable, core_kernel_data(): Make sure all archs define _sdata") missed xtensa. Xtensa does have a start of data marker, but calls it _fdata, causing kernel/built-in.o:(.text+0x964): undefined reference to `_sdata' _stext was already defined, but it was duplicated by _fdata. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 1 | -1/+1
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

 * Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h.
 * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surrounding. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.
 * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:

 1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
 2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
 3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
 4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.
 5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually wildly available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
 6. percpu.h was updated not to include slab.h.
 7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig
 8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2009-09-22 | arches: drop superfluous casts in nr_free_pages() callers | Geert Uytterhoeven | 1 | -1/+1
Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()") modified nr_free_pages() to return 'unsigned long' instead of 'unsigned int'. This made the casts to 'unsigned long' in most callers superfluous, so remove them. [akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Reviewed-by: Christoph Lameter <cl@linux-foundation.org> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Acked-by: WANG Cong <xiyou.wangcong@gmail.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Haavard Skinnemoen <hskinnemoen@atmel.com> Cc: Mikael Starvik <starvik@axis.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Howells <dhowells@redhat.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Chris Zankel <zankel@tensilica.com> Cc: Michal Simek <monstr@monstr.eu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-02 | xtensa: nommu support | Johannes Weiner | 1 | -61/+1
Add support for !CONFIG_MMU setups. Signed-off-by: Johannes Weiner <jw@emlix.com> Signed-off-by: Chris Zankel <chris@zankel.net>
2009-04-02 | xtensa: cope with ram beginning at higher addresses | Johannes Weiner | 1 | -5/+5
The current assumption of the memory code is that the first RAM PFN in the system is 0. Adjust the relevant code to play well with setups where memory starts at higher addresses, indicated by PLATFORM_DEFAULT_MEM_START. The new memory model looks like this:

  +----------+--+----------------------+----------------+
  |          |  |                      |                |
  |          |  |         RAM          |                |
  |          |  |                      |                |
  +----------+--+----------------------+----------------+
  |          |  |                      |                |
  +- PFN 0   |  +- min_low_pfn         +- max_low_pfn   +- max_pfn
  |          |
  +- ARCH_PFN_OFFSET
             |
             +- PLATFORM_DEFAULT_MEM_START >> PAGE_SIZE

The memory map contains pages starting from pfn ARCH_PFN_OFFSET up to max_low_pfn. The only zone used right now will span exactly the same region.

Usually, ARCH_PFN_OFFSET and min_low_pfn are the same value. Handle them separately for robustness. Gapping pages will be in the memory map but marked as reserved and won't be touched.

Signed-off-by: Johannes Weiner <jw@emlix.com> Signed-off-by: Chris Zankel <chris@zankel.net>
2009-04-02 | xtensa: don't make bootmem bitmap larger than required | Johannes Weiner | 1 | -1/+2
If min_low_pfn is non-zero, the bitmap reserved for bootmem is bigger than needed. The number of pages bootmem has to maintain is the range from min_low_pfn to max_low_pfn. For now this has only been a theoretical mistake; min_low_pfn was always zero.

Signed-off-by: Johannes Weiner <jw@emlix.com> Signed-off-by: Chris Zankel <chris@zankel.net>