path: root/arch
Age  Commit message  Author  Files  Lines
2020-08-08  arm64/fixmap: make notes of fixed_addresses more precisely  Pingfan Liu  1  -4/+3
These 'compile-time allocated' memory buffers can occupy more than one page and each enum increment is page-sized. So improve the note about it. Signed-off-by: Pingfan Liu <kernelfans@gmail.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1596460720-19243-1-git-send-email-kernelfans@gmail.com To: linux-arm-kernel@lists.infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-31  Merge branch 'for-next/read-barrier-depends' into for-next/core  Catalin Marinas  14  -74/+67
* for-next/read-barrier-depends: Allow architectures to override __READ_ONCE()
  arm64: Reduce the number of header files pulled into vmlinux.lds.S
  compiler.h: Move compiletime_assert() macros into compiler_types.h
  checkpatch: Remove checks relating to [smp_]read_barrier_depends()
  include/linux: Remove smp_read_barrier_depends() from comments
  tools/memory-model: Remove smp_read_barrier_depends() from informal doc
  Documentation/barriers/kokr: Remove references to [smp_]read_barrier_depends()
  Documentation/barriers: Remove references to [smp_]read_barrier_depends()
  locking/barriers: Remove definitions for [smp_]read_barrier_depends()
  alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
  vhost: Remove redundant use of read_barrier_depends() barrier
  asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h'
  asm/rwonce: Remove smp_read_barrier_depends() invocation
  alpha: Override READ_ONCE() with barriered implementation
  asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
  compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
  tools: bpf: Use local copy of headers including uapi/linux/filter.h
2020-07-31  Merge branch 'for-next/tlbi' into for-next/core  Catalin Marinas  11  -17/+264
* for-next/tlbi: Support for TTL (translation table level) hint in the TLB operations
  arm64: tlb: Use the TLBI RANGE feature in arm64
  arm64: enable tlbi range instructions
  arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature
  arm64: tlb: don't set the ttl value in flush_tlb_page_nosync
  arm64: Shift the __tlbi_level() indentation left
  arm64: tlb: Set the TTL field in flush_*_tlb_range
  arm64: tlb: Set the TTL field in flush_tlb_range
  tlb: mmu_gather: add tlb_flush_*_range APIs
  arm64: Add tlbi_user_level TLB invalidation helper
  arm64: Add level-hinted TLB invalidation helper
  arm64: Document SW reserved PTE/PMD bits in Stage-2 descriptors
  arm64: Detect the ARMv8.4 TTL feature
2020-07-31  Merge branches 'for-next/misc', 'for-next/vmcoreinfo', 'for-next/cpufeature', 'for-next/acpi', 'for-next/perf', 'for-next/timens', 'for-next/msi-iommu' and 'for-next/trivial' into for-next/core  Catalin Marinas  17  -81/+469
* for-next/misc: Miscellaneous fixes and cleanups
  arm64: use IRQ_STACK_SIZE instead of THREAD_SIZE for irq stack
  arm64/mm: save memory access in check_and_switch_context() fast switch path
  recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64.
  arm64: Reserve HWCAP2_MTE as (1 << 18)
  arm64/entry: deduplicate SW PAN entry/exit routines
  arm64: s/AMEVTYPE/AMEVTYPER
  arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  arm64: stacktrace: Move export for save_stack_trace_tsk()
  smccc: Make constants available to assembly
  arm64/mm: Redefine CONT_{PTE, PMD}_SHIFT
  arm64/defconfig: Enable CONFIG_KEXEC_FILE
  arm64: Document sysctls for emulated deprecated instructions
  arm64/panic: Unify all three existing notifier blocks
  arm64/module: Optimize module load time by optimizing PLT counting
* for-next/vmcoreinfo: Export the virtual and physical address sizes in vmcoreinfo
  arm64/crash_core: Export TCR_EL1.T1SZ in vmcoreinfo
  crash_core, vmcoreinfo: Append 'MAX_PHYSMEM_BITS' to vmcoreinfo
* for-next/cpufeature: CPU feature handling cleanups
  arm64/cpufeature: Validate feature bits spacing in arm64_ftr_regs[]
  arm64/cpufeature: Replace all open bits shift encodings with macros
  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR2 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR1 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR0 register
* for-next/acpi: ACPI updates for arm64
  arm64/acpi: disallow writeable AML opregion mapping for EFI code regions
  arm64/acpi: disallow AML memory opregions to access kernel memory
* for-next/perf: perf updates for arm64
  arm64: perf: Expose some new events via sysfs
  tools headers UAPI: Update tools's copy of linux/perf_event.h
  arm64: perf: Add cap_user_time_short
  perf: Add perf_event_mmap_page::cap_user_time_short ABI
  arm64: perf: Only advertise cap_user_time for arch_timer
  arm64: perf: Implement correct cap_user_time
  time/sched_clock: Use raw_read_seqcount_latch()
  sched_clock: Expose struct clock_read_data
  arm64: perf: Correct the event index in sysfs
  perf/smmuv3: To simplify code for ioremap page in pmcg
* for-next/timens: Time namespace support for arm64
  arm64: enable time namespace support
  arm64/vdso: Restrict splitting VVAR VMA
  arm64/vdso: Handle faults on timens page
  arm64/vdso: Add time namespace page
  arm64/vdso: Zap vvar pages when switching to a time namespace
  arm64/vdso: use the fault callback to map vvar pages
* for-next/msi-iommu: Make the MSI/IOMMU input/output ID translation PCI agnostic, augment the MSI/IOMMU ACPI/OF ID mapping APIs to accept an input ID bus-specific parameter and apply the resulting changes to the device ID space provided by the Freescale FSL bus
  bus: fsl-mc: Add ACPI support for fsl-mc
  bus/fsl-mc: Refactor the MSI domain creation in the DPRC driver
  of/irq: Make of_msi_map_rid() PCI bus agnostic
  of/irq: make of_msi_map_get_device_domain() bus agnostic
  dt-bindings: arm: fsl: Add msi-map device-tree binding for fsl-mc bus
  of/device: Add input id to of_dma_configure()
  of/iommu: Make of_map_rid() PCI agnostic
  ACPI/IORT: Add an input ID to acpi_dma_configure()
  ACPI/IORT: Remove useless PCI bus walk
  ACPI/IORT: Make iort_msi_map_rid() PCI agnostic
  ACPI/IORT: Make iort_get_device_domain IRQ domain agnostic
  ACPI/IORT: Make iort_match_node_callback walk the ACPI namespace for NC
* for-next/trivial: Trivial fixes
  arm64: sigcontext.h: delete duplicated word
  arm64: ptrace.h: delete duplicated word
  arm64: pgtable-hwdef.h: delete duplicated words
2020-07-31  arm64: use IRQ_STACK_SIZE instead of THREAD_SIZE for irq stack  Maninder Singh  1  -1/+1
IRQ_STACK_SIZE can be made different from THREAD_SIZE, and since IRQ_STACK_SIZE is what the irq stack is allocated with, the same define should be used when printing information about the irq stack. Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1596196190-14141-1-git-send-email-maninder1.s@samsung.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  arm64/mm: save memory access in check_and_switch_context() fast switch path  Pingfan Liu  2  -8/+8
On arm64, smp_processor_id() reads a per-cpu `cpu_number` variable, using the per-cpu offset stored in the tpidr_el1 system register. In some cases we generate a per-cpu address with a sequence like:
  cpu_ptr = &per_cpu(ptr, smp_processor_id());
which potentially incurs a cache miss for both `cpu_number` and the in-memory `__per_cpu_offset` array. This can be written more optimally as:
  cpu_ptr = this_cpu_ptr(ptr);
which only needs the offset from tpidr_el1 and does not need to load from memory. The following two test cases show a small performance improvement measured on a 46-CPU Qualcomm machine with a 5.8.0-rc4 kernel.
Test 1 (about 0.3% improvement): b.sh contains 'make clean && make all -j138', run via 'perf stat --repeat 10 --null --sync sh b.sh'.
  - before this patch: Performance counter stats for 'sh b.sh' (10 runs): 298.62 +- 1.86 seconds time elapsed ( +- 0.62% )
  - after this patch: Performance counter stats for 'sh b.sh' (10 runs): 297.734 +- 0.954 seconds time elapsed ( +- 0.32% )
Test 2 (about 1.69% improvement): 'perf stat -r 10 perf bench sched messaging', then sum the total time of 'sched/messaging' manually.
  - before this patch: total 0.707 sec over 10 runs
  - after this patch: total 0.695 sec over 10 runs
Signed-off-by: Pingfan Liu <kernelfans@gmail.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/1594389852-19949-1-git-send-email-kernelfans@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
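A minimal sketch of the pattern change (illustrative variable and function names, not the patched check_and_switch_context() itself):

  #include <linux/percpu.h>
  #include <linux/smp.h>

  /* 'demo_counter' is a stand-in per-cpu variable for this illustration. */
  static DEFINE_PER_CPU(u64, demo_counter);

  static u64 *demo_slow_path(void)
  {
          /* Loads 'cpu_number' and the __per_cpu_offset[] entry from memory. */
          return &per_cpu(demo_counter, smp_processor_id());
  }

  static u64 *demo_fast_path(void)
  {
          /* Only needs the per-cpu offset already held in tpidr_el1. */
          return this_cpu_ptr(&demo_counter);
  }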
2020-07-30  arm64: sigcontext.h: delete duplicated word  Randy Dunlap  1  -1/+1
Drop the repeated word "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-4-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  arm64: ptrace.h: delete duplicated word  Randy Dunlap  1  -1/+1
Drop the repeated word "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-3-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  arm64: pgtable-hwdef.h: delete duplicated words  Randy Dunlap  1  -2/+2
Drop the repeated words "at" and "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-2-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64: enable time namespace support  Andrei Vagin  1  -0/+1
CONFIG_TIME_NS depends on GENERIC_VDSO_TIME_NS. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-7-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: Restrict splitting VVAR VMA  Andrei Vagin  1  -0/+13
Forbid splitting the VVAR VMA, resulting in a stricter ABI and reducing the number of corner cases to consider while working further on VDSO time namespace support. As the offset from the timens page to the VVAR page is computed at compile time, the pages in VVAR should stay together and not be partially mremap()'ed. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-6-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: Handle faults on timens page  Andrei Vagin  1  -4/+52
If a task belongs to a time namespace then the VVAR page which contains the system wide VDSO data is replaced with a namespace specific page which has the same layout as the VVAR page. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-5-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: Add time namespace page  Andrei Vagin  6  -5/+46
Allocate the time namespace page among the VVAR pages. Provide the __arch_get_timens_vdso_data() helper for VDSO code to get the code-relative position of VVARs on that special page. If a task belongs to a time namespace then the VVAR page which contains the system wide VDSO data is replaced with a namespace specific page which has the same layout as the VVAR page. That page has vdso_data->seq set to 1 to enforce the slow path and vdso_data->clock_mode set to VCLOCK_TIMENS to enforce the time namespace handling path. The extra check in the case that vdso_data->seq is odd, e.g. a concurrent update of the VDSO data is in progress, does not really affect regular tasks which are not part of a time namespace, as the task simply spin-waits for the update to finish and vdso_data->seq to become even again. If a time namespace task hits that code path, it invokes the corresponding time getter function which retrieves the real VVAR page, reads host time and then adds the offset for the requested clock which is stored in the special VVAR page. The time-namespace page isn't allocated on !CONFIG_TIME_NAMESPACE, but the vma is the same size, which simplifies criu/vdso migration between different kernel configs. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200624083321.144975-4-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: Zap vvar pages when switching to a time namespace  Andrei Vagin  1  -0/+31
The order of vvar pages depends on whether a task belongs to the root time namespace or not. In the root time namespace, a task doesn't have a per-namespace page. In a non-root namespace, the VVAR page which contains the system-wide VDSO data is replaced with a namespace specific page that contains clock offsets. Whenever a task changes its namespace, the VVAR page tables are cleared and then they will be re-faulted with a corresponding layout. A task can switch its time namespace only if its ->mm isn't shared with another task. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Reviewed-by: Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20200624083321.144975-3-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: use the fault callback to map vvar pages  Andrei Vagin  1  -10/+15
Currently the vdso has no awareness of time namespaces, which may apply distinct offsets to processes in different namespaces. To handle this within the vdso, we'll need to expose a per-namespace data page. As a preparatory step, this patch separates the vdso data page from the code pages, and has it faulted in via its own fault callback. Subsequent patches will extend this to support distinct pages per time namespace. The vvar vma has to be installed with the VM_PFNMAP flag to handle faults via its vma fault callback. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-2-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64: Reserve HWCAP2_MTE as (1 << 18)  Catalin Marinas  3  -0/+3
While MTE is not supported in the upstream kernel yet, add a comment that HWCAP2_MTE as (1 << 18) is reserved. Glibc makes use of it for the resolving (ifunc) of the MTE-safe string routines. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-23  arm64/entry: deduplicate SW PAN entry/exit routines  Ard Biesheuvel  1  -48/+47
Factor the 12 copies of the SW PAN entry and exit code into callable subroutines, and use alternatives patching to either emit a 'bl' instruction to call them, or a NOP if h/w PAN is found to be available at runtime. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200721083315.4816-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-22  arm64: s/AMEVTYPE/AMEVTYPER  Vladimir Murzin  2  -36/+36
Activity Monitor Event Type Registers are named AMEVTYPER{0,1}<n>. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200721091259.102756-1-vladimir.murzin@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-21  arm64: perf: Expose some new events via sysfs  Shaokun Zhang  2  -0/+46
Some new PMU events can be detected via PMCEID1_EL0, but they are not currently listed. Let's expose them through sysfs. Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1595328573-12751-2-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-07-21  arm64: Reduce the number of header files pulled into vmlinux.lds.S  Will Deacon  6  -7/+10
Although vmlinux.lds.S smells like an assembly file and is compiled with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to create our linker script. This means that any assembly macros defined by headers that it includes will result in a helpful link error: | aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error In preparation for an arm64-private asm/rwonce.h implementation, which will end up pulling assembly macros into linux/compiler.h, reduce the number of headers we include directly and transitively in vmlinux.lds.S Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-07-21  alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()  Will Deacon  2  -13/+13
In preparation for removing smp_read_barrier_depends() altogether, move the Alpha code over to using smp_rmb() and smp_mb() directly. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-07-21  asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h'  Will Deacon  4  -0/+4
Now that 'smp_read_barrier_depends()' has gone the way of the Norwegian Blue, drop the inclusion of <asm/barrier.h> in 'asm-generic/rwonce.h'. This requires fixups to some architecture vdso headers which were previously relying on 'asm/barrier.h' coming in via 'linux/compiler.h'. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-07-21  alpha: Override READ_ONCE() with barriered implementation  Will Deacon  2  -54/+40
Rather than relying on the core code to use smp_read_barrier_depends() as part of the READ_ONCE() definition, instead override __READ_ONCE() in the Alpha code so that it generates the required mb(), and then implement smp_load_acquire() using the new macro to avoid redundant back-to-back barriers from the generic implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-07-20  arm64: perf: Add cap_user_time_short  Peter Zijlstra  1  -5/+7
This completes the ARM64 cap_user_time support. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-7-leo.yan@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-07-20  arm64: perf: Only advertise cap_user_time for arch_timer  Peter Zijlstra  1  -6/+13
When sched_clock is running on anything other than arch_timer, don't advertise cap_user_time*. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-5-leo.yan@linaro.org Requested-by: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-07-20  arm64: perf: Implement correct cap_user_time  Peter Zijlstra  1  -9/+29
As reported by Leo, the existing implementation is broken when the clock and counter don't intersect at 0. Use the sched_clock's struct clock_read_data information to correctly implement cap_user_time and cap_user_time_zero. Note that the ARM64 counter is architecturally only guaranteed to be 56 bits wide (implementations are allowed to be wider) and the existing perf ABI cannot deal with wrap-around. This implementation should also be faster than the old one, since we no longer need to recompute mult and shift all the time. [leoyan: Use mul_u64_u32_shr() to convert cyc to ns to avoid overflow] Reported-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-4-leo.yan@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
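For reference, a small sketch (placeholder names, not the perf driver code itself) of the conversion mentioned in the bracketed note above:

  #include <linux/math64.h>

  /*
   * Convert a counter delta to nanoseconds using a mult/shift pair such as
   * the one captured in a sched_clock clock_read_data snapshot.
   */
  static u64 demo_cyc_to_ns(u64 cyc_now, u64 cyc_base, u32 mult, u32 shift)
  {
          /* (delta * mult) >> shift, without overflowing the 64x32 multiply. */
          return mul_u64_u32_shr(cyc_now - cyc_base, mult, shift);
  }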
2020-07-20  arm64: perf: Correct the event index in sysfs  Shaokun Zhang  1  -5/+8
When the PMU event ID is equal to or greater than 0x4000, it is reduced by 0x4000, so the value shown in sysfs is not the raw event number. Let's correct this and show the raw event ID.
Before this patch:
  cat /sys/bus/event_source/devices/armv8_pmuv3_0/events/sample_feed
  event=0x001
After this patch:
  cat /sys/bus/event_source/devices/armv8_pmuv3_0/events/sample_feed
  event=0x4001
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/1592487344-30555-3-git-send-email-zhangshaokun@hisilicon.com [will: fixed formatting of 'if' condition] Signed-off-by: Will Deacon <will@kernel.org>
2020-07-15  arm64: tlb: Use the TLBI RANGE feature in arm64  Zhenyu Ye  2  -29/+131
Add the __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range(). When the CPU supports the TLBI RANGE feature, the minimum range granularity is decided by 'scale', so in some cases we cannot flush all pages with one instruction. For example, when pages = 0xe81a, let's start 'scale' from the maximum and find the right 'num' for each 'scale':
  1. scale = 3: we can flush no pages because the minimum range is 2^(5*3 + 1) = 0x10000.
  2. scale = 2: the minimum range is 2^(5*2 + 1) = 0x800, so we can flush 0xe800 pages this time; num = 0xe800/0x800 - 1 = 0x1c. Remaining pages: 0x1a.
  3. scale = 1: the minimum range is 2^(5*1 + 1) = 0x40, no page can be flushed.
  4. scale = 0: we flush the remaining 0x1a pages; num = 0x1a/0x2 - 1 = 0xc.
However, in most scenarios pages = 1 when flush_tlb_range() is called, and starting from scale = 3 (or some other value such as scale = ilog2(pages)) would incur extra overhead. So instead we increase 'scale' from 0 to the maximum; the flush order is exactly the opposite of the example above. A standalone model of the decomposition follows this entry. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200715071945.897-4-yezhenyu2@huawei.com [catalin.marinas@arm.com: removed unnecessary masks in __TLBI_VADDR_RANGE] [catalin.marinas@arm.com: __TLB_RANGE_NUM subtracts 1] [catalin.marinas@arm.com: minor adjustments to the comments] [catalin.marinas@arm.com: introduce system_supports_tlb_range()] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
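A standalone model of that decomposition (plain userspace C for illustration, not the kernel implementation; it assumes the TLBI RANGE encoding in which a num field value n at a given scale covers (n + 1) * 2^(5*scale + 1) pages):

  #include <stdio.h>

  int main(void)
  {
          unsigned long pages = 0xe81a;   /* the example value from the text */
          int scale;

          if (pages & 1)
                  printf("one odd page flushed with a plain (non-range) TLBI\n");

          /*
           * Each scale consumes the 5-bit field of 'pages' starting at bit
           * 5*scale + 1; a field value f is encoded as num = f - 1 and covers
           * f << (5*scale + 1) pages.
           */
          for (scale = 0; scale <= 3; scale++) {
                  unsigned long field = (pages >> (5 * scale + 1)) & 0x1f;

                  if (field)
                          printf("scale=%d num=%#lx flushes %#lx pages\n",
                                 scale, field - 1, field << (5 * scale + 1));
          }
          return 0;
  }

Running it for pages = 0xe81a prints only the scale = 0 and scale = 2 steps from the example, with num = 0xc and num = 0x1c respectively.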
2020-07-15  arm64: enable tlbi range instructions  Zhenyu Ye  2  -0/+21
The TLBI RANGE feature introduces new assembly instructions that are only supported by binutils >= 2.30. Add the necessary Kconfig logic to allow this to be enabled and pass '-march=armv8.4-a' to KBUILD_CFLAGS. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200715071945.897-3-yezhenyu2@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-15  arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature  Zhenyu Ye  3  -1/+15
ARMv8.4-TLBI provides TLBI invalidation instructions that apply to a range of input addresses. This patch detects this feature. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200715071945.897-2-yezhenyu2@huawei.com [catalin.marinas@arm.com: some renaming for consistency] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-15  arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs  Anshuman Khandual  3  -2/+42
Currently the 'hugetlb_cma=' command line argument does not create a CMA area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it just ends up with the following warning message, because hugetlb_cma_reserve() never gets called for these huge page sizes.
  [ 64.255669] hugetlb_cma: the option isn't supported by current arch
This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped in CONFIG_CMA. The call site for arm64_hugetlb_cma_reserve() is also protected, as <asm/hugetlb.h> is conditionally included and hence cannot contain a stub for the inverse config, i.e. !(CONFIG_HUGETLB_PAGE && CONFIG_CMA). Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Barry Song <song.bao.hua@hisilicon.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1593578521-24672-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-14  arm64: stacktrace: Move export for save_stack_trace_tsk()  Mark Brown  1  -1/+1
Due to refactoring way back in bb53c820c5b0f1 ("arm64: stacktrace: avoid listing stacktrace functions in stacktrace") the EXPORT_SYMBOL_GPL() for save_stack_trace_tsk() is at the end of __save_stack_trace() rather than the function it exports. Move it to the expected location. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200710182402.50473-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-14  arm64/acpi: disallow writeable AML opregion mapping for EFI code regions  Ard Biesheuvel  1  -0/+9
Given that the contents of EFI runtime code and data regions are provided by the firmware, as well as the DSDT, it is not unimaginable that AML code exists today that accesses EFI runtime code regions using a SystemMemory OpRegion. There is nothing fundamentally wrong with that, but since we take great care to ensure that executable code is never mapped writeable and executable at the same time, we should not permit AML to create writable mappings. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20200626155832.2323789-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-14  arm64/acpi: disallow AML memory opregions to access kernel memory  Ard Biesheuvel  2  -14/+67
AML uses SystemMemory opregions to allow AML handlers to access MMIO registers of, e.g., GPIO controllers, or access reserved regions of memory that are owned by the firmware. Currently, we also allow AML access to memory that is owned by the kernel and mapped via the linear region, which does not seem to be supported by a valid use case, and exposes the kernel's internal state to AML methods that may be buggy and exploitable. On arm64, ACPI support requires booting in EFI mode, and so we can cross reference the requested region against the EFI memory map, rather than just do a minimal check on the first page. So let's only permit regions to be remapped by the ACPI core if
  - they don't appear in the EFI memory map at all (which is the case for most MMIO), or
  - they are covered by a single region in the EFI memory map, which is not of a type that describes memory that is given to the kernel at boot.
Reported-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20200626155832.2323789-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
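A rough sketch of that policy, assuming a hypothetical lookup_efi_md() helper that returns the single EFI memory descriptor covering an address (or NULL); this is illustrative pseudo-kernel code, not the actual acpi_os_ioremap() change:

  #include <linux/efi.h>

  /* Hypothetical helper for this sketch only. */
  extern efi_memory_desc_t *lookup_efi_md(phys_addr_t phys);

  static bool demo_aml_may_remap(phys_addr_t phys, size_t size)
  {
          efi_memory_desc_t *md = lookup_efi_md(phys);

          /* Not in the EFI memory map at all: typically plain MMIO. */
          if (!md)
                  return true;

          /* The request must be covered by this single region. */
          if (phys + size > md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT))
                  return false;

          /* Refuse types that describe memory handed to the kernel at boot. */
          switch (md->type) {
          case EFI_LOADER_CODE:
          case EFI_LOADER_DATA:
          case EFI_BOOT_SERVICES_CODE:
          case EFI_BOOT_SERVICES_DATA:
          case EFI_CONVENTIONAL_MEMORY:
          case EFI_PERSISTENT_MEMORY:
                  return false;
          default:
                  return true;
          }
  }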
2020-07-10  arm64: tlb: don't set the ttl value in flush_tlb_page_nosync  Zhenyu Ye  1  -3/+2
flush_tlb_page_nosync() may be called from pmd level, so we cannot set ttl = 3 here. The call stack is as follows:
  pmdp_set_access_flags
    ptep_set_access_flags
      flush_tlb_fix_spurious_fault
        flush_tlb_page
          flush_tlb_page_nosync
Fixes: e735b98a5fe0 ("arm64: Add tlbi_user_level TLB invalidation helper") Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200710094158.468-1-yezhenyu2@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64/cpufeature: Validate feature bits spacing in arm64_ftr_regs[]  Anshuman Khandual  1  -3/+44
The arm64_feature_bits for a register in arm64_ftr_regs[] are in descending order as per their shift values. Validate that these feature bits are defined correctly and do not overlap with each other. This check protects against any inadvertent erroneous changes to the register definitions. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1594131793-9498-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64: Shift the __tlbi_level() indentation left  Catalin Marinas  1  -22/+21
This is for consistency with the other __tlbi macros in this file. No functional change. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64: tlb: Set the TTL field in flush_*_tlb_range  Zhenyu Ye  1  -0/+10
This patch implements flush_{pmd|pud}_tlb_range() on arm64 by calling __flush_tlb_range() with the corresponding stride and tlb_level values. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200625080314.230-7-yezhenyu2@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64: tlb: Set the TTL field in flush_tlb_range  Zhenyu Ye  2  -7/+36
This patch uses the cleared_* in struct mmu_gather to set the TTL field in flush_tlb_range(). Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200625080314.230-6-yezhenyu2@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64: Add tlbi_user_level TLB invalidation helper  Zhenyu Ye  1  -6/+12
Add a level-hinted parameter to __tlbi_user, which only gets used if ARMv8.4-TTL gets detected. ARMv8.4-TTL provides the TTL field in the tlbi instruction to indicate the level of the translation table walk holding the leaf entry for the address that is being invalidated. This patch sets the default level value of flush_tlb_range() to 0, which will be updated in future patches, and sets the ttl value of flush_tlb_page_nosync() to 3 because it is only called to flush a single pte page. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200625080314.230-4-yezhenyu2@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
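A rough sketch of how such a level hint can be folded into a TLBI argument (an assumption-laden illustration, not the arm64 header itself: it assumes the ARMv8.4-TTL field sits in bits [47:44] of the address operand, with the granule in the top two bits and the walk level in the bottom two):

  #include <linux/bits.h>
  #include <linux/bitfield.h>

  #define SKETCH_TLBI_TTL_MASK    GENMASK_ULL(47, 44)

  /* Fold a 4KB-granule level hint into a TLBI VA argument (illustrative). */
  static inline u64 sketch_tlbi_arg(u64 addr, unsigned int level, bool has_ttl)
  {
          if (has_ttl && level) {
                  u64 ttl = (1UL << 2) | (level & 3);   /* TG = 4KB, plus level */

                  addr &= ~SKETCH_TLBI_TTL_MASK;
                  addr |= FIELD_PREP(SKETCH_TLBI_TTL_MASK, ttl);
          }
          return addr;
  }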
2020-07-07  arm64: Add level-hinted TLB invalidation helper  Marc Zyngier  2  -0/+54
Add a level-hinted TLB invalidation helper that only gets used if ARMv8.4-TTL gets detected. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-07  arm64: Document SW reserved PTE/PMD bits in Stage-2 descriptors  Marc Zyngier  1  -0/+2
Advertise bits [58:55] as reserved for SW in the S2 descriptors. Reviewed-by: Andrew Scull <ascull@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-07  arm64: Detect the ARMv8.4 TTL feature  Marc Zyngier  3  -1/+14
In order to reduce the cost of TLB invalidation, the ARMv8.4 TTL feature allows TLB invalidation instructions to be issued with a level hint, allowing for quicker invalidation. Let's detect the feature for now. Further patches will implement its actual usage. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-03  arm64/mm: Redefine CONT_{PTE, PMD}_SHIFT  Gavin Shan  2  -10/+10
Currently, the values of CONT_{PTE, PMD}_SHIFT are offsets from the standard {PAGE, PMD}_SHIFT. In turn, we have to add {PAGE, PMD}_SHIFT when using CONT_{PTE, PMD}_SHIFT in the function hugetlbpage_init(), which is a bit confusing. This redefines CONT_{PTE, PMD}_SHIFT with {PAGE, PMD}_SHIFT included, so that the latter values needn't be added when using the former ones in hugetlbpage_init(). Note that the values of CONT_{PTES, PMDS} are unchanged. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lkml.org/lkml/2020/5/6/190 Link: https://lore.kernel.org/r/20200630062428.194235-1-gshan@redhat.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
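A tiny worked example of the redefinition, using assumed 4K-granule values (16 contiguous PTEs) rather than figures quoted from the patch:

  #include <stdio.h>

  #define DEMO_PAGE_SHIFT          12
  #define DEMO_CONT_PTE_SHIFT_OLD  4                       /* log2(16 contiguous PTEs) */
  #define DEMO_CONT_PTE_SHIFT_NEW  (4 + DEMO_PAGE_SHIFT)   /* now folds in PAGE_SHIFT */

  int main(void)
  {
          /* Before: callers such as hugetlbpage_init() added PAGE_SHIFT themselves. */
          unsigned long old_way = 1UL << (DEMO_CONT_PTE_SHIFT_OLD + DEMO_PAGE_SHIFT);
          /* After: the define is used directly; the resulting size is unchanged. */
          unsigned long new_way = 1UL << DEMO_CONT_PTE_SHIFT_NEW;

          printf("contiguous PTE span: %#lx == %#lx bytes\n", old_way, new_way);
          return 0;
  }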
2020-07-03  arm64/defconfig: Enable CONFIG_KEXEC_FILE  Bhupesh Sharma  1  -0/+1
The kexec_file_load() syscall interface is now supported on the arm64 architecture as well, via commits 3751e728cef2 ("arm64: kexec_file: add crash dump support") and 3ddd9992a590 ("arm64: enable KEXEC_FILE config"). This patch enables CONFIG_KEXEC_FILE by default in the arm64 defconfig, so that user-space tools like kexec-tools can use it as the default interface for kexec/kdump on arm64. Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: kexec@lists.infradead.org Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1586212300-30797-1-git-send-email-bhsharma@redhat.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-03  arm64/cpufeature: Replace all open bits shift encodings with macros  Anshuman Khandual  2  -26/+55
There are many open bits shift encodings for various CPU ID registers that are scattered across cpufeature. This replaces them with register specific sensible macro definitions. This should not have any functional change. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1593748297-1965-5-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-03  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR2 register  Anshuman Khandual  2  -0/+14
Enable the EVT, BBM, TTL, IDS, ST, NV and CCIDX feature bits in the ID_AA64MMFR2 register as per the ARM DDI 0487F.a specification. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1593748297-1965-4-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-03  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR1 register  Anshuman Khandual  2  -0/+8
Enable the ETS, TWED, XNX and SPECSEI feature bits in the ID_AA64MMFR1 register as per the ARM DDI 0487F.a specification. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1593748297-1965-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-03  arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR0 register  Anshuman Khandual  2  -0/+6
Enable the ECV, FGT and EXS feature bits in the ID_AA64MMFR0 register as per the ARM DDI 0487F.a specification. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1593748297-1965-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-02  arm64: Document sysctls for emulated deprecated instructions  Mark Brown  1  -2/+6
We have support for emulating a number of deprecated instructions in the kernel, with individual Kconfig options enabling this support per instruction. In addition to the Kconfig options we also provide runtime control via sysctls, but this is not currently mentioned in the Kconfig help text and so is not very discoverable for users. This is particularly important for SWP/SWPB, since this emulation is disabled by default at runtime and must be enabled via the sysctl, causing considerable frustration for users who have enabled the config option and are then confused to find that the instruction is still faulting. Add a reference to the sysctls in the help text for each of the config options, noting that SWP/SWPB is disabled by default, to improve the user experience. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200625131507.32334-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>