path: root/arch/arm64
2018-04-04  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 10 files, -355/+886)

Pull crypto updates from Herbert Xu:
 "API:
   - add AEAD support to crypto engine
   - allow batch registration in simd

  Algorithms:
   - add CFB mode
   - add speck block cipher
   - add sm4 block cipher
   - new test case for crct10dif
   - improve scheduling latency on ARM
   - scatter/gather support to gcm in aesni
   - convert x86 crypto algorithms to skcipher

  Drivers:
   - hmac(sha224/sha256) support in inside-secure
   - aes gcm/ccm support in stm32
   - stm32mp1 support in stm32
   - ccree driver from staging tree
   - gcm support over QI in caam
   - add ks-sa hwrng driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (212 commits)
  crypto: ccree - remove unused enums
  crypto: ahash - Fix early termination in hash walk
  crypto: brcm - explicitly cast cipher to hash type
  crypto: talitos - don't leak pointers to authenc keys
  crypto: qat - don't leak pointers to authenc keys
  crypto: picoxcell - don't leak pointers to authenc keys
  crypto: ixp4xx - don't leak pointers to authenc keys
  crypto: chelsio - don't leak pointers to authenc keys
  crypto: caam/qi - don't leak pointers to authenc keys
  crypto: caam - don't leak pointers to authenc keys
  crypto: lrw - Free rctx->ext with kzfree
  crypto: talitos - fix IPsec cipher in length
  crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()
  crypto: doc - clarify hash callbacks state machine
  crypto: api - Keep failed instances alive
  crypto: api - Make crypto_alg_lookup static
  crypto: api - Remove unused crypto_type lookup function
  crypto: chelsio - Remove declaration of static function from header
  crypto: inside-secure - hmac(sha224) support
  crypto: inside-secure - hmac(sha256) support
  ...
2018-04-04  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 46 files, -774/+1433)

Pull arm64 updates from Will Deacon:
 "Nothing particularly stands out here, probably because people were
  tied up with spectre/meltdown stuff last time around. Still, the main
  pieces are:
   - Rework of our CPU features framework so that we can whitelist CPUs
     that don't require kpti even in a heterogeneous system
   - Support for the IDC/DIC architecture extensions, which allow us to
     elide instruction and data cache maintenance when writing out
     instructions
   - Removal of the large memory model which resulted in suboptimal
     codegen by the compiler and increased the use of literal pools,
     which could potentially be used as ROP gadgets since they are
     mapped as executable
   - Rework of forced signal delivery so that the siginfo_t is
     well-formed and handling of show_unhandled_signals is consolidated
     and made consistent between different fault types
   - More siginfo cleanup based on the initial patches from Eric
     Biederman
   - Workaround for Cortex-A55 erratum #1024718
   - Some small ACPI IORT updates and cleanups from Lorenzo Pieralisi
   - Misc cleanups and non-critical fixes"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (70 commits)
  arm64: uaccess: Fix omissions from usercopy whitelist
  arm64: fpsimd: Split cpu field out from struct fpsimd_state
  arm64: tlbflush: avoid writing RES0 bits
  arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h
  arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h
  arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
  arm64: fpsimd: include <linux/init.h> in fpsimd.h
  drivers/perf: arm_pmu_platform: do not warn about affinity on uniprocessor
  perf: arm_spe: include linux/vmalloc.h for vmap()
  Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)"
  arm64: cpufeature: Avoid warnings due to unused symbols
  arm64: Add work around for Arm Cortex-A55 Erratum 1024718
  arm64: Delay enabling hardware DBM feature
  arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35
  arm64: capabilities: Handle shared entries
  arm64: capabilities: Add support for checks based on a list of MIDRs
  arm64: Add helpers for checking CPU MIDR against a range
  arm64: capabilities: Clean up midr range helpers
  arm64: capabilities: Change scope of VHE to Boot CPU feature
  ...
2018-04-04  Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -5/+4)

Pull irq updates from Thomas Gleixner:
 "The usual pile of boring changes:
   - Consolidate tasklet functions to share code instead of duplicating it
   - The first step for making the low level entry handler management on
     multi-platform kernels generic
   - A new sysfs file which allows retrieving the wakeup state of
     interrupts.
   - Ensure that the interrupt thread follows the effective affinity and
     not the programmed affinity to avoid cross core wakeups.
   - Two new interrupt controller drivers (Microsemi Ocelot and
     Qualcomm PDC)
   - Fix the wakeup path clock handling for Renesas interrupt chips.
   - Rework the boot time register reset for ARM GIC-V2/3
   - Better suspend/resume support for ARM GIC-v3/ITS
   - Add missing locking to the ARM GIC set_type() callback
   - Small fixes for the irq simulator code
   - SPDX identifiers for the irq core code and removal of boilerplate
   - Small cleanups all over the place"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
  openrisc: Set CONFIG_MULTI_IRQ_HANDLER
  arm64: Set CONFIG_MULTI_IRQ_HANDLER
  genirq: Make GENERIC_IRQ_MULTI_HANDLER depend on !MULTI_IRQ_HANDLER
  irqchip/gic: Take lock when updating irq type
  irqchip/gic: Update supports_deactivate static key to modern api
  irqchip/gic-v3: Ensure GICR_CTLR.EnableLPI=0 is observed before enabling
  irqchip: Add a driver for the Microsemi Ocelot controller
  dt-bindings: interrupt-controller: Add binding for the Microsemi Ocelot interrupt controller
  irqchip/gic-v3: Probe for SCR_EL3 being clear before resetting AP0Rn
  irqchip/gic-v3: Don't try to reset AP0Rn
  irqchip/gic-v3: Do not check trigger configuration of partitionned LPIs
  genirq: Remove license boilerplate/references
  genirq: Add missing SPDX identifiers
  genirq/matrix: Cleanup SPDX identifier
  genirq: Cleanup top of file comments
  genirq: Pass desc to __irq_free instead of irq number
  irqchip/gic-v3: Loudly complain about the use of IRQ_TYPE_NONE
  irqchip/gic: Loudly complain about the use of IRQ_TYPE_NONE
  RISC-V: Move to the new GENERIC_IRQ_MULTI_HANDLER handler
  genirq: Add CONFIG_GENERIC_IRQ_MULTI_HANDLER
  ...
2018-04-04  arm64: Set CONFIG_MULTI_IRQ_HANDLER  (Palmer Dabbelt; 1 file, -0/+4)
arm has an optional MULTI_IRQ_HANDLER, which arm64 copied but didn't make optional. The multi irq handler infrastructure has been copied to generic code selectable with a new config symbol. That symbol can be selected by randconfig builds and can cause build breakage. Introduce CONFIG_MULTI_IRQ_HANDLER as an intermediate step which prevents the core config symbol from being selected. The arm64 local config symbol will be removed once arm64 gets converted to the generic code. Signed-off-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Link: https://lkml.kernel.org/r/20180404043130.31277-2-palmer@sifive.com
2018-04-03  Merge tag 'mmc-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc  (Linus Torvalds; 2 files, -3/+0)
Pull MMC updates from Ulf Hansson:
 "MMC core:
   - Export host capabilities through debugfs
   - Export card RCA register via sysfs
   - Improve card initializing sequence while enabling 4-bit bus
   - Export a function to enable/disable wakeup for card detect IRQ

  MMC host:
   - dw_mmc: Add support for new hi3798cv200 variant
   - dw_mmc: Remove support for some deprecated DT properties
   - mediatek: Add support for new variant used on MT7622 SoC
   - sdhci: Improve wakeup support for SDIO IRQs
   - sdhci: Improve wakeup support for card detect IRQs
   - sdhci-omap: Add tuning support
   - sdhci_omap: Add UHS-I mode support
   - sunxi: Prepare for runtime PM support via a few re-factorings
   - tmio: deprecate "toshiba,mmc-wrprotect-disable" DT property
   - tmio/renesas_sdhi: Consolidate code supporting write protect
   - tmio: Improve DMA vs PIO handling
   - tmio: Add support for IP-builtin card detection logic"

* tag 'mmc-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (55 commits)
  mmc: renesas_sdhi: replace EXT_ACC with HOST_MODE
  mmc: update sdio_claim_irq documentation
  mmc: Export host capabilities to debugfs.
  mmc: core: Disable HPI for certain Micron (Numonyx) eMMC cards
  mmc: block: fix updating ext_csd caches on ioctl call
  mmc: sunxi: Set our device drvdata earlier
  mmc: sunxi: Move the reset deassertion before enabling the clocks
  mmc: sunxi: Move resources management to separate functions
  mmc: dw_mmc: add support for hi3798cv200 specific extensions of dw-mshc
  dt-bindings: mmc: add bindings for hi3798cv200-dw-mshc
  mmc: core: Export card RCA register via sysfs
  mmc: renesas_sdhi: fix WP detection
  mmc: core: Use memdup_user() rather than duplicating its implementation
  mmc: dw_mmc-rockchip: correct property names in debug
  mmc: sd: Remove redundant err assignment from mmc_read_switch
  mmc: sdio: Check the return value of sdio_enable_4bit_bus
  mmc: core: Don't try UHS-I mode if 4-bit mode isn't supported
  arm64: dts: hi3660: remove 'num-slots' property for dwmmc
  ARM: dts: lpc18xx: remove 'num-slots' property for dwmmc
  arm64: dts: stratix10: remove 'num-slots' property for dwmmc
  ...
2018-04-02  Merge branch 'syscalls-next' of git://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux  (Linus Torvalds; 1 file, -1/+1)

Pull removal of in-kernel calls to syscalls from Dominik Brodowski:
 "System calls are interaction points between userspace and the kernel.
  Therefore, system call functions such as sys_xyzzy() or
  compat_sys_xyzzy() should only be called from userspace via the
  syscall table, but not from elsewhere in the kernel.

  At least on 64-bit x86, it will likely be a hard requirement from
  v4.17 onwards to not call system call functions in the kernel: It is
  better to use a different calling convention for system calls there,
  where struct pt_regs is decoded on-the-fly in a syscall wrapper which
  then hands processing over to the actual syscall function. This means
  that only those parameters which are actually needed for a specific
  syscall are passed on during syscall entry, instead of filling in six
  CPU registers with random user space content all the time (which may
  cause serious trouble down the call chain). Those x86-specific patches
  will be pushed through the x86 tree in the near future.

  Moreover, rules on how data may be accessed may differ between kernel
  data and user data. This is another reason why calling sys_xyzzy() is
  generally a bad idea, and -- at most -- acceptable in arch-specific
  code.

  This patchset removes all in-kernel calls to syscall functions in the
  kernel with the exception of arch/. On top of this, it cleans up the
  three places where many syscalls are referenced or prototyped, namely
  kernel/sys_ni.c, include/linux/syscalls.h and include/linux/compat.h"

* 'syscalls-next' of git://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux: (109 commits)
  bpf: whitelist all syscalls for error injection
  kernel/sys_ni: remove {sys_,sys_compat} from cond_syscall definitions
  kernel/sys_ni: sort cond_syscall() entries
  syscalls/x86: auto-create compat_sys_*() prototypes
  syscalls: sort syscall prototypes in include/linux/compat.h
  net: remove compat_sys_*() prototypes from net/compat.h
  syscalls: sort syscall prototypes in include/linux/syscalls.h
  kexec: move sys_kexec_load() prototype to syscalls.h
  x86/sigreturn: use SYSCALL_DEFINE0
  x86: fix sys_sigreturn() return type to be long, not unsigned long
  x86/ioport: add ksys_ioperm() helper; remove in-kernel calls to sys_ioperm()
  mm: add ksys_readahead() helper; remove in-kernel calls to sys_readahead()
  mm: add ksys_mmap_pgoff() helper; remove in-kernel calls to sys_mmap_pgoff()
  mm: add ksys_fadvise64_64() helper; remove in-kernel call to sys_fadvise64_64()
  fs: add ksys_fallocate() wrapper; remove in-kernel calls to sys_fallocate()
  fs: add ksys_p{read,write}64() helpers; remove in-kernel calls to syscalls
  fs: add ksys_truncate() wrapper; remove in-kernel calls to sys_truncate()
  fs: add ksys_sync_file_range helper(); remove in-kernel calls to syscall
  kernel: add ksys_setsid() helper; remove in-kernel call to sys_setsid()
  kernel: add ksys_unshare() helper; remove in-kernel calls to sys_unshare()
  ...
2018-04-02  Merge branch 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 4 files, -2/+52)

Pull EFI updates from Ingo Molnar:
 "The main EFI changes in this cycle were:
   - Fix the apple-properties code (Andy Shevchenko)
   - Add WARN() on arm64 if UEFI Runtime Services corrupt the reserved
     x18 register (Ard Biesheuvel)
   - Use efi_switch_mm() on x86 instead of manipulating %cr3 directly
     (Sai Praneeth)
   - Fix early memremap leak in ESRT code (Ard Biesheuvel)
   - Switch to L"xxx" notation for wide string literals (Ard Biesheuvel)
   - ... plus misc other cleanups and bugfixes"

* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/efi: Use efi_switch_mm() rather than manually twiddling with %cr3
  x86/efi: Replace efi_pgd with efi_mm.pgd
  efi: Use string literals for efi_char16_t variable initializers
  efi/esrt: Fix handling of early ESRT table mapping
  efi: Use efi_mm in x86 as well as ARM
  efi: Make const array 'apple' static
  efi/apple-properties: Use memremap() instead of ioremap()
  efi: Reorder pr_notice() with add_device_randomness() call
  x86/efi: Replace GFP_ATOMIC with GFP_KERNEL in efi_query_variable_store()
  efi/arm64: Check whether x18 is preserved by runtime services calls
  efi/arm*: Stop printing addresses of virtual mappings
  efi/apple-properties: Remove redundant attribute initialization from unmarshal_key_value_pairs()
  efi/arm*: Only register page tables when they exist
2018-04-02  mm: add ksys_mmap_pgoff() helper; remove in-kernel calls to sys_mmap_pgoff()  (Dominik Brodowski; 1 file, -1/+1)
Using this helper allows us to avoid the in-kernel calls to the sys_mmap_pgoff() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_mmap_pgoff(). This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mm@kvack.org Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
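For illustration, the ksys_ pattern described above amounts to roughly the following sketch (not a verbatim quote of the patch): the helper carries the implementation, the syscall entry point becomes a thin wrapper, and in-kernel callers (such as the compat mmap paths) call the helper instead of the syscall.

    /* Sketch: the former sys_mmap_pgoff() body moves into the helper. */
    unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
                                  unsigned long prot, unsigned long flags,
                                  unsigned long fd, unsigned long pgoff)
    {
            /* ... original implementation ... */
    }

    /* The syscall itself is now just a wrapper around the helper. */
    SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned long, len,
                    unsigned long, prot, unsigned long, flags,
                    unsigned long, fd, unsigned long, pgoff)
    {
            return ksys_mmap_pgoff(addr, len, prot, flags, fd, pgoff);
    }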
2018-03-28  Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds; 2 files, -17/+7)

Pull ARM SoC fixes from Arnd Bergmann:
 "Here are a couple of last-minute fixes for 4.16, mostly for
  regressions. As usual, the majority are device tree changes:
   - USB 3 support on rk3399 didn't work and is being reverted for now
   - One fix for an old suspend/resume bug on rk3399
   - A few regulator related fixes on Banana Pi M2, and on imx7d-sdb
   - A boot regression fix for all Aspeed SoCs failing to find their
     memory
   - One more dtc warning fix

  The other changes are:
   - A few updates to the MAINTAINERS file
   - A revert for an incorrect orion5x cleanup
   - Two power management fixes for OMAP"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: OMAP: Fix SRAM W+X mapping
  ARM: dts: aspeed: Add default memory node
  mailmap: Update email address for Gregory CLEMENT
  ARM: davinci: fix the GPIO lookup for omapl138-hawk
  MAINTAINERS: Update Tegra IOMMU maintainer
  ARM: dts: imx7d-sdb: Fix regulator-usb-otg2-vbus node name
  ARM: ux500: Fix PMU IRQ regression
  ARM: dts: rockchip: Add missing #sound-dai-cells on rk3288
  Revert "arm64: dts: rockchip: add usb3-phy otg-port support for rk3399"
  arm64: dts: rockchip: Fix rk3399-gru-* s2r (pinctrl hogs, wifi reset)
  ARM: OMAP: Fix dmtimer init for omap1
  MAINTAINERS: update email address for Maxime Ripard
  ARM: dts: sun6i: a31s: bpi-m2: add missing regulators
  ARM: dts: sun6i: a31s: bpi-m2: improve pmic properties
2018-03-28  arm64: uaccess: Fix omissions from usercopy whitelist  (Dave Martin; 7 files, -57/+72)
When the hardened usercopy support was added for arm64, it was concluded that all cases of usercopy into and out of thread_struct were statically sized and so didn't require explicit whitelisting of the appropriate fields in thread_struct.

Testing with usercopy hardening enabled has revealed that this is not the case for certain ptrace regset manipulation calls on arm64. This occurs because the sizes of usercopies associated with the regset API are dynamic by construction, and because arm64 does not always stage such copies via the stack: indeed the regset API is designed to avoid the need for that by adding some bounds checking. This is currently believed to affect only the fpsimd and TLS registers.

Because the whitelisted fields in thread_struct must be contiguous, this patch groups them together in a nested struct. It is also necessary to be able to determine the location and size of that struct, so rather than making the struct anonymous (which would save on edits elsewhere) or adding an anonymous union containing named and unnamed instances of the same struct (gross), this patch gives the struct a name and makes the necessary edits to code that references it (noisy but simple).

Care is needed to ensure that the new struct does not contain padding (which the usercopy hardening would fail to protect). For this reason, the presence of tp2_value is made unconditional, since a padding field would be needed there in any case. This pads up to the 16-byte alignment required by struct user_fpsimd_state.

Acked-by: Kees Cook <keescook@chromium.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9e8084d3f761 ("arm64: Implement thread_struct whitelist for hardened usercopy")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
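As a sketch of the grouping described above (the nested struct's name and exact layout here are illustrative, inferred from the commit text rather than quoted from the patch), the whitelisted members become one named, padding-free struct so a single offset/size pair can be reported to the usercopy whitelist:

    /* arch/arm64/include/asm/processor.h (sketch) */
    struct thread_struct {
            struct cpu_context      cpu_context;    /* kernel-only, not whitelisted */

            /*
             * Whitelisted for copy_{to,from}_user(): must stay contiguous
             * and contain no implicit padding, hence tp2_value is always
             * present to pad up to the 16-byte alignment of fpsimd_state.
             */
            struct {
                    unsigned long            tp_value;      /* TLS register */
                    unsigned long            tp2_value;
                    struct user_fpsimd_state fpsimd_state;
            } uw;

            /* ... remaining kernel-internal fields ... */
    };

    static inline void arch_thread_struct_whitelist(unsigned long *offset,
                                                    unsigned long *size)
    {
            *offset = offsetof(struct thread_struct, uw);
            *size   = sizeof(((struct thread_struct *)0)->uw);
    }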
2018-03-28  arm64: fpsimd: Split cpu field out from struct fpsimd_state  (Dave Martin; 6 files, -57/+29)
In preparation for using a common representation of the FPSIMD state for tasks and KVM vcpus, this patch separates out the "cpu" field that is used to track the cpu on which the state was most recently loaded. This will allow common code to operate on task and vcpu contexts without requiring the cpu field to be stored at the same offset from the FPSIMD register data in both cases. This should avoid the need for messing with the definition of those parts of struct vcpu_arch that are exposed in the KVM user ABI. The resulting change is also convenient for grouping and defining the set of thread_struct fields that are supposed to be accessible to copy_{to,from}_user(), which includes user_fpsimd_state but should exclude the cpu field. This patch does not amend the usercopy whitelist to match: that will be addressed in a subsequent patch. Signed-off-by: Dave Martin <Dave.Martin@arm.com> [will: inline fpsimd_flush_state for now] Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-28  arm64: tlbflush: avoid writing RES0 bits  (Philip Elcan; 1 file, -8/+17)
Several of the bits of the TLBI register operand are RES0 per the ARM ARM, so TLBI operations should avoid writing non-zero values to these bits. This patch adds a macro __TLBI_VADDR(addr, asid) that creates the operand register in the correct format and honors the RES0 bits. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Philip Elcan <pelcan@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
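A minimal sketch of such a helper, with the operand layout assumed from the Armv8 TLBI encoding (VA[55:12] in bits [43:0], ASID in bits [63:48]); the in-tree macro may differ in detail:

    /* Sketch: build a TLBI operand register value with RES0 bits kept zero. */
    #define __TLBI_VADDR(addr, asid)                                \
            ({                                                      \
                    unsigned long __ta = (addr) >> 12;              \
                    __ta &= GENMASK_ULL(43, 0);     /* VA[55:12] */ \
                    __ta |= (unsigned long)(asid) << 48;            \
                    __ta;                                           \
            })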
2018-03-27  arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h  (Will Deacon; 1 file, -0/+1)
We need linux/compiler.h for unreachable(), so #include it here. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h  (Will Deacon; 2 files, -26/+29)
We want to avoid pulling linux/preempt.h into cmpxchg.h, since that can introduce a circular dependency on linux/bitops.h. linux/preempt.h is only needed by the per-cpu cmpxchg implementation, which is better off alongside the per-cpu xchg implementation in percpu.h, so move it there and add the missing #include. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG  (Will Deacon; 1 file, -1/+1)
Having asm/cmpxchg.h pull in linux/bug.h is problematic because this ends up pulling in the atomic bitops which themselves may be built on top of atomic.h and cmpxchg.h. Instead, just include build_bug.h for the definition of BUILD_BUG. Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC  (Will Deacon; 1 file, -0/+3)
When the LL/SC atomics are moved out-of-line, they are annotated as notrace and exported to modules. Ensure we pull in the relevant include files so that these macros are defined when we need them. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  arm64: fpsimd: include <linux/init.h> in fpsimd.h  (Will Deacon; 1 file, -0/+1)
fpsimd.h uses the __init annotation, so pull in linux/init.h Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)"  (Will Deacon; 6 files, -68/+11)
This reverts commit 1f85b42a691cd8329ba82dbcaeec80ac1231b32a. The internal dma-direct.h API has changed in -next, which collides with us trying to use it to manage non-coherent DMA devices on systems with unreasonably large cache writeback granules. This isn't at all trivial to resolve, so revert our changes for now and we can revisit this after the merge window. Effectively, this just restores our behaviour back to that of 4.16. Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-27  arm64: cpufeature: Avoid warnings due to unused symbols  (Will Deacon; 2 files, -8/+10)
An allnoconfig build complains about unused symbols due to functions that are called via conditional cpufeature and cpu_errata table entries. Annotate these as __maybe_unused if they are likely to be generic, or predicate their compilation on the same option as the table entry if they are specific to a given alternative. Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: Add work around for Arm Cortex-A55 Erratum 1024718  (Suzuki K Poulose; 2 files, -1/+29)
Some variants of the Arm Cortex-A55 cores (r0p0, r0p1, r1p0) suffer from erratum 1024718, which causes incorrect updates when the DBM/AP bits in a page table entry are modified without a break-before-make sequence. The workaround is to skip enabling the hardware DBM feature on the affected cores. The hardware Access Flag management feature is not affected. There are some other cores suffering from this erratum, which could be added to the midr_list to trigger the workaround.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: ckadabi@codeaurora.org
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: Delay enabling hardware DBM feature  (Suzuki K Poulose; 3 files, -8/+79)
We enable the hardware DBM bit on a capable CPU very early in the boot via __cpu_setup. This doesn't give us the flexibility to optionally disable the feature, and clearing the bit later is costly because the TLB can cache the settings. Instead, we delay enabling the feature until the CPU is brought up into the kernel. We use the feature capability mechanism to handle it.

The hardware DBM is a non-conflicting feature, i.e. the kernel can safely run with a mix of CPUs, some using the feature and others not. So it is safe for a late CPU to have this capability and enable it, even if the active CPUs don't. To get this handled properly by the infrastructure, we unconditionally set the capability and only enable it on CPUs which really have the feature. Also, we print the feature detection from the "matches" callback to make sure we don't mislead the user when none of the CPUs could use the feature.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35  (Suzuki K Poulose; 1 file, -0/+4)
Update the MIDR encodings for the Cortex-A55 and Cortex-A35 Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
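The additions are of this general shape (a sketch: the part numbers 0xD04 for Cortex-A35 and 0xD05 for Cortex-A55 are the publicly documented values, not quoted from the patch):

    /* arch/arm64/include/asm/cputype.h (sketch) */
    #define ARM_CPU_PART_CORTEX_A35     0xD04
    #define ARM_CPU_PART_CORTEX_A55     0xD05

    #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
    #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)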
2018-03-26  arm64: capabilities: Handle shared entries  (Suzuki K Poulose; 3 files, -14/+66)
Some capabilities have different criteria for detection and associated actions based on the matching criteria, even though they all share the same capability bit. So far we have used multiple entries with the same capability bit to handle this. This is prone to errors, as the cpu_enable is invoked for each entry, irrespective of whether the detection rule applies to the CPU or not. It also complicates other helpers, e.g. __this_cpu_has_cap.

This patch adds a wrapper entry to cover all the possible variations of a capability by maintaining a list of matches + cpu_enable callbacks. To avoid complicating the prototypes for the "matches()", we use arm64_cpu_capabilities itself to maintain the list and ignore all the other fields except the matches & cpu_enable. This ensures:
 1) The capability is set when at least one of the entries detects it.
 2) Action is only taken for the entries that "match", which avoids explicit checks in cpu_enable() before taking some action.

The only constraint here is that all the entries should have the same "type" (i.e. scope and conflict rules). If a cpu_enable() method is associated with multiple matches for a single capability, care should be taken that either the match criteria are mutually exclusive, or that the method is robust against being called multiple times.

This also reverts the changes introduced by commit 67948af41f2e6818ed ("arm64: capabilities: Handle duplicate entries for a capability").

Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Add support for checks based on a list of MIDRs  (Suzuki K Poulose; 4 files, -41/+60)
Add helpers for detecting an erratum on a list of MIDR ranges of affected CPUs, with the same workaround.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: Add helpers for checking CPU MIDR against a range  (Suzuki K Poulose; 3 files, -13/+37)
Add helpers for checking if the given CPU midr falls in a range of variants/revisions for a given model. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
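Conceptually, such a helper boils down to something like the following simplified sketch (struct and helper names assumed; the in-tree version may differ):

    /* Describe a CPU model plus a window of (variant, revision) values. */
    struct midr_range {
            u32 model;      /* MIDR with the variant/revision fields masked out */
            u32 rv_min;     /* lowest affected MIDR_CPU_VAR_REV(variant, rev)   */
            u32 rv_max;     /* highest affected MIDR_CPU_VAR_REV(variant, rev)  */
    };

    static inline bool is_midr_in_range(u32 midr, const struct midr_range *range)
    {
            u32 rv = midr & (MIDR_VARIANT_MASK | MIDR_REVISION_MASK);
            u32 model = midr & ~(MIDR_VARIANT_MASK | MIDR_REVISION_MASK);

            return model == range->model &&
                   rv >= range->rv_min && rv <= range->rv_max;
    }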
2018-03-26  arm64: capabilities: Clean up midr range helpers  (Suzuki K Poulose; 1 file, -48/+60)
We are about to introduce generic MIDR range helpers. Clean up the existing helpers in erratum handling, preparing them to use the generic version.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Change scope of VHE to Boot CPU feature  (Suzuki K Poulose; 4 files, -46/+9)
We expect all CPUs to be running at the same EL inside the kernel with or without VHE enabled, and we have strict checks to ensure that any mismatch triggers a kernel panic. If VHE is enabled, we use the feature based on the boot CPU and all other CPUs should follow. This makes it a perfect candidate for a capability based on the boot CPU, which should be matched by all the CPUs (both when it is ON and OFF). This saves us some not-so-pretty hooks and special code, just for verifying the conflict.

The patch also makes the VHE capability entry depend on CONFIG_ARM64_VHE.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Add support for features enabled early  (Suzuki K Poulose; 2 files, -22/+83)
The kernel detects and uses some of the features based on the boot CPU and expects all the following CPUs to conform to it. E.g., with VHE and the boot CPU running at EL2, the kernel decides to keep the kernel running at EL2. If another CPU is brought up without this capability, we use custom hooks (via check_early_cpu_features()) to handle it. To handle such capabilities, add support for detecting and enabling capabilities based on the boot CPU.

A bit is added to indicate if the capability should be detected early on the boot CPU. The infrastructure then ensures that such capabilities are probed and "enabled" early on the boot CPU and enabled on the subsequent CPUs.

Cc: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Restrict KPTI detection to boot-time CPUs  (Suzuki K Poulose; 2 files, -5/+20)
KPTI is treated as a system wide feature and is only detected if all the CPUs in the system need the defense, unless it is forced via the kernel command line. This leaves a system with a mix of CPUs with and without the defense vulnerable. Also, if a late CPU needs KPTI but KPTI was not activated at boot time, the CPU is currently allowed to boot, which is a potential security vulnerability. This patch ensures that KPTI is turned on if at least one CPU detects the capability (i.e. change scope to SCOPE_LOCAL_CPU). It also rejects a late CPU that requires the defense when the system hasn't enabled it.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Introduce weak features based on local CPU  (Suzuki K Poulose; 2 files, -1/+9)
Now that we have the flexibility of defining system features based on individual CPUs, introduce a CPU feature type that can be detected on a local SCOPE and that ignores conflicts on late CPUs. This is applicable to ARM64_HAS_NO_HW_PREFETCH, where it is fine for the system to have CPUs without hardware prefetch turning up later. We only suffer a performance penalty, nothing fatal.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Group handling of features and errata workarounds  (Suzuki K Poulose; 1 file, -31/+42)
Now that the features and errata workarounds have the same rules and flow, group the handling of the tables. Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Allow features based on local CPU scope  (Suzuki K Poulose; 1 file, -6/+11)
So far we have treated the feature capabilities as system wide, and this wouldn't help with features that could be detected locally on one or more CPUs (e.g., KPTI, software prefetch). This patch splits the feature detection into two phases:
 1) Local CPU features are checked on all boot time active CPUs.
 2) System wide features are checked only once after all CPUs are active.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Split the processing of errata work arounds  (Suzuki K Poulose; 1 file, -2/+4)
Right now we run through the errata workarounds check on all boot active CPUs, with SCOPE_ALL. This wouldn't help for detecting erratum workarounds with a SYSTEM_SCOPE. There are none yet, but we plan to introduce some: let us clean this up so that such workarounds can be detected and enabled correctly. So, we run the checks with SCOPE_LOCAL_CPU on all CPUs and SCOPE_SYSTEM checks are run only once after all the boot time CPUs are active. Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Prepare for grouping features and errata work arounds  (Suzuki K Poulose; 1 file, -40/+18)
We are about to group the handling of all capabilities (features and errata workarounds). This patch open codes the wrapper routines to make it easier to merge the handling. Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Filter the entries based on a given mask  (Suzuki K Poulose; 2 files, -11/+23)
While processing the list of capabilities, it is useful to filter out some of the entries based on the given mask for the scope of the capabilities to allow better control. This can be used later for handling LOCAL vs SYSTEM wide capabilities and more. All capabilities should have their scope set to either LOCAL_CPU or SYSTEM. No functional/flow change. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Unify the verification  (Suzuki K Poulose; 1 file, -33/+58)
Now that each capability describes how to treat the conflicts of CPU cap state vs System wide cap state, we can unify the verification logic to a single place. Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Add flags to handle the conflicts on late CPU  (Suzuki K Poulose; 3 files, -21/+89)
When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

	x-----------------------------x
	| Type  | System | Late CPU   |
	|-----------------------------|
	|  a    |   y    |    n       |
	|-----------------------------|
	|  b    |   n    |    y       |
	x-----------------------------x

Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g. VHE). While (a) is ignored for all errata workarounds. However, there could be exceptions to the plain filtering approach, e.g. KPTI is an optional feature for a late CPU as long as the system already enables it.

Case (b) is not permitted for errata workarounds that cannot be activated after the kernel has finished booting. And we ignore (b) for features. Here, yet again, KPTI is an exception, where if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc).

Add two different flags to indicate how the conflict should be handled:
 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - CPUs may not have the capability

Now that we have the flags to describe the behavior of the errata and the features, as we treat them, define types for ERRATUM and FEATURE.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
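As an illustration of how such flags compose into capability types (only the two flag names come from the text above; the scope bits and bit positions are assumptions for this sketch):

    /* Sketch: scope bits plus conflict-handling bits, OR-ed into a cap's type. */
    #define ARM64_CPUCAP_SCOPE_LOCAL_CPU            ((u16)BIT(0))
    #define ARM64_CPUCAP_SCOPE_SYSTEM               ((u16)BIT(1))
    /* Is a late CPU allowed to have the capability when the system doesn't? */
    #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU     ((u16)BIT(4))
    /* Is a late CPU allowed to lack the capability when the system has it?  */
    #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU      ((u16)BIT(5))

    /* Erratum workaround: detected on any one CPU; a late CPU may lack it. */
    #define ARM64_CPUCAP_LOCAL_CPU_ERRATUM          \
            (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
    /* System feature: system-wide; a late CPU may have it even if unused. */
    #define ARM64_CPUCAP_SYSTEM_FEATURE             \
            (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)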
2018-03-26  arm64: capabilities: Prepare for fine grained capabilities  (Suzuki K Poulose; 3 files, -31/+124)
We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed to the userspace and the CPU hwcaps used by the kernel, which include cpu features and CPU errata workarounds. Capabilities have some properties that decide how they should be treated:

 1) Detection, i.e. scope: A cap could be "detected" either:
    - if it is present on at least one CPU (SCOPE_LOCAL_CPU), or
    - if it is present on all the CPUs (SCOPE_SYSTEM).

 2) When is it enabled? A cap is treated as "enabled" when the system takes some action based on whether the capability is detected or not, e.g. setting some control register, patching the kernel code. Right now, we treat all caps as enabled at boot-time, after all the CPUs are brought up by the kernel. But there are certain caps which are enabled early during the boot (e.g. VHE, GIC_CPUIF for NMI) and the kernel starts using them even before the secondary CPUs are brought up. We would need a way to describe this for each capability.

 3) Conflict on a late CPU: When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

	x-----------------------------x
	| Type  | System | Late CPU   |
	|-----------------------------|
	|  a    |   y    |    n       |
	|-----------------------------|
	|  b    |   n    |    y       |
	x-----------------------------x

    Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g. VHE). While (a) is ignored for all errata workarounds. However, there could be exceptions to the plain filtering approach, e.g. KPTI is an optional feature for a late CPU as long as the system already enables it.

    Case (b) is not permitted for errata workarounds which require some work around that cannot be delayed. And we ignore (b) for features. Here, yet again, KPTI is an exception, where if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc).

So this calls for a lot more fine grained behavior for each capability. And if we define all the attributes to control their behavior properly, we may be able to use a single table for the CPU hwcaps (which cover errata and features, not the ELF HWCAPs). This is a preparatory step to get there. More bits would be added for the properties listed above.

We are going to use a bit-mask to encode all the properties of a capability. This patch encodes the "SCOPE" of the capability. As such there is no change in how the capabilities are treated.

Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Move errata processing code  (Suzuki K Poulose; 3 files, -43/+40)
We have errata work around processing code in cpu_errata.c, which calls back into helpers defined in cpufeature.c. Now that we are going to make the handling of capabilities generic, by adding the information to each capability, move the errata work around specific processing code. No functional changes. Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Move errata work around check on boot CPU  (Suzuki K Poulose; 2 files, -6/+6)
We trigger the CPU errata workaround check on the boot CPU from smp_prepare_boot_cpu() to make sure that we run the checks only after the CPU feature infrastructure is initialised. While this is correct, we can also do this from init_cpu_features(), which initialises the infrastructure and is called only on the boot CPU. This helps to consolidate the CPU capability handling to cpufeature.c. No functional changes.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26  arm64: capabilities: Update prototype for enable call back  (Dave Martin; 8 files, -58/+67)
We issue the enable() callback for all CPU hwcaps capabilities available on the system, on all the CPUs. So far we have ignored the argument passed to the callback, which had a prototype to accept a "void *" for use with on_each_cpu() and later with stop_machine(). However, with commit 0a0d111d40fd1 ("arm64: cpufeature: Pass capability structure to ->enable callback"), there are some users of the argument who want the matching capability struct pointer where there are multiple matching criteria for a single capability. Clean up the declaration of the callback to make it clear:

 1) Renamed to cpu_enable(), to imply taking necessary actions on the called CPU for the entry.
 2) Pass a const pointer to the capability, to allow the callback to check the entry (e.g. to check if any action is needed on the CPU).
 3) We don't care about the result of the callback, so turn this into a void.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: James Morse <james.morse@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
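In terms of the capability descriptor, the change amounts to roughly this (a sketch of the resulting field, per points 1-3 above; surrounding fields abridged and assumed):

    struct arm64_cpu_capabilities {
            const char *desc;
            u16 capability;
            bool (*matches)(const struct arm64_cpu_capabilities *cap, int scope);

            /*
             * Was: int (*enable)(void *arg), invoked via stop_machine() or
             * on_each_cpu(). Now: run on each CPU, handed the matching entry,
             * and the result is ignored.
             */
            void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
            /* ... */
    };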
2018-03-26  arm64/sve: Document firmware support requirements in Kconfig  (Dave Martin; 1 file, -0/+17)
Use of SVE by EL2 and below requires explicit support in the firmware. There is no means to hide the presence of SVE from EL2, so a kernel configured with CONFIG_ARM64_SVE=y will typically not work correctly on SVE capable hardware unless the firmware does include the appropriate support.

This is not expected to pose a problem in the wild, since platform integrators are responsible for ensuring that they ship up-to-date firmware to support their hardware. However, developers may hit the issue when using mismatched components. In order to draw attention to the issue and how to solve it, this patch adds some Kconfig text giving a brief explanation and details of compatible firmware versions.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-23  crypto: arm,arm64 - Fix random regeneration of S_shipped  (Leonard Crestez; 1 file, -0/+2)
The decision to rebuild .S_shipped is made based on the relative timestamps of .S_shipped and .pl files but git makes this essentially random. This means that the perl script might run anyway (usually at most once per checkout), defeating the whole purpose of _shipped. Fix by skipping the rule unless explicit make variables are provided: REGENERATE_ARM_CRYPTO or REGENERATE_ARM64_CRYPTO. This can produce nasty occasional build failures downstream, for example for toolchains with broken perl. The solution is minimally intrusive to make it easier to push into stable. Another report on a similar issue here: https://lkml.org/lkml/2018/3/8/1379 Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com> Cc: <stable@vger.kernel.org> Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-22  mm/vmalloc: add interfaces to free unmapped page table  (Toshi Kani; 1 file, -0/+10)
On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap() may create pud/pmd mappings. A kernel panic was observed on arm64 systems with Cortex-A75 in the following steps, as described by Hanjun Guo:
 1. ioremap a 4K size; a valid page table is built.
 2. iounmap it; pte0 is set to 0.
 3. ioremap the same address with 2M size; pgd/pmd is unchanged, then a new value is set for the pmd.
 4. pte0 is leaked.
 5. The CPU may take an exception because the old pmd is still in the TLB, which leads to a kernel panic.

This panic is not reproducible on x86. INVLPG, called from iounmap, purges all levels of entries associated with the purged address on x86. x86 still has a memory leak.

The patch changes the ioremap path to free unmapped page table(s), since doing so in the unmap path has the following issues:
 - The iounmap() path is shared with vunmap(). Since vmap() only supports pte mappings, making vunmap() free a pte page is an overhead for regular vmap users as they do not need a pte page freed up.
 - Checking if all entries in a pte page are cleared in the unmap path is racy, and serializing this check is expensive.
 - The unmap path calls free_vmap_area_noflush() to do lazy TLB purges. Clearing a pud/pmd entry before the lazy TLB purges needs an extra TLB purge.

Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(), which clear a given pud/pmd entry and free up a page for the lower level entries. This patch implements their stub functions on x86 and arm64, which work as a workaround.

[akpm@linux-foundation.org: fix typo in pmd_free_pte_page() stub]
Link: http://lkml.kernel.org/r/20180314180155.19492-2-toshi.kani@hpe.com
Fixes: e61ce6ade404e ("mm: change ioremap to set up huge I/O mappings")
Reported-by: Lei Li <lious.lilei@hisilicon.com>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Wang Xuefeng <wxf.wang@hisilicon.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
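On arm64 the workaround stubs effectively just report whether the entry is already clear, so the ioremap path refuses to install a huge mapping over a live table instead of leaking it; a sketch of that behaviour (the non-zero return meaning "safe to proceed" is inferred from the description, not quoted from the patch):

    /* arch/arm64/mm/mmu.c (sketch): workaround stubs, no table freeing yet. */
    int pud_free_pmd_page(pud_t *pud)
    {
            return pud_none(*pud);  /* only "free" if there is nothing to free */
    }

    int pmd_free_pte_page(pmd_t *pmd)
    {
            return pmd_none(*pmd);
    }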
2018-03-22  irqchip/gic-v3: Probe for SCR_EL3 being clear before resetting AP0Rn  (Marc Zyngier; 1 file, -5/+0)
We would like to reset the Group-0 Active Priority Registers at boot time if they are available to us. They would be available if SCR_EL3.FIQ was not set, but we cannot directly probe this bit, and short of checking, we may end up trapping to EL3, and the firmware may not be pleased to get such an exception. Yes, this is dumb.

Instead, let's use PMR to find out if its value gets affected by SCR_EL3.FIQ being set. We use the fact that when SCR_EL3.FIQ is set, the LSB of the priority is lost due to the shifting back and forth of the actual priority. If we read back a 0, we know that Group0 is unavailable. In case we read a non-zero value, we can safely reset the AP0Rn register.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-20  arm64: fpsimd: Fix bad si_code for undiagnosed SIGFPE  (Dave Martin; 3 files, -19/+24)
Currently a SIGFPE delivered in response to a floating-point exception trap may have si_code set to 0 on arm64. As reported by Eric, this is a bad idea since this is the value of SI_USER -- yet this signal is definitely not the result of kill(2), tgkill(2) etc. and si_uid and si_pid make limited sense, whereas we do want to yield a value for si_addr (which doesn't exist for SI_USER).

It's not entirely clear whether the architecture permits a "spurious" fp exception trap where none of the exception flag bits in ESR_ELx is set. (IMHO the architectural intent is to forbid this.) However, it does permit those bits to contain garbage if the TFV bit in ESR_ELx is 0. That case isn't currently handled at all and may result in si_code == 0 or si_code containing a FPE_FLT* constant corresponding to an exception that did not in fact happen.

There is nothing sensible we can return for si_code in such cases, but SI_USER is certainly not appropriate and will lead to violation of legitimate userspace assumptions.

This patch allocates a new si_code value FPE_UNKNOWN that at least does not conflict with any existing SI_* or FPE_* code, and yields this in si_code for undiagnosable cases. This is probably the best simplicity/incorrectness tradeoff achievable without relying on implementation-dependent features or adding a lot of code. In any case, there appears to be no perfect solution possible that would justify a lot of effort here.

Yielding FPE_UNKNOWN when some well-defined fp exception caused the trap is a violation of POSIX, but this is forced by the architecture. We have no realistic prospect of yielding the correct code in such cases. At present I am not aware of any ARMv8 implementation that supports trapped floating-point exceptions in any case.

The new code may be applicable to other architectures for similar reasons.

No attempt is made to provide ESR_ELx to userspace in the signal frame, since architectural limitations mean that it is unlikely to provide much diagnostic value, doesn't benefit existing software and would create ABI with no proven purpose. The existing mechanism for passing it also has problems of its own which may result in the wrong value being passed to userspace due to interaction with mm faults. The implied rework does not appear justified.

Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Reported-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-19  arm64: Expose Arm v8.4 features  (Suzuki K Poulose; 4 files, -0/+18)
Expose the new features introduced by the Arm v8.4 extensions to the Arm v8-A profile. These include:

 1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT).
 2) Unaligned atomic instructions and single-copy atomicity of loads and stores (AT, exposed as HWCAP_USCAT).
 3) LDAPR and STLR instructions with immediate offsets (extension to LRCPC, exposed as HWCAP_ILRCPC).
 4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
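From userspace these show up as ordinary ELF HWCAP bits; for example (a small test program, assuming kernel headers new enough to define the constants):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>      /* HWCAP_DIT, HWCAP_USCAT, HWCAP_ILRCPC, HWCAP_FLAGM */

    int main(void)
    {
            unsigned long hwcaps = getauxval(AT_HWCAP);

            printf("DIT:    %s\n", (hwcaps & HWCAP_DIT)    ? "yes" : "no");
            printf("USCAT:  %s\n", (hwcaps & HWCAP_USCAT)  ? "yes" : "no");
            printf("ILRCPC: %s\n", (hwcaps & HWCAP_ILRCPC) ? "yes" : "no");
            printf("FLAGM:  %s\n", (hwcaps & HWCAP_FLAGM)  ? "yes" : "no");
            return 0;
    }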
2018-03-19  arm64: asm: drop special versions of adr_l/ldr_l/str_l for modules  (Ard Biesheuvel; 1 file, -32/+2)
Now that we started keeping modules within 4 GB of the core kernel in all cases, we no longer need to special case the adr_l/ldr_l/str_l macros for modules to deal with them being loaded farther away. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-19  arm64: fix undefined reference to 'printk'  (Arnd Bergmann; 2 files, -3/+3)
The printk symbol was intended as a generic address that is always exported, however that turned out to be false with CONFIG_PRINTK=n: ERROR: "printk" [arch/arm64/kernel/arm64-reloc-test.ko] undefined! This changes the references to memstart_addr, which should be there regardless of configuration. Fixes: a257e02579e4 ("arm64/kernel: don't ban ADRP to work around Cortex-A53 erratum #843419") Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-16  crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels  (Ard Biesheuvel; 1 file, -13/+23)
Tweak the SHA256 update routines to invoke the SHA256 block transform block by block, to avoid excessive scheduling delays caused by the NEON algorithm running with preemption disabled. Also, remove a stale comment which no longer applies now that kernel mode NEON is actually disallowed in some contexts. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
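The general pattern for staying preemption-friendly with kernel-mode NEON is to bound the work done inside each kernel_neon_begin()/kernel_neon_end() section; a rough sketch of the idea (the chunk size and the sha256_block_neon block-function name are illustrative, not the exact code from the patch):

    /* Sketch: feed the NEON block transform bounded chunks so that   */
    /* preemption points occur between kernel_neon_begin/end sections. */
    static int sha256_neon_update(struct shash_desc *desc, const u8 *data,
                                  unsigned int len)
    {
            while (len > 0) {
                    unsigned int chunk = min_t(unsigned int, len, SZ_8K);

                    kernel_neon_begin();
                    sha256_base_do_update(desc, data, chunk,
                                          sha256_block_neon);
                    kernel_neon_end();

                    data += chunk;
                    len -= chunk;
                    /* The scheduler may run here between chunks. */
            }

            return 0;
    }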