path: root/arch/arm64/include/asm
2020-04-24  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -3/+6)

Pull arm64 fixes from Catalin Marinas:

 - Ensure context synchronisation after a write to APIAKey.

 - Fix bullet list formatting in Documentation/arm64/amu.rst to eliminate doc warnings.

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  Documentation: arm64: fix amu.rst doc warnings
  arm64: sync kernel APIAKey when installing

2020-04-23  arch: split MODULE_ARCH_VERMAGIC definitions out to <asm/vermagic.h>  (Masahiro Yamada; 2 files, -2/+10)

As the bug report [1] pointed out, <linux/vermagic.h> must be included after <linux/module.h>.

I believe we should not impose any include order restriction. We often sort include directives alphabetically, but it is just coding style convention. Technically, we can include header files in any order by making every header self-contained.

Currently, arch-specific MODULE_ARCH_VERMAGIC is defined in <asm/module.h>, which is not included from <linux/vermagic.h>. Hence, the straightforward fix-up would be as follows:

  |--- a/include/linux/vermagic.h
  |+++ b/include/linux/vermagic.h
  |@@ -1,5 +1,6 @@
  | /* SPDX-License-Identifier: GPL-2.0 */
  | #include <generated/utsrelease.h>
  |+#include <linux/module.h>
  |
  | /* Simply sanity version stamp for modules. */
  | #ifdef CONFIG_SMP

This works well enough, but for further cleanups, I split the MODULE_ARCH_VERMAGIC definitions into <asm/vermagic.h>.

With this, <linux/module.h> and <linux/vermagic.h> will be orthogonal, and the location of the MODULE_ARCH_VERMAGIC definitions will be consistent.

For arc and ia64, MODULE_PROC_FAMILY is only used for defining MODULE_ARCH_VERMAGIC, so I squashed it.

For hexagon, nds32, and xtensa, I removed <asm/module.h> entirely because it contained nothing but the MODULE_ARCH_VERMAGIC definition. Kbuild will automatically generate <asm/module.h> at build time, wrapping <asm-generic/module.h>.

[1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnic

Reported-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Jessica Yu <jeyu@kernel.org>

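Under this scheme the new arm64 header reduces to a single definition. A minimal sketch, assuming the conventional "aarch64" vermagic string (illustrative, not necessarily byte-for-byte the committed file):

  /* arch/arm64/include/asm/vermagic.h -- sketch */
  /* SPDX-License-Identifier: GPL-2.0 */
  #ifndef _ASM_VERMAGIC_H
  #define _ASM_VERMAGIC_H

  #define MODULE_ARCH_VERMAGIC    "aarch64"

  #endif /* _ASM_VERMAGIC_H */
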
2020-04-21  arm64: sync kernel APIAKey when installing  (Mark Rutland; 1 file, -3/+6)

A direct write to an APxxKey_EL1 register requires a context synchronization event to ensure that indirect reads made by subsequent instructions (e.g. AUTIASP, PACIASP) observe the new value.

When we initialize the boot task's APIAKey in boot_init_stack_canary() via ptrauth_keys_switch_kernel() we miss the necessary ISB, and so there is a window where instructions are not guaranteed to use the new APIAKey value. This has been observed to result in boot-time crashes where PACIASP and AUTIASP within a function used a mixture of the old and new key values.

Fix this by having ptrauth_keys_switch_kernel() synchronize the new key value with an ISB. At the same time, __ptrauth_key_install() is renamed to __ptrauth_key_install_nosync() so that it is obvious that this performs no synchronization itself.

Fixes: 28321582334c261c ("arm64: initialize ptrauth keys for kernel booting task")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Will Deacon <will@kernel.org>

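A minimal sketch of the shape of the fix, assuming the helper and register accessor names used elsewhere in this log (a sketch of the idea, not the exact committed diff):

  /* Installs a key without any synchronization; the name now says so. */
  #define __ptrauth_key_install_nosync(k, v)                          \
  do {                                                                \
          struct ptrauth_key __pki_v = (v);                           \
          write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);         \
          write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);         \
  } while (0)

  static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
  {
          if (system_supports_address_auth()) {
                  __ptrauth_key_install_nosync(APIA, keys->apia);
                  isb();  /* make the new key visible to later PACIASP/AUTIASP */
          }
  }
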
2020-04-15  arm64: Delete the space separator in __emit_inst  (Fangrui Song; 1 file, -1/+3)

In assembly, many instances of __emit_inst(x) expand to a directive. In a few places __emit_inst(x) is used as an assembler macro argument. For example, in arch/arm64/kvm/hyp/entry.S:

  ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)

expands to the following by the C preprocessor:

  alternative_insn nop, .inst (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1

Both comma and space are separators, with the exception that content inside a pair of parentheses/quotes is not split, so the clang integrated assembler splits the arguments to:

  nop, .inst, (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1

GNU as preprocesses the input with do_scrub_chars(). Its arm64 backend (along with many other non-x86 backends) sees:

  alternative_insn nop,.inst(0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1
  # .inst(...) is parsed as one argument

while its x86 backend sees:

  alternative_insn nop,.inst (0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1
  # The extra space before '(' makes the whole .inst (...) be parsed as two arguments

The non-x86 backends' behavior is considered unintentional (https://sourceware.org/bugzilla/show_bug.cgi?id=25750). So drop the space separator inside `.inst (...)` to make the clang integrated assembler work.

Suggested-by: Ilie Halip <ilie.halip@gmail.com>
Signed-off-by: Fangrui Song <maskray@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/939
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

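The change itself is a one-character deletion; a sketch of the before/after, assuming __emit_inst lives in asm/sysreg.h as the one-file diffstat suggests:

  /* Before: the space lets some assemblers split ".inst (...)" in two. */
  #define __emit_inst(x)  .inst (x)

  /* After: with no separator, every assembler parses a single argument. */
  #define __emit_inst(x)  .inst(x)
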
2020-04-10  mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS  (Anshuman Khandual; 1 file, -3/+1)

There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS. This creates a default value for VM_DATA_DEFAULT_FLAGS, in line with the existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros with standard VMA access flag combinations that are used frequently across many platforms. Apart from simplification, this reduces code duplication as well.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paulburton@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rich Felker <dalias@libc.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Chris Zankel <chris@zankel.net>
Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

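An illustrative sketch of the resulting pattern in include/linux/mm.h; the VM_DATA_FLAGS_EXEC name is an assumption based on the description, not verified against the committed patch:

  /* One of the common access-flag combinations mentioned above. */
  #define VM_DATA_FLAGS_EXEC      (VM_READ | VM_WRITE | VM_EXEC | \
                                   VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)

  /* Fallback default, overridable per arch, like VM_STACK_DEFAULT_FLAGS. */
  #ifndef VM_DATA_DEFAULT_FLAGS
  #define VM_DATA_DEFAULT_FLAGS   VM_DATA_FLAGS_EXEC
  #endif
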
2020-04-09  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -11/+1)

Pull arm64 fixes from Catalin Marinas:

 - Ensure that the compiler and linker versions are aligned so that ld doesn't complain about not understanding a .note.gnu.property section (emitted when pointer authentication is enabled).

 - Force -mbranch-protection=none when the feature is not enabled, in case a compiler may choose a different default value.

 - Remove CONFIG_DEBUG_ALIGN_RODATA. It was never in defconfig and rarely enabled.

 - Fix the 16-bit Thumb-2 instruction checking mask in the emulation of the SETEND instruction (it could match the bottom half of a 32-bit Thumb-2 instruction).

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
  arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature
  arm64: Always force a branch protection mode when the compiler has one
  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch
  init/kconfig: Add LD_VERSION Kconfig

2020-04-05  Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random  (Linus Torvalds; 1 file, -0/+14)

Pull /dev/random updates from Ted Ts'o:

 - Improve getrandom and /dev/random's support for those arm64 architecture variants that have RNG instructions.

 - Use batched output from CRNG instead of CPU's RNG instructions for better performance.

 - Miscellaneous bug fixes.

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: avoid warnings for !CONFIG_NUMA builds
  random: fix data races at timer_rand_state
  random: always use batched entropy for get_random_u{32,64}
  random: Make RANDOM_TRUST_CPU depend on ARCH_RANDOM
  arm64: add credited/trusted RNG support
  random: add arch_get_random_*long_early()
  random: split primary/secondary crng init paths

2020-04-02  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 2 files, -1/+3)

Pull kvm updates from Paolo Bonzini:

 "ARM:
   - GICv4.1 support
   - 32bit host removal

  PPC:
   - secure (encrypted) guests running under the Protected Execution Framework ultravisor

  s390:
   - allow disabling GISA (hardware interrupt injection) and protected VMs/ultravisor support.

  x86:
   - New dirty bitmap flag that sets all bits in the bitmap when dirty page logging is enabled; this is faster because it doesn't require bulk modification of the page tables.
   - Initial work on making nested SVM event injection more similar to VMX, and less buggy.
   - Various cleanups to MMU code (though the big ones and related optimizations were delayed to 5.8). Instead of using cr3 in function names which occasionally means eptp, KVM too has standardized on "pgd".
   - A large refactoring of CPUID features, which now use an array that parallels the core x86_features.
   - Some removal of pointer chasing from kvm_x86_ops, which will also be switched to static calls as soon as they are available.
   - New Tigerlake CPUID features.
   - More bugfixes, optimizations and cleanups.

  Generic:
   - selftests: cleanups, new MMU notifier stress test, steal-time test
   - CSV output for kvm_stat"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (277 commits)
  x86/kvm: fix a missing-prototypes "vmread_error"
  KVM: x86: Fix BUILD_BUG() in __cpuid_entry_get_reg() w/ CONFIG_UBSAN=y
  KVM: VMX: Add a trampoline to fix VMREAD error handling
  KVM: SVM: Annotate svm_x86_ops as __initdata
  KVM: VMX: Annotate vmx_x86_ops as __initdata
  KVM: x86: Drop __exit from kvm_x86_ops' hardware_unsetup()
  KVM: x86: Copy kvm_x86_ops by value to eliminate layer of indirection
  KVM: x86: Set kvm_x86_ops only after ->hardware_setup() completes
  KVM: VMX: Configure runtime hooks using vmx_x86_ops
  KVM: VMX: Move hardware_setup() definition below vmx_x86_ops
  KVM: x86: Move init-only kvm_x86_ops to separate struct
  KVM: Pass kvm_init()'s opaque param to additional arch funcs
  s390/gmap: return proper error code on ksm unsharing
  KVM: selftests: Fix cosmetic copy-paste error in vm_mem_region_move()
  KVM: Fix out of range accesses to memslots
  KVM: X86: Micro-optimize IPI fastpath delay
  KVM: X86: Delay read msr data iff writes ICR MSR
  KVM: PPC: Book3S HV: Add a capability for enabling secure guests
  KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs
  KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs
  ...

2020-04-02  asm-generic: make more kernel-space headers mandatory  (Masahiro Yamada; 1 file, -18/+0)

Change a header to mandatory-y if both of the following are met:

[1] At least one architecture (except um) specifies it as generic-y in arch/*/include/asm/Kbuild

[2] Every architecture (except um) either has its own implementation (arch/*/include/asm/*.h) or specifies it as generic-y in arch/*/include/asm/Kbuild

This commit was generated by the following shell script:

----------------------------------->8-----------------------------------

  arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')

  tmpfile=$(mktemp)

  grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile

  find arch -path 'arch/*/include/asm/Kbuild' |
  xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
  while read header
  do
          mandatory=yes

          for arch in $arches
          do
                  if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
                          ! [ -f arch/$arch/include/asm/$header ]; then
                          mandatory=no
                          break
                  fi
          done

          if [ "$mandatory" = yes ]; then
                  echo "mandatory-y += $header" >> $tmpfile

                  for arch in $arches
                  do
                          sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
                  done
          fi
  done

  sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild

  LANG=C sort $tmpfile >> include/asm-generic/Kbuild

----------------------------------->8-----------------------------------

One obvious benefit is the diff stat:

  25 files changed, 52 insertions(+), 557 deletions(-)

It is tedious to list generic-y for each arch that needs it. So, mandatory-y works like a fallback default (by just wrapping the asm-generic one) when an arch does not have a specific header implementation. See the following commits:

  def3f7cefe4e81c296090e1722a76551142c227c
  a1b39bae16a62ce4aae02d958224f19316d98b24

It is tedious to convert headers one by one, so I processed them with a shell script.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2020-04-01  arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature  (Ard Biesheuvel; 1 file, -11/+1)

When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with different permissions (r-x for .text, r-- for .rodata, rw- for .data, etc) are rounded up to 2 MiB so they can be mapped more efficiently. In particular, it permits the segments to be mapped using level 2 block entries when using 4k pages, which is expected to result in less TLB pressure.

However, the mappings for the bulk of the kernel will use level 2 entries anyway, and the misaligned fringes are organized such that they can take advantage of the contiguous bit, and use far fewer level 3 entries than would be needed otherwise.

This makes the value of this feature dubious at best, and since it is not enabled in defconfig or in the distro configs, it does not appear to be in wide use either. So let's just remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Laura Abbott <labbott@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-31  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 23 files, -83/+332)

Pull arm64 updates from Catalin Marinas:

 "The bulk is in-kernel pointer authentication, activity monitors and lots of asm symbol annotations. I also queued the sys_mremap() patch commenting the asymmetry in the address untagging.

  Summary:

   - In-kernel Pointer Authentication support (previously only offered to user space).

   - ARM Activity Monitors (AMU) extension support allowing better CPU utilisation numbers for the scheduler (frequency invariance).

   - Memory hot-remove support for arm64.

   - Lots of asm annotations (SYM_*) in preparation for the in-kernel Branch Target Identification (BTI) support.

   - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the PMU init callbacks, support for new DT compatibles.

   - IPv6 header checksum optimisation.

   - Fixes: SDEI (software delegated exception interface) double-lock on hibernate with shared events.

   - Minor clean-ups and refactoring: cpu_ops accessor, cpu_do_switch_mm() converted to C, cpufeature finalisation helper.

   - sys_mremap() comment explaining the asymmetric address untagging behaviour"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  mm/mremap: Add comment explaining the untagging behaviour of mremap()
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  ...

2020-03-30  Merge tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 7 files, -39/+39)

Pull timekeeping and timer updates from Thomas Gleixner:

 "Core:

   - Consolidation of the vDSO build infrastructure to address the difficulties of cross-builds for ARM64 compat vDSO libraries by restricting the exposure of header content to the vDSO build. This is achieved by splitting out header content into separate headers, which contain only the minimally required information necessary to build the vDSO. These new headers are included from the kernel headers and the vDSO specific files.

   - Enhancements to the generic vDSO library allowing more fine grained control over the compiled in code, further reducing architecture specific storage and preparing for adopting the generic library by PPC.

   - Cleanup and consolidation of the exit related code in posix CPU timers.

   - Small cleanups and enhancements here and there

  Drivers:

   - The obligatory new drivers: Ingenic JZ47xx and X1000 TCU support

   - Correct the clock rate of PIT64b global clock

   - setup_irq() cleanup

   - Preparation for PWM and suspend support for the TI DM timer

   - Expand the fttmr010 driver to support ast2600 systems

   - The usual small fixes, enhancements and cleanups all over the place"

* tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
  Revert "clocksource/drivers/timer-probe: Avoid creating dead devices"
  vdso: Fix clocksource.h macro detection
  um: Fix header inclusion
  arm64: vdso32: Enable Clang Compilation
  lib/vdso: Enable common headers
  arm: vdso: Enable arm to use common headers
  x86/vdso: Enable x86 to use common headers
  mips: vdso: Enable mips to use common headers
  arm64: vdso32: Include common headers in the vdso library
  arm64: vdso: Include common headers in the vdso library
  arm64: Introduce asm/vdso/processor.h
  arm64: vdso32: Code clean up
  linux/elfnote.h: Replace elf.h with UAPI equivalent
  scripts: Fix the inclusion order in modpost
  common: Introduce processor.h
  linux/ktime.h: Extract common header for vDSO
  linux/jiffies.h: Extract common header for vDSO
  linux/time64.h: Extract common header for vDSO
  linux/time32.h: Extract common header for vDSO
  linux/time.h: Extract common header for vDSO
  ...

2020-03-30  Merge tag 'timers-nohz-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -3/+1)

Pull NOHZ update from Thomas Gleixner:

 "Remove TIF_NOHZ from three architectures

  These architectures use a static key to decide whether context tracking needs to be invoked and the TIF_NOHZ flag just causes a pointless slowpath execution for nothing"

* tag 'timers-nohz-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64: Remove TIF_NOHZ
  arm: Remove TIF_NOHZ
  x86: Remove TIF_NOHZ
  context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ
  x86/entry: Remove _TIF_NOHZ from _TIF_WORK_SYSCALL_ENTRY

2020-03-30  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -0/+3)

Pull scheduler updates from Ingo Molnar:

 "The main changes in this cycle are:

   - Various NUMA scheduling updates: harmonize the load-balancer and NUMA placement logic to not work against each other. The intended result is better locality, better utilization and fewer migrations.

   - Introduce Thermal Pressure tracking and optimizations, to improve task placement on thermally overloaded systems.

   - Implement frequency invariant scheduler accounting on (some) x86 CPUs. This is done by observing and sampling the 'recent' CPU frequency average at ~tick boundaries. The CPU provides this data via the APERF/MPERF MSRs. This hopefully makes our capacity estimates more precise and keeps tasks on the same CPU better even if it might seem overloaded at a lower momentary frequency. (As usual, turbo mode is a complication that we resolve by observing the maximum frequency and renormalizing to it.)

   - Add asymmetric CPU capacity wakeup scan to improve capacity utilization on asymmetric topologies. (big.LITTLE systems)

   - PSI fixes and optimizations.

   - RT scheduling capacity awareness fixes & improvements.

   - Optimize the CONFIG_RT_GROUP_SCHED constraints code.

   - Misc fixes, cleanups and optimizations - see the changelog for details"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
  threads: Update PID limit comment according to futex UAPI change
  sched/fair: Fix condition of avg_load calculation
  sched/rt: cpupri_find: Trigger a full search as fallback
  kthread: Do not preempt current task if it is going to call schedule()
  sched/fair: Improve spreading of utilization
  sched: Avoid scale real weight down to zero
  psi: Move PF_MEMSTALL out of task->flags
  MAINTAINERS: Add maintenance information for psi
  psi: Optimize switching tasks inside shared cgroups
  psi: Fix cpu.pressure for cpu.max and competing cgroups
  sched/core: Distribute tasks within affinity masks
  sched/fair: Fix enqueue_task_fair warning
  thermal/cpu-cooling, sched/core: Move the arch_set_thermal_pressure() API to generic scheduler code
  sched/rt: Remove unnecessary push for unfit tasks
  sched/rt: Allow pulling unfitting task
  sched/rt: Optimize cpupri_find() on non-heterogenous systems
  sched/rt: Re-instate old behavior in select_task_rq_rt()
  sched/rt: cpupri_find: Implement fallback mechanism for !fit case
  sched/fair: Fix reordering of enqueue/dequeue_task_fair()
  sched/fair: Fix runnable_avg for throttled cfs
  ...

2020-03-30  Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -3/+2)

Pull locking updates from Ingo Molnar:

 "The main changes in this cycle were:

   - Continued user-access cleanups in the futex code.

   - percpu-rwsem rewrite that uses its own waitqueue and atomic_t instead of an embedded rwsem. This addresses a couple of weaknesses, but the primary motivation was complications on the -rt kernel.

   - Introduce raw lock nesting detection on lockdep (CONFIG_PROVE_RAW_LOCK_NESTING=y), document the raw_lock vs. normal lock differences. This too originates from -rt.

   - Reuse lockdep zapped chain_hlocks entries, to conserve RAM footprint on distro-ish kernels running into the "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" depletion of the lockdep chain-entries pool.

   - Misc cleanups, smaller fixes and enhancements - see the changelog for details"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (55 commits)
  fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t
  thermal/x86_pkg_temp: Make pkg_temp_lock a raw_spinlock_t
  Documentation/locking/locktypes: Minor copy editor fixes
  Documentation/locking/locktypes: Further clarifications and wordsmithing
  m68knommu: Remove mm.h include from uaccess_no.h
  x86: get rid of user_atomic_cmpxchg_inatomic()
  generic arch_futex_atomic_op_inuser() doesn't need access_ok()
  x86: don't reload after cmpxchg in unsafe_atomic_op2() loop
  x86: convert arch_futex_atomic_op_inuser() to user_access_begin/user_access_end()
  objtool: whitelist __sanitizer_cov_trace_switch()
  [parisc, s390, sparc64] no need for access_ok() in futex handling
  sh: no need of access_ok() in arch_futex_atomic_op_inuser()
  futex: arch_futex_atomic_op_inuser() calling conventions change
  completion: Use lockdep_assert_RT_in_threaded_ctx() in complete_all()
  lockdep: Add posixtimer context tracing bits
  lockdep: Annotate irq_work
  lockdep: Add hrtimer context tracing bits
  lockdep: Introduce wait-type checks
  completion: Use simple wait queues
  sched/swait: Prepare usage in completions
  ...

2020-03-30  Merge branch 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -10/+0)

Pull EFI updates from Ingo Molnar:

 "The EFI changes in this cycle are much larger than usual, for two (positive) reasons:

   - The GRUB project is showing signs of life again, resulting in the introduction of the generic Linux/UEFI boot protocol, instead of x86 specific hacks which are increasingly difficult to maintain. There's hope that all future extensions will now go through that boot protocol.

   - Preparatory work for RISC-V EFI support.

  The main changes are:

   - Boot time GDT handling changes

   - Simplify handling of EFI properties table on arm64

   - Generic EFI stub cleanups, to improve command line handling, file I/O, memory allocation, etc.

   - Introduce a generic initrd loading method based on calling back into the firmware, instead of relying on the x86 EFI handover protocol or device tree.

   - Introduce a mixed mode boot method that does not rely on the x86 EFI handover protocol either, and could potentially be adopted by other architectures (if another one ever surfaces where one execution mode is a superset of another)

   - Clean up the contents of 'struct efi', and move out everything that doesn't need to be stored there.

   - Incorporate support for UEFI spec v2.8A changes that permit firmware implementations to return EFI_UNSUPPORTED from UEFI runtime services at OS runtime, and expose a mask of which ones are supported or unsupported via a configuration table.

   - Partial fix for the lack of by-VA cache maintenance in the decompressor on 32-bit ARM.

   - Changes to load device firmware from EFI boot service memory regions

   - Various documentation updates and minor code cleanups and fixes"

* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
  efi/libstub/arm: Fix spurious message that an initrd was loaded
  efi/libstub/arm64: Avoid image_base value from efi_loaded_image
  partitions/efi: Fix partition name parsing in GUID partition entry
  efi/x86: Fix cast of image argument
  efi/libstub/x86: Use ULONG_MAX as upper bound for all allocations
  efi: Fix a mistype in comments mentioning efivar_entry_iter_begin()
  efi/libstub: Avoid linking libstub/lib-ksyms.o into vmlinux
  efi/x86: Preserve %ebx correctly in efi_set_virtual_address_map()
  efi/x86: Ignore the memory attributes table on i386
  efi/x86: Don't relocate the kernel unless necessary
  efi/x86: Remove extra headroom for setup block
  efi/x86: Add kernel preferred address to PE header
  efi/x86: Decompress at start of PE image load address
  x86/boot/compressed/32: Save the output address instead of recalculating it
  efi/libstub/x86: Deal with exit() boot service returning
  x86/boot: Use unsigned comparison for addresses
  efi/x86: Avoid using code32_start
  efi/x86: Make efi32_pe_entry() more readable
  efi/x86: Respect 32-bit ABI in efi32_pe_entry()
  efi/x86: Annotate the LOADED_IMAGE_PROTOCOL_GUID with SYM_DATA
  ...

2020-03-28  Merge branch 'uaccess.futex' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs into locking/core  (Thomas Gleixner; 1 file, -3/+2)

Pull uaccess futex cleanups for Al Viro:

  Consolidate access_ok() usage and the futex uaccess function zoo.

2020-03-27  futex: arch_futex_atomic_op_inuser() calling conventions change  (Al Viro; 1 file, -3/+2)

Move access_ok() in and pagefault_enable()/pagefault_disable() out. Mechanical conversion only - some instances don't really need a separate access_ok() at all (e.g. the ones only using get_user()/put_user(), or architectures where access_ok() is always true); we'll deal with that in followups.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

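A sketch of the new convention (arch details elided; the point is which side owns access_ok() and the pagefault_disable()/enable() pair):

  /* Arch side: now performs its own access_ok(), no pagefault handling. */
  static int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
                                         u32 __user *uaddr)
  {
          if (!access_ok(uaddr, sizeof(u32)))
                  return -EFAULT;
          /* ... perform the atomic op on *uaddr and report via *oval ... */
          return 0;
  }

  /* Generic caller: pagefault disabling moved out here. */
  pagefault_disable();
  ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
  pagefault_enable();
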
2020-03-27  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -1/+1)

Pull arm64 fix from Will Deacon:

 "Fix defconfig build when using Clang's integrated assembler"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: alternative: fix build with clang integrated assembler

2020-03-25  Merge branch 'for-next/kernel-ptrauth' into for-next/core  (Catalin Marinas; 8 files, -48/+154)

* for-next/kernel-ptrauth:
  : Return address signing - in-kernel support
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: install user ptrauth keys at kernel exit time
  arm64: rename ptrauth key structures to be user-specific
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: cpufeature: Fix meta-capability cpufeature check

2020-03-25  Merge branch 'for-next/asm-annotations' into for-next/core  (Catalin Marinas; 3 files, -6/+11)

* for-next/asm-annotations:
  : Modernise arm64 assembly annotations
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
  arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
  arm64: vdso32: Convert to modern assembler annotations
  arm64: vdso: Convert to modern assembler annotations
  arm64: sdei: Annotate SDEI entry points using new style annotations
  arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
  arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
  arm64: kvm: Annotate assembly using modern annoations
  arm64: kernel: Convert to modern annotations for assembly data
  arm64: head: Annotate stext and preserve_boot_args as code
  arm64: head.S: Convert to modern annotations for assembly functions
  arm64: ftrace: Modernise annotation of return_to_handler
  arm64: ftrace: Correct annotation of ftrace_caller assembly
  arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
  arm64: entry: Additional annotation conversions for entry.S
  arm64: entry: Annotate ret_from_fork as code
  arm64: entry: Annotate vector table and handlers as code
  arm64: crypto: Modernize names for AES function macros
  arm64: crypto: Modernize some extra assembly annotations

2020-03-25  Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core  (Catalin Marinas; 14 files, -30/+167)

* for-next/memory-hotremove:
  : Memory hot-remove support for arm64
  arm64/mm: Enable memory hot remove
  arm64/mm: Hold memory hotplug lock while walking for kernel page table dump

* for-next/arm_sdei:
  : SDEI: fix double locking on return from hibernate and clean-up
  firmware: arm_sdei: clean up sdei_event_create()
  firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
  firmware: arm_sdei: fix possible double-lock on hibernate error path
  firmware: arm_sdei: fix double-lock on hibernate with shared events

* for-next/amu:
  : ARMv8.4 Activity Monitors support
  clocksource/drivers/arm_arch_timer: validate arch_timer_rate
  arm64: use activity monitors for frequency invariance
  cpufreq: add function to get the hardware max frequency
  Documentation: arm64: document support for the AMU extension
  arm64/kvm: disable access to AMU registers from kvm guests
  arm64: trap to EL1 accesses to AMU counters from EL0
  arm64: add support for the AMU extension v1

* for-next/final-cap-helper:
  : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
  arm64: kvm: hyp: use cpus_have_final_cap()
  arm64: cpufeature: add cpus_have_final_cap()

* for-next/cpu_ops-cleanup:
  : cpu_ops[] access code clean-up
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed

* for-next/misc:
  : Various fixes and clean-ups
  arm64: define __alloc_zeroed_user_highpage
  arm64/kernel: Simplify __cpu_up() by bailing out early
  arm64: remove redundant blank for '=' operator
  arm64: kexec_file: Fixed code style.
  arm64: add blank after 'if'
  arm64: fix spelling mistake "ca not" -> "cannot"
  arm64: entry: unmask IRQ in el0_sp()
  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
  arm64: csum: Optimise IPv6 header checksum
  arch/arm64: fix typo in a comment
  arm64: remove gratuitious/stray .ltorg stanzas
  arm64: Update comment for ASID() macro
  arm64: mm: convert cpu_do_switch_mm() to C
  arm64: fix NUMA Kconfig typos

* for-next/perf:
  : arm64 perf updates
  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  arm64: perf: Clean up enable/disable calls
  perf: arm-ccn: Use scnprintf() for robustness
  arm64: perf: Support new DT compatibles
  arm64: perf: Refactor PMU init callbacks
  perf: arm_spe: Remove unnecessary zero check on 'nr_pages'

2020-03-24  arm64: Introduce get_cpu_ops() helper function  (Gavin Shan; 1 file, -1/+1)

This introduces get_cpu_ops() to return the CPU operations according to the given CPU index. For now, it simply returns the @cpu_ops[cpu] as before.

Also, helper function __cpu_try_die() is introduced to be shared by cpu_die() and ipi_cpu_crash_stop(). This shouldn't introduce any functional changes.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>

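The helper itself is trivial; per the description it simply wraps the existing array access:

  const struct cpu_operations *get_cpu_ops(int cpu)
  {
          return cpu_ops[cpu];
  }
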
2020-03-24  arm64: Rename cpu_read_ops() to init_cpu_ops()  (Gavin Shan; 1 file, -3/+3)

This renames cpu_read_ops() to init_cpu_ops() as the function is only called in the initialization phase. Also, we will introduce get_cpu_ops() in the subsequent patches, to retrieve the CPU operations by the given CPU index. The names cpu_read_ops() and get_cpu_ops() would be difficult to distinguish from one another.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-24  KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs  (Marc Zyngier; 1 file, -1/+2)

Just like for VLPIs, it is beneficial to avoid trapping on WFI when the vcpu is using the GICv4.1 SGIs. Add such a check to vcpu_clear_wfx_traps().

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-23-maz@kernel.org

2020-03-24  KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable  (Marc Zyngier; 1 file, -0/+1)

Each time a Group-enable bit gets flipped, the state of these bits needs to be forwarded to the hardware. This is a pretty heavy-handed operation, requiring all vcpus to reload their GICv4 configuration. It is thus implemented as a new request type.

These enable bits are programmed into the HW by setting the VGrp{0,1}En fields of GICR_VPENDBASER when the vPEs are made resident again.

Of course, we only support Group-1 for now...

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-22-maz@kernel.org

2020-03-21  arm64: vdso32: Include common headers in the vdso library  (Vincenzo Frascino; 1 file, -1/+1)

The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library.

Refactor the vdso32 implementation to include common headers.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320145351.32292-22-vincenzo.frascino@arm.com

2020-03-21  arm64: vdso: Include common headers in the vdso library  (Vincenzo Frascino; 1 file, -1/+0)

The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library.

Refactor the vdso implementation to include common headers.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320145351.32292-21-vincenzo.frascino@arm.com

2020-03-21  arm64: Introduce asm/vdso/processor.h  (Vincenzo Frascino; 2 files, -5/+19)

The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library.

Introduce asm/vdso/processor.h to contain all the arm64 specific functions that are suitable for vDSO inclusion.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320145351.32292-20-vincenzo.frascino@arm.com

2020-03-21  arm64: vdso32: Code clean up  (Vincenzo Frascino; 1 file, -8/+0)

The compat vdso library had some checks that are no longer relevant. Remove the unused code from the compat vDSO library.

Note: This patch is preparatory for a future one that will introduce asm/vdso/processor.h on arm64.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/lkml/20200317122220.30393-19-vincenzo.frascino@arm.com
Link: https://lkml.kernel.org/r/20200320145351.32292-19-vincenzo.frascino@arm.com

2020-03-21  arm64: Introduce asm/vdso/clocksource.h  (Vincenzo Frascino; 2 files, -2/+9)

The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library.

Introduce asm/vdso/clocksource.h to contain all the arm64 specific functions that are suitable for vDSO inclusion.

This header will be required by a future patch that will generalize vdso/clocksource.h.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320145351.32292-7-vincenzo.frascino@arm.com

2020-03-20  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 3 files, -6/+6)

Pull arm64 fixes from Will Deacon:

 - Fix panic() when it occurs during secondary CPU startup

 - Fix "kpti=off" when KASLR is enabled

 - Fix howler in compat syscall table for vDSO clock_getres() fallback

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: compat: Fix syscall number of compat_clock_getres
  arm64: kpti: Fix "kpti=off" when KASLR is enabled
  arm64: smp: fix crash_smp_send_stop() behaviour
  arm64: smp: fix smp_send_stop() behaviour

2020-03-20  arm64: alternative: fix build with clang integrated assembler  (Ilie Halip; 1 file, -1/+1)

Building an arm64 defconfig with clang's integrated assembler, this error occurs:

  <instantiation>:2:2: error: unrecognized instruction mnemonic
   _ASM_EXTABLE 9999b, 9f
   ^
  arch/arm64/mm/cache.S:50:1: note: while in macro instantiation
   user_alt 9f, "dc cvau, x4", "dc civac, x4", 0
   ^

While GNU as seems fine with case-sensitive macro instantiations, clang doesn't, so use the actual macro name (_asm_extable) as in the rest of the file.

Also checked that the generated assembly matches the GCC output.

Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Fixes: 290622efc76e ("arm64: fix "dc cvau" cache operation on errata-affected core")
Link: https://github.com/ClangBuiltLinux/linux/issues/924
Signed-off-by: Ilie Halip <ilie.halip@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>

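The fix is a one-word case change in the user_alt assembler macro; a sketch of the affected macro in asm/alternative.h (the surrounding context is assumed):

  	.macro user_alt, label, oldinstr, newinstr, cond
  9999:	alternative_insn "\oldinstr", "\newinstr", \cond
  	_asm_extable 9999b, \label	// was _ASM_EXTABLE, which clang rejects
  	.endm
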
2020-03-19  arm64: compat: Fix syscall number of compat_clock_getres  (Vincenzo Frascino; 1 file, -1/+1)

The syscall number of compat_clock_getres was erroneously set to 247 (__NR_io_cancel!) instead of 264. This causes the vDSO fallback of clock_getres() to land on the wrong syscall for compat tasks.

Fix the numbering.

Cc: <stable@vger.kernel.org>
Fixes: 53c489e1dfeb6 ("arm64: compat: Add missing syscall numbers")
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2020-03-19  arm64: kpti: Fix "kpti=off" when KASLR is enabled  (Will Deacon; 2 files, -5/+5)

Enabling KASLR forces the use of non-global page-table entries for kernel mappings, as this is a decision that we have to make very early on before mapping the kernel proper. When used in conjunction with the "kpti=off" command-line option, it is possible to use non-global kernel mappings but with the kpti trampoline disabled.

Since commit 09e3c22a86f6 ("arm64: Use a variable to store non-global mappings decision"), arm64_kernel_unmapped_at_el0() reflects only the use of non-global mappings and does not take into account whether the kpti trampoline is enabled. This breaks context switching of the TPIDRRO_EL0 register for 64-bit tasks, where the clearing of the register is deferred to the ret-to-user code, but it also breaks the ARM SPE PMU driver which helpfully recommends passing "kpti=off" on the command line!

Report whether or not KPTI is actually enabled in arm64_kernel_unmapped_at_el0() and check the 'arm64_use_ng_mappings' global variable directly when determining the protection flags for kernel mappings.

Cc: Mark Brown <broonie@kernel.org>
Reported-by: Hongbo Yao <yaohongbo@huawei.com>
Tested-by: Hongbo Yao <yaohongbo@huawei.com>
Fixes: 09e3c22a86f6 ("arm64: Use a variable to store non-global mappings decision")
Signed-off-by: Will Deacon <will@kernel.org>

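A sketch of the described split; the cpucap check in the "after" version is an assumption inferred from the message, shown here purely for contrast:

  /* Before: conflated "using nG mappings" with "KPTI enabled". */
  static inline bool arm64_kernel_unmapped_at_el0(void)
  {
          return arm64_use_ng_mappings;
  }

  /* After: report whether KPTI is actually enabled ... */
  static inline bool arm64_kernel_unmapped_at_el0(void)
  {
          return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
  }

  /* ... while kernel mapping protection flags check the variable directly. */
  #define PTE_MAYBE_NG    (arm64_use_ng_mappings ? PTE_NG : 0)
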
2020-03-18  arm64: suspend: restore the kernel ptrauth keys  (Amit Daniel Kachhap; 1 file, -2/+4)

This patch restores the kernel keys from the current task during cpu resume, after the mmu is turned on and ptrauth is enabled.

A flag is added to the ptrauth_keys_install_kernel macro to check whether an isb instruction needs to be executed.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: mask PAC bits of __builtin_return_address  (Amit Daniel Kachhap; 2 files, -8/+25)

Functions like vmap() record how much memory has been allocated by their callers, and callers are identified using __builtin_return_address(). Once the kernel is using pointer-auth the return address will be signed. This means it will not match any kernel symbol, and will vary between threads even for the same caller.

The output of /proc/vmallocinfo in this case may look like:

  0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
  0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
  0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4

The above three 64-bit values should be the same symbol name and not different LR values.

Use the pre-processor to add logic to clear the PAC to __builtin_return_address() callers. This patch adds a new file, asm/compiler.h, which is transitively included via include/compiler_types.h on the compiler command line, so it is guaranteed to be loaded and the users of this macro will not find a wrong version.

Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for this purpose and added in this file. The existing macro ptrauth_user_pac_mask is moved from asm/pointer_auth.h.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

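A sketch of the helpers described above; the exact mask construction is an assumption based on where PAC bits sit in kernel vs. user pointers:

  /* arch/arm64/include/asm/compiler.h -- sketch */
  #define ptrauth_kernel_pac_mask()       (GENMASK(63, vabits_actual))

  /* Bit 55 distinguishes kernel (set) from user (clear) pointers. */
  #define ptrauth_clear_pac(ptr)                                      \
          ((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask())    \
                               : (ptr & ~ptrauth_user_pac_mask()))

  #define __builtin_return_address(val)                               \
          (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
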
2020-03-18  arm64: initialize ptrauth keys for kernel booting task  (Amit Daniel Kachhap; 2 files, -1/+15)

This patch uses the existing boot_init_stack_canary arch function to initialize the ptrauth keys for the booting task in the primary core. The requirement here is that it should always be inline and the caller must never return.

Since pointer authentication also detects a subset of stack corruption, it makes sense to place this code here. Both the pointer authentication and stack canary code are protected by their respective config options.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: initialize and switch ptrauth kernel keys  (Kristina Martsenko; 4 files, -0/+32)

Set up keys to use pointer authentication within the kernel. The kernel will be compiled with APIAKey instructions, the other keys are currently unused. Each task is given its own APIAKey, which is initialized during fork. The key is changed during context switch and on kernel entry from EL0.

The keys for idle threads need to be set before calling any C functions, because it is not possible to enter and exit a function with different keys.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Modified secondary cores key structure, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

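A sketch of the per-task key handling described (struct and helper names assumed from the neighbouring commits in this log):

  struct ptrauth_keys_kernel {
          struct ptrauth_key apia;        /* only APIAKey is used in-kernel */
  };

  static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
  {
          if (system_supports_address_auth())
                  get_random_bytes(&keys->apia, sizeof(keys->apia));
  }
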
2020-03-18  arm64: enable ptrauth earlier  (Kristina Martsenko; 1 file, -0/+9)

When the kernel is compiled with pointer auth instructions, the boot CPU needs to start using address auth very early, so change the cpucap to account for this. Pointer auth must be enabled before we call C functions, because it is not possible to enter a function with pointer auth disabled and exit it with pointer auth enabled. Note, mismatches between architected and IMPDEF algorithms will still be caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF cpucaps).

Note the change in behavior: if the boot CPU has address auth and a late CPU does not, then the late CPU is parked by the cpufeature framework. This is possible as the kernel will only have NOP-space instructions for PAC, so such a mismatched late CPU will silently ignore those instructions in C functions. Also, if the boot CPU does not have address auth and the late CPU does, then the late CPU will still boot, but with the ptrauth feature disabled.

Leave generic authentication as a "system scope" cpucap for now, since initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: cpufeature: handle conflicts based on capability  (Kristina Martsenko; 1 file, -2/+10)

Each system capability can be of either boot, local, or system scope, depending on when the state of the capability is finalized. When we detect a conflict on a late CPU, we either offline the CPU or panic the system. We currently always panic if the conflict is caused by a boot scope capability, and offline the CPU if the conflict is caused by a local or system scope capability.

We're going to want to add a new capability (for pointer authentication) which needs to be boot scope but doesn't need to panic the system when a conflict is detected. So add a new flag to specify whether the capability requires the system to panic or not. Current boot scope capabilities are updated to set the flag, so there should be no functional change as a result of this patch.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

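A sketch of the new flag and a helper a conflict handler might consult; the flag name and bit position are assumptions for illustration:

  /* Set for capabilities that must panic the system on a late-CPU conflict. */
  #define ARM64_CPUCAP_PANIC_ON_CONFLICT          ((u16)BIT(6))

  static inline bool cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
  {
          return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
  }
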
2020-03-18  arm64: cpufeature: Move cpu capability helpers inside C file  (Amit Daniel Kachhap; 1 file, -12/+0)

These helpers are used only by functions inside cpufeature.c, and hence it makes sense to move them from cpufeature.h to cpufeature.c, as they are not expected to be used globally.

This change helps reduce the header file size and makes it possible to add future cpu capability types without confusion. Only a cpu capability type macro is sufficient to expose those capabilities globally.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup  (Amit Daniel Kachhap; 1 file, -0/+8)

This patch allows __cpu_setup to be invoked with one of these flags: ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME. This is required as some cpufeatures need different handling during different scenarios.

The input parameter in x0 is preserved till the end to be used inside this function.

There should be no functional change with this patch, and it is useful for the subsequent ptrauth patch which utilizes it. Some upcoming arm cpufeatures can also utilize these flags.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

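A sketch of the three flags (the numeric values are illustrative; only their distinctness matters to __cpu_setup):

  /* Passed to __cpu_setup in x0 to indicate why it is being called. */
  #define ARM64_CPU_BOOT_PRIMARY          (1)
  #define ARM64_CPU_BOOT_SECONDARY        (2)
  #define ARM64_CPU_RUNTIME               (3)
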
2020-03-18  arm64: install user ptrauth keys at kernel exit time  (Kristina Martsenko; 2 files, -22/+50)

As we're going to enable pointer auth within the kernel and use a different APIAKey for the kernel itself, move the user APIAKey switch to EL0 exception return.

The other 4 keys could remain switched during task switch, but are also moved to keep things consistent.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit msg, re-positioned the patch, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: rename ptrauth key structures to be user-specific  (Kristina Martsenko; 2 files, -7/+7)

We currently enable ptrauth for userspace, but do not use it within the kernel. We're going to enable it for the kernel, and will need to manage a separate set of ptrauth keys for the kernel.

We currently keep all 5 keys in struct ptrauth_keys. However, as the kernel will only need to use 1 key, it is a bit wasteful to allocate a whole ptrauth_keys struct for every thread. Therefore, a subsequent patch will define a separate struct, with only 1 key, for the kernel. In preparation for that, rename the existing struct (and associated macros and functions) to reflect that they are specific to userspace.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-positioned the patch to reduce the diff]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2020-03-18  arm64: cpufeature: add pointer auth meta-capabilities  (Kristina Martsenko; 2 files, -5/+5)

To enable pointer auth for the kernel, we're going to need to check for the presence of address auth and generic auth using alternative_if. We currently have two cpucaps for each, but alternative_if needs to check a single cpucap. So define meta-capabilities that are present when either of the current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to check for mismatched systems where one CPU has the architected algorithm but another has the IMP DEF algorithm.

Note, the meta-capabilities were present before but were removed in commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed then. Note, unlike before, the current patch checks the cpucap values directly, instead of reading the CPU ID register value.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit message and macro rebase, use __system_matches_cap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

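A sketch of a meta-capability match function, checking the two existing cpucaps directly via __system_matches_cap() as the message describes:

  static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
                               int scope)
  {
          return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
                 __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
  }
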
2020-03-17  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters  (Andrew Murray; 2 files, -1/+6)

At present ARMv8 event counters are limited to 32-bits, though by using the CHAIN event it's possible to combine adjacent counters to achieve 64-bits. The perf config1:0 bit can be set to use such a configuration.

With the introduction of ARMv8.5-PMU support, all event counters can now be used as 64-bit counters.

Let's enable 64-bit event counters where support exists. Unless the user sets config1:0 we will adjust the counter value such that it overflows upon 32-bit overflow. This follows the same behaviour as the cycle counter which has always been (and remains) 64-bits.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[Mark: fix ID field names, compare with 8.5 value]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

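A sketch of the config1:0 check the message describes (the helper name is an assumption):

  /* Userspace opts in to a 64-bit event counter via perf_event_attr.config1 bit 0. */
  static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
  {
          return event->attr.config1 & 0x1;
  }
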
2020-03-17  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1  (Andrew Murray; 1 file, -0/+6)

We currently expose the PMU version of the host to the guest via emulation of the DFR0_EL1 and AA64DFR0_EL1 debug feature registers. However many of the features offered beyond PMUv3 for 8.1 are not supported in KVM. Examples of this include support for the PMMIR registers (added in PMUv3 for ARMv8.4) and 64-bit event counters (added in PMUv3 for ARMv8.5).

Let's trap the Debug Feature Registers in order to limit PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.1, to avoid unexpected behaviour.

Both ID_AA64DFR0.PMUVer and ID_DFR0.PerfMon follow the "Alternative ID scheme used for the Performance Monitors Extension version", where 0xF means an IMPLEMENTATION DEFINED PMU is implemented, and values 0x0-0xE are treated as an unsigned field (with 0x0 meaning no PMU is present). As we don't expect to expose an IMPLEMENTATION DEFINED PMU, and our cap is below 0xF, we can treat these fields as unsigned when applying the cap.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[Mark: make field names consistent, use perfmon cap]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2020-03-17  arm64: cpufeature: Extract capped perfmon fields  (Andrew Murray; 1 file, -0/+23)

When emulating ID registers there is often a need to cap the version bits of a feature such that the guest will not use features that the host is not aware of. For example, when KVM mediates access to the PMU by emulating register accesses.

Let's add a helper that extracts a performance monitors ID field and caps the version to a given value.

Fields that identify the version of the Performance Monitors Extension do not follow the standard ID scheme, and instead follow the scheme described in ARM DDI 0487E.a page D13-2825, "Alternative ID scheme used for the Performance Monitors Extension version". The value 0xF means an IMPLEMENTATION DEFINED PMU is present, and values 0x0-0xE can be treated the same as an unsigned field, with 0x0 meaning no PMU is present.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[Mark: rework to handle perfmon fields]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

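A sketch of such a capping helper, following the message closely (the field width and extraction helper are assumed from the standard 4-bit ID scheme):

  static inline u64 __attribute_const__
  cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
  {
          u64 val = cpuid_feature_extract_unsigned_field(features, field);
          u64 mask = GENMASK_ULL(field + 3, field);

          /* Treat an IMPLEMENTATION DEFINED PMU (0xf) as not present. */
          if (val == 0xf)
                  val = 0;

          if (val > cap) {
                  features &= ~mask;
                  features |= (cap << field) & mask;
          }

          return features;
  }
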
2020-03-17  arm64: define __alloc_zeroed_user_highpage  (glider@google.com; 1 file, -0/+4)

When running the kernel with init_on_alloc=1, calling the default implementation of __alloc_zeroed_user_highpage() from include/linux/highmem.h leads to double-initialization of the allocated page (first by the page allocator, then by clear_user_page()).

Calling alloc_page_vma() with __GFP_ZERO, similarly to e.g. x86, seems to be enough to ensure the user page is zeroed only once.

Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

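A sketch of the arm64 definition, following the x86 pattern the message cites:

  /* arch/arm64/include/asm/page.h -- sketch */
  #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr)      \
          alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
  #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
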