path: root/arch/x86/lib
Age  Commit message  Author  (files, -deleted/+added lines)
2022-04-03  Merge tag 'x86-urgent-2022-04-03' of ↵  Linus Torvalds  (1 file, -8/+57)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Thomas Gleixner: "A set of x86 fixes and updates: - Make the prctl() for enabling dynamic XSTATE components correct so it adds the newly requested feature to the permission bitmap instead of overwriting it. Add a selftest which validates that. - Unroll string MMIO for encrypted SEV guests as the hypervisor cannot emulate it. - Handle supervisor states correctly in the FPU/XSTATE code so it takes the feature set of the fpstate buffer into account. The feature sets can differ between host and guest buffers. Guest buffers do not contain supervisor states. So far this was not an issue, but with enabling PASID it needs to be handled in the buffer offset calculation and in the permission bitmaps. - Avoid a gazillion of repeated CPUID invocations by caching the values early in the FPU/XSTATE code. - Enable CONFIG_WERROR in x86 defconfig. - Make the X86 defconfigs more useful by adapting them to Y2022 reality" * tag 'x86-urgent-2022-04-03' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/fpu/xstate: Consolidate size calculations x86/fpu/xstate: Handle supervisor states in XSTATE permissions x86/fpu/xsave: Handle compacted offsets correctly with supervisor states x86/fpu: Cache xfeature flags from CPUID x86/fpu/xsave: Initialize offset/size cache early x86/fpu: Remove unused supervisor only offsets x86/fpu: Remove redundant XCOMP_BV initialization x86/sev: Unroll string mmio with CC_ATTR_GUEST_UNROLL_STRING_IO x86/config: Make the x86 defconfigs a bit more usable x86/defconfig: Enable WERROR selftests/x86/amx: Update the ARCH_REQ_XCOMP_PERM test x86/fpu/xstate: Fix the ARCH_REQ_XCOMP_PERM implementation
2022-04-01  Merge branch 'work.misc' of ↵  Linus Torvalds  (1 file, -26/+0)
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull vfs updates from Al Viro: "Assorted bits and pieces" * 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: aio: drop needless assignment in aio_read() clean overflow checks in count_mounts() a bit seq_file: fix NULL pointer arithmetic warning uml/x86: use x86 load_unaligned_zeropad() asm/user.h: killed unused macros constify struct path argument of finish_automount()/do_add_mount() fs: Remove FIXME comment in generic_write_checks()
2022-03-29  x86/sev: Unroll string mmio with CC_ATTR_GUEST_UNROLL_STRING_IO  Joerg Roedel  (1 file, -8/+57)
The io-specific memcpy/memset functions use string mmio accesses to do their work. Under SEV, the hypervisor can't emulate these instructions because they read/write directly from/to encrypted memory. KVM will inject a page fault exception into the guest when it is asked to emulate string mmio instructions for an SEV guest: BUG: unable to handle page fault for address: ffffc90000065068 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 8000100000067 P4D 8000100000067 PUD 80001000fb067 PMD 80001000fc067 PTE 80000000fed40173 Oops: 0000 [#1] PREEMPT SMP NOPTI CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.17.0-rc7 #3 As string mmio for an SEV guest can not be supported by the hypervisor, unroll the instructions for CC_ATTR_GUEST_UNROLL_STRING_IO enabled kernels. This issue appears when kernels are launched in recent libvirt-managed SEV virtual machines, because virt-install started to add a tpm-crb device to the guest by default and proactively because, raisins: https://github.com/virt-manager/virt-manager/commit/eb58c09f488b0633ed1eea012cd311e48864401e and as that commit says, the default adding of a TPM can be disabled with "virt-install ... --tpm none". The kernel driver for tpm-crb uses memcpy_to/from_io() functions to access MMIO memory, resulting in a page-fault injected by KVM and crashing the kernel at boot. [ bp: Massage and extend commit message. ] Fixes: d8aa7eea78a1 ('x86/mm: Add Secure Encrypted Virtualization (SEV) support') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20220321093351.23976-1-joro@8bytes.org
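For reference, a minimal sketch of the unrolled approach described above; the helper name is illustrative and not necessarily the one used in arch/x86/lib/iomem.c. Byte-sized MMIO accessors replace the REP-prefixed string instructions, so nothing needs to be emulated by the hypervisor:

#include <linux/io.h>
#include <linux/types.h>

/* Byte-wise copy from MMIO: each readb() is a plain MOV, so an SEV guest
 * never issues a REP MOVS that the hypervisor would have to emulate
 * against encrypted memory. */
static void unrolled_memcpy_fromio(void *to, const void __iomem *from, size_t n)
{
        const volatile char __iomem *in = from;
        char *out = to;

        while (n--)
                *out++ = readb(in++);
}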
2022-03-27  Merge tag 'x86_core_for_5.18_rc1' of ↵  Linus Torvalds  (2 files, -0/+3)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 CET-IBT (Control-Flow-Integrity) support from Peter Zijlstra: "Add support for Intel CET-IBT, available since Tigerlake (11th gen), which is a coarse grained, hardware based, forward edge Control-Flow-Integrity mechanism where any indirect CALL/JMP must target an ENDBR instruction or suffer #CP. Additionally, since Alderlake (12th gen)/Sapphire-Rapids, speculation is limited to 2 instructions (and typically fewer) on branch targets not starting with ENDBR. CET-IBT also limits speculation of the next sequential instruction after the indirect CALL/JMP [1]. CET-IBT is fundamentally incompatible with retpolines, but provides, as described above, speculation limits itself" [1] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html * tag 'x86_core_for_5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (53 commits) kvm/emulate: Fix SETcc emulation for ENDBR x86/Kconfig: Only allow CONFIG_X86_KERNEL_IBT with ld.lld >= 14.0.0 x86/Kconfig: Only enable CONFIG_CC_HAS_IBT for clang >= 14.0.0 kbuild: Fixup the IBT kbuild changes x86/Kconfig: Do not allow CONFIG_X86_X32_ABI=y with llvm-objcopy x86: Remove toolchain check for X32 ABI capability x86/alternative: Use .ibt_endbr_seal to seal indirect calls objtool: Find unused ENDBR instructions objtool: Validate IBT assumptions objtool: Add IBT/ENDBR decoding objtool: Read the NOENDBR annotation x86: Annotate idtentry_df() x86,objtool: Move the ASM_REACHABLE annotation to objtool.h x86: Annotate call_on_stack() objtool: Rework ASM_REACHABLE x86: Mark __invalid_creds() __noreturn exit: Mark do_group_exit() __noreturn x86: Mark stop_this_cpu() __noreturn objtool: Ignore extra-symbol code objtool: Rename --duplicate to --lto ...
2022-03-26  Merge tag 'memcpy-v5.18-rc1' of ↵  Linus Torvalds  (1 file, -0/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux Pull FORTIFY_SOURCE updates from Kees Cook: "This series consists of two halves: - strict compile-time buffer size checking under FORTIFY_SOURCE for the memcpy()-family of functions (for extensive details and rationale, see the first commit) - enabling FORTIFY_SOURCE for Clang, which has had many overlapping bugs that we've finally worked past" * tag 'memcpy-v5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: fortify: Add Clang support fortify: Make sure strlen() may still be used as a constant expression fortify: Use __diagnose_as() for better diagnostic coverage fortify: Make pointer arguments const Compiler Attributes: Add __diagnose_as for Clang Compiler Attributes: Add __overloadable for Clang Compiler Attributes: Add __pass_object_size for Clang fortify: Replace open-coded __gnu_inline attribute fortify: Update compile-time tests for Clang 14 fortify: Detect struct member overflows in memset() at compile-time fortify: Detect struct member overflows in memmove() at compile-time fortify: Detect struct member overflows in memcpy() at compile-time
2022-03-23  Merge tag 'asm-generic-5.18' of ↵  Linus Torvalds  (1 file, -1/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic Pull asm-generic updates from Arnd Bergmann: "There are three sets of updates for 5.18 in the asm-generic tree: - The set_fs()/get_fs() infrastructure gets removed for good. This was already gone from all major architectures, but now we can finally remove it everywhere, which loses some particularly tricky and error-prone code. There is a small merge conflict against a parisc cleanup, the solution is to use their new version. - The nds32 architecture ends its tenure in the Linux kernel. The hardware is still used and the code is in reasonable shape, but the mainline port is not actively maintained any more, as all remaining users are thought to run vendor kernels that would never be updated to a future release. - A series from Masahiro Yamada cleans up some of the uapi header files to pass the compile-time checks" * tag 'asm-generic-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: (27 commits) nds32: Remove the architecture uaccess: remove CONFIG_SET_FS ia64: remove CONFIG_SET_FS support sh: remove CONFIG_SET_FS support sparc64: remove CONFIG_SET_FS support lib/test_lockup: fix kernel pointer check for separate address spaces uaccess: generalize access_ok() uaccess: fix type mismatch warnings from access_ok() arm64: simplify access_ok() m68k: fix access_ok for coldfire MIPS: use simpler access_ok() MIPS: Handle address errors for accesses above CPU max virtual user address uaccess: add generic __{get,put}_kernel_nofault nios2: drop access_ok() check from __put_user() x86: use more conventional access_ok() definition x86: remove __range_not_ok() sparc64: add __{get,put}_kernel_nofault() nds32: fix access_ok() checks in get/put_user uaccess: fix nios2 and microblaze get_user_8() sparc64: fix building assembly files ...
2022-03-21  Merge tag 'x86_misc_for_v5.18_rc1' of ↵  Linus Torvalds  (1 file, -13/+98)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull misc x86 updates from Borislav Petkov: - Add support for a couple new insn sets to the insn decoder: AVX512-FP16, AMX, other misc insns. - Update VMware-specific MAINTAINERS entries * tag 'x86_misc_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: MAINTAINERS: Mark VMware mailing list entries as email aliases MAINTAINERS: Add Zack as maintainer of vmmouse driver MAINTAINERS: Update maintainers for paravirt ops and VMware hypervisor interface x86/insn: Add AVX512-FP16 instructions to the x86 instruction decoder perf/tests: Add AVX512-FP16 instructions to x86 instruction decoder test x86/insn: Add misc instructions to x86 instruction decoder perf/tests: Add misc instructions to the x86 instruction decoder test x86/insn: Add AMX instructions to the x86 instruction decoder perf/tests: Add AMX instructions to x86 instruction decoder test
2022-03-21  Merge tag 'arm64-upstream' of ↵  Linus Torvalds  (3 files, -10/+10)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 updates from Will Deacon: - Support for including MTE tags in ELF coredumps - Instruction encoder updates, including fixes to 64-bit immediate generation and support for the LSE atomic instructions - Improvements to kselftests for MTE and fpsimd - Symbol aliasing and linker script cleanups - Reduce instruction cache maintenance performed for user mappings created using contiguous PTEs - Support for the new "asymmetric" MTE mode, where stores are checked asynchronously but loads are checked synchronously - Support for the latest pointer authentication algorithm ("QARMA3") - Support for the DDR PMU present in the Marvell CN10K platform - Support for the CPU PMU present in the Apple M1 platform - Use the RNDR instruction for arch_get_random_{int,long}() - Update our copy of the Arm optimised string routines for str{n}cmp() - Fix signal frame generation for CPUs which have foolishly elected to avoid building in support for the fpsimd instructions - Workaround for Marvell GICv3 erratum #38545 - Clarification to our Documentation (booting reqs. and MTE prctl()) - Miscellanous cleanups and minor fixes * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (90 commits) docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferred arm64/mte: Remove asymmetric mode from the prctl() interface arm64: Add cavium_erratum_23154_cpus missing sentinel perf/marvell: Fix !CONFIG_OF build for CN10K DDR PMU driver arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition Documentation: vmcoreinfo: Fix htmldocs warning kasan: fix a missing header include of static_keys.h drivers/perf: Add Apple icestorm/firestorm CPU PMU driver drivers/perf: arm_pmu: Handle 47 bit counters arm64: perf: Consistently make all event numbers as 16-bits arm64: perf: Expose some Armv9 common events under sysfs perf/marvell: cn10k DDR perf event core ownership perf/marvell: cn10k DDR perfmon event overflow handling perf/marvell: CN10k DDR performance monitor support dt-bindings: perf: marvell: cn10k ddr performance monitor arm64: clean up tools Makefile perf/arm-cmn: Update watchpoint format perf/arm-cmn: Hide XP PUB events for CMN-600 arm64: drop unused includes of <linux/personality.h> arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones ...
2022-03-15  x86/ibt: Annotate text references  Peter Zijlstra  (2 files, -0/+3)
Annotate away some of the generic code references. These are places where we take the address of a symbol for exception handling or return addresses (e.g. context switch). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154318.877758523@infradead.org
2022-03-15  Merge branch 'arm64/for-next/linkage'  Peter Zijlstra  (3 files, -10/+10)
Enjoy the cleanups and avoid conflicts vs linkage Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2022-02-25  x86: remove __range_not_ok()  Arnd Bergmann  (1 file, -1/+1)
The __range_not_ok() helper is an x86 (and sparc64) specific interface that does roughly the same thing as __access_ok(), but with different calling conventions. Change this to use the normal interface for consistency as we clean up all access_ok() implementations. This changes the limit from TASK_SIZE to TASK_SIZE_MAX, which Al points out is the right thing to do here anyway. The callers have to use __access_ok() instead of the normal access_ok() though, because on x86 that contains a WARN_ON_IN_IRQ() check that cannot be used inside NMI context while tracing. The check in copy_code() is no longer needed, because it is already done by copy_from_user_nmi(). Suggested-by: Al Viro <viro@zeniv.linux.org.uk> Suggested-by: Christoph Hellwig <hch@infradead.org> Link: https://lore.kernel.org/lkml/YgsUKcXGR7r4nINj@zeniv-ca.linux.org.uk/ Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-22  x86: clean up symbol aliasing  Mark Rutland  (3 files, -10/+10)
Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those to simplify the definition of function aliases across arch/x86. For clarity, where there are multiple annotations such as EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For example, where a function has a name and an alias which are both exported, this is organised as: SYM_FUNC_START(func) ... asm insns ... SYM_FUNC_END(func) EXPORT_SYMBOL(func) SYM_FUNC_ALIAS(alias, func) EXPORT_SYMBOL(alias) Where there are only aliases and no exports or other annotations, I have not bothered with line spacing, e.g. SYM_FUNC_START(func) ... asm insns ... SYM_FUNC_END(func) SYM_FUNC_ALIAS(alias, func) The tools/perf/ copies of memset_64.S and memset_64.S are updated likewise to avoid the build system complaining these are mismatched: | Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S' | diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S | Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S' | diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Mark Brown <broonie@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220216162229.1076788-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-02-21  x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE  Peter Zijlstra (Intel)  (1 file, -1/+1)
The RETPOLINE_AMD name is unfortunate since it isn't necessarily AMD only, in fact Hygon also uses it. Furthermore it will likely be sufficient for some Intel processors. Therefore rename the thing to RETPOLINE_LFENCE to better describe what it is. Add the spectre_v2=retpoline,lfence option as an alias to spectre_v2=retpoline,amd to preserve existing setups. However, the output of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed. [ bp: Fix typos, massage. ] Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
2022-02-13  fortify: Detect struct member overflows in memmove() at compile-time  Kees Cook  (1 file, -0/+1)
As done for memcpy(), also update memmove() to use the same tightened compile-time checks under CONFIG_FORTIFY_SOURCE. Signed-off-by: Kees Cook <keescook@chromium.org>
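As a hedged illustration of the kind of bug the tightened check catches (the struct and function below are invented for the example, not taken from the patch):

#include <linux/string.h>
#include <linux/types.h>

struct pkt {
        u8 hdr[4];
        u8 body[60];
};

static void copy_header(struct pkt *p, const void *src)
{
        /* 8 > sizeof(p->hdr): with CONFIG_FORTIFY_SOURCE this memmove()
         * is now rejected at compile time instead of silently spilling
         * into p->body. */
        memmove(p->hdr, src, 8);
}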
2022-01-30  uml/x86: use x86 load_unaligned_zeropad()  Al Viro  (1 file, -26/+0)
allows, among other things, to drop !DCACHE_WORD_ACCESS mess in x86 csum-partial_64.c Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2022-01-23  x86/insn: Add AVX512-FP16 instructions to the x86 instruction decoder  Adrian Hunter  (1 file, -8/+87)
The x86 instruction decoder is used for both kernel instructions and user space instructions (e.g. uprobes, perf tools Intel PT), so it is good to update it with new instructions. Add AVX512-FP16 instructions to x86 instruction decoder. Note the EVEX map field is extended by 1 bit, and most instructions are in map 5 and map 6. Reference: Intel AVX512-FP16 Architecture Specification June 2021 Revision 1.0 Document Number: 347407-001US Example using perf tools' x86 instruction decoder test: $ perf test -v "x86 instruction decoder" |& grep vfcmaddcph | head -2 Decoded ok: 62 f6 6f 48 56 cb vfcmaddcph %zmm3,%zmm2,%zmm1 Decoded ok: 62 f6 6f 48 56 8c c8 78 56 34 12 vfcmaddcph 0x12345678(%eax,%ecx,8),%zmm2,%zmm1 Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20211202095029.2165714-7-adrian.hunter@intel.com
2022-01-23  x86/insn: Add misc instructions to x86 instruction decoder  Adrian Hunter  (1 file, -3/+3)
x86 instruction decoder is used for both kernel instructions and user space instructions (e.g. uprobes, perf tools Intel PT), so it is good to update it with new instructions. Add instructions to x86 instruction decoder: User Interrupt clui senduipi stui testui uiret Prediction history reset hreset Serialize instruction execution serialize TSX suspend load address tracking xresldtrk xsusldtrk Reference: Intel Architecture Instruction Set Extensions and Future Features Programming Reference May 2021 Document Number: 319433-044 Example using perf tools' x86 instruction decoder test: $ perf test -v "x86 instruction decoder" |& grep -i hreset Decoded ok: f3 0f 3a f0 c0 00 hreset $0x0 Decoded ok: f3 0f 3a f0 c0 00 hreset $0x0 Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20211202095029.2165714-5-adrian.hunter@intel.com
2022-01-23  x86/insn: Add AMX instructions to the x86 instruction decoder  Adrian Hunter  (1 file, -2/+8)
The x86 instruction decoder is used for both kernel instructions and user space instructions (e.g. uprobes, perf tools Intel PT), so it is good to update it with new instructions. Add AMX instructions to the x86 instruction decoder. Reference: Intel Architecture Instruction Set Extensions and Future Features Programming Reference May 2021 Document Number: 319433-044 Example using perf tools' x86 instruction decoder test: $ INSN='ldtilecfg\|sttilecfg\|tdpbf16ps\|tdpbssd\|' $ INSN+='tdpbsud\|tdpbusd\|'tdpbuud\|tileloadd\|' $ INSN+='tileloaddt1\|tilerelease\|tilestored\|tilezero' $ perf test -v "x86 instruction decoder" |& grep -i $INSN Decoded ok: c4 e2 78 49 04 c8 ldtilecfg (%rax,%rcx,8) Decoded ok: c4 c2 78 49 04 c8 ldtilecfg (%r8,%rcx,8) Decoded ok: c4 e2 79 49 04 c8 sttilecfg (%rax,%rcx,8) Decoded ok: c4 c2 79 49 04 c8 sttilecfg (%r8,%rcx,8) Decoded ok: c4 e2 7a 5c d1 tdpbf16ps %tmm0,%tmm1,%tmm2 Decoded ok: c4 e2 7b 5e d1 tdpbssd %tmm0,%tmm1,%tmm2 Decoded ok: c4 e2 7a 5e d1 tdpbsud %tmm0,%tmm1,%tmm2 Decoded ok: c4 e2 79 5e d1 tdpbusd %tmm0,%tmm1,%tmm2 Decoded ok: c4 e2 78 5e d1 tdpbuud %tmm0,%tmm1,%tmm2 Decoded ok: c4 e2 7b 4b 0c c8 tileloadd (%rax,%rcx,8),%tmm1 Decoded ok: c4 c2 7b 4b 14 c8 tileloadd (%r8,%rcx,8),%tmm2 Decoded ok: c4 e2 79 4b 0c c8 tileloaddt1 (%rax,%rcx,8),%tmm1 Decoded ok: c4 c2 79 4b 14 c8 tileloaddt1 (%r8,%rcx,8),%tmm2 Decoded ok: c4 e2 78 49 c0 tilerelease Decoded ok: c4 e2 7a 4b 0c c8 tilestored %tmm1,(%rax,%rcx,8) Decoded ok: c4 c2 7a 4b 14 c8 tilestored %tmm2,(%r8,%rcx,8) Decoded ok: c4 e2 7b 49 c0 tilezero %tmm0 Decoded ok: c4 e2 7b 49 f8 tilezero %tmm7 Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20211202095029.2165714-3-adrian.hunter@intel.com
2022-01-12  Merge tag 'x86_core_for_v5.17_rc1' of ↵  Linus Torvalds  (27 files, -705/+295)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 core updates from Borislav Petkov: - Get rid of all the .fixup sections because this generates misleading/wrong stacktraces and confuse RELIABLE_STACKTRACE and LIVEPATCH as the backtrace misses the function which is being fixed up. - Add Straight Line Speculation mitigation support which uses a new compiler switch -mharden-sls= which sticks an INT3 after a RET or an indirect branch in order to block speculation after them. Reportedly, CPUs do speculate behind such insns. - The usual set of cleanups and improvements * tag 'x86_core_for_v5.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (32 commits) x86/entry_32: Fix segment exceptions objtool: Remove .fixup handling x86: Remove .fixup section x86/word-at-a-time: Remove .fixup usage x86/usercopy: Remove .fixup usage x86/usercopy_32: Simplify __copy_user_intel_nocache() x86/sgx: Remove .fixup usage x86/checksum_32: Remove .fixup usage x86/vmx: Remove .fixup usage x86/kvm: Remove .fixup usage x86/segment: Remove .fixup usage x86/fpu: Remove .fixup usage x86/xen: Remove .fixup usage x86/uaccess: Remove .fixup usage x86/futex: Remove .fixup usage x86/msr: Remove .fixup usage x86/extable: Extend extable functionality x86/entry_32: Remove .fixup usage x86/entry_64: Remove .fixup usage x86/copy_mc_64: Remove .fixup usage ...
2022-01-12  x86/entry_32: Fix segment exceptions  Peter Zijlstra  (1 file, -0/+5)
The LKP robot reported that commit in Fixes: caused a failure. Turns out the ldt_gdt_32 selftest turns into an infinite loop trying to clear the segment. As discovered by Sean, what happens is that PARANOID_EXIT_TO_KERNEL_MODE in the handle_exception_return path overwrites the entry stack data with the task stack data, restoring the "bad" segment value. Instead of having the exception retry the instruction, have it emulate the full instruction. Replace EX_TYPE_POP_ZERO with EX_TYPE_POP_REG which will do the equivalent of: POP %reg; MOV $imm, %reg. In order to encode the segment registers, add them as registers 8-11 for 32-bit. By setting regs->[defg]s the (nested) RESTORE_REGS will pop this value at the end of the exception handler and by increasing regs->sp, it will have skipped the stack slot. This was debugged by Sean Christopherson <seanjc@google.com>. [ bp: Add EX_REG_GS too. ] Fixes: aa93e2ad7464 ("x86/entry_32: Remove .fixup usage") Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/Yd1l0gInc4zRcnt/@hirez.programming.kicks-ass.net
2022-01-10  Merge tag 'ras_core_for_v5.17_rc1' of ↵  Linus Torvalds  (1 file, -0/+9)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull RAS updates from Borislav Petkov: "A relatively big amount of movements in RAS-land this time around: - First part of a series to move the AMD address translation code from arch/x86/ to amd64_edac as that is its only user anyway - Some MCE error injection improvements to the AMD side - Reorganization of the #MC handler code and the facilities it calls to make it noinstr-safe - Add support for new AMD MCA bank types and non-uniform banks layout - The usual set of cleanups and fixes" * tag 'ras_core_for_v5.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits) x86/mce: Reduce number of machine checks taken during recovery x86/mce/inject: Avoid out-of-bounds write when setting flags x86/MCE/AMD, EDAC/mce_amd: Support non-uniform MCA bank type enumeration x86/MCE/AMD, EDAC/mce_amd: Add new SMCA bank types x86/mce: Check regs before accessing it x86/mce: Mark mce_start() noinstr x86/mce: Mark mce_timed_out() noinstr x86/mce: Move the tainting outside of the noinstr region x86/mce: Mark mce_read_aux() noinstr x86/mce: Mark mce_end() noinstr x86/mce: Mark mce_panic() noinstr x86/mce: Prevent severity computation from being instrumented x86/mce: Allow instrumentation during task work queueing x86/mce: Remove noinstr annotation from mce_setup() x86/mce: Use mce_rdmsrl() in severity checking code x86/mce: Remove function-local cpus variables x86/mce: Do not use memset to clear the banks bitmaps x86/mce/inject: Set the valid bit in MCA_STATUS before error injection x86/mce/inject: Check if a bank is populated before injecting x86/mce: Get rid of cpu_missing ...
2022-01-10  Merge tag 'x86_cpu_for_v5.17_rc1' of ↵  Linus Torvalds  (1 file, -2/+2)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cpuid updates from Borislav Petkov: - Enable the short string copies for CPUs which support them, in copy_user_enhanced_fast_string() - Avoid writing MSR_CSTAR on Intel due to TDX guests raising a #VE trap * tag 'x86_cpu_for_v5.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/lib: Add fast-short-rep-movs check to copy_user_enhanced_fast_string() x86/cpu: Don't write CSTAR MSR on Intel CPUs
2021-12-31  x86/mce: Reduce number of machine checks taken during recovery  Youquan Song  (1 file, -0/+9)
When any of the copy functions in arch/x86/lib/copy_user_64.S take a fault, the fixup code copies the remaining byte count from %ecx to %edx and unconditionally jumps to .Lcopy_user_handle_tail to continue the copy in case any more bytes can be copied. If the fault was #PF this may copy more bytes (because the page fault handler might have fixed the fault). But when the fault is a machine check the original copy code will have copied all the way to the poisoned cache line. So .Lcopy_user_handle_tail will just take another machine check for no good reason. Every code path to .Lcopy_user_handle_tail comes from an exception fixup path, so add a check there for the trap type (in %eax) and simply return the count of remaining bytes if the trap was a machine check. Doing this reduces the number of machine checks taken during synthetic tests from four to three. As well as reducing the number of machine checks, this also allows Skylake generation Xeons to recover some cases that currently fail. This is because REP; MOVSB is only recoverable when source and destination are well aligned and the byte count is large. That useless call to .Lcopy_user_handle_tail may violate one or more of these conditions and generate a fatal machine check. [ Tony: Add more details to commit message. ] [ bp: Fixup comment. Also, another tip patchset which is adding straight-line speculation mitigation changes the "ret" instruction to an all-caps macro "RET". But, since gas is case-insensitive, use "RET" in the newly added asm block already in order to simplify tip branch merging on its way upstream. ] Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/YcTW5dh8yTGucDd+@agluck-desk2.amr.corp.intel.com
2021-12-29  x86/lib: Add fast-short-rep-movs check to copy_user_enhanced_fast_string()  Tony Luck  (1 file, -2/+2)
Commit f444a5ff95dc ("x86/cpufeatures: Add support for fast short REP; MOVSB") fixed memmove() with an ALTERNATIVE that will use REP MOVSB for all string lengths. copy_user_enhanced_fast_string() has a similar run time check to avoid using REP MOVSB for copies less than 64 bytes. Add an ALTERNATIVE to patch out the short length check and always use REP MOVSB on X86_FEATURE_FSRM CPUs. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211216172431.1396371-1-tony.luck@intel.com
2021-12-11  x86/usercopy: Remove .fixup usage  Peter Zijlstra  (2 files, -28/+8)
Typically usercopy does whole word copies followed by a number of byte copies to finish the tail. This means that on exception it needs to compute the remaining length as: words*sizeof(long) + bytes. Create a new extable handler to do just this. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20211110101326.081701085@infradead.org
2021-12-11  x86/usercopy_32: Simplify __copy_user_intel_nocache()  Peter Zijlstra  (1 file, -20/+20)
Having an exception jump to a .fixup only to immediately jump out again is daft; jump to the right place in one go. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20211110101326.021517780@infradead.org
2021-12-11  x86/checksum_32: Remove .fixup usage  Peter Zijlstra  (1 file, -16/+3)
Simply add EX_FLAG_CLEAR_AX to do as the .fixup used to do. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20211110101325.899657959@infradead.org
2021-12-11  x86/extable: Extend extable functionality  Peter Zijlstra  (1 file, -24/+42)
In order to remove further .fixup usage, extend the extable infrastructure to take additional information from the extable entry sites. Specifically add _ASM_EXTABLE_TYPE_REG() and EX_TYPE_IMM_REG that extend the existing _ASM_EXTABLE_TYPE() by taking an additional register argument and encoding that and an s16 immediate into the existing s32 type field. This limits the actual types to the first byte; 255 seems plenty. Also add a few flags into the type word, specifically CLEAR_AX and CLEAR_DX which clear the return and extended return register. Notes: - due to the % in our register names it's hard to make it more generally usable as arm64 did. - the s16 is far larger than used in these patches, future extensions can easily shrink this to get more bits. - without the bitfield fix this will not compile, because: 0xFF > -1 and we can't even extract the TYPE field. [nathanchance: Build fix for clang-lto builds: https://lkml.kernel.org/r/20211210234953.3420108-1-nathan@kernel.org ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20211110101325.303890153@infradead.org
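Roughly, the encoding described above can be pictured as follows; the field layout follows the commit text (type in the low byte, a register field, flags, and a signed 16-bit immediate), but the exact macro names and bit positions in the tree should be treated as assumptions:

/* Illustrative split of the 32-bit extable data word. */
#define EX_DATA_TYPE_MASK       0x000000FF      /* fixup type, first byte */
#define EX_DATA_REG_SHIFT       8               /* register operand */
#define EX_DATA_FLAG_SHIFT      12              /* flags, e.g. CLEAR_AX / CLEAR_DX */
#define EX_DATA_IMM_SHIFT       16              /* signed 16-bit immediate */

#define EX_DATA_REG(reg)        ((reg) << EX_DATA_REG_SHIFT)
#define EX_DATA_IMM(imm)        ((imm) << EX_DATA_IMM_SHIFT)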
2021-12-11  x86/copy_mc_64: Remove .fixup usage  Peter Zijlstra  (1 file, -8/+4)
Place the anonymous .fixup code at the tail of the regular functions. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211110101325.127055887@infradead.org
2021-12-11  x86/copy_user_64: Remove .fixup usage  Peter Zijlstra  (1 file, -21/+11)
Place the anonymous .fixup code at the tail of the regular functions. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211110101325.068505810@infradead.org
2021-12-11  x86/mmx_32: Remove X86_USE_3DNOW  Peter Zijlstra  (4 files, -394/+0)
This code puts an exception table entry on the PREFETCH instruction to overwrite it with a JMP.d8 when it triggers an exception. Except of course, our code is no longer writable, also SMP. Instead of fixing this broken mess, simply take it out. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/YZKQzUmeNuwyvZpk@hirez.programming.kicks-ass.net
2021-12-09  x86: Add straight-line-speculation mitigation  Peter Zijlstra  (2 files, -2/+2)
Make use of an upcoming GCC feature to mitigate straight-line-speculation for x86: https://gcc.gnu.org/g:53a643f8568067d7700a9f2facc8ba39974973d3 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102952 https://bugs.llvm.org/show_bug.cgi?id=52323 It's built tested on x86_64-allyesconfig using GCC-12 and GCC-11. Maintenance overhead of this should be fairly low due to objtool validation. Size overhead of all these additional int3 instructions comes to: text data bss dec hex filename 22267751 6933356 2011368 31212475 1dc43bb defconfig-build/vmlinux 22804126 6933356 1470696 31208178 1dc32f2 defconfig-build/vmlinux.sls Or roughly 2.4% additional text. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211204134908.140103474@infradead.org
2021-12-08  x86: Prepare inline-asm for straight-line-speculation  Peter Zijlstra  (1 file, -1/+2)
Replace all ret/retq instructions with ASM_RET in preparation of making it more than a single instruction. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211204134907.964635458@infradead.org
2021-12-08  x86: Prepare asm files for straight-line-speculation  Peter Zijlstra  (19 files, -63/+63)
Replace all ret/retq instructions with RET in preparation of making RET a macro. Since AS is case insensitive it's a big no-op without RET defined. find arch/x86/ -name \*.S | while read file do sed -i 's/\<ret[q]*\>/RET/' $file done Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211204134907.905503893@infradead.org
2021-12-08  x86/lib/atomic64_386_32: Rename things  Peter Zijlstra  (1 file, -38/+46)
Principally, in order to get rid of #define RET in this code to make place for a new RET, but also to clarify the code, rename a bunch of things: s/UNLOCK/IRQ_RESTORE/ s/LOCK/IRQ_SAVE/ s/BEGIN/BEGIN_IRQ_SAVE/ s/\<RET\>/RET_IRQ_RESTORE/ s/RET_ENDP/\tRET_IRQ_RESTORE\rENDP/ which then leaves RET unused so it can be removed. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211204134907.841623970@infradead.org
2021-12-08  x86/csum: Rewrite/optimize csum_partial()  Eric Dumazet  (1 file, -92/+91)
With more NICs supporting CHECKSUM_COMPLETE, and IPv6 being widely used csum_partial() is heavily used with small amount of bytes, and is consuming many cycles. IPv6 header size, for instance, is 40 bytes. Another thing to consider is that NET_IP_ALIGN is 0 on x86, meaning that network headers are not word-aligned, unless the driver forces this. This means that csum_partial() fetches one u16 to 'align the buffer', then performs three u64 additions with carry in a loop, then a remaining u32, then a remaining u16. With this new version, it performs a loop only for the 64 bytes blocks, then the remaining is bisected. Testing on various CPUs, all of them show a big reduction in csum_partial() cost (by 50 to 80 %) Before: 4.16% [kernel] [k] csum_partial After: 0.83% [kernel] [k] csum_partial If run in a loop 1,000,000 times: Before: 26,922,913 cycles # 3846130.429 GHz 80,302,961 instructions # 2.98 insn per cycle 21,059,816 branches # 3008545142.857 M/sec 2,896 branch-misses # 0.01% of all branches After: 17,960,709 cycles # 3592141.800 GHz 41,292,805 instructions # 2.30 insn per cycle 11,058,119 branches # 2211623800.000 M/sec 2,997 branch-misses # 0.03% of all branches [ bp: Massage, merge in subsequent fixes into a single patch: - um compilation error due to missing load_unaligned_zeropad(): - Reported-by: kernel test robot <lkp@intel.com> - Link: https://lkml.kernel.org/r/20211118175239.1525650-1-eric.dumazet@gmail.com - Fix initial seed for odd buffers - Reported-by: Noah Goldstein <goldstein.w.n@gmail.com> - Link: https://lkml.kernel.org/r/20211125141817.3541501-1-eric.dumazet@gmail.com ] Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20211112161950.528886-1-eric.dumazet@gmail.com
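The heavy lifting is done in assembly with ADCQ, but the accumulation step can be sketched in plain C as an explanatory model (this is not the kernel's implementation): 64-bit chunks are added with an end-around carry so the one's-complement sum stays correct.

#include <linux/types.h>

/* Fold the carry out of a 64-bit add back into the sum. */
static inline u64 add64_with_carry(u64 sum, u64 val)
{
        unsigned __int128 tmp = (unsigned __int128)sum + val;

        return (u64)tmp + (u64)(tmp >> 64);
}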
2021-11-30  x86/insn-eval: Introduce insn_decode_mmio()  Kirill A. Shutemov  (1 file, -0/+84)
In preparation for sharing MMIO instruction decode between SEV-ES and TDX, factor out the common decode into a new insn_decode_mmio() helper. For a regular virtual machine, MMIO is handled by the VMM and KVM emulates the instructions that caused MMIO. But this model doesn't work for secure VMs (like SEV or TDX), as the VMM doesn't have access to the guest memory and register state. So, for TDX or SEV, the VMM needs assistance in handling MMIO: it induces an exception in the guest, and the guest has to decode the instruction and handle it on its own. The code is based on the current SEV MMIO implementation. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Tested-by: Joerg Roedel <jroedel@suse.de> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20211130184933.31005-4-kirill.shutemov@linux.intel.com
2021-11-30  x86/insn-eval: Introduce insn_get_modrm_reg_ptr()  Kirill A. Shutemov  (1 file, -0/+20)
The helper returns a pointer to the register indicated by ModRM byte. It's going to replace vc_insn_get_reg() in the SEV MMIO implementation. TDX MMIO implementation will also use it. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Tested-by: Joerg Roedel <jroedel@suse.de> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20211130184933.31005-3-kirill.shutemov@linux.intel.com
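A hedged sketch of how the two helpers introduced in this and the previous entry fit together in an MMIO exception handler; the enum and function names follow the posted series and could differ slightly in the merged tree:

#include <asm/insn.h>
#include <asm/insn-eval.h>
#include <asm/ptrace.h>
#include <linux/io.h>

static bool emulate_mmio_write(struct insn *insn, struct pt_regs *regs,
                               void __iomem *target)
{
        unsigned long *reg;
        int bytes = 0;

        /* Classify the instruction and get the access size. */
        if (insn_decode_mmio(insn, &bytes) != MMIO_WRITE)
                return false;

        /* Resolve the source register from the ModRM byte. */
        reg = insn_get_modrm_reg_ptr(insn, regs);
        if (!reg)
                return false;

        memcpy_toio(target, reg, bytes);
        return true;
}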
2021-11-30  x86/insn-eval: Handle insn_get_opcode() failure  Kirill A. Shutemov  (1 file, -2/+3)
is_string_insn() calls insn_get_opcode(), which can fail, but it does not handle the failure. The is_string_insn() interface does not allow an error to be communicated to the caller. Push insn_get_opcode() to the only non-static user of is_string_insn() and fail early there if insn_get_opcode() fails. [ dhansen: fix tabs-versus-spaces breakage ] Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Tested-by: Joerg Roedel <jroedel@suse.de> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20211130184933.31005-2-kirill.shutemov@linux.intel.com
2021-11-02  Merge tag 'x86_core_for_v5.16_rc1' of ↵  Linus Torvalds  (2 files, -7/+13)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 core updates from Borislav Petkov: - Do not #GP on userspace use of CLI/STI but pretend it was a NOP to keep old userspace from breaking. Adjust the corresponding iopl selftest to that. - Improve stack overflow warnings to say which stack got overflowed and raise the exception stack sizes to 2 pages since overflowing the single page of exception stack is very easy to do nowadays with all the tracing machinery enabled. With that, rip out the custom mapping of AMD SEV's too. - A bunch of changes in preparation for FGKASLR like supporting more than 64K section headers in the relocs tool, correct ORC lookup table size to cover the whole kernel .text and other adjustments. * tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: selftests/x86/iopl: Adjust to the faked iopl CLI/STI usage vmlinux.lds.h: Have ORC lookup cover entire _etext - _stext x86/boot/compressed: Avoid duplicate malloc() implementations x86/boot: Allow a "silent" kaslr random byte fetch x86/tools/relocs: Support >64K section headers x86/sev: Make the #VC exception stacks part of the default stacks storage x86: Increase exception stack sizes x86/mm/64: Improve stack overflow warnings x86/iopl: Fake iopl(3) CLI/STI usage
2021-11-01  Merge tag 'overflow-v5.16-rc1' of ↵  Linus Torvalds  (1 file, -0/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux Pull overflow updates from Kees Cook: "The end goal of the current buffer overflow detection work[0] is to gain full compile-time and run-time coverage of all detectable buffer overflows seen via array indexing or memcpy(), memmove(), and memset(). The str*() family of functions already have full coverage. While much of the work for these changes have been on-going for many releases (i.e. 0-element and 1-element array replacements, as well as avoiding false positives and fixing discovered overflows[1]), this series contains the foundational elements of several related buffer overflow detection improvements by providing new common helpers and FORTIFY_SOURCE changes needed to gain the introspection required for compiler visibility into array sizes. Also included are a handful of already Acked instances using the helpers (or related clean-ups), with many more waiting at the ready to be taken via subsystem-specific trees[2]. The new helpers are: - struct_group() for gaining struct member range introspection - memset_after() and memset_startat() for clearing to the end of structures - DECLARE_FLEX_ARRAY() for using flex arrays in unions or alone in structs Also included is the beginning of the refactoring of FORTIFY_SOURCE to support memcpy() introspection, fix missing and regressed coverage under GCC, and to prepare to fix the currently broken Clang support. Finishing this work is part of the larger series[0], but depends on all the false positives and buffer overflow bug fixes to have landed already and those that depend on this series to land. As part of the FORTIFY_SOURCE refactoring, a set of both a compile-time and run-time tests are added for FORTIFY_SOURCE and the mem*()-family functions respectively. The compile time tests have found a legitimate (though corner-case) bug[6] already. Please note that the appearance of "panic" and "BUG" in the FORTIFY_SOURCE refactoring are the result of relocating existing code, and no new use of those code-paths are expected nor desired. Finally, there are two tree-wide conversions for 0-element arrays and flexible array unions to gain sane compiler introspection coverage that result in no known object code differences. After this series (and the changes that have now landed via netdev and usb), we are very close to finally being able to build with -Warray-bounds and -Wzero-length-bounds. However, due corner cases in GCC[3] and Clang[4], I have not included the last two patches that turn on these options, as I don't want to introduce any known warnings to the build. 
Hopefully these can be solved soon" Link: https://lore.kernel.org/lkml/20210818060533.3569517-1-keescook@chromium.org/ [0] Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=grep&q=FORTIFY_SOURCE [1] Link: https://lore.kernel.org/lkml/202108220107.3E26FE6C9C@keescook/ [2] Link: https://lore.kernel.org/lkml/3ab153ec-2798-da4c-f7b1-81b0ac8b0c5b@roeck-us.net/ [3] Link: https://bugs.llvm.org/show_bug.cgi?id=51682 [4] Link: https://lore.kernel.org/lkml/202109051257.29B29745C0@keescook/ [5] Link: https://lore.kernel.org/lkml/20211020200039.170424-1-keescook@chromium.org/ [6] * tag 'overflow-v5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (30 commits) fortify: strlen: Avoid shadowing previous locals compiler-gcc.h: Define __SANITIZE_ADDRESS__ under hwaddress sanitizer treewide: Replace 0-element memcpy() destinations with flexible arrays treewide: Replace open-coded flex arrays in unions stddef: Introduce DECLARE_FLEX_ARRAY() helper btrfs: Use memset_startat() to clear end of struct string.h: Introduce memset_startat() for wiping trailing members and padding xfrm: Use memset_after() to clear padding string.h: Introduce memset_after() for wiping trailing members/padding lib: Introduce CONFIG_MEMCPY_KUNIT_TEST fortify: Add compile-time FORTIFY_SOURCE tests fortify: Allow strlen() and strnlen() to pass compile-time known lengths fortify: Prepare to improve strnlen() and strlen() warnings fortify: Fix dropped strcpy() compile-time write overflow check fortify: Explicitly disable Clang support fortify: Move remaining fortify helpers into fortify-string.h lib/string: Move helper functions out of string.c compiler_types.h: Remove __compiletime_object_size() cm4000_cs: Use struct_group() to zero struct cm4000_dev region can: flexcan: Use struct_group() to zero struct flexcan_regs regions ...
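A brief illustration of three of the helpers called out above — struct_group(), memset_after() and memset_startat(); the struct is invented for the example:

#include <linux/stddef.h>
#include <linux/string.h>
#include <linux/types.h>

struct report {
        u32 type;
        u32 len;
        u8  pad[2];
        struct_group(payload,           /* members addressable as one region */
                u64 addr;
                u64 size;
        );
};

static void reset_report(struct report *r)
{
        /* Zero everything after 'len', trailing padding included ... */
        memset_after(r, 0, len);
        /* ... which here is the same as zeroing from 'pad' onward. */
        memset_startat(r, 0, pad);
}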
2021-11-01  Merge tag 'x86_misc_for_v5.16_rc1' of ↵  Linus Torvalds  (1 file, -2/+3)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull misc x86 changes from Borislav Petkov: - Use the proper interface for the job: get_unaligned() instead of memcpy() in the insn decoder - A randconfig build fix * tag 'x86_misc_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/insn: Use get_unaligned() instead of memcpy() x86/Kconfig: Fix an unused variable error in dell-smm-hwmon
2021-11-01  Merge tag 'ras_core_for_v5.16_rc1' of ↵  Linus Torvalds  (1 file, -13/+0)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull RAS updates from Borislav Petkov: - Get rid of a bunch of function pointers used in MCA land in favor of normal functions. This is in preparation of making the MCA code noinstr-aware - When the kernel copies data from user addresses and it encounters a machine check, a SIGBUS is sent to that process. Change this action to either an -EFAULT which is returned to the user or a short write, making the recovery action a lot more user-friendly * tag 'ras_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mce: Sort mca_config members to get rid of unnecessary padding x86/mce: Get rid of the ->quirk_no_way_out() indirect call x86/mce: Get rid of msr_ops x86/mce: Get rid of machine_check_vector x86/mce: Get rid of the mce_severity function pointer x86/mce: Drop copyin special case for #MC x86/mce: Change to not send SIGBUS error during copy from user
2021-11-01  Merge tag 'x86-fpu-2021-11-01' of ↵  Linus Torvalds  (1 file, -4/+4)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fpu updates from Thomas Gleixner: - Cleanup of extable fixup handling to be more robust, which in turn allows to make the FPU exception fixups more robust as well. - Change the return code for signal frame related failures from explicit error codes to a boolean fail/success as that's all what the calling code evaluates. - A large refactoring of the FPU code to prepare for adding AMX support: - Distangle the public header maze and remove especially the misnomed kitchen sink internal.h which is despite it's name included all over the place. - Add a proper abstraction for the register buffer storage (struct fpstate) which allows to dynamically size the buffer at runtime by flipping the pointer to the buffer container from the default container which is embedded in task_struct::tread::fpu to a dynamically allocated container with a larger register buffer. - Convert the code over to the new fpstate mechanism. - Consolidate the KVM FPU handling by moving the FPU related code into the FPU core which removes the number of exports and avoids adding even more export when AMX has to be supported in KVM. This also removes duplicated code which was of course unnecessary different and incomplete in the KVM copy. - Simplify the KVM FPU buffer handling by utilizing the new fpstate container and just switching the buffer pointer from the user space buffer to the KVM guest buffer when entering vcpu_run() and flipping it back when leaving the function. This cuts the memory requirements of a vCPU for FPU buffers in half and avoids pointless memory copy operations. This also solves the so far unresolved problem of adding AMX support because the current FPU buffer handling of KVM inflicted a circular dependency between adding AMX support to the core and to KVM. With the new scheme of switching fpstate AMX support can be added to the core code without affecting KVM. - Replace various variables with proper data structures so the extra information required for adding dynamically enabled FPU features (AMX) can be added in one place - Add AMX (Advanced Matrix eXtensions) support (finally): AMX is a large XSTATE component which is going to be available with Saphire Rapids XEON CPUs. The feature comes with an extra MSR (MSR_XFD) which allows to trap the (first) use of an AMX related instruction, which has two benefits: 1) It allows the kernel to control access to the feature 2) It allows the kernel to dynamically allocate the large register state buffer instead of burdening every task with the the extra 8K or larger state storage. It would have been great to gain this kind of control already with AVX512. The support comes with the following infrastructure components: 1) arch_prctl() to - read the supported features (equivalent to XGETBV(0)) - read the permitted features for a task - request permission for a dynamically enabled feature Permission is granted per process, inherited on fork() and cleared on exec(). The permission policy of the kernel is restricted to sigaltstack size validation, but the syscall obviously allows further restrictions via seccomp etc. 2) A stronger sigaltstack size validation for sys_sigaltstack(2) which takes granted permissions and the potentially resulting larger signal frame into account. This mechanism can also be used to enforce factual sigaltstack validation independent of dynamic features to help with finding potential victims of the 2K sigaltstack size constant which is broken since AVX512 support was added. 
3) Exception handling for #NM traps to catch first use of a extended feature via a new cause MSR. If the exception was caused by the use of such a feature, the handler checks permission for that feature. If permission has not been granted, the handler sends a SIGILL like the #UD handler would do if the feature would have been disabled in XCR0. If permission has been granted, then a new fpstate which fits the larger buffer requirement is allocated. In the unlikely case that this allocation fails, the handler sends SIGSEGV to the task. That's not elegant, but unavoidable as the other discussed options of preallocation or full per task permissions come with their own set of horrors for kernel and/or userspace. So this is the lesser of the evils and SIGSEGV caused by unexpected memory allocation failures is not a fundamentally new concept either. When allocation succeeds, the fpstate properties are filled in to reflect the extended feature set and the resulting sizes, the fpu::fpstate pointer is updated accordingly and the trap is disarmed for this task permanently. 4) Enumeration and size calculations 5) Trap switching via MSR_XFD The XFD (eXtended Feature Disable) MSR is context switched with the same life time rules as the FPU register state itself. The mechanism is keyed off with a static key which is default disabled so !AMX equipped CPUs have zero overhead. On AMX enabled CPUs the overhead is limited by comparing the tasks XFD value with a per CPU shadow variable to avoid redundant MSR writes. In case of switching from a AMX using task to a non AMX using task or vice versa, the extra MSR write is obviously inevitable. All other places which need to be aware of the variable feature sets and resulting variable sizes are not affected at all because they retrieve the information (feature set, sizes) unconditonally from the fpstate properties. 6) Enable the new AMX states Note, this is relatively new code despite the fact that AMX support is in the works for more than a year now. The big refactoring of the FPU code, which allowed to do a proper integration has been started exactly 3 weeks ago. Refactoring of the existing FPU code and of the original AMX patches took a week and has been subject to extensive review and testing. The only fallout which has not been caught in review and testing right away was restricted to AMX enabled systems, which is completely irrelevant for anyone outside Intel and their early access program. There might be dragons lurking as usual, but so far the fine grained refactoring has held up and eventual yet undetected fallout is bisectable and should be easily addressable before the 5.16 release. Famous last words... 
Many thanks to Chang Bae and Dave Hansen for working hard on this and also to the various test teams at Intel who reserved extra capacity to follow the rapid development of this closely which provides the confidence level required to offer this rather large update for inclusion into 5.16-rc1 * tag 'x86-fpu-2021-11-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (110 commits) Documentation/x86: Add documentation for using dynamic XSTATE features x86/fpu: Include vmalloc.h for vzalloc() selftests/x86/amx: Add context switch test selftests/x86/amx: Add test cases for AMX state management x86/fpu/amx: Enable the AMX feature in 64-bit mode x86/fpu: Add XFD handling for dynamic states x86/fpu: Calculate the default sizes independently x86/fpu/amx: Define AMX state components and have it used for boot-time checks x86/fpu/xstate: Prepare XSAVE feature table for gaps in state component numbers x86/fpu/xstate: Add fpstate_realloc()/free() x86/fpu/xstate: Add XFD #NM handler x86/fpu: Update XFD state where required x86/fpu: Add sanity checks for XFD x86/fpu: Add XFD state to fpstate x86/msr-index: Add MSRs for XFD x86/cpufeatures: Add eXtended Feature Disabling (XFD) feature bit x86/fpu: Reset permission and fpstate on exec() x86/fpu: Prepare fpu_clone() for dynamically enabled features x86/fpu/signal: Prepare for variable sigframe length x86/signal: Use fpu::__state_user_size for sigalt stack validation ...
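A small userspace sketch of the arch_prctl() permission request described in point 1 above; the constants are the ones added by this series (ARCH_REQ_XCOMP_PERM, and XFEATURE_XTILEDATA = 18 for the dynamically enabled AMX tile data), but treat the literal values as assumptions when building against older headers:

#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_REQ_XCOMP_PERM
#define ARCH_REQ_XCOMP_PERM     0x1023
#endif
#define XFEATURE_XTILEDATA      18

/* Ask the kernel for permission to use AMX tile data; on success the
 * permission is per-process, inherited on fork() and cleared on exec(). */
static int request_amx_permission(void)
{
        return syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA);
}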
2021-10-28  x86/retpoline: Create a retpoline thunk array  Peter Zijlstra  (1 file, -5/+9)
Stick all the retpolines in a single symbol and have the individual thunks as inner labels; this should guarantee thunk order and layout. Previously there were 16 (or rather 15 without rsp) separate symbols and a toolchain might reasonably expect it could displace them however it liked, with disregard for their relative position. However, now they're part of a larger symbol. Any change to their relative position would disrupt this larger _array symbol and thus not be sound. This is the same reasoning used for data symbols. On their own there is no guarantee about their relative position with respect to one another, but we're still able to do arrays because an array as a whole is a single larger symbol. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Borislav Petkov <bp@suse.de> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20211026120310.169659320@infradead.org
2021-10-28  x86/asm: Fixup odd GEN-for-each-reg.h usage  Peter Zijlstra  (1 file, -2/+2)
Currently GEN-for-each-reg.h usage leaves GEN defined, relying on any subsequent usage to start with #undef, which is rude. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Borislav Petkov <bp@suse.de> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20211026120310.041792350@infradead.org
2021-10-28  x86/retpoline: Remove unused replacement symbols  Peter Zijlstra  (1 file, -42/+0)
Now that objtool no longer creates alternatives, these replacement symbols are no longer needed, remove them. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Borislav Petkov <bp@suse.de> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20211026120309.915051744@infradead.org
2021-10-27  x86/boot: Allow a "silent" kaslr random byte fetch  Kees Cook  (1 file, -6/+12)
Under earlyprintk, each RNG call produces a debug report line. To support the future FGKASLR feature, which will fetch random bytes during function shuffling, this is not useful information (each line is identical and tells us nothing new), needlessly spamming the console. Instead, allow for a NULL "purpose" to suppress the debug reporting. Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20211013175742.1197608-3-keescook@chromium.org
2021-10-16  Merge branch 'x86/urgent' into x86/fpu, to resolve a conflict  Ingo Molnar  (1 file, -2/+2)
Resolve the conflict between these commits: x86/fpu: 1193f408cd51 ("x86/fpu/signal: Change return type of __fpu_restore_sig() to boolean") x86/urgent: d298b03506d3 ("x86/fpu: Restore the masking out of reserved MXCSR bits") b2381acd3fd9 ("x86/fpu: Mask out the invalid MXCSR bits properly") Conflicts: arch/x86/kernel/fpu/signal.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-10-06  x86/insn: Use get_unaligned() instead of memcpy()  Borislav Petkov  (1 file, -2/+3)
Use get_unaligned() instead of memcpy() to access potentially unaligned memory, which, when accessed through a pointer, leads to undefined behavior. get_unaligned() describes much better what is happening there anyway even if memcpy() does the job. In addition, since perf tool builds with -Werror, it would fire with: util/intel-pt-decoder/../../../arch/x86/lib/insn.c: In function '__insn_get_emulate_prefix': tools/include/../include/asm-generic/unaligned.h:10:15: error: packed attribute is unnecessary [-Werror=packed] 10 | const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \ because -Werror=packed would complain if the packed attribute would have no effect on the layout of the structure. In this case, that is intentional so disable the warning only for that compilation unit. That part is Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> No functional changes. Fixes: 5ba1071f7554 ("x86/insn, tools/x86: Fix undefined behavior due to potential unaligned accesses") Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Stephen Rothwell <sfr@canb.auug.org.au> Link: https://lkml.kernel.org/r/YVSsIkj9Z29TyUjE@zn.tnic
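To make the conversion concrete, a before/after sketch of the access pattern being replaced (generic helpers for illustration, not the exact insn.c call sites):

#include <asm/unaligned.h>
#include <linux/string.h>
#include <linux/types.h>

static u32 read_u32_old(const void *p)
{
        u32 v;

        memcpy(&v, p, sizeof(v));       /* works, but hides the unaligned access */
        return v;
}

static u32 read_u32_new(const void *p)
{
        return get_unaligned((const u32 *)p);   /* states the intent directly */
}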