author     Linus Torvalds <torvalds@linux-foundation.org>  2022-03-21 19:32:04 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-03-21 19:32:04 -0700
commit     2142b7f0c6bbe1f9515ce3383de9f7a32a5a025b (patch)
tree       e1c28d1fc2cf8a905254b6f4475a4e65dfddce82 /arch/xtensa/kernel/irq.c
parent     fd2d7a4a354539dc141f702c6c277bf3380e8778 (diff)
parent     afcf5441b9ff22ac57244cd45ff102ebc2e32d1a (diff)
download   linux-2142b7f0c6bbe1f9515ce3383de9f7a32a5a025b.tar.bz2
Merge tag 'hardening-v5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull kernel hardening updates from Kees Cook:

 - Add arm64 Shadow Call Stack support for GCC 12 (Dan Li)

 - Avoid memset with stack offset randomization under Clang (Marco Elver)

 - Clean up stackleak plugin to play nice with .noinstr (Kees Cook)

 - Check stack depth for greater usercopy hardening coverage (Kees Cook)

* tag 'hardening-v5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  arm64: Add gcc Shadow Call Stack support
  m68k: Implement "current_stack_pointer"
  xtensa: Implement "current_stack_pointer"
  usercopy: Check valid lifetime via stack depth
  stack: Constrain and fix stack offset randomization with Clang builds
  stack: Introduce CONFIG_RANDOMIZE_KSTACK_OFFSET
  gcc-plugins/stackleak: Ignore .noinstr.text and .entry.text
  gcc-plugins/stackleak: Exactly match strings instead of prefixes
  gcc-plugins/stackleak: Provide verbose mode
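Context for the two "current_stack_pointer" commits listed above: each architecture exposes the live stack pointer as a named C variable so generic code (such as the usercopy stack-depth check) can read it without inline assembly at every call site. A minimal sketch of how an architecture might provide it, assuming the common convention of a global register variable bound to the stack-pointer register (a1 on xtensa, as the removed asm in the diff below indicates); this is an illustration, not the verbatim patch:

	/* Hypothetical arch header sketch: bind a C identifier to the
	 * architecture's stack-pointer register so reading
	 * current_stack_pointer compiles to a plain register read. */
	register unsigned long current_stack_pointer __asm__("a1");

With such a definition in place, the open-coded inline assembly in do_IRQ() below collapses into an ordinary initializer.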
Diffstat (limited to 'arch/xtensa/kernel/irq.c')
-rw-r--r--  arch/xtensa/kernel/irq.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/xtensa/kernel/irq.c b/arch/xtensa/kernel/irq.c
index 15051a8a1539..529fe9245821 100644
--- a/arch/xtensa/kernel/irq.c
+++ b/arch/xtensa/kernel/irq.c
@@ -36,9 +36,8 @@ asmlinkage void do_IRQ(int hwirq, struct pt_regs *regs)
 #ifdef CONFIG_DEBUG_STACKOVERFLOW
 	/* Debugging check for stack overflow: is there less than 1KB free? */
 	{
-		unsigned long sp;
+		unsigned long sp = current_stack_pointer;
 
-		__asm__ __volatile__ ("mov %0, a1\n" : "=a" (sp));
 		sp &= THREAD_SIZE - 1;
 
 		if (unlikely(sp < (sizeof(struct thread_info) + 1024)))
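To illustrate what the check above computes: the kernel stack and struct thread_info share one THREAD_SIZE-aligned region, with thread_info at the bottom and the stack growing down toward it, so masking the stack pointer with THREAD_SIZE - 1 gives the number of bytes still unused below the current frame. A small standalone C sketch of that arithmetic, using made-up sizes and a made-up pointer value (the real THREAD_SIZE and sizeof(struct thread_info) are per-architecture):

	#include <stdio.h>

	#define THREAD_SIZE      8192UL   /* assumed stack-region size for the example */
	#define THREAD_INFO_SIZE 64UL     /* stand-in for sizeof(struct thread_info) */

	int main(void)
	{
		unsigned long sp = 0xd0002340UL;            /* pretend current stack pointer */
		unsigned long off = sp & (THREAD_SIZE - 1); /* bytes between sp and region base */

		/* Same shape as the kernel test: warn when less than ~1KB
		 * of stack remains above thread_info. */
		if (off < THREAD_INFO_SIZE + 1024)
			printf("overflow check fires: only %lu bytes above thread_info\n",
			       off - THREAD_INFO_SIZE);
		else
			printf("%lu bytes of stack still free\n", off - THREAD_INFO_SIZE);
		return 0;
	}

With these example values, 0xd0002340 & 0x1fff is 832, which is below the 64 + 1024 threshold, so the warning path is taken; reading the stack pointer via current_stack_pointer instead of per-arch asm is the only behavioral point of the patch.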