From a2c42bbabbe260b7626d8459093631a6e16ee0ee Mon Sep 17 00:00:00 2001
From: Will Deacon
Date: Thu, 18 Feb 2021 14:03:46 +0000
Subject: arm64: spectre: Prevent lockdep splat on v4 mitigation enable path

The Spectre-v4 workaround is re-configured when resuming from suspend,
as the firmware may have re-enabled the mitigation despite the user
previously asking for it to be disabled.

Enabling or disabling the workaround can result in an undefined
instruction exception on CPUs which implement PSTATE.SSBS but only
allow it to be configured by adjusting the SPSR on exception return. We
handle this by installing an 'undef hook' which effectively emulates
the access.

Installing this hook requires us to take a couple of spinlocks, both to
avoid corrupting the internal list of hooks and to ensure that we don't
run into an unhandled exception.

Unfortunately, when resuming from suspend, we haven't yet called
rcu_idle_exit() and so lockdep gets angry about "suspicious RCU usage".
In doing so, it tries to print a warning, which leads it to get even
more suspicious, this time about itself:

| rcu_scheduler_active = 2, debug_locks = 1
| RCU used illegally from extended quiescent state!
| 1 lock held by swapper/0:
| #0: (logbuf_lock){-.-.}-{2:2}, at: vprintk_emit+0x88/0x198
|
| Call trace:
|  dump_backtrace+0x0/0x1d8
|  show_stack+0x18/0x24
|  dump_stack+0xe0/0x17c
|  lockdep_rcu_suspicious+0x11c/0x134
|  trace_lock_release+0xa0/0x160
|  lock_release+0x3c/0x290
|  _raw_spin_unlock+0x44/0x80
|  vprintk_emit+0xbc/0x198
|  vprintk_default+0x44/0x6c
|  vprintk_func+0x1f4/0x1fc
|  printk+0x54/0x7c
|  lockdep_rcu_suspicious+0x30/0x134
|  trace_lock_acquire+0xa0/0x188
|  lock_acquire+0x50/0x2fc
|  _raw_spin_lock+0x68/0x80
|  spectre_v4_enable_mitigation+0xa8/0x30c
|  __cpu_suspend_exit+0xd4/0x1a8
|  cpu_suspend+0xa0/0x104
|  psci_cpu_suspend_enter+0x3c/0x5c
|  psci_enter_idle_state+0x44/0x74
|  cpuidle_enter_state+0x148/0x2f8
|  cpuidle_enter+0x38/0x50
|  do_idle+0x1f0/0x2b4

Prevent these splats by running __cpu_suspend_exit() with RCU watching.

Cc: Mark Rutland
Cc: Lorenzo Pieralisi
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Marc Zyngier
Cc: Saravana Kannan
Suggested-by: "Paul E. McKenney"
Reported-by: Sami Tolvanen
Fixes: c28762070ca6 ("arm64: Rewrite Spectre-v4 mitigation code")
Cc:
Acked-by: Paul E. McKenney
Acked-by: Marc Zyngier
Acked-by: Mark Rutland
Link: https://lore.kernel.org/r/20210218140346.5224-1-will@kernel.org
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/suspend.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index a67b37a7a47e..d7564891ffe1 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -119,7 +119,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 		if (!ret)
 			ret = -EOPNOTSUPP;
 	} else {
-		__cpu_suspend_exit();
+		RCU_NONIDLE(__cpu_suspend_exit());
 	}

 	unpause_graph_tracing();
-- cgit v1.2.3

From 656d1d58d8e0958d372db86c24f0b2ea36f50888 Mon Sep 17 00:00:00 2001
From: qiuguorui1
Date: Thu, 18 Feb 2021 20:59:00 +0800
Subject: arm64: kexec_file: fix memory leakage in create_dtb() when
 fdt_open_into() fails

In function create_dtb(), if fdt_open_into() fails, we need to vfree
buf before returning.
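
For illustration, a condensed sketch of the corrected error path
(hypothetical function name and simplified arguments, not the kernel
code itself):

static int create_dtb_sketch(void **dtbp, size_t buf_size)
{
	void *buf = vmalloc(buf_size);
	int ret;

	if (!buf)
		return -ENOMEM;

	/* duplicate the boot DTB into the freshly allocated buffer */
	ret = fdt_open_into(initial_boot_params, buf, buf_size);
	if (ret) {
		vfree(buf);	/* the fix: free the buffer on failure */
		return -EINVAL;
	}

	*dtbp = buf;		/* success: the caller now owns buf */
	return 0;
}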
Fixes: 52b2a8af7436 ("arm64: kexec_file: load initrd and device-tree")
Cc: stable@vger.kernel.org # v5.0
Signed-off-by: qiuguorui1
Link: https://lore.kernel.org/r/20210218125900.6810-1-qiuguorui1@huawei.com
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/machine_kexec_file.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 03210f644790..0cde47a63beb 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -182,8 +182,10 @@ static int create_dtb(struct kimage *image,

 		/* duplicate a device tree blob */
 		ret = fdt_open_into(initial_boot_params, buf, buf_size);
-		if (ret)
+		if (ret) {
+			vfree(buf);
 			return -EINVAL;
+		}

 		ret = setup_dtb(image, initrd_load_addr, initrd_len,
 				cmdline, buf);
-- cgit v1.2.3

From f5c6d0fcf90ce07ee0d686d465b19b247ebd5ed7 Mon Sep 17 00:00:00 2001
From: Shaoying Xu
Date: Tue, 16 Feb 2021 18:32:34 +0000
Subject: arm64 module: set plt* section addresses to 0x0

These plt* and .text.ftrace_trampoline sections specified for arm64
have non-zero addresses. Non-zero section addresses in a relocatable
ELF would confuse GDB when it tries to compute the section offsets and
it ends up printing wrong symbol addresses. Therefore, set them to
zero, which mirrors the change in commit 5d8591bc0fba ("module: set
ksymtab/kcrctab* section addresses to 0x0").

Reported-by: Frank van der Linden
Signed-off-by: Shaoying Xu
Cc:
Link: https://lore.kernel.org/r/20210216183234.GA23876@amazon.com
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/module.lds.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
index 691f15af788e..810045628c66 100644
--- a/arch/arm64/include/asm/module.lds.h
+++ b/arch/arm64/include/asm/module.lds.h
@@ -1,7 +1,7 @@
 #ifdef CONFIG_ARM64_MODULE_PLTS
 SECTIONS {
-	.plt (NOLOAD) : { BYTE(0) }
-	.init.plt (NOLOAD) : { BYTE(0) }
-	.text.ftrace_trampoline (NOLOAD) : { BYTE(0) }
+	.plt 0 (NOLOAD) : { BYTE(0) }
+	.init.plt 0 (NOLOAD) : { BYTE(0) }
+	.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
 }
 #endif
-- cgit v1.2.3

From 2596b6ae412be3d29632efc63976a2132032e620 Mon Sep 17 00:00:00 2001
From: Pavel Tatashin
Date: Fri, 19 Feb 2021 14:51:42 -0500
Subject: kexec: move machine_kexec_post_load() to public interface

The kernel test robot reports the following compiler warning:

| arch/arm64/kernel/machine_kexec.c:62:5: warning: no previous prototype for
| function 'machine_kexec_post_load' [-Wmissing-prototypes]
| int machine_kexec_post_load(struct kimage *kimage)

Fix it by moving the declaration of machine_kexec_post_load() from
kexec_internal.h to the public header instead.
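
The warning is easy to reproduce in isolation. A minimal userspace
analogue (hypothetical file and function names) shows why the prototype
must be visible where the function is defined:

/* post_load.h: the shared, public prototype */
int post_load(int x);

/* post_load.c */
#include "post_load.h"	/* without this include, gcc -Wmissing-prototypes
			 * warns at the definition below */

int post_load(int x)
{
	return x + 1;
}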
Reported-by: kernel test robot
Link: https://lore.kernel.org/linux-arm-kernel/202102030727.gqTokACH-lkp@intel.com
Signed-off-by: Pavel Tatashin
Link: https://lore.kernel.org/r/20210219195142.13571-1-pasha.tatashin@soleen.com
Fixes: 4c3c31230c91 ("arm64: kexec: move relocation function setup")
Signed-off-by: Will Deacon
---
 include/linux/kexec.h   | 2 ++
 kernel/kexec_internal.h | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 9e93bef52968..3671b845cf28 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -309,6 +309,8 @@ extern void machine_kexec_cleanup(struct kimage *image);
 extern int kernel_kexec(void);
 extern struct page *kimage_alloc_control_pages(struct kimage *image,
 						unsigned int order);
+int machine_kexec_post_load(struct kimage *image);
+
 extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
index 39d30ccf8d87..48aaf2ac0d0d 100644
--- a/kernel/kexec_internal.h
+++ b/kernel/kexec_internal.h
@@ -13,8 +13,6 @@ void kimage_terminate(struct kimage *image);
 int kimage_is_destination_range(struct kimage *image,
 				unsigned long start, unsigned long end);
-int machine_kexec_post_load(struct kimage *image);
-
 extern struct mutex kexec_mutex;

 #ifdef CONFIG_KEXEC_FILE
-- cgit v1.2.3

From d47422d953e258ad587b5edf2274eb95d08bdc7d Mon Sep 17 00:00:00 2001
From: He Zhe
Date: Tue, 23 Feb 2021 16:25:34 +0800
Subject: arm64: uprobe: Return EOPNOTSUPP for AARCH32 instruction probing

As stated in linux/errno.h, ENOTSUPP should never be seen by user
programs. When we set up a uprobe with 32-bit perf and an arm64 kernel,
we would see the following vague error without a useful hint:

The sys_perf_event_open() syscall returned with 524 (INTERNAL ERROR:
strerror_r(524, [buf], 128)=22)

Use EOPNOTSUPP instead to indicate such cases.

Signed-off-by: He Zhe
Link: https://lore.kernel.org/r/20210223082535.48730-1-zhe.he@windriver.com
Cc:
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/probes/uprobes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index a412d8edbcd2..2c247634552b 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -38,7 +38,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,

 	/* TODO: Currently we do not support AARCH32 instruction probing */
 	if (mm->context.flags & MMCF_AARCH32)
-		return -ENOTSUPP;
+		return -EOPNOTSUPP;
 	else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
 		return -EINVAL;
-- cgit v1.2.3

From 2e8acca1911b14e0cc7464db796b804785a3831a Mon Sep 17 00:00:00 2001
From: Zhiyuan Dai
Date: Mon, 22 Feb 2021 09:43:51 +0800
Subject: arm64/mm: Fixed some coding style issues

Adjust whitespace for fixmap_pXd() functions returning pointers for
consistency with the kernel coding style.
Signed-off-by: Zhiyuan Dai
Link: https://lore.kernel.org/r/1613958231-5474-1-git-send-email-daizhiyuan@phytium.com.cn
Signed-off-by: Will Deacon
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 25af183e4bed..9ce8641941fb 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1155,7 +1155,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */

-static inline pud_t * fixmap_pud(unsigned long addr)
+static inline pud_t *fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgdp = pgd_offset_k(addr);
 	p4d_t *p4dp = p4d_offset(pgdp, addr);
@@ -1166,7 +1166,7 @@ static inline pud_t * fixmap_pud(unsigned long addr)
 	return pud_offset_kimg(p4dp, addr);
 }

-static inline pmd_t * fixmap_pmd(unsigned long addr)
+static inline pmd_t *fixmap_pmd(unsigned long addr)
 {
 	pud_t *pudp = fixmap_pud(addr);
 	pud_t pud = READ_ONCE(*pudp);
@@ -1176,7 +1176,7 @@ static inline pmd_t * fixmap_pmd(unsigned long addr)
 	return pmd_offset_kimg(pudp, addr);
 }

-static inline pte_t * fixmap_pte(unsigned long addr)
+static inline pte_t *fixmap_pte(unsigned long addr)
 {
 	return &bm_pte[pte_index(addr)];
 }
-- cgit v1.2.3

From 610e4dc8ac463815f5180ae2e6fadae834891b86 Mon Sep 17 00:00:00 2001
From: Joey Gouly
Date: Mon, 22 Feb 2021 16:49:56 +0000
Subject: KVM: arm64: make the hyp vector table entries local

Make the hyp vector table entries local functions so they are not
accidentally referred to outside of this file. Using
SYM_CODE_START_LOCAL matches the other vector tables (in hyp-stub.S,
hibernate-asm.S and entry.S).

Signed-off-by: Joey Gouly
Acked-by: Will Deacon
Acked-by: Marc Zyngier
Link: https://lore.kernel.org/r/20210222164956.43514-1-joey.gouly@arm.com
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/hyp-entry.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index d179056e1af8..5f49df4ffdd8 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -119,7 +119,7 @@ el2_error:

 .macro invalid_vector	label, target = __guest_exit_panic
 	.align	2
-SYM_CODE_START(\label)
+SYM_CODE_START_LOCAL(\label)
 	b	\target
 SYM_CODE_END(\label)
 .endm
-- cgit v1.2.3

From f1b6cff7c98be2747d2fe16e42dcdcf2fc02c7e6 Mon Sep 17 00:00:00 2001
From: Marc Zyngier
Date: Wed, 24 Feb 2021 09:37:36 +0000
Subject: arm64: VHE: Enable EL2 MMU from the idmap

Enabling the MMU requires the write to SCTLR_ELx (and the ISB that
follows) to live in some identity-mapped memory. Otherwise, the
translation will result in something totally unexpected (either
fetching the wrong instruction stream, or taking a fault of some sort).
This is exactly what happens in mutate_to_vhe(), as this code lives in
the .hyp.text section, which isn't identity-mapped. With the right
configuration, this explodes badly.

Extract the MMU-enabling part of mutate_to_vhe(), and move it to its
own function that lives in the idmap. This ensures nothing bad happens.
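
As a rough illustration of the constraint (a hypothetical C sketch with
an invented function name; the real code is assembly), any routine that
flips SCTLR_ELx must itself be placed in an identity-mapped section so
the instruction fetch following the write still resolves correctly:

static void __attribute__((__section__(".idmap.text")))
write_sctlr_el1_sketch(unsigned long sctlr)
{
	asm volatile(
	"	msr	sctlr_el1, %0\n"	/* flip the MMU state */
	"	isb"				/* resynchronize the PE */
	: : "r" (sctlr));
}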
Fixes: f359182291c7 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Reported-by: "kernelci.org bot"
Tested-by: Guillaume Tucker
Signed-off-by: Marc Zyngier
Link: https://lore.kernel.org/r/20210224093738.3629662-2-maz@kernel.org
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/hyp-stub.S | 39 ++++++++++++++++++++++++++-------------
 1 file changed, 26 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 678cd2c618ee..ae56787ea7c1 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -75,9 +75,6 @@ SYM_CODE_END(el1_sync)

 // nVHE? No way! Give me the real thing!
 SYM_CODE_START_LOCAL(mutate_to_vhe)
-	// Be prepared to fail
-	mov_q	x0, HVC_STUB_ERR
-
 	// Sanity check: MMU *must* be off
 	mrs	x1, sctlr_el2
 	tbnz	x1, #0, 1f
@@ -96,8 +93,11 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
 	cmp	x1, xzr
 	and	x2, x2, x1
 	csinv	x2, x2, xzr, ne
-	cbz	x2, 1f
+	cbnz	x2, 2f

+1:	mov_q	x0, HVC_STUB_ERR
+	eret
+
+2:
 	// Engage the VHE magic!
 	mov_q	x0, HCR_HOST_VHE_FLAGS
 	msr	hcr_el2, x0
@@ -131,6 +131,24 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
 	msr	mair_el1, x0
 	isb

+	// Hack the exception return to stay at EL2
+	mrs	x0, spsr_el1
+	and	x0, x0, #~PSR_MODE_MASK
+	mov	x1, #PSR_MODE_EL2h
+	orr	x0, x0, x1
+	msr	spsr_el1, x0
+
+	b	enter_vhe
+SYM_CODE_END(mutate_to_vhe)
+
+	// At the point where we reach enter_vhe(), we run with
+	// the MMU off (which is enforced by mutate_to_vhe()).
+	// We thus need to be in the idmap, or everything will
+	// explode when enabling the MMU.
+
+	.pushsection	.idmap.text, "ax"
+
+SYM_CODE_START_LOCAL(enter_vhe)
 	// Invalidate TLBs before enabling the MMU
 	tlbi	vmalle1
 	dsb	nsh
@@ -143,17 +161,12 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr_s	SYS_SCTLR_EL12, x0

-	// Hack the exception return to stay at EL2
-	mrs	x0, spsr_el1
-	and	x0, x0, #~PSR_MODE_MASK
-	mov	x1, #PSR_MODE_EL2h
-	orr	x0, x0, x1
-	msr	spsr_el1, x0
-
 	mov	x0, xzr

-1:	eret
-SYM_CODE_END(mutate_to_vhe)
+	eret
+SYM_CODE_END(enter_vhe)
+
+	.popsection

 .macro invalid_vector label
 SYM_CODE_START_LOCAL(\label)
-- cgit v1.2.3

From 9d41053e8dc115c92b8002c3db5f545d7602498b Mon Sep 17 00:00:00 2001
From: Marc Zyngier
Date: Wed, 24 Feb 2021 09:37:37 +0000
Subject: arm64: Add missing ISB after invalidating TLB in __primary_switch

Although there has been a bit of back and forth on the subject, it
appears that invalidating TLBs requires an ISB instruction when
FEAT_ETS is not implemented by the CPU.

From the bible:

| In an implementation that does not implement FEAT_ETS, a TLB
| maintenance instruction executed by a PE, PEx, can complete at any
| time after it is issued, but is only guaranteed to be finished for a
| PE, PEx, after the execution of DSB by the PEx followed by a Context
| synchronization event

Add the missing ISB in __primary_switch, just in case.
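
Expressed as a hypothetical C helper in the style of the kernel's TLB
maintenance code (this exact function does not exist), the complete
sequence is:

static inline void local_tlbi_all_sketch(void)
{
	asm volatile(
	"	tlbi	vmalle1\n"	/* invalidate all EL1 TLB entries on this PE */
	"	dsb	nsh\n"		/* wait for the invalidation to complete */
	"	isb"			/* context synchronization event */
	: : : "memory");
}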
Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
Suggested-by: Will Deacon
Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Link: https://lore.kernel.org/r/20210224093738.3629662-3-maz@kernel.org
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/head.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1e30b5550d2a..66b0e0b66e31 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -837,6 +837,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)

 	tlbi	vmalle1		// Remove any stale TLB entries
 	dsb	nsh
+	isb

 	set_sctlr_el1	x19	// re-enable the MMU
-- cgit v1.2.3

From 430251cc864beb11ac5b6d2f5c6ef54ddd432612 Mon Sep 17 00:00:00 2001
From: Marc Zyngier
Date: Wed, 24 Feb 2021 09:37:38 +0000
Subject: arm64: Add missing ISB after invalidating TLB in enter_vhe

Although there has been a bit of back and forth on the subject, it
appears that invalidating TLBs requires an ISB instruction after the
TLBI/DSB sequence when FEAT_ETS is not implemented by the CPU.

From the bible:

| In an implementation that does not implement FEAT_ETS, a TLB
| maintenance instruction executed by a PE, PEx, can complete at any
| time after it is issued, but is only guaranteed to be finished for a
| PE, PEx, after the execution of DSB by the PEx followed by a Context
| synchronization event

Add the missing ISB in enter_vhe(), just in case.

Fixes: f359182291c7 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Suggested-by: Will Deacon
Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Link: https://lore.kernel.org/r/20210224093738.3629662-4-maz@kernel.org
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/hyp-stub.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index ae56787ea7c1..5eccbd62fec8 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -152,6 +152,7 @@ SYM_CODE_START_LOCAL(enter_vhe)
 	// Invalidate TLBs before enabling the MMU
 	tlbi	vmalle1
 	dsb	nsh
+	isb

 	// Enable the EL2 S1 MMU, as set up from EL1
 	mrs_s	x0, SYS_SCTLR_EL12
-- cgit v1.2.3

From df84fe94708985cdfb78a83148322bcd0a699472 Mon Sep 17 00:00:00 2001
From: Timothy E Baldwin
Date: Sat, 16 Jan 2021 15:18:54 +0000
Subject: arm64: ptrace: Fix seccomp of traced syscall -1 (NO_SYSCALL)

Since commit f086f67485c5 ("arm64: ptrace: add support for syscall
emulation"), if system call number -1 is called and the process is
being traced with PTRACE_SYSCALL, for example by strace, the seccomp
check is skipped and -ENOSYS is returned unconditionally (unless
altered by the tracer) rather than carrying out the action specified in
the seccomp filter.

The consequence of this is that it is not possible to reliably strace a
seccomp based implementation of a foreign system call interface in
which r7/x8 is permitted to be -1 on entry to a system call.

Also, trace_sys_enter and audit_syscall_entry are skipped if a system
call is skipped.

Fix by removing the in_syscall(regs) check, restoring the previous
behaviour, which matches AArch32, x86 (which uses generic code) and
everything else.
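
For context, this is roughly how a tracer such as strace skips a system
call on arm64, by rewriting the syscall number to -1 via the
NT_ARM_SYSTEM_CALL regset (a hypothetical fragment, error handling
omitted):

#include <elf.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

static void cancel_syscall(pid_t pid)
{
	int nr = -1;	/* NO_SYSCALL: skip the syscall at this entry stop */
	struct iovec iov = { .iov_base = &nr, .iov_len = sizeof(nr) };

	/* rewrite the traced task's syscall number at the entry stop */
	ptrace(PTRACE_SETREGSET, pid, NT_ARM_SYSTEM_CALL, &iov);
}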
Cc: Oleg Nesterov
Cc: Catalin Marinas
Cc:
Fixes: f086f67485c5 ("arm64: ptrace: add support for syscall emulation")
Reviewed-by: Kees Cook
Reviewed-by: Sudeep Holla
Tested-by: Sudeep Holla
Signed-off-by: Timothy E Baldwin
Link: https://lore.kernel.org/r/90edd33b-6353-1228-791f-0336d94d5f8c@majoroak.me.uk
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/ptrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 3d5c8afca75b..170f42fd6101 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1797,7 +1797,7 @@ int syscall_trace_enter(struct pt_regs *regs)

 	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
-		if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
+		if (flags & _TIF_SYSCALL_EMU)
 			return NO_SYSCALL;
 	}
-- cgit v1.2.3

From 3c02600144bdb0a1280a9090d3a7e37e2f9fdcc8 Mon Sep 17 00:00:00 2001
From: Mark Brown
Date: Wed, 24 Feb 2021 16:50:37 +0000
Subject: arm64: stacktrace: Report when we reach the end of the stack

Currently the arm64 unwinder code returns -EINVAL whenever it can't
find the next stack frame, not distinguishing between cases where the
stack has been corrupted or is otherwise in a state it shouldn't be and
cases where we have reached the end of the stack. At the minute none of
the callers care what error code is returned, but this will be
important for reliable stacktrace, which needs to be sure that the
stack is intact.

Change to return -ENOENT in the case where we reach the bottom of the
stack. The error codes from this function are only used in the kernel;
this particular code is chosen as we are indicating that we know there
is no frame there.

Signed-off-by: Mark Brown
Acked-by: Mark Rutland
Link: https://lore.kernel.org/r/20210224165037.24138-1-broonie@kernel.org
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/stacktrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 0fb42129b469..ad20981dfda4 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -46,7 +46,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)

 	/* Terminal record; nothing to unwind */
 	if (!fp)
-		return -EINVAL;
+		return -ENOENT;

 	if (fp & 0xf)
 		return -EINVAL;
-- cgit v1.2.3
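
A hypothetical consumer illustrates the distinction the new return code
enables (a sketch only, not an existing kernel function):

/* Walk frames until unwind_frame() stops; trust only a clean terminal record. */
static bool stack_walk_reliable_sketch(struct task_struct *tsk,
				       struct stackframe *frame)
{
	int ret;

	do {
		ret = unwind_frame(tsk, frame);
	} while (!ret);

	/* -ENOENT: reached the end of the stack; anything else is suspect */
	return ret == -ENOENT;
}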