path: root/arch/powerpc/kernel/head_32.h
2020-12-04  powerpc/32: Use SPRN_SPRG_SCRATCH2 in exception prologs  [Christophe Leroy, 1 file, -15/+7]
Use SPRN_SPRG_SCRATCH2 as a third scratch register in exception prologs in order to simplify them and avoid data going back and forth from/to CR.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6f5c8a7faa8cc54acb89c55c20aa579a2f30a4e9.1606285014.git.christophe.leroy@csgroup.eu
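The kind of round trip being removed looks roughly like this (an illustrative sketch, not the exact prolog code):

	/* before: with only two scratch SPRGs, a third live value
	 * had to be parked in the CR fields */
	mtcrf	0xff, r10		/* stash r10 across the whole CR */
	/* ... r10 is free for scratch use here ... */
	mfcr	r10			/* fetch it back */

	/* after: a third scratch SPRG makes the CR dance unnecessary */
	mtspr	SPRN_SPRG_SCRATCH2, r10
	/* ... r10 is free for scratch use here ... */
	mfspr	r10, SPRN_SPRG_SCRATCH2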
2020-12-04  powerpc/32: Simplify EXCEPTION_PROLOG_1 macro  [Christophe Leroy, 1 file, -6/+4]
Make the code more readable with a clear CONFIG_VMAP_STACK section and a clear non-CONFIG_VMAP_STACK section.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c0f16cf432d22fc80097264d94649460d3dd761d.1606285014.git.christophe.leroy@csgroup.eu
2020-11-19  powerpc: Remove RFI macro  [Christophe Leroy, 1 file, -1/+4]
The RFI macro is just there to add an infinite loop past rfi in order to avoid prefetch on 40x, in half a dozen places in entry_32 and head_32. Those places are already full of #ifdefs, so just add a few more to explicitly show those loops and remove RFI.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f7e9cb9e9240feec63cb330abf40b67d1aad852f.1604854583.git.christophe.leroy@csgroup.eu
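Written out, each of those sites presumably ends up looking like this sketch:

	rfi
#ifdef CONFIG_40x
	b	.	/* 40x only: infinite loop so nothing is prefetched past rfi */
#endif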
2020-10-08  powerpc: Drop SYNC_601() ISYNC_601() and SYNC()  [Christophe Leroy, 1 file, -1/+0]
Those macros are now empty at all times. Drop them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7990bb63fc53e460bfa94f8040184881d9e6fbc3.1601362098.git.christophe.leroy@csgroup.eu
2020-09-15  powerpc/32: Fix vmap stack - Properly set r1 before activating MMU  [Christophe Leroy, 1 file, -14/+29]
We need r1 to be properly set before activating the MMU, otherwise any new exception taken while saving registers into the stack in exception prologs will use the user stack, which is wrong and will even lock up or crash when KUAP is selected.

Do that by switching the meaning of r11 and r1 until we have saved r1 to the stack: copy r1 into r11 and set up the new stack pointer in r1. To avoid complicating and impacting all generic and specific prolog code (and more), copy r1 back into r11 once r11 has been saved onto the stack.

We could get rid of copying r1 back and forth at the cost of rewriting everything to use r1 instead of r11 all the way when CONFIG_VMAP_STACK is set, but the effort is probably not worth it.

Fixes: 028474876f47 ("powerpc/32: prepare for CONFIG_VMAP_STACK")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8f85e8752ac5af602db7237ef53d634f4f3d3892.1599486108.git.christophe.leroy@csgroup.eu
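In outline, the sequence described above looks something like this (the offsets and the register holding the thread pointer are illustrative, not the exact code):

	mr	r11, r1			/* keep the old SP in r11 for now */
	lwz	r1, TASK_STACK-THREAD(r12)	/* new kernel SP, read with MMU off */
	addi	r1, r1, THREAD_SIZE - INT_FRAME_SIZE
	/* the MMU can now be activated: any nested fault finds a sane r1 */
	stw	r11, GPR1(r1)		/* old SP finally saved into the frame */
	mr	r11, r1			/* generic prolog code keeps using r11 */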
2020-09-15  powerpc/32: Fix vmap stack - Do not activate MMU before reading task struct  [Christophe Leroy, 1 file, -25/+6]
We need r1 to be properly set before activating the MMU, so reading task_struct->stack must be done with the MMU off.

This means we need an additional register to play with MSR bits while r11 now points to the stack. For that, move r10 back to CR (as is already done for the hash MMU) and use r10.

We still don't have r1 correct yet when we activate the MMU. That is done in the following patch.

Fixes: 028474876f47 ("powerpc/32: prepare for CONFIG_VMAP_STACK")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a027d447022a006c9c4958ac734128e577a3c5c1.1599486108.git.christophe.leroy@csgroup.eu
2020-03-10  Merge branch 'fixes' into next  [Michael Ellerman, 1 file, -1/+20]
Merge in our fixes branch. In particular we want to merge the TM and KUAP fixes, so we can add selftests for them in next.
2020-02-19  powerpc: Don't use thread struct for saving SRR0/1 on syscall.  [Christophe Leroy, 1 file, -9/+7]
CR0 can be saved later, and CTR can also be used for saving. Keep SRR1 in r9 and stash SRR0 in CTR; this avoids using thread_struct in memory for that.

Saves 3 cycles (i.e. 1%) in the null_syscall selftest on the 8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b94c3bc03bac9431fec2dadb686384c481889422.1580470483.git.christophe.leroy@c-s.fr
2020-02-19  powerpc/32: Warn and return ENOSYS on syscalls from kernel  [Christophe Leroy, 1 file, -7/+9]
Since commit b86fb88855ea ("powerpc/32: implement fast entry for syscalls on non BOOKE") and commit 1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE"), syscalls from the kernel are unexpected and can have catastrophic consequences, as they will destroy the kernel stack.

Test MSR_PR on syscall entry. In case the syscall comes from the kernel, emit a warning and return an ENOSYS error.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8ee3bdbbdfdfc64ca7001e90c43b2aee6f333578.1580470482.git.christophe.leroy@c-s.fr
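The added check presumably amounts to testing MSR_PR in the saved SRR1, along these lines (a sketch; the register choices and the label are illustrative):

	mfspr	r9, SPRN_SRR1		/* MSR as it was at the 'sc' instruction */
	andi.	r0, r9, MSR_PR		/* MSR_PR set means the caller ran in user mode */
	beq-	.Lkernel_syscall	/* kernel caller: warn and return -ENOSYS */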
2020-02-18  powerpc/32s: Fix DSI and ISI exceptions for CONFIG_VMAP_STACK  [Christophe Leroy, 1 file, -1/+20]
hash_page() needs to read page tables from kernel memory. When the entire kernel memory is mapped by BATs, which is normally the case when CONFIG_STRICT_KERNEL_RWX is not set, it works even if the page hosting the page table is not referenced in the MMU hash table.

However, if the page where the page table resides is not covered by a BAT, a DSI fault can be encountered from hash_page(), and it loops forever. This can happen when CONFIG_STRICT_KERNEL_RWX is selected and the alignment of the different regions is too small to allow covering the entire memory with BATs. This also happens when CONFIG_DEBUG_PAGEALLOC is selected or when booting with the 'nobats' flag.

Also, if the page containing the kernel stack is not present in the MMU hash table, registers cannot be saved and a recursive DSI fault is encountered.

To allow hash_page() to properly do its job at all times and load the MMU hash table whenever needed, it must run with the data MMU disabled. This means it must be called before re-enabling the data MMU. To allow this, registers clobbered by hash_page() and create_hpte() have to be saved in the thread struct together with SRR0, SRR1, DAR and DSISR.

It is also necessary to ensure that the DSI prolog doesn't overwrite regs saved by the prolog of the currently running exception. That means:
- DSI can only use SPRN_SPRG_SCRATCH0
- Exceptions must free SPRN_SPRG_SCRATCH0 before writing to the stack.

This also fixes the Oops reported by Erhard when create_hpte() is called by add_hash_page().

Due to the prolog size increase, a few more exceptions had to be split in two parts.

Fixes: cd08f109e262 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
Reported-by: Erhard F. <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Tested-by: Erhard F. <erhard_f@mailbox.org>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206501
Link: https://lore.kernel.org/r/64a4aa44686e9fd4b01333401367029771d9b231.1581761633.git.christophe.leroy@c-s.fr
2020-01-27  powerpc/32s: Enable CONFIG_VMAP_STACK  [Christophe Leroy, 1 file, -1/+3]
A few changes to retrieve DAR and DSISR from struct regs instead of retrieving them directly, as they may have changed due to a TLB miss. Also modifies hash_page() and friends to work with virtual data addresses instead of physical ones. The same is done for load_up_fpu() and load_up_altivec().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix tovirt_vmstack call in head_32.S to fix CHRP build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2e2509a242fd5f3e23df4a06530c18060c4d321e.1576916812.git.christophe.leroy@c-s.fr
2020-01-27  powerpc/32: Add early stack overflow detection with VMAP stack.  [Christophe Leroy, 1 file, -0/+28]
To avoid recursive faults, stack overflow detection has to be performed before writing into the stack in exception prologs.

Do it by checking the alignment. If the stack pointer alignment is wrong, it means it is pointing to the following or preceding page.

Without VMAP stack, a stack overflow is catastrophic. With VMAP stack, a stack overflow isn't destructive, so don't panic. Kill the task with SIGSEGV instead.

A dedicated overflow stack is set up for each CPU.

  lkdtm: Performing direct entry EXHAUST_STACK
  lkdtm: Calling function with 512 frame size to depth 32 ...
  lkdtm: loop 32/32 ...
  lkdtm: loop 31/32 ...
  lkdtm: loop 30/32 ...
  lkdtm: loop 29/32 ...
  lkdtm: loop 28/32 ...
  lkdtm: loop 27/32 ...
  lkdtm: loop 26/32 ...
  lkdtm: loop 25/32 ...
  lkdtm: loop 24/32 ...
  lkdtm: loop 23/32 ...
  lkdtm: loop 22/32 ...
  lkdtm: loop 21/32 ...
  lkdtm: loop 20/32 ...
  Kernel stack overflow in process test[359], r1=c900c008
  Oops: Kernel stack overflow, sig: 6 [#1]
  BE PAGE_SIZE=4K MMU=Hash PowerMac
  Modules linked in:
  CPU: 0 PID: 359 Comm: test Not tainted 5.3.0-rc7+ #2225
  NIP: c0622060 LR: c0626710 CTR: 00000000
  REGS: c0895f48 TRAP: 0000 Not tainted (5.3.0-rc7+)
  MSR: 00001032 <ME,IR,DR,RI> CR: 28004224 XER: 00000000
  GPR00: c0626ca4 c900c008 c783c000 c07335cc c900c010 c07335cc c900c0f0 c07335cc
  GPR08: c900c0f0 00000001 00000000 00000000 28008222 00000000 00000000 00000000
  GPR16: 00000000 00000000 10010128 10010000 b799c245 10010158 c07335cc 00000025
  GPR24: c0690000 c08b91d4 c068f688 00000020 c900c0f0 c068f668 c08b95b4 c08b91d4
  NIP [c0622060] format_decode+0x0/0x4d4
  LR [c0626710] vsnprintf+0x80/0x5fc
  Call Trace:
  [c900c068] [c0626ca4] vscnprintf+0x18/0x48
  [c900c078] [c007b944] vprintk_store+0x40/0x214
  [c900c0b8] [c007bf50] vprintk_emit+0x90/0x1dc
  [c900c0e8] [c007c5cc] printk+0x50/0x60
  [c900c128] [c03da5b0] recursive_loop+0x44/0x6c
  [c900c338] [c03da5c4] recursive_loop+0x58/0x6c
  [c900c548] [c03da5c4] recursive_loop+0x58/0x6c
  [c900c758] [c03da5c4] recursive_loop+0x58/0x6c
  [c900c968] [c03da5c4] recursive_loop+0x58/0x6c
  [c900cb78] [c03da5c4] recursive_loop+0x58/0x6c
  [c900cd88] [c03da5c4] recursive_loop+0x58/0x6c
  [c900cf98] [c03da5c4] recursive_loop+0x58/0x6c
  [c900d1a8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900d3b8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900d5c8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900d7d8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900d9e8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900dbf8] [c03da5c4] recursive_loop+0x58/0x6c
  [c900de08] [c03da67c] lkdtm_EXHAUST_STACK+0x30/0x4c
  [c900de18] [c03da3e8] direct_entry+0xc8/0x140
  [c900de48] [c029fb40] full_proxy_write+0x64/0xcc
  [c900de68] [c01500f8] __vfs_write+0x30/0x1d0
  [c900dee8] [c0152cb8] vfs_write+0xb8/0x1d4
  [c900df08] [c0152f7c] ksys_write+0x58/0xe8
  [c900df38] [c0014208] ret_from_syscall+0x0/0x34
  --- interrupt: c01 at 0xf806664
      LR = 0x1000c868
  Instruction dump:
  4bffff91 80010014 7c832378 7c0803a6 38210010 4e800020 3d20c08a 3ca0c089
  8089a0cc 38a58f0c 38600001 4ba2d494 <9421ffe0> 7c0802a6 bfc10018 7c9f2378

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1b89c121b4070c7ee99e4f22cc178f15a736b07b.1576916812.git.christophe.leroy@c-s.fr
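One way to picture the alignment check (an illustrative sketch, not the exact kernel sequence): VMAP stacks are THREAD_SIZE-aligned, so clearing the low THREAD_SHIFT bits of a valid kernel stack pointer must give back the task's stack base.

	/* r11 holds a copy of the incoming r1, r12 the expected stack base */
	rlwinm	r11, r11, 0, 0, 31 - THREAD_SHIFT	/* clear the in-page offset bits */
	cmplw	r11, r12		/* still within our own stack area? */
	bne-	stack_overflow		/* no: branch to the dedicated overflow path */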
2020-01-26  powerpc/32: prepare for CONFIG_VMAP_STACK  [Christophe Leroy, 1 file, -15/+113]
To support CONFIG_VMAP_STACK, the kernel has to activate Data MMU Translation for accessing the stack. Before doing that it must save SRR0, SRR1 and also DAR and DSISR when relevant, in order not to lose them in case there is a Data TLB Miss once the translation is reactivated.

This patch adds fields in the thread struct for saving those registers. It prepares entry_32.S to handle exception entry with Data MMU Translation enabled and alters the EXCEPTION_PROLOG macros to save SRR0, SRR1, DAR and DSISR and then reenable the Data MMU.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a775a1fea60f190e0f63503463fb775310a2009b.1576916812.git.christophe.leroy@c-s.fr
2020-01-26  powerpc/32: add a macro to get and/or save DAR and DSISR on stack.  [Christophe Leroy, 1 file, -0/+11]
Refactor reading and saving of DAR and DSISR in exception vectors. This will ease the implementation of VMAP stack.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1286b3e51b07727c6b4b05f2df9af3f9b1717fb5.1576916812.git.christophe.leroy@c-s.fr
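Such a helper plausibly looks along these lines (a sketch; the macro name and frame offsets may differ from the real one):

.macro save_dar_dsisr_on_stack reg1, reg2, sp
	mfspr	\reg1, SPRN_DAR		/* faulting data address */
	mfspr	\reg2, SPRN_DSISR	/* reason bits for the fault */
	stw	\reg1, _DAR(\sp)	/* store both into the exception frame */
	stw	\reg2, _DSISR(\sp)
.endm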
2020-01-26  powerpc/32: move MSR_PR test into EXCEPTION_PROLOG_0  [Christophe Leroy, 1 file, -2/+2]
In order to simplify the VMAP stack implementation, move the MSR_PR test into EXCEPTION_PROLOG_0. This requires not modifying cr0 between EXCEPTION_PROLOG_0 and EXCEPTION_PROLOG_1.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5c8b5bba692b92654dbd363a229a1ba91db725bb.1576916812.git.christophe.leroy@c-s.fr
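With the test folded in, EXCEPTION_PROLOG_0 plausibly ends up shaped like this (a sketch; the exact register use may differ). The andi. leaves the user/kernel answer in cr0, which is why cr0 must survive untouched until EXCEPTION_PROLOG_1 branches on it:

.macro EXCEPTION_PROLOG_0
	mtspr	SPRN_SPRG_SCRATCH0, r10
	mtspr	SPRN_SPRG_SCRATCH1, r11
	mfcr	r10			/* save CR before cr0 gets clobbered */
	mfspr	r11, SPRN_SRR1		/* MSR of the interrupted context */
	andi.	r11, r11, MSR_PR	/* cr0 now says: user (ne) or kernel (eq) */
.endm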
2020-01-26  powerpc/32: Add EXCEPTION_PROLOG_0 in head_32.h  [Christophe Leroy, 1 file, -3/+6]
This patch creates a macro for the very first part of the exception prolog; this will help when implementing CONFIG_VMAP_STACK.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2249fe62f481121a180e9655ad2b998093f318f3.1576916812.git.christophe.leroy@c-s.fr
2020-01-26  powerpc/32: replace MTMSRD() by mtmsr  [Christophe Leroy, 1 file, -2/+2]
On PPC32, MTMSRD() is simply defined as mtmsr. Replace MTMSRD(reg) by mtmsr reg in files dedicated to PPC32; this makes the code less obscure.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/22469e78230edea3dbd0c79a555d73124f6c6d93.1576916812.git.christophe.leroy@c-s.fr
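The macro being replaced is, roughly, a 64/32-bit switch of this shape (a sketch, not a verbatim quote of ppc_asm.h):

#ifdef CONFIG_PPC64
#define MTMSRD(r)	mtmsrd	r	/* 64-bit form of the instruction */
#else
#define MTMSRD(r)	mtmsr	r	/* PPC32: plain mtmsr, hence the cleanup */
#endif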
2019-08-27  powerpc/32: replace LOAD_MSR_KERNEL() by LOAD_REG_IMMEDIATE()  [Christophe Leroy, 1 file, -17/+4]
LOAD_MSR_KERNEL() and LOAD_REG_IMMEDIATE() do the same thing in the same way. Drop LOAD_MSR_KERNEL().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8f04a6df0bc8949517fd8236d50c15008ccf9231.1566311636.git.christophe.leroy@c-s.fr
2019-05-03  powerpc/32: implement fast entry for syscalls on non BOOKE  [Christophe Leroy, 1 file, -4/+81]
Syscalls don't have to preserve non-volatile registers, except LR. This patch therefore implements a fast entry for syscalls, where volatile registers get clobbered.

As this entry is dedicated to syscalls, it always sets MSR_EE and warns in case MSR_EE was previously off.

It also assumes that the call is always from user; system calls from the kernel are unexpected.

The overall series improves the null_syscall selftest by 12.5% on an 83xx and by 17% on an 8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-03  powerpc/32: get rid of COPY_EE in exception entry  [Christophe Leroy, 1 file, -8/+4]
EXC_XFER_TEMPLATE() is not called with COPY_EE anymore, so we can get rid of the copyee parameter and the related COPY_EE and NOCOPY macros.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-03  powerpc/32: Enter exceptions with MSR_EE unset  [Christophe Leroy, 1 file, -8/+0]
All exception handlers know when to reenable interrupts, so it is safer to enter all of them with MSR_EE unset, except for syscalls.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-03  powerpc/32: enter syscall with MSR_EE inconditionaly set  [Christophe Leroy, 1 file, -0/+4]
Syscalls are expected to be entered with MSR_EE set. Let's make this unconditional by forcing MSR_EE on syscalls. This patch adds EXC_XFER_SYS for that.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
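EXC_XFER_SYS presumably wraps the existing transfer template with MSR_EE forced into the target MSR, along these lines (a sketch; the template's real parameter list may differ):

#define EXC_XFER_SYS(n, hdlr)	\
	EXC_XFER_TEMPLATE(n, hdlr, MSR_KERNEL | MSR_EE, NOCOPY, \
			  transfer_to_handler, ret_from_except)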
2019-05-03  powerpc/40x: Refactor exception entry macros by using head_32.h  [Christophe Leroy, 1 file, -0/+4]
Refactor exception entry macros by using the ones defined in head_32.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-03  powerpc/32: make the 6xx/8xx EXC_XFER_TEMPLATE() similar to the 40x/booke one  [Christophe Leroy, 1 file, -7/+6]
The 6xx/8xx EXC_XFER_TEMPLATE() macro adds an i##n symbol which is unused and can be removed.

The 40x and booke EXC_XFER_TEMPLATE() macros take the msr value from the caller, while the 6xx/8xx version uses only MSR_KERNEL as the msr value. This patch modifies the 6xx/8xx version to make it similar to the 40x and booke versions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-03  powerpc/32: move LOAD_MSR_KERNEL() into head_32.h and use it  [Christophe Leroy, 1 file, -1/+14]
As preparation for using head_32.h for head_40x.S, move LOAD_MSR_KERNEL() there and use it to load r10 with the MSR_KERNEL value.

At the same time, this patch modifies it so that it takes into account the size of the passed value to determine whether 'li' can be used or whether 'lis/ori' is needed, instead of relying on the size of MSR_KERNEL. This is done by using a GAS macro.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
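The GAS macro trick can be sketched like this (simplified; the kernel's actual macro may differ in detail). '.if' evaluates the constant at assembly time, so the right instruction sequence is picked per value:

.macro LOAD_MSR_KERNEL r, x
.if \x >= 0x8000			/* too big for a signed 16-bit 'li' */
	lis	\r, (\x)@h		/* load the high half... */
	ori	\r, \r, (\x)@l		/* ...then OR in the low half */
.else
	li	\r, (\x)		/* small value: a single instruction */
.endif
.endm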
2019-05-03  powerpc/32: Refactor EXCEPTION entry macros for head_8xx.S and head_32.S  [Christophe Leroy, 1 file, -0/+118]
EXCEPTION_PROLOG is similar in head_8xx.S and head_32.S.

This patch creates head_32.h and moves the EXCEPTION_PROLOG macro into it. It also converts it from a GCC macro to a GAS macro in order to ease the later refactoring with 40x, since GAS macros allow the use of #ifdef/#else/#endif inside them. They also have the advantage of not requiring the ugly "; \" at the end of each line.

This patch also moves the EXCEPTION() and EXC_XFER_XXXX() macros, which are similar as well, and splits START_EXCEPTION() out of EXCEPTION().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
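The difference between the two macro styles, sketched on a two-instruction body (illustrative):

/* C preprocessor macro: every line needs the "; \" continuation,
 * and no #ifdef can appear inside the body */
#define EXCEPTION_PROLOG \
	mtspr	SPRN_SPRG_SCRATCH0,r10; \
	mtspr	SPRN_SPRG_SCRATCH1,r11

/* GAS macro: plain lines, and since cpp runs before the assembler,
 * #ifdef/#else/#endif work inside the body */
.macro EXCEPTION_PROLOG
	mtspr	SPRN_SPRG_SCRATCH0, r10
	mtspr	SPRN_SPRG_SCRATCH1, r11
#ifdef CONFIG_VMAP_STACK
	/* variant code can live here */
#endif
.endm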