path: root/arch/x86/include/asm/cpu_entry_area.h
Age | Commit message | Author | Files | Lines
2018-02-26 | x86: avoid per-cpu system call trampoline | Linus Torvalds | 1 | -2/+0
The per-cpu system call trampoline was a clever trick: it allowed us to have percpu data even before swapgs is done, by just doing %rip-relative addressing. That was important, because syscall doesn't have a kernel stack, so we needed that percpu data very early, just to get a temporary register to switch the page tables around.

However, it turns out to be unnecessary, because we actually have a temporary register that we can use: %r11 is destroyed by the 'syscall' instruction anyway. Ok, technically it contains the user mode flags register, but we *have* that information anyway: it's still in %rflags, we've just masked off a few unimportant bits. We'll destroy the rest too when we do the "and" of the CR3 value, but who cares? It's a system call.

Btw, there are a few bits in eflags that might matter to user space: DF and AC. Right now this clears them, but that is fixable by just changing the MSR_SYSCALL_MASK value to not include them, and clearing them by hand the way we do for all other kernel entry points anyway.

So the only _real_ flags we'd destroy are IF and the arithmetic flags that get trampled on by the arithmetic instructions that are part of the %cr3 reload logic.

However, if we really end up caring, we can save off even those: we'd take advantage of the fact that %rcx - which contains the returning IP of the system call - also has 8 bits free. Why 8? Even with 5-level paging, we only have 57 bits of virtual address space, and the high address space is for the kernel (and vsyscall, but we'd just disable native vsyscall). So the %rip value saved in %rcx can have only 56 valid bits, which means that we have 8 bits free.

So *if* we care about IF and the arithmetic flags being saved over a system call, we'd do:

	shlq $8,%rcx
	movb %r11b,%cl
	shrl $8,%r11d
	andl $8,%r11d
	orb %r11b,%cl

to save those bits off before we then use %r11 as a temporary register (we'd obviously need to then undo that as we save the user space state on the stack).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
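A minimal user-space C sketch of the bit-packing idea above; the program and its values are illustrative assumptions, not kernel code, but the shifts and masks mirror the asm sequence in the message (the "andl $8" after the "shrl $8" picks out OF, bit 11 of the saved flags, and parks it in the architecturally-zero bit 3 of the low flags byte):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Example values only: a canonical user %rip and some saved flags. */
		uint64_t rcx = 0x00007f1234567890ULL;	/* return RIP: bits 56..63 are zero */
		uint64_t r11 = 0x0000000000000a46ULL;	/* flags: OF, IF, ZF, PF, bit 1 */

		/* Pack: shift the RIP up by 8, then fold the low flags byte
		 * (CF..SF) plus OF into the freed low byte. */
		uint8_t flags_byte = (uint8_t)(r11 & 0xff);
		flags_byte |= (uint8_t)((r11 >> 8) & 0x8);	/* OF lands in reserved bit 3 */
		uint64_t packed = (rcx << 8) | flags_byte;

		/* Unpack on the way out: both values are recoverable. */
		uint64_t rip_back   = packed >> 8;
		uint8_t  flags_back = (uint8_t)(packed & 0xff);

		assert(rip_back == rcx);
		printf("rip=%#llx flags byte=%#x\n",
		       (unsigned long long)rip_back, flags_back);
		return 0;
	}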
2017-12-23 | x86/cpu_entry_area: Add debugstore entries to cpu_entry_area | Thomas Gleixner | 1 | -0/+13
The Intel PEBS/BTS debug store is a design trainwreck as it expects virtual addresses which must be visible in any execution context. So it is required to make these mappings visible to user space when kernel page table isolation is active.

Provide enough room for the buffer mappings in the cpu_entry_area so the buffers are available in the user space visible page tables.

At the point where the kernel side entry area is populated there is no buffer available yet, but the kernel PMD must be populated. To achieve this, set the entries for these buffers to non-present.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
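A hedged sketch of the idea, not the actual contents of arch/x86/include/asm/cpu_entry_area.h: all struct names, fields and sizes below are made up for illustration. The point it models is reserving fixed, page-aligned slots for the debug store and its BTS/PEBS buffers inside a per-CPU entry area, so the page tables covering them can be populated at boot while the buffer PTEs stay non-present until buffers exist:

	#include <stddef.h>
	#include <stdint.h>

	#define EXAMPLE_PAGE_SIZE 4096

	/* Illustrative stand-in for the hardware debug store management page. */
	struct example_debug_store {
		uint64_t bts_buffer_base;
		uint64_t bts_index;
		uint64_t bts_absolute_maximum;
		uint64_t bts_interrupt_threshold;
		uint64_t pebs_buffer_base;
		uint64_t pebs_index;
		uint64_t pebs_absolute_maximum;
		uint64_t pebs_interrupt_threshold;
	} __attribute__((aligned(EXAMPLE_PAGE_SIZE)));

	/* Illustrative entry-area layout: room is reserved up front for the
	 * debug store and its buffers, so the covering PMD can be populated
	 * early even though the buffers are allocated only later. */
	struct example_entry_area {
		char gdt[EXAMPLE_PAGE_SIZE];
		char entry_stack[EXAMPLE_PAGE_SIZE];
		struct example_debug_store cpu_debugstore;
		char debugstore_buffers[4 * EXAMPLE_PAGE_SIZE];
	};

	_Static_assert(offsetof(struct example_entry_area, cpu_debugstore)
		       % EXAMPLE_PAGE_SIZE == 0,
		       "debug store slot must start on a page boundary");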
2017-12-22 | x86/cpu_entry_area: Move it out of the fixmap | Thomas Gleixner | 1 | -1/+17
Put the cpu_entry_area into a separate P4D entry. The fixmap gets too big, and 0-day already hit a case where the fixmap PTEs were cleared by cleanup_highmap().

Aside from that, the fixmap API is a pain as it's all backwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
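A minimal sketch of what a dedicated virtual range buys over fixmap slots; the base address, slot size and function name here are assumed placeholders, not the kernel's real CPU_ENTRY_AREA constants or helpers:

	#include <stdint.h>
	#include <stdio.h>

	/* Made-up constants: one dedicated range, one fixed-size slot per CPU. */
	#define EXAMPLE_CEA_BASE	0xfffffe0000000000ULL	/* assumed base, not the real one */
	#define EXAMPLE_CEA_PER_CPU	(1ULL << 21)		/* assumed per-CPU slot size */

	static uint64_t example_cea_vaddr(unsigned int cpu)
	{
		/* Each CPU's entry area sits at a fixed offset inside the
		 * dedicated range, so no fixmap index juggling is needed. */
		return EXAMPLE_CEA_BASE + (uint64_t)cpu * EXAMPLE_CEA_PER_CPU;
	}

	int main(void)
	{
		for (unsigned int cpu = 0; cpu < 4; cpu++)
			printf("cpu %u -> %#llx\n", cpu,
			       (unsigned long long)example_cea_vaddr(cpu));
		return 0;
	}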
2017-12-22 | x86/cpu_entry_area: Move it to a separate unit | Thomas Gleixner | 1 | -0/+52
Separate the cpu_entry_area code out of cpu/common.c and the fixmap.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>