author     Denys Vlasenko <dvlasenk@redhat.com>    2015-06-03 15:58:50 +0200
committer  Ingo Molnar <mingo@kernel.org>          2015-06-05 13:41:28 +0200
commit     7a5a9824c18f93415944c997dc6bb8eecfddd2e7 (patch)
tree       c5035a0440a4182ad18e3953bf0fe323a91e566d /arch/x86/entry
parent     5cdc683b7d8b3341a3d18e0c5498bc1e4f3fb990 (diff)
download   linux-7a5a9824c18f93415944c997dc6bb8eecfddd2e7.tar.bz2
x86/asm/entry/32: Remove unnecessary optimization in stub32_clone
Really swap arguments #4 and #5 in stub32_clone instead of
"optimizing" it into a move.

Yes, tls_val is currently unused. Yes, on some CPUs XCHG is a little bit
more expensive than MOV. But a cycle or two on an expensive syscall like
clone() is way below the noise floor, and this optimization is simply not
worth the obfuscation of logic.

[ There's also ongoing work on the clone() ABI by Josh Triplett that will
  depend on this change later on. ]

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1433339930-20880-2-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
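For reference, a minimal C sketch of the two argument orders the commit message
describes; parameter names and exact types are illustrative, not copied from the
kernel sources, and only the relative position of tls and child_tidptr matters:

```c
/* Sketch of the two clone() argument orders described above.
 * Names and types are illustrative only.
 *
 * 32-bit (compat) order: tls comes before child_tidptr. */
long clone32(unsigned long flags, unsigned long newsp,
             int *parent_tidptr, unsigned long tls, int *child_tidptr);

/* Native 64-bit order: child_tidptr comes before tls.
 * This is the order the x86-64 sys_clone() expects, which is why
 * the compat stub has to swap arguments #4 and #5. */
long clone64(unsigned long flags, unsigned long newsp,
             int *parent_tidptr, int *child_tidptr, unsigned long tls);
```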
Diffstat (limited to 'arch/x86/entry')
-rw-r--r--   arch/x86/entry/ia32entry.S   13
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/entry/ia32entry.S b/arch/x86/entry/ia32entry.S
index d0c7b28d5670..9558dacf32b9 100644
--- a/arch/x86/entry/ia32entry.S
+++ b/arch/x86/entry/ia32entry.S
@@ -529,14 +529,13 @@ GLOBAL(\label)
GLOBAL(stub32_clone)
leaq sys_clone(%rip), %rax
/*
- * 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
- * 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
- * Native 64-bit kernel's sys_clone() implements the latter.
- * We need to swap args here. But since tls_val is in fact ignored
- * by sys_clone(), we can get away with an assignment
- * (arg4 = arg5) instead of a full swap:
+ * The 32-bit clone ABI is: clone(..., int tls_val, int *child_tidptr).
+ * The 64-bit clone ABI is: clone(..., int *child_tidptr, int tls_val).
+ *
+ * The native 64-bit kernel's sys_clone() implements the latter,
+ * so we need to swap arguments here before calling it:
*/
- mov %r8, %rcx
+ xchg %r8, %rcx
jmp ia32_ptregs_common
ALIGN
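In the x86-64 C calling convention, argument #4 is passed in %rcx and argument #5
in %r8, so the xchg in the hunk above swaps tls_val and child_tidptr before the
jump reaches the native sys_clone(). A hedged C rendering of what the stub now
effectively does is sketched below; the function and parameter names are
hypothetical and the sys_clone() prototype is abbreviated:

```c
/* Hypothetical C equivalent of stub32_clone after this patch: accept the
 * arguments in the 32-bit order and pass them on to the native sys_clone()
 * in the 64-bit order.  Names are illustrative only. */
extern long sys_clone(unsigned long clone_flags, unsigned long newsp,
                      int *parent_tidptr, int *child_tidptr,
                      unsigned long tls);

static long stub32_clone_equivalent(unsigned long clone_flags,
                                    unsigned long newsp,
                                    int *parent_tidptr,
                                    unsigned long tls,      /* 32-bit arg #4 */
                                    int *child_tidptr)      /* 32-bit arg #5 */
{
        /* The "xchg %r8, %rcx" performs exactly this reordering of the
         * last two arguments before the tail-jump into sys_clone(). */
        return sys_clone(clone_flags, newsp, parent_tidptr,
                         child_tidptr, tls);
}
```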