author     Linus Torvalds <torvalds@linux-foundation.org>  2016-05-22 19:40:39 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2016-05-22 19:40:39 -0700
commit     7639dad93a5564579987abded4ec05e3db13659d
tree       d786a45e79672ee3ea04e93d60ec1da02cb17940 /arch
parent     77ed402b7f53cbe50ae8384a172a5b9713020400
parent     8329e818f14926a6040df86b2668568bde342ebf
download   linux-7639dad93a5564579987abded4ec05e3db13659d.tar.bz2
Merge tag 'trace-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull more tracing updates from Steven Rostedt:
"Three more changes.
- I forgot that I had another selftest to stress test the ftrace
instance creation. It was actually supposed to go into the 4.6
merge window, but I never committed it. I almost forgot about it
again, but noticed it was missing from your tree.
- Soumya PN sent me a cleanup patch to not disable interrupts when
taking the tasklist_lock for read, as it's unnecessary because that
lock is never taken for write in irq context (a sketch of this locking
pattern follows the quoted message below).
- Newer versions of gcc can cause the jump in the function_graph code
to the global ftrace_stub label to be a short jump instead of a long
one. As that jump is dynamically converted to jump to the trace code
to do function graph tracing, and that conversion expects a long jump,
it can corrupt the ftrace_stub itself (it's directly after that
call). One way to prevent gcc from using a short jump is to
declare ftrace_stub as a weak function, which we do here to
keep gcc from optimizing too much"
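
The tasklist_lock change lands outside arch/ (the diffstat below is limited
to 'arch'), so it is not shown here. The following is a minimal sketch of the
before/after locking pattern it describes; the walk_tasks_*() helpers and the
exact headers are illustrative assumptions, not the actual ftrace code:

#include <linux/sched.h>	/* tasklist_lock, for_each_process() */
#include <linux/spinlock.h>

/* Before: the read side also disabled interrupts. */
static void walk_tasks_old(void)
{
	unsigned long flags;
	struct task_struct *p;

	read_lock_irqsave(&tasklist_lock, flags);
	for_each_process(p) {
		/* ... inspect p ... */
	}
	read_unlock_irqrestore(&tasklist_lock, flags);
}

/*
 * After: a plain read_lock() is sufficient, because tasklist_lock is
 * never taken for write from irq context, so a reader cannot deadlock
 * against a writer that interrupts it on the same CPU.
 */
static void walk_tasks_new(void)
{
	struct task_struct *p;

	read_lock(&tasklist_lock);
	for_each_process(p) {
		/* ... inspect p ... */
	}
	read_unlock(&tasklist_lock);
}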
* tag 'trace-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ftrace/x86: Set ftrace_stub to weak to prevent gcc from using short jumps to it
ftrace: Don't disable irqs when taking the tasklist_lock read_lock
ftracetest: Add instance created, delete, read and enable event test
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kernel/mcount_64.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
index ed48a9f465f8..61924222a9e1 100644
--- a/arch/x86/kernel/mcount_64.S
+++ b/arch/x86/kernel/mcount_64.S
@@ -182,7 +182,8 @@ GLOBAL(ftrace_graph_call)
 	jmp ftrace_stub
 #endif
 
-GLOBAL(ftrace_stub)
+/* This is weak to keep gas from relaxing the jumps */
+WEAK(ftrace_stub)
 	retq
 END(ftrace_caller)
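
The fix itself is in assembly, via the kernel's WEAK() macro shown above: with
ftrace_stub emitted as a weak symbol, gas cannot relax the jump to it into the
short two-byte form, so the full-length jump that the ftrace patching code
expects is preserved. For illustration only, the equivalent weak-symbol marking
in C looks roughly like the sketch below; demo_stub and caller are made-up
names, not kernel code.

/* Illustration only: demo_stub is a made-up name, not kernel code. */
void demo_stub(void) __attribute__((weak));

void demo_stub(void)
{
	/* empty stub, analogous to ftrace_stub's bare retq */
}

void caller(void)
{
	/*
	 * Because demo_stub is a weak symbol it may be overridden at
	 * link time, so the toolchain must keep the full-length
	 * call/jump encoding instead of shortening it based on the
	 * nearby local definition.
	 */
	demo_stub();
}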