path: root/scripts/recordmcount.pl
Age | Commit message | Author | Files | Lines
2017-02-27 | scripts/spelling.txt: add "swith" pattern and fix typo instances | Masahiro Yamada | 1 | -1/+1
Fix typos and add the following to the scripts/spelling.txt:

  swith||switch
  swithable||switchable
  swithed||switched
  swithing||switching

While we are here, fix the "update" to "updates" in the touched hunk in drivers/net/wireless/marvell/mwifiex/wmm.c. Link: http://lkml.kernel.org/r/1481573103-11329-2-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-10-07 | nmi_backtrace: generate one-line reports for idle cpus | Chris Metcalf | 1 | -0/+1
When doing an nmi backtrace of many cores, most of which are idle, the output is a little overwhelming and very uninformative. Suppress messages for cpus that are idling when they are interrupted and just emit one line, "NMI backtrace for N skipped: idling at pc 0xNNN". We do this by grouping all the cpuidle code together into a new .cpuidle.text section, and then checking the address of the interrupted PC to see if it lies within that section. This commit suitably tags x86 and tile idle routines, and only adds in the minimal framework for other architectures. Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm] Tested-by: Petr Mladek <pmladek@suse.com> Cc: Aaron Tomlin <atomlin@redhat.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-09-28 | scripts/recordmcount.c: account for .softirqentry.text | Dmitry Vyukov | 1 | -0/+1
be7635e7287e ("arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections") added the .softirqentry.text section, but it was not added to recordmcount. So functions in the section are untraceable. Add the section to scripts/recordmcount.c and scripts/recordmcount.pl. Fixes: be7635e7287e ("arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections") Link: http://lkml.kernel.org/r/1474902626-73468-1-git-send-email-dvyukov@google.com Signed-off-by: Dmitry Vyukov <dvyukov@google.com> Acked-by: Steve Rostedt <rostedt@goodmis.org> Cc: <stable@vger.kernel.org> [4.6+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
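Both this commit and the .cpuidle.text change above boil down to adding one more name to the list of sections recordmcount is willing to scan. A minimal Perl sketch of that idea follows; the hash name and its exact contents are illustrative, not the script's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative whitelist of sections whose mcount call sites are safe to
# record; commits like this one amount to adding a new key here.
my %text_sections = (
    ".text"              => 1,
    ".ref.text"          => 1,
    ".kprobe.text"       => 1,
    ".cpuidle.text"      => 1,
    ".softirqentry.text" => 1,
);

sub section_is_traceable {
    my ($name) = @_;
    return exists $text_sections{$name};
}

print section_is_traceable(".softirqentry.text") ? "record\n" : "skip\n";   # record
print section_is_traceable(".init.text")         ? "record\n" : "skip\n";   # skip
```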
2016-01-13 | scripts/recordmcount.pl: support data in text section on powerpc | Ulrich Weigand | 1 | -1/+2
If a text section starts out with a data blob before the first function start label, the disassembly parsing done in recordmcount.pl gets confused on powerpc, leading to creation of corrupted module objects. This was not a problem so far since the compiler would never create such text sections. However, this has changed with a recent change in GCC 6 to support distances of > 2GB between a function and its associated TOC in the ELFv2 ABI, exposing this problem. There is already code in recordmcount.pl to handle such data blobs on the sparc64 platform. This patch uses the same method to handle those on powerpc as well. Cc: stable@vger.kernel.org Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-02-11 | Merge branch 'for-linus' of ↵ | Linus Torvalds | 1 | -2/+7
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Martin Schwidefsky:

 - The remaining patches for the z13 machine support: kernel build option for z13, the cache synonym avoidance, SMT support, compare-and-delay for spinloops and the CEX5S crypto adapter.
 - The ftrace support for function tracing with the gcc hotpatch option. This touches common code Makefiles, Steven is ok with the changes.
 - The hypfs file system gets an extension to access diagnose 0x0c data in user space for performance analysis for Linux running under z/VM.
 - The iucv hvc console gets wildcard support for the user id filtering.
 - The cacheinfo code is converted to use the generic infrastructure.
 - Cleanup and bug fixes.

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (42 commits)
  s390/process: free vx save area when releasing tasks
  s390/hypfs: Eliminate hypfs interval
  s390/hypfs: Add diagnose 0c support
  s390/cacheinfo: don't use smp_processor_id() in preemptible context
  s390/zcrypt: fixed domain scanning problem (again)
  s390/smp: increase maximum value of NR_CPUS to 512
  s390/jump label: use different nop instruction
  s390/jump label: add sanity checks
  s390/mm: correct missing space when reporting user process faults
  s390/dasd: cleanup profiling
  s390/dasd: add locking for global_profile access
  s390/ftrace: hotpatch support for function tracing
  ftrace: let notrace function attribute disable hotpatching if necessary
  ftrace: allow architectures to specify ftrace compile options
  s390: reintroduce diag 44 calls for cpu_relax()
  s390/zcrypt: Add support for new crypto express (CEX5S) adapter.
  s390/zcrypt: Number of supported ap domains is not retrievable.
  s390/spinlock: add compare-and-delay to lock wait loops
  s390/tape: remove redundant if statement
  s390/hvc_iucv: add simple wildcard matches to the iucv allow filter
  ...
2015-01-29 | s390/ftrace: hotpatch support for function tracing | Heiko Carstens | 1 | -2/+7
Make use of gcc's hotpatch support to generate better code for ftrace function tracing. The generated code now contains only a six byte nop in each function prologue instead of a 24 byte code block which will be runtime patched to support function tracing. With the new code generation the runtime overhead for supporting function tracing is close to zero, while the original code did show a significant performance impact. Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-01-19 | scripts/recordmcount.pl: There is no -m32 gcc option on Super-H anymore | Michael Karcher | 1 | -1/+0
Compiling SH with gcc-4.8 fails due to the -m32 option not being supported. From http://buildd.debian-ports.org/status/fetch.php?pkg=linux&arch=sh4&ver=3.16.7-ckt4-1&stamp=1421425783

    CC      init/main.o
  gcc-4.8: error: unrecognized command line option '-m32'
  ld: cannot find init/.tmp_mc_main.o: No such file or directory
  objcopy: 'init/.tmp_mx_main.o': No such file
  rm: cannot remove 'init/.tmp_mx_main.o': No such file or directory
  rm: cannot remove 'init/.tmp_mc_main.o': No such file or directory

Link: http://lkml.kernel.org/r/1421537778-29001-1-git-send-email-kernel@mkarcher.dialup.fu-berlin.de Link: http://lkml.kernel.org/r/54BCBDD4.10102@physik.fu-berlin.de Cc: stable@vger.kernel.org Cc: Matt Fleming <matt@console-pimps.org> Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Michael Karcher <kernel@mkarcher.dialup.fu-berlin.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-10-27 | s390/ftrace,kprobes: allow to patch first instruction | Heiko Carstens | 1 | -1/+1
If the function tracer is enabled, allow setting kprobes on the first instruction of a function (which is the function trace caller): If no kprobe is set, handling of enabling and disabling function tracing of a function simply patches the first instruction. Either it is a nop (right now it's an unconditional branch, which skips the mcount block), or it's a branch to the ftrace_caller() function. If a kprobe is being placed on a function tracer calling instruction, we encode in the remaining bytes after the breakpoint instruction (illegal opcode) whether we actually have a nop or a branch. This is possible, since the size of the instruction used for the nop and branch is six bytes, while the size of the breakpoint is only two bytes. Therefore the first two bytes contain the illegal opcode and the last four bytes contain either "0" for nop or "1" for branch. The kprobes code will then execute/simulate the correct instruction. Instruction patching for kprobes and the function tracer is always done with stop_machine(). Therefore we don't have any races where an instruction is patched concurrently on a different cpu. Besides that, the program check handler which executes the function trace caller instruction won't be executed concurrently with any stop_machine() execution either. This allows keeping full fault-based kprobes handling, which generates correct pt_regs contents automatically. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-10-09 | s390/ftrace: remove 31 bit ftrace support | Heiko Carstens | 1 | -7/+0
31 bit and 64 bit diverge more and more and it is rather painful to keep both parts running. To make things simpler just remove the 31 bit support which nobody uses anyway. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-05-29 | ftrace: Add arm64 support to recordmcount | AKASHI Takahiro | 1 | -0/+5
The recordmcount utility under scripts is run, after compiling each object, to find all the locations that call _mcount() and put them into a specific section named __mcount_loc. Then the linker collects all such information into a table in the kernel image (between __start_mcount_loc and __stop_mcount_loc) for later use by ftrace. This patch adds arm64-specific definitions to identify such locations. There are two types of implementation, C and Perl. On arm64, only the C version is used to build the kernel now that CONFIG_HAVE_C_RECORDMCOUNT is on, but the Perl version is also maintained. This patch also contains a workaround just in case the header file elf.h on the host machine doesn't have definitions of EM_AARCH64 or R_AARCH64_ABS64. Without them, compiling the C version of recordmcount will fail. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-12-05 | ftrace: default to tilegx if ARCH=tile is specified | Tony Lu | 1 | -1/+2
This matches the existing behavior in arch/tile/Makefile for defconfig. Reported-by: fengguang.wu@intel.com Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Tony Lu <zlu@tilera.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-11-05 | recordmcount.pl: Add support for __fentry__ | Jamie Iles | 1 | -2/+2
With gcc 4.6.0 the -mfentry feature places the function profiling call at the start of the function. When this is used, the call is to __fentry__ and not mcount. This is required for Ksplice as the C version of recordmcount doesn't insert section symbols for the __mcount_loc section so we fall back to the perl version. Based on 48bb5dc6cd9d30fe0d594947563da1f8bd9abada (ftrace: Make recordmcount.c handle __fentry__). Link: http://lkml.kernel.org/r/1383648129-10724-1-git-send-email-jamie.iles@oracle.com Signed-off-by: Jamie Iles <jamie.iles@oracle.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
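On the recordmcount.pl side, __fentry__ support is essentially a wider call-site regex that accepts either symbol name. A hedged Perl sketch of the idea; the exact pattern in the script differs per architecture, and the sample input is made up:

```perl
use strict;
use warnings;

# Match objdump -dr relocation lines that target either mcount or
# __fentry__; the captured hex offset would become a __mcount_loc entry.
my $mcount_regex = qr/^\s*([0-9a-fA-F]+):.*\s(mcount|__fentry__)(?:[-+]0x[0-9a-fA-F]+)?$/;

for my $line ("   6: R_X86_64_PC32\t__fentry__-0x4",
              "  1c: R_X86_64_PC32\tmcount-0x4",
              "  20: R_X86_64_64\tsome_other_symbol") {
    if ($line =~ $mcount_regex) {
        printf "call site via %s at offset 0x%s\n", $2, $1;
    }
}
```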
2013-08-30 | tile: support ftrace on tilegx | Tony Lu | 1 | -0/+4
This commit adds support for static ftrace, graph function support, and dynamic tracer support. Signed-off-by: Tony Lu <zlu@tilera.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-05-16 | ftrace/s390: mcount offset calculation | Martin Schwidefsky | 1 | -0/+2
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch] at compile time and not in ftrace_call_adjust at run time. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 | ftrace/x86: mcount offset calculation | Martin Schwidefsky | 1 | -0/+2
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch] at compile time and not in ftrace_call_adjust at run time. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
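These two commits move the per-architecture call-site adjustment out of ftrace_call_adjust() and into the record-generation step. A hedged sketch of the shape of that change; the variable name mirrors the commit subject's intent, but the value and helper here are purely illustrative:

```perl
use strict;
use warnings;

# Record the address ftrace should patch rather than the raw relocation
# offset.  On architectures where the call instruction starts a few bytes
# before the relocation, the correction is folded in at build time instead
# of being recomputed at boot.  The value below is an assumption for the
# sake of the example.
my $mcount_adjust = -1;

sub mcount_loc_entry {
    my ($reloc_offset) = @_;
    return $reloc_offset + $mcount_adjust;
}

printf "record 0x%x instead of 0x%x\n", mcount_loc_entry(0x10), 0x10;
```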
2011-05-16 | ftrace: Add .kprobe.text section to whitelist for recordmcount.c | Steven Rostedt | 1 | -0/+1
The .kprobe.text section is safe for changing mcount calls to nops and for tracing. Add it to the whitelist in recordmcount.c and recordmcount.pl. Cc: John Reiser <jreiser@bitwagon.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Link: http://lkml.kernel.org/r/20110421023737.743350547@goodmis.org Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-03-10 | ftrace: Add .ref.text as one of the safe areas to trace | Steven Rostedt | 1 | -0/+1
The section .ref.text will not go away unexpectedly and is safe to trace. Add it to the safe list of sections to allow tracing. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-09-02 | ARM: 6319/1: ftrace: add Thumb-2 support to dynamic ftrace | Rabin Vincent | 1 | -1/+1
Handle the different nop and call instructions for Thumb-2. Also, we need to adjust the recorded mcount_loc addresses because they have the lsb set. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
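The address fixup described here amounts to clearing the Thumb mode bit before the location is recorded; a tiny illustrative sketch, not the script's actual code:

```perl
use strict;
use warnings;

# Thumb-2 function symbols have bit 0 set to mark the instruction set;
# mask it off so the recorded location points at the instruction itself.
sub strip_thumb_bit {
    my ($addr) = @_;
    return $addr & ~1;
}

printf "0x%x -> 0x%x\n", 0x8001, strip_thumb_bit(0x8001);   # 0x8001 -> 0x8000
```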
2010-09-02 | ARM: 6318/1: ftrace: fix and update dynamic ftrace | Rabin Vincent | 1 | -0/+2
This adds mcount recording and updates dynamic ftrace for ARM to work with the new ftrace dynamic tracing implementation. It also adds support for the mcount format used by newer ARM compilers. With dynamic tracing, mcount() is implemented as a nop. Callsites are patched on startup with nops, and dynamically patched to call to the ftrace_caller() routine as needed. Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-08-16 | Merge branch 'tip/perf/urgent-3' of ↵ | Steven Rostedt | 1 | -1/+6
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into trace/tip/perf/urgent-4 Conflicts: kernel/trace/trace_events.c Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-08-12 | tracing: Extend recordmcount to better support Blackfin mcount | Mike Frysinger | 1 | -1/+6
The mcount call on Blackfin systems includes some stack manipulation around the actual call site, so extend the build time perl script to support this. This way we can avoid doing the calculation at runtime. Signed-off-by: Mike Frysinger <vapier@gentoo.org> LKML-Reference: <1281079584-21205-1-git-send-email-vapier@gentoo.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-07-22 | tracing: Fix $mcount_regex for MIPS in recordmcount.pl | David Daney | 1 | -1/+1
I found this issue in a locally patched 2.6.32.x; current kernels have moved the offending code to an __init function which is skipped by recordmcount.pl, so the bug is not currently being exercised. However, I think the patch is still a good idea, to avoid future problems if _mcount were to ever have its address taken in normal code. This is what I originally saw: Although arch/mips/kernel/ftrace.c is built without -pg, and thus contains no calls to _mcount, it does use the address of _mcount in ftrace_make_nop(). This was causing relocations to be emitted for _mcount which recordmcount.pl erroneously took to be _mcount call sites. The result was that the text of ftrace_make_nop() would be patched with garbage, leading to a system crash. In non-module code, all _mcount call sites will have R_MIPS_26 relocations, so we restrict $mcount_regex to only match on these. Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Wu Zhangjin <wuzhangjin@gmail.com> Signed-off-by: David Daney <ddaney@caviumnetworks.com> LKML-Reference: <1278712325-12050-1-git-send-email-ddaney@caviumnetworks.com> Cc: Li Hong <lihong.hi@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Matt Fleming <matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
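The fix narrows $mcount_regex so that only jump-style R_MIPS_26 relocations count as call sites, and a plain address reference to _mcount (as in ftrace_make_nop()) no longer matches. A hedged sketch of that restriction; the pattern and sample lines are illustrative rather than the script verbatim:

```perl
use strict;
use warnings;

# Only an R_MIPS_26 relocation against _mcount is a real call site.
my $mcount_regex = qr/^\s*([0-9a-fA-F]+):\s*R_MIPS_26\s+_mcount$/;

for my $line (" 10c: R_MIPS_26 _mcount",
              " 200: R_MIPS_HI16 _mcount") {
    if ($line =~ $mcount_regex) {
        print "record call site at 0x$1\n";
    } else {
        print "ignore: $line\n";
    }
}
```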
2010-02-26 | Merge commit 'v2.6.33' into tracing/core | Ingo Molnar | 1 | -2/+6
Conflicts: scripts/recordmcount.pl Merge reason: Merge up to v2.6.33. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-11 | tracing/x86: Derive arch from bits argument in recordmcount.pl | Jan Kiszka | 1 | -1/+1
Let the arch argument be overruled by bits. Otherwise, building of external modules against a i386 target on a x86-64 host (and likely vice versa as well) fails unless ARCH=i386 is explicitly passed to make. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> LKML-Reference: <4B4AFE10.8050109@siemens.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-01-06 | tracing: Use appropriate perl constructs in recordmcount.pl | Wolfram Sang | 1 | -18/+11
Modified recordmcount.pl to use perl constructs that are still understandable by C hackers that are not perl programmers. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262724082-9517-1-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-01-06 | tracing: optimize recordmcount.pl for offsets-handling | Wolfram Sang | 1 | -9/+9
- move check for open file in front of the writing loop
- use perl-constructs to access the array

Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262716072-14414-2-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-12-17 | MIPS: Tracing: Add dynamic function tracer support | Wu Zhangjin | 1 | -0/+52
With the dynamic function tracer, by default, _mcount is defined as an "empty" function; it returns directly without any further action. When enabling it in user-space, it will jump to a real tracing function (ftrace_caller) and do the real job for us.

Differing from the static function tracer, the dynamic function tracer provides two functions, ftrace_make_call()/ftrace_make_nop(), to enable/disable the tracing of indicated kernel functions (set_ftrace_filter).

In the -v4 version, the implementation of this support was basically the same as the X86 version: _mcount is implemented as an empty function and ftrace_caller is implemented as a real tracing function, respectively. But in this version, to support module tracing with the help of -mlong-calls in arch/mips/Makefile (MODFLAGS += -mlong-calls), the stuff becomes a little more complex. We need to cope with two different types of calls to _mcount.

For the kernel part, the call to _mcount (result of "objdump -hdr vmlinux") looks like this:

  108:  03e0082d  move  at,ra
  10c:  0c000000  jal   0 <fpcsr_pending>
               10c: R_MIPS_26    _mcount
               10c: R_MIPS_NONE  *ABS*
               10c: R_MIPS_NONE  *ABS*
  110:  00020021  nop

For the module with -mlong-calls, it looks like this:

  c:    3c030000  lui     v1,0x0
               c: R_MIPS_HI16    _mcount
               c: R_MIPS_NONE    *ABS*
               c: R_MIPS_NONE    *ABS*
  10:   64630000  daddiu  v1,v1,0
               10: R_MIPS_LO16   _mcount
               10: R_MIPS_NONE   *ABS*
               10: R_MIPS_NONE   *ABS*
  14:   03e0082d  move    at,ra
  18:   0060f809  jalr    v1

In the kernel version, there is only one "_mcount" string for every kernel function, so we just need to match this one in mcount_regex of scripts/recordmcount.pl, but in the module version, we need to choose one of the two to match. Herein, I choose the first one with "R_MIPS_HI16 _mcount".

In the kernel version, without module tracing support, we just need to replace "jal _mcount" with "jal ftrace_caller" to do the real tracing, and filter the tracing of some kernel functions by replacing it with a nop instruction. But as we have described before, the instruction "jal ftrace_caller" only leaves 32 bits for the address of ftrace_caller, so it will fail when calling from the module space. So, herein, we must replace something else.

The basic idea is loading the address of ftrace_caller into v1 via changing these two instructions:

  lui   v1,0x0
  addiu v1,v1,0

If we want to enable the tracing, we need to replace the above instructions with:

  lui   v1, HI_16BIT_ftrace_caller
  addiu v1, v1, LOW_16BIT_ftrace_caller

If we want to stop the tracing of the indicated kernel functions, we just need to replace the "jalr v1" with a nop instruction. But then we need to replace two instructions and encode the above two instructions ourselves. Is there a simpler solution? Yes! Here it is: in this version, we put _mcount and ftrace_caller together, which means the address of _mcount and ftrace_caller is the same:

  _mcount:
  ftrace_caller:
        j ftrace_stub
        nop

        ...(do real tracing here)...

  ftrace_stub:
        jr ra
        move ra, at

By default, the kernel functions call _mcount, and then jump to ftrace_stub and return. And when we want to do real tracing, we just need to remove that "j ftrace_stub", and it will run through the two "nop" instructions and then do the real tracing job.

What about the filtering job? We just need to do this:

  lui v1, hi_16bit_of_mcount        <--> b 1f (0x10000004)
  addiu v1, v1, low_16bit_of_mcount
  move at, ra
  jalr v1
  nop
  1f: (rec->ip + 12)

In linux-mips64, there will be some local symbols, whose names are prefixed by $L, which need to be filtered. Thanks go to Steven for writing the mips64-specific function_regex.

In conclusion, with RISC, things become easier with such a "stupid" trick; RISC is something like K.I.S.S. And there are lots of "simple" tricks in the whole ftrace support. Thanks go to Steven and the other folks for providing such a wonderful tracing framework!

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Nicholas Mc Guire <der.herr@hofr.at> Cc: zhangfx@lemote.com Cc: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: http://patchwork.linux-mips.org/patch/675/ Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-12-17 | MIPS: Tracing: Add an endian argument to scripts/recordmcount.pl | Wu Zhangjin | 1 | -3/+3
MIPS and some other architectures need this argument to handle big/little endian respectively. Signed-off-by: Wu Zhangjin <wuzj@lemote.com> Cc: Nicholas Mc Guire <der.herr@hofr.at> Cc: zhangfx@lemote.com Cc: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: http://patchwork.linux-mips.org/patch/674/ Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-12-14 | microblaze: ftrace: Add dynamic trace support | Michal Simek | 1 | -0/+3
With the dynamic function tracer, by default, _mcount is defined as an "empty" function; it returns directly without any further action. When enabling it in user-space, it will jump to a real tracing function (ftrace_caller) and do the real job for us. Differing from the static function tracer, the dynamic function tracer provides two functions, ftrace_make_call()/ftrace_make_nop(), to enable/disable the tracing of indicated kernel functions (set_ftrace_filter). In the kernel version, there is only one "_mcount" string for every kernel function, so we just need to match this one in mcount_regex of scripts/recordmcount.pl. For more information please look at the code and the Documentation/trace folder. Steven ACKed the scripts/recordmcount.pl part. Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-11-17 | tracing: Only print objcopy version warning once from recordmcount | Steven Rostedt | 1 | -2/+10
If the user has an older version of objcopy that can not handle converting local symbols to global and vice versa, then some functions will not be part of the dynamic function tracer. The current code in recordmcount.pl will print a warning in this case. Unfortunately, there exist lots of files that may have this issue with older objcopys, and this will cause a warning for every file compiled with this issue. This patch solves this overwhelming output by creating a .tmp_quiet_recordmcount file on the first instance the warning is encountered. The warning will not print if this file exists. The temp file is deleted at the beginning of the compile to ensure that the warning will happen once again on new compiles (because the issue is still present). Reported-by: Andrew Morton <akpm@linux-foundation.org> Cc: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
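The warn-once behaviour is implemented with a marker file rather than state kept in memory, since recordmcount.pl is a fresh process for every object. A hedged sketch of the mechanism; the file name follows the commit message, while the warning text and helper name are illustrative:

```perl
use strict;
use warnings;

my $quiet_file = ".tmp_quiet_recordmcount";

# Print the old-objcopy warning only the first time it is hit in a build;
# the marker file silences every later object until it is deleted again.
sub warn_old_objcopy_once {
    return if -f $quiet_file;
    warn "WARNING: old objcopy cannot convert local symbols to global; " .
         "some functions will not be traced\n";
    open(my $fh, '>', $quiet_file) or die "cannot create $quiet_file: $!";
    close($fh);
}

warn_old_objcopy_once();   # warns
warn_old_objcopy_once();   # silent
unlink $quiet_file;        # clean up after the demo
```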
2009-10-29 | tracing: Exit with error if a weak function is used in recordmcount.pl | Li Hong | 1 | -7/+3
If a weak function is used as a relocation reference for mcount callers and that function is overridden, it will cause ftrace to fail at run time. The current code should prevent a weak function from being used, but if one is, the code should exit with an error to fail at compile time. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050743.GH30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Move conditional into update_funcs() in recordmcount.pl | Li Hong | 1 | -5/+3
Move all the condition validations into the function update_funcs(). Also update_funcs should not die if $ref_func is undefined for there may be more than one valid section in an object file. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050703.GG30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Add regex for weak functions in recordmcount.pl | Li Hong | 1 | -7/+9
Add a variable to contain the regex needed to find weak functions in the 'nm' output. This will allow other archs to easily override it. Also rename the regex variable $nm_regex to $local_regex to be more descriptive. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050619.GF30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
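A hedged sketch of what a weak-function regex over nm output looks like; the variable name follows the commit subject, but the exact pattern in the script may differ per architecture:

```perl
use strict;
use warnings;

# Weak symbols show up with a "w" or "W" type in nm output; collect them so
# they are never chosen as a section's reference function.
my $weak_regex = qr/^[0-9a-fA-F]+\s+([wW])\s+(\S+)/;

my %weak;
for my $line ("0000000000000010 W my_weak_func",
              "0000000000000020 T my_global_func") {
    $weak{$2} = 1 if $line =~ $weak_regex;
}
print "weak: ", join(", ", sort keys %weak), "\n";   # weak: my_weak_func
```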
2009-10-29 | tracing: Move mcount section search to front of loop in recordmcount.pl | Li Hong | 1 | -14/+18
Move the mcount section check to the beginning of the objdump read loop. This makes the code easier to follow since the search for the mcount section is performed first before the mcount callers are processed. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050523.GE30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Fix objcopy revision check in recordmcount.pl | Li Hong | 1 | -29/+27
The current logic to check objcopy's version is incorrect. This patch fixes the algorithm and disables the use of local functions as a reference if the objcopy version does not support static to global conversions. Also remove some unused variables. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050421.GD30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Check absolute path of input file in recordmcount.pl | Li Hong | 1 | -1/+1
The ftrace.c file may reference the mcount function and this may interfere with the recordmcount.pl processing. To avoid this, the code does not process the kernel/trace/ftrace.o. But currently the check is against a relative path. This patch modifies the check to succeed if the path is an absolute path. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050332.GC30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Correct the check for number of arguments in recordmcount.pl | Li Hong | 1 | -1/+1
The number of arguments passed into recordmcount.pl is 10, but the code checks if only 7 are passed in. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091027065733.GB22032@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29 | tracing: Amend documentation in recordmcount.pl to reflect implementation | Li Hong | 1 | -35/+49
The documentation currently says we will use the first function in a section as a reference. The actual algorithm is: choose the first global function we meet as a reference. If there is none, choose the first local one. Change the documentation to be consistent with the code. Also add several other clarifications. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050138.GA30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-14 | tracing: Enable "__cold" functions | Jiri Olsa | 1 | -0/+1
Based on the commit a586df06 ("x86: Support __attribute__((__cold__)) in gcc 4.3"), some of the functions go to the ".text.unlikely" section. It looks like there are not many of them (I found printk, panic, __ssb_dma_not_implemented, fat_fs_error), but they are still worth including, I think. Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> LKML-Reference: <20091013203426.175845614@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-11 | Merge branch 'linus' into tracing/core | Ingo Molnar | 1 | -4/+8
Conflicts: kernel/trace/trace_events_filter.c We use the tracing/core version. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-07 | tracing: Fix recordmcount.pl to handle sections with only weak functions | Steven Rostedt | 1 | -2/+2
Roland Dreier found that a section in one of the staging drivers contained only a weak function, and this caused recordmcount.pl to spit out a warning and fail. Although it is strange that a driver would have a weak function, and that this function is only used in one place, it should not be something to make recordmcount.pl fail. This patch fixes the issue in a simple manner: if only weak functions exist in a section, then that section will not be recorded. Reported-by: Roland Dreier <rdreier@cisco.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-05 | tracing: do not use functions starting with .L in recordmcount.pl | Steven Rostedt | 1 | -1/+4
On Wed, 5 Aug 2009, Ingo Molnar wrote:

> * Dave Airlie <airlied@gmail.com> wrote:
>
> > Hey,
> >
> > So I spent 3-4 hrs today (I'm stupid yes) tracking down a .o
> > breakage by blaming rawhide gcc/binutils as I was using make
> > V=1 and seeing only the compiler chain running,
>
> Hm, is this that powerpc related build bug you just reported?

Well we tracked it down and it is powerpc64 specific. Seems that in drivers/hwmon/lm93.c there's a function called:

  LM93_IN_FROM_REG()

But PPC64 has function descriptors and the real function names (the ones you see in objdump) start with a '.'. Thus in objdump you have:

  Disassembly of section .text:

  0000000000000000 <.LM93_IN_FROM_REG>:
     0:  7c 08 02 a6  mflr  r0
     4:  fb 81 ff e0  std   r28,-32(r1)

The function name used is .LM93_IN_FROM_REG. But gcc considers symbols that start with ".L" as special symbols that are used inside the assembly stage. The nm passed into recordmcount uses the --synthetic option, which shows the ".L" symbols (my runs outside of the build did not include the --synthetic option, so my older patch worked). We see the function as a local. Now, to capture all the locations that use "mcount" we need to have a reference to link into the object file a list of mcount callers. We need a reference that will not disappear. We try to use a global function, and if that does not work, we use a local function as a reference. But to relink the section back into the object, we need to make it global. In this case, we run objcopy using --globalize-symbol and --localize-symbol to convert the symbol into a global symbol, link the mcount list, then convert it back to a local symbol. This works great except for this case. .L* symbols can not be converted into a global symbol, and the mcount section referencing it will remain unresolved. Reported-by: Dave Airlie <airlied@gmail.com> LKML-Reference: <alpine.DEB.2.00.0908052011590.5010@gandalf.stny.rr.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
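The practical effect of the fix is a guard that refuses ".L"-prefixed locals as reference symbols. A hedged sketch of that check; the helper name is made up for illustration:

```perl
use strict;
use warnings;

# objcopy --globalize-symbol cannot promote assembler-internal ".L" symbols,
# so they must never be used as the anchor for the __mcount_loc section.
sub acceptable_ref {
    my ($name) = @_;
    return $name !~ /^\.L/;
}

for my $sym (".LM93_IN_FROM_REG", "lm93_read_value") {
    printf "%-20s %s\n", $sym, acceptable_ref($sym) ? "usable" : "skip";
}
```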
2009-07-23 | ftrace: Only update $offset when we update $ref_func | Matt Fleming | 1 | -1/+2
The value of $offset should be the offset of $ref_func from the beginning of the object file. Therefore, we should set both variables together. This fixes a bug I was hitting on sh where $offset (which is used to calculate the addends for the __mcount_loc entries) was being set multiple times and didn't correspond to $ref_func's offset in the object file. The addends in __mcount_loc were calculated incorrectly, resulting in ftrace dynamically modifying addresses that weren't mcount call sites. Signed-off-by: Matt Fleming <matt@console-pimps.org> LKML-Reference: <1248365775-25196-2-git-send-email-matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-07-23 | ftrace: Fix the conditional that updates $ref_func | Matt Fleming | 1 | -1/+1
Fix the conditional that checks if we already have a $ref_func and that the new function is weak. The code was previously checking whether either condition was false, and we really need to only update $ref_func if both conditions are false. Signed-off-by: Matt Fleming <matt@console-pimps.org> LKML-Reference: <1248365775-25196-1-git-send-email-matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
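Taken together with the $offset fix above, the two commits mean the reference symbol and its offset are chosen once, under one condition: no reference picked yet and the candidate is not weak. A hedged sketch of that combined logic; the variable names mirror the commit messages, the surrounding code is invented for illustration:

```perl
use strict;
use warnings;

my %weak = ("maybe_overridden" => 1);   # weak symbols seen in nm output
my ($ref_func, $offset);

# Adopt a candidate as the reference function only if we have none yet and
# it is not weak, and record its offset in the same place so the two can
# never get out of sync.
sub consider_ref {
    my ($name, $addr) = @_;
    if (!defined $ref_func && !defined $weak{$name}) {
        $ref_func = $name;
        $offset   = $addr;
    }
}

consider_ref("maybe_overridden", 0x40);   # rejected: weak
consider_ref("real_local_func",  0x80);   # accepted
printf "ref=%s offset=0x%x\n", $ref_func, $offset;
```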
2009-07-18 | tracing: Remove .globl in the scripts/recordmcount.pl doc | jolsa@redhat.com | 1 | -1/+0
I was reading through the recordmcount.pl starting comment, and spotted a tiny discrepancy. The second example is about my_func not being global, but the example code has the ".globl my_func" statement just moved. Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: rostedt@goodmis.org LKML-Reference: <1247773468-11594-4-git-send-email-jolsa@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-06-16 | sparc64: Add proper dynamic ftrace support. | David S. Miller | 1 | -0/+20
Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Steven Rostedt <rostedt@goodmis.org> Acked-by: Ingo Molnar <mingo@elte.hu>
2009-06-12 | [S390] ftrace: add dynamic ftrace support | Heiko Carstens | 1 | -0/+13
Dynamic ftrace support for s390. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-05-05 | ftrace: use .sched.text, not .text.sched in recordmcount.pl | Tim Abbott | 1 | -3/+3
The only references in the kernel to the .text.sched section are in recordmcount.pl. Since the code it has is intended to be example code it should refer to real kernel sections. So change it to .sched.text instead. [ Impact: consistency in comments ] Signed-off-by: Tim Abbott <tabbott@mit.edu> LKML-Reference: <1241136371-10768-1-git-send-email-tabbott@mit.edu> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-01-18 | ftrace: test for running of recordmcount.pl twice on an object | Steven Rostedt | 1 | -3/+18
Impact: fix failure of dynamic function tracer selftest

In the course of development, a developer does several makes on their kernel. Sometimes, the make might do something abnormal. In the case of running the recordmcount.pl script on an object twice, the script will duplicate all the calls to mcount in the __mcount_loc section. On boot up, the dynamic function tracer is careful when it modifies code, and performs several consistency checks. One is to not modify the call site if it is not what it expects it to be. If a function call site is listed twice, the first entry will convert the site to a nop, and the second will fail because it expected to see a call to mcount, but instead it sees a nop. Thus, the function tracer is disabled.

Eric Sesterhenn reported seeing:

  [ 1.055440] ftrace: converting mcount calls to 0f 1f 44 00 00
  [ 1.055568] ftrace: allocating 29418 entries in 116 pages
  [ 1.061000] ------------[ cut here ]------------
  [ 1.061000] WARNING: at kernel/trace/ftrace.c:441
  [...]
  [ 1.060000] ---[ end trace 4eaa2a86a8e2da23 ]---
  [ 1.060000] ftrace failed to modify [<c0118072>] check_corruption+0x3/0x2d
  [ 1.060000]  actual: 0f:1f:44:00:00

This warning shows that check_corruption+0x3 already had a nop in its place (0x0f1f440000). After compiling another kernel the problem went away.

Later Eric Paris noticed the same type of issue. Luckily, he saved the vmlinux file that caused it. In the file we found a bunch of duplicate mcount call site records, which led us to the script. Perhaps this problem only happens to people named Eric.

This patch changes the script to test if the __mcount_loc section already exists in the object file, and if it does, it will print out an error message and kill the compile.

Reported-by: Eric Sesterhenn <snakebyte@gmx.de> Reported-by: Eric Paris <eparis@redhat.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
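The double-run guard only needs to ask whether the object already carries a __mcount_loc section before doing any work. A hedged sketch of such a check; the objdump-based probe is an assumption for illustration, not necessarily how the script tests it:

```perl
use strict;
use warnings;

# Return true if the object file already contains a __mcount_loc section,
# i.e. recordmcount has been run on it before.
sub already_processed {
    my ($objdump, $objfile) = @_;
    open(my $in, '-|', "$objdump -h $objfile") or die "objdump failed: $!";
    while (my $line = <$in>) {
        return 1 if $line =~ /\s__mcount_loc\s/;
    }
    return 0;
}

# Usage sketch:
#   die "ERROR: __mcount_loc already exists in $objfile\n"
#       if already_processed("objdump", $objfile);
```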
2009-01-14 | ftrace, ia64: Add recordmcount for ia64 | Shaohua Li | 1 | -0/+7
Add recordmcount for ia64. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>