path: root/scripts/recordmcount.pl
Commit history (newest first); each entry shows date, subject, author, files changed, and lines removed/added.
2013-12-05  ftrace: default to tilegx if ARCH=tile is specified  [Tony Lu; 1 file changed, -1/+2]
This matches the existing behavior in arch/tile/Makefile for defconfig. Reported-by: fengguang.wu@intel.com Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Tony Lu <zlu@tilera.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-11-05  recordmcount.pl: Add support for __fentry__  [Jamie Iles; 1 file changed, -2/+2]
With gcc 4.6.0 the -mfentry feature places the function profiling call at the start of the function. When this is used, the call is to __fentry__ and not mcount. This is required for Ksplice as the C version of recordmcount doesn't insert section symbols for the __mcount_loc section so we fall back to the perl version. Based on 48bb5dc6cd9d30fe0d594947563da1f8bd9abada (ftrace: Make recordmcount.c handle __fentry__). Link: http://lkml.kernel.org/r/1383648129-10724-1-git-send-email-jamie.iles@oracle.com Signed-off-by: Jamie Iles <jamie.iles@oracle.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
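A minimal Perl sketch of the idea, not the exact change from the commit (the regex below is an assumption based on the description above):

    #!/usr/bin/perl -w
    use strict;

    # Accept either profiling call name when scanning "objdump -dr" output;
    # the real per-arch regex lives in recordmcount.pl.
    my $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$";

    while (my $line = <STDIN>) {
        if ($line =~ m/$mcount_regex/) {
            printf "call site at offset 0x%s calls %s\n", $1, $2;
        }
    }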
2013-08-30  tile: support ftrace on tilegx  [Tony Lu; 1 file changed, -0/+4]
This commit adds support for static ftrace, graph function support, and dynamic tracer support. Signed-off-by: Tony Lu <zlu@tilera.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-05-16  ftrace/s390: mcount offset calculation  [Martin Schwidefsky; 1 file changed, -0/+2]
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch] at compile time and not in ftrace_call_adjust at run time. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16  ftrace/x86: mcount offset calculation  [Martin Schwidefsky; 1 file changed, -0/+2]
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch] at compile time and not in ftrace_call_adjust at run time. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
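A rough sketch of the build-time adjustment idea; the hash name and the numeric values below are placeholders, not the real per-arch adjustments from these commits:

    #!/usr/bin/perl -w
    use strict;

    # Apply an arch-specific correction when the call-site offset is
    # recorded at build time, so ftrace_call_adjust() has nothing left
    # to do at run time.  Values are illustrative only.
    my %mcount_adjust = (
        "x86_64" => -1,
        "s390"   => -8,
    );

    my ($arch, $offset) = ("x86_64", 0x10c);
    my $adjust = exists $mcount_adjust{$arch} ? $mcount_adjust{$arch} : 0;
    printf "recording 0x%x instead of 0x%x\n", $offset + $adjust, $offset;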
2011-05-16  ftrace: Add .kprobe.text section to whitelist for recordmcount.c  [Steven Rostedt; 1 file changed, -0/+1]
In the .kprobe.text section it is safe to modify mcount calls to nops and to trace the functions, so add it to the whitelist in recordmcount.c and recordmcount.pl. Cc: John Reiser <jreiser@bitwagon.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Link: http://lkml.kernel.org/r/20110421023737.743350547@goodmis.org Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
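A sketch of how such a section whitelist can be expressed in Perl; the names follow the commits in this log, and the authoritative list is the one in recordmcount.pl itself:

    #!/usr/bin/perl -w
    use strict;

    # Sections whose mcount call sites are safe to record and patch
    # (illustrative set, assembled from the entries in this log).
    my %text_sections = (
        ".text"          => 1,
        ".sched.text"    => 1,
        ".spinlock.text" => 1,
        ".irqentry.text" => 1,
        ".kprobe.text"   => 1,
        ".ref.text"      => 1,
        ".text.unlikely" => 1,
    );

    my $section = ".kprobe.text";
    print "recording mcount callers in $section\n"
        if defined $text_sections{$section};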
2011-03-10  ftrace: Add .ref.text as one of the safe areas to trace  [Steven Rostedt; 1 file changed, -0/+1]
The section .ref.text will not go away unexpectedly and is safe to trace. Add it to the safe list of sections to allow tracing. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-09-02  ARM: 6319/1: ftrace: add Thumb-2 support to dynamic ftrace  [Rabin Vincent; 1 file changed, -1/+1]
Handle the different nop and call instructions for Thumb-2. Also, we need to adjust the recorded mcount_loc addresses because they have the lsb set. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02  ARM: 6318/1: ftrace: fix and update dynamic ftrace  [Rabin Vincent; 1 file changed, -0/+2]
This adds mcount recording and updates dynamic ftrace for ARM to work with the new ftrace dynamic tracing implementation. It also adds support for the mcount format used by newer ARM compilers. With dynamic tracing, mcount() is implemented as a nop. Call sites are patched on startup with nops, and dynamically patched to call the ftrace_caller() routine as needed. Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-08-16  Merge branch 'tip/perf/urgent-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into trace/tip/perf/urgent-4  [Steven Rostedt; 1 file changed, -1/+6]
Conflicts: kernel/trace/trace_events.c Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-08-12  tracing: Extend recordmcount to better support Blackfin mcount  [Mike Frysinger; 1 file changed, -1/+6]
The mcount call on Blackfin systems includes some stack manipulation around the actual call site, so extend the build time perl script to support this. This way we can avoid doing the calculation at runtime. Signed-off-by: Mike Frysinger <vapier@gentoo.org> LKML-Reference: <1281079584-21205-1-git-send-email-vapier@gentoo.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-07-22  tracing: Fix $mcount_regex for MIPS in recordmcount.pl  [David Daney; 1 file changed, -1/+1]
I found this issue in a locally patched 2.6.32.x; current kernels have moved the offending code to an __init function, which is skipped by recordmcount.pl, so the bug is not currently being exercised. However, I think the patch is still a good idea, to avoid future problems if _mcount were ever to have its address taken in normal code. This is what I originally saw: although arch/mips/kernel/ftrace.c is built without -pg, and thus contains no calls to _mcount, it does use the address of _mcount in ftrace_make_nop(). This was causing relocations to be emitted for _mcount which recordmcount.pl erroneously took to be _mcount call sites. The result was that the text of ftrace_make_nop() would be patched with garbage, leading to a system crash. In non-module code, all _mcount call sites will have R_MIPS_26 relocations, so we restrict $mcount_regex to only match on these. Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Wu Zhangjin <wuzhangjin@gmail.com> Signed-off-by: David Daney <ddaney@caviumnetworks.com> LKML-Reference: <1278712325-12050-1-git-send-email-ddaney@caviumnetworks.com> Cc: Li Hong <lihong.hi@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Matt Fleming <matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
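A small Perl sketch of the stricter matching described above (illustrative, not the literal line from the patch):

    #!/usr/bin/perl -w
    use strict;

    # Only an R_MIPS_26 relocation against _mcount marks a real call site
    # in non-module MIPS code; a bare "_mcount" match would also hit
    # address references such as the one in ftrace_make_nop().
    my $mcount_regex = "^\\s*([0-9a-fA-F]+): R_MIPS_26\\s+_mcount\$";

    while (my $line = <STDIN>) {      # e.g. output of: objdump -r vmlinux.o
        print "call site at 0x$1\n" if $line =~ m/$mcount_regex/;
    }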
2010-02-26  Merge commit 'v2.6.33' into tracing/core  [Ingo Molnar; 1 file changed, -2/+6]
Conflicts: scripts/recordmcount.pl Merge reason: Merge up to v2.6.33. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-11  tracing/x86: Derive arch from bits argument in recordmcount.pl  [Jan Kiszka; 1 file changed, -1/+1]
Let the arch argument be overruled by bits. Otherwise, building external modules against an i386 target on an x86-64 host (and likely vice versa as well) fails unless ARCH=i386 is explicitly passed to make. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> LKML-Reference: <4B4AFE10.8050109@siemens.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
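A sketch of the idea in Perl, with illustrative values for $arch and $bits:

    #!/usr/bin/perl -w
    use strict;

    # Let the bits argument decide between the 32-bit and 64-bit x86
    # settings instead of trusting the host architecture.
    my ($arch, $bits) = ("x86_64", 32);   # e.g. 32-bit module build on a 64-bit host

    if ($arch eq "i386" || $arch eq "x86_64") {
        $arch = ($bits == 64) ? "x86_64" : "i386";
    }
    print "using settings for $arch\n";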
2010-01-06  tracing: Use appropriate perl constructs in recordmcount.pl  [Wolfram Sang; 1 file changed, -18/+11]
Modified recordmcount.pl to use perl constructs that are still understandable by C hackers that are not perl programmers. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262724082-9517-1-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-01-06  tracing: optimize recordmcount.pl for offsets-handling  [Wolfram Sang; 1 file changed, -9/+9]
- move check for open file in front of the writing loop
- use perl constructs to access the array
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262716072-14414-2-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-12-17  MIPS: Tracing: Add dynamic function tracer support  [Wu Zhangjin; 1 file changed, -0/+52]
With the dynamic function tracer, _mcount is by default defined as an "empty" function: it returns directly without doing anything more. When tracing is enabled from user space, the call site jumps to a real tracing function (ftrace_caller), which does the real job for us. Unlike the static function tracer, the dynamic function tracer provides two functions, ftrace_make_call()/ftrace_make_nop(), to enable/disable the tracing of selected kernel functions (set_ftrace_filter). In the -v4 version, the implementation of this support was basically the same as the x86 version: _mcount is implemented as an empty function and ftrace_caller as the real tracing function. But in this version, to support module tracing with the help of -mlong-calls in arch/mips/Makefile (MODFLAGS += -mlong-calls), things become a little more complex: we need to cope with two different ways of calling _mcount. For the kernel part, the call to _mcount (as shown by "objdump -hdr vmlinux") looks like this:

    108: 03e0082d  move    at,ra
    10c: 0c000000  jal     0 <fpcsr_pending>
         10c: R_MIPS_26    _mcount
         10c: R_MIPS_NONE  *ABS*
         10c: R_MIPS_NONE  *ABS*
    110: 00020021  nop

For a module built with -mlong-calls, it looks like this:

    c:  3c030000  lui     v1,0x0
        c:  R_MIPS_HI16   _mcount
        c:  R_MIPS_NONE   *ABS*
        c:  R_MIPS_NONE   *ABS*
    10: 64630000  daddiu  v1,v1,0
        10: R_MIPS_LO16   _mcount
        10: R_MIPS_NONE   *ABS*
        10: R_MIPS_NONE   *ABS*
    14: 03e0082d  move    at,ra
    18: 0060f809  jalr    v1

In the kernel case there is only one "_mcount" string per kernel function, so we just need to match that one in the mcount_regex of scripts/recordmcount.pl; in the module case we need to choose one of the two relocations to match. Here I choose the first one, "R_MIPS_HI16 _mcount". In the kernel case, without module tracing support, we just need to replace "jal _mcount" with "jal ftrace_caller" to do the real tracing, and filter out the tracing of particular kernel functions by replacing the call with a nop instruction. But, as described before, the "jal ftrace_caller" instruction leaves only 32 bits of room for the address of ftrace_caller, so it will fail when called from module space; we must therefore patch something else. The basic idea is to load the address of ftrace_caller into v1 by changing these two instructions:

    lui   v1,0x0
    addiu v1,v1,0

If we want to enable the tracing, we replace them with:

    lui   v1, HI_16BIT_ftrace_caller
    addiu v1, v1, LOW_16BIT_ftrace_caller

If we want to stop the tracing of the selected kernel functions, we just replace the "jalr v1" with a nop instruction. But that means replacing two instructions and encoding them ourselves. Is there a simpler solution? Yes! In this version we put _mcount and ftrace_caller together, which means the address of _mcount and ftrace_caller is the same:

    _mcount:
    ftrace_caller:
        j ftrace_stub
        nop
        ...(do real tracing here)...
    ftrace_stub:
        jr ra
        move ra, at

By default, the kernel functions call _mcount, jump to ftrace_stub and return. When we want to do real tracing, we just remove that "j ftrace_stub"; execution then runs through the two nop instructions and does the real tracing job. What about the filtering job? We just need to do this:

    lui   v1, hi_16bit_of_mcount    <-->  b 1f (0x10000004)
    addiu v1, v1, low_16bit_of_mcount
    move  at, ra
    jalr  v1
    nop
    1f: (rec->ip + 12)

On linux-mips64 there are some local symbols, whose names are prefixed by $L, which need to be filtered; thanks go to Steven for writing the mips64-specific function_regex.
In conclusion, with RISC things become easier with such a "stupid" trick; RISC is something like K.I.S.S., and there are lots of "simple" tricks like this in the whole ftrace support. Thanks go to Steven and the other folks for providing such a wonderful tracing framework! Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Nicholas Mc Guire <der.herr@hofr.at> Cc: zhangfx@lemote.com Cc: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: http://patchwork.linux-mips.org/patch/675/ Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
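A short Perl sketch of the kernel-versus-module regex choice described above; the $is_module switch is illustrative, while the real script keys this off its arguments:

    #!/usr/bin/perl -w
    use strict;

    # In-kernel call sites carry an R_MIPS_26 relocation; modules built
    # with -mlong-calls carry an R_MIPS_HI16/LO16 pair, of which the
    # HI16 line is taken as the call site.
    my $is_module = 1;

    my $mcount_regex = $is_module
        ? "^\\s*([0-9a-fA-F]+): R_MIPS_HI16\\s+_mcount\$"
        : "^\\s*([0-9a-fA-F]+): R_MIPS_26\\s+_mcount\$";

    print "matching call sites with: $mcount_regex\n";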
2009-12-17  MIPS: Tracing: Add an endian argument to scripts/recordmcount.pl  [Wu Zhangjin; 1 file changed, -3/+3]
MIPS and some other architectures need this argument to handle their big- and little-endian variants. Signed-off-by: Wu Zhangjin <wuzj@lemote.com> Cc: Nicholas Mc Guire <der.herr@hofr.at> Cc: zhangfx@lemote.com Cc: Wu Zhangjin <wuzhangjin@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: http://patchwork.linux-mips.org/patch/674/ Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-12-14  microblaze: ftrace: Add dynamic trace support  [Michal Simek; 1 file changed, -0/+3]
With the dynamic function tracer, _mcount is by default defined as an "empty" function: it returns directly without doing anything more. When tracing is enabled from user space, the call site jumps to a real tracing function (ftrace_caller), which does the real job for us. Unlike the static function tracer, the dynamic function tracer provides two functions, ftrace_make_call()/ftrace_make_nop(), to enable/disable the tracing of selected kernel functions (set_ftrace_filter). In the kernel version, there is only one "_mcount" string per kernel function, so we just need to match that one in the mcount_regex of scripts/recordmcount.pl. For more information please look at the code and the Documentation/trace folder. Steven acked the scripts/recordmcount.pl part. Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-11-17  tracing: Only print objcopy version warning once from recordmcount  [Steven Rostedt; 1 file changed, -2/+10]
If the user has an older version of objcopy that cannot handle converting local symbols to global and vice versa, then some functions will not be part of the dynamic function tracer. The current code in recordmcount.pl will print a warning in this case. Unfortunately, there exist lots of files that may have this issue with older objcopy versions, and this will cause a warning for every file compiled with the issue. This patch solves the overwhelming output by creating a .tmp_quiet_recordmcount file on the first instance the warning is encountered. The warning will not print if this file exists. The temp file is deleted at the beginning of the compile to ensure that the warning will happen once again on new compiles (because the issue is still present). Reported-by: Andrew Morton <akpm@linux-foundation.org> Cc: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
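A sketch of the warn-once mechanism in Perl, assuming the marker-file approach described above (the helper name is made up):

    #!/usr/bin/perl -w
    use strict;

    # The first time the old-objcopy limitation is hit, drop a marker
    # file; as long as the marker exists, later invocations in the same
    # build stay quiet.
    my $quiet_file = ".tmp_quiet_recordmcount";

    sub warn_once {
        my ($msg) = @_;
        return if -f $quiet_file;
        warn $msg;
        open(my $fh, ">", $quiet_file) or die "cannot create $quiet_file: $!";
        close($fh);
    }

    warn_once("old objcopy detected: some functions will not be traced\n");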
2009-10-29  tracing: Exit with error if a weak function is used in recordmcount.pl  [Li Hong; 1 file changed, -7/+3]
If a weak function is used as a relocation reference for mcount callers and that function is overridden, it will cause ftrace to fail at run time. The current code should prevent a weak function from being used, but if one is, the code should exit with an error to fail at compile time. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050743.GH30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29  tracing: Move conditional into update_funcs() in recordmcount.pl  [Li Hong; 1 file changed, -5/+3]
Move all the condition validations into the function update_funcs(). Also, update_funcs() should not die if $ref_func is undefined, since there may be more than one valid section in an object file. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050703.GG30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29  tracing: Add regex for weak functions in recordmcount.pl  [Li Hong; 1 file changed, -7/+9]
Add a variable to contain the regex needed to find weak functions in the 'nm' output. This will allow other archs to easily override it. Also rename the regex variable $nm_regex to $local_regex to be more descriptive. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050619.GF30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
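A sketch of the two nm-output regexes in Perl; the patterns are assumptions based on the description, not necessarily the exact ones added by the patch:

    #!/usr/bin/perl -w
    use strict;

    # Weak symbols are flagged 'w' or 'W' in nm output, local symbols by
    # a lower-case type letter.
    my $weak_regex  = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)";
    my $local_regex = "^[0-9a-fA-F]+\\s+([a-z])\\s+(\\S+)";

    while (my $line = <STDIN>) {      # e.g. output of: nm vmlinux.o
        if ($line =~ m/$weak_regex/) {
            print "weak:  $2\n";
        } elsif ($line =~ m/$local_regex/) {
            print "local: $2\n";
        }
    }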
2009-10-29  tracing: Move mcount section search to front of loop in recordmcount.pl  [Li Hong; 1 file changed, -14/+18]
Move the mcount section check to the beginning of the objdump read loop. This makes the code easier to follow since the search for the mcount section is performed first before the mcount callers are processed. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050523.GE30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29  tracing: Fix objcopy revision check in recordmcount.pl  [Li Hong; 1 file changed, -29/+27]
The current logic to check objcopy's version is incorrect. This patch fixes the algorithm and disables the use of local functions as a reference if the objcopy version does not support static-to-global conversions. Also remove some unused variables. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050421.GD30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29  tracing: Check absolute path of input file in recordmcount.pl  [Li Hong; 1 file changed, -1/+1]
The ftrace.c file may reference the mcount function and this may interfere with the recordmcount.pl processing. To avoid this, the code does not process the kernel/trace/ftrace.o. But currently the check is against a relative path. This patch modifies the check to succeed if the path is an absolute path. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050332.GC30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
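A minimal Perl sketch of the path check, assuming a suffix match so both relative and absolute paths are caught:

    #!/usr/bin/perl -w
    use strict;

    # Skip kernel/trace/ftrace.o no matter how the path is spelled.
    my $inputfile = shift @ARGV || "/builds/linux/kernel/trace/ftrace.o";

    if ($inputfile =~ m,kernel/trace/ftrace\.o$,) {
        print "skipping $inputfile\n";
        exit 0;
    }
    print "processing $inputfile\n";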
2009-10-29  tracing: Correct the check for number of arguments in recordmcount.pl  [Li Hong; 1 file changed, -1/+1]
The number of arguments passed into recordmcount.pl is 10, but the code checks if only 7 are passed in. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091027065733.GB22032@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-29  tracing: Amend documentation in recordmcount.pl to reflect implementation  [Li Hong; 1 file changed, -35/+49]
The documentation currently says we will use the first function in a section as a reference. The actual algorithm is: choose the first global function we meet as a reference. If there is none, choose the first local one. Change the documentation to be consistent with the code. Also add several other clarifications. Signed-off-by: Li Hong <lihong.hi@gmail.com> LKML-Reference: <20091028050138.GA30758@uhli> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-10-14  tracing: Enable "__cold" functions  [Jiri Olsa; 1 file changed, -0/+1]
Based on commit a586df06 ("x86: Support __attribute__((__cold__)) in gcc 4.3"), some functions go into the ".text.unlikely" section. It looks like there are not many of them (I found printk, panic, __ssb_dma_not_implemented, fat_fs_error), but they are still worth including, I think. Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> LKML-Reference: <20091013203426.175845614@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-11  Merge branch 'linus' into tracing/core  [Ingo Molnar; 1 file changed, -4/+8]
Conflicts: kernel/trace/trace_events_filter.c We use the tracing/core version. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-07  tracing: Fix recordmcount.pl to handle sections with only weak functions  [Steven Rostedt; 1 file changed, -2/+2]
Roland Dreier found a section in one of the staging drivers that contained only a weak function, and this caused recordmcount.pl to spit out a warning and fail. Although it is strange that a driver would have a weak function, and that this function is only used in one place, it should not be something that makes recordmcount.pl fail. This patch fixes the issue in a simple manner: if only weak functions exist in a section, then that section will not be recorded. Reported-by: Roland Dreier <rdreier@cisco.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-05  tracing: do not use functions starting with .L in recordmcount.pl  [Steven Rostedt; 1 file changed, -1/+4]
On Wed, 5 Aug 2009, Ingo Molnar wrote: > * Dave Airlie <airlied@gmail.com> wrote: > > > Hey, > > > > So I spent 3-4 hrs today (I'm stupid yes) tracking down a .o > > breakage by blaming rawhide gcc/binutils as I was using make > > V=1and seeing only the compiler chain running, > > Hm, is this that powerpc related build bug you just reported? Well we tracked it down and it is powerpc64 specific. Seems that in drivers/hwmon/lm93.c there's a function called: LM93_IN_FROM_REG() But PPC64 has function descriptors and the real function names (the ones you see in objdump) start with a '.'. Thus this in objdump you have: Disassembly of section .text: 0000000000000000 <.LM93_IN_FROM_REG>: 0: 7c 08 02 a6 mflr r0 4: fb 81 ff e0 std r28,-32(r1) The function name used is .LM93_IN_FROM_REG. But gcc considers symbols that start with ".L" as a special symbol that is used inside the assembly stage. The nm passed into recordmcount uses the --synthetic option which shows the ".L" symbols (my runs outside of the build did not include the --synthetic option, so my older patch worked). We see the function as a local. Now to capture all the locations that use "mcount" we need to have a reference to link into the object file a list of mcount callers. We need a reference that will not disappear. We try to use a global function and if that does not work, we use a local function as a reference. But to relink the section back into the object, we need to make it global. In this case, we run objcopy using --globalize-symbol and --localize-symbol to convert the symbol into a global symbol, link the mcount list, then convert it back to a local symbol. This works great except for this case. .L* symbols can not be converted into a global symbol, and the mcount section referencing it will remain unresolved. Reported-by: Dave Airlie <airlied@gmail.com> LKML-Reference: <alpine.DEB.2.00.0908052011590.5010@gandalf.stny.rr.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
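A sketch in Perl of the resulting rule; the candidate names are made up, except for the .L symbol quoted in the report above:

    #!/usr/bin/perl -w
    use strict;

    # Never pick a ".L" assembler-local symbol as the reference for the
    # __mcount_loc section: objcopy cannot globalize it, so the
    # relocations against it would stay unresolved.
    my @candidates = (".LM93_IN_FROM_REG", "some_other_local_func");
    my $ref_func;

    foreach my $func (@candidates) {
        next if $func =~ /^\.L/;      # skip assembler-local symbols
        $ref_func = $func;
        last;
    }
    print defined($ref_func) ? "reference: $ref_func\n" : "no usable symbol\n";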
2009-07-23  ftrace: Only update $offset when we update $ref_func  [Matt Fleming; 1 file changed, -1/+2]
The value of $offset should be the offset of $ref_func from the beginning of the object file, therefore we should set both variables together. This fixes a bug I was hitting on sh where $offset (which is used to calculate the addends for the __mcount_loc entries) was being set multiple times and didn't correspond to $ref_func's offset in the object file. The addends in __mcount_loc were calculated incorrectly, resulting in ftrace dynamically modifying addresses that weren't mcount call sites. Signed-off-by: Matt Fleming <matt@console-pimps.org> LKML-Reference: <1248365775-25196-2-git-send-email-matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-07-23  ftrace: Fix the conditional that updates $ref_func  [Matt Fleming; 1 file changed, -1/+1]
Fix the conditional that checks whether we already have a $ref_func and whether the new function is weak. The code was previously checking whether either condition was false; we really need to update $ref_func only if both conditions are false. Signed-off-by: Matt Fleming <matt@console-pimps.org> LKML-Reference: <1248365775-25196-1-git-send-email-matt@console-pimps.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
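A sketch of the combined effect of these two fixes, with illustrative function names and offsets:

    #!/usr/bin/perl -w
    use strict;

    # Adopt a new reference function only when we do not already have one
    # AND the candidate is not weak, and set $offset at the same moment so
    # the __mcount_loc addends are computed against the right symbol.
    my %weak = ("maybe_overridden" => 1);
    my ($ref_func, $offset);

    sub consider {
        my ($func, $func_offset) = @_;
        if (!defined($ref_func) && !defined($weak{$func})) {
            $ref_func = $func;
            $offset   = $func_offset;
        }
    }

    consider("maybe_overridden", 0x40);
    consider("solid_function",   0x80);
    printf "ref_func=%s offset=0x%x\n", $ref_func, $offset;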
2009-07-18  tracing: Remove .globl in the scripts/recordmcount.pl doc  [jolsa@redhat.com; 1 file changed, -1/+0]
I was reading through the recordmcount.pl starting comment and spotted a tiny discrepancy. The second example is about my_func not being global, but the example code has the ".globl my_func" statement just moved. Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: rostedt@goodmis.org LKML-Reference: <1247773468-11594-4-git-send-email-jolsa@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-06-16  sparc64: Add proper dynamic ftrace support.  [David S. Miller; 1 file changed, -0/+20]
Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Steven Rostedt <rostedt@goodmis.org> Acked-by: Ingo Molnar <mingo@elte.hu>
2009-06-12  [S390] ftrace: add dynamic ftrace support  [Heiko Carstens; 1 file changed, -0/+13]
Dynamic ftrace support for s390. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-05-05  ftrace: use .sched.text, not .text.sched in recordmcount.pl  [Tim Abbott; 1 file changed, -3/+3]
The only references in the kernel to the .text.sched section are in recordmcount.pl. Since the code it has is intended to be example code it should refer to real kernel sections. So change it to .sched.text instead. [ Impact: consistency in comments ] Signed-off-by: Tim Abbott <tabbott@mit.edu> LKML-Reference: <1241136371-10768-1-git-send-email-tabbott@mit.edu> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2009-01-18  ftrace: test for running of recordmcount.pl twice on an object  [Steven Rostedt; 1 file changed, -3/+18]
Impact: fix failure of dynamic function tracer selftest. In the course of development, a developer does several makes on their kernel. Sometimes, the make might do something abnormal. In the case of running the recordmcount.pl script on an object twice, the script will duplicate all the calls to mcount in the __mcount_loc section. On boot up, the dynamic function tracer is careful when it modifies code, and performs several consistency checks. One is to not modify the call site if it is not what it expects it to be. If a function call site is listed twice, the first entry will convert the site to a nop, and the second will fail because it expected to see a call to mcount but instead sees a nop. Thus, the function tracer is disabled. Eric Sesterhenn reported seeing:

    [ 1.055440] ftrace: converting mcount calls to 0f 1f 44 00 00
    [ 1.055568] ftrace: allocating 29418 entries in 116 pages
    [ 1.061000] ------------[ cut here ]------------
    [ 1.061000] WARNING: at kernel/trace/ftrace.c:441
    [...]
    [ 1.060000] ---[ end trace 4eaa2a86a8e2da23 ]---
    [ 1.060000] ftrace failed to modify [<c0118072>] check_corruption+0x3/0x2d
    [ 1.060000]  actual: 0f:1f:44:00:00

This warning shows that check_corruption+0x3 already had a nop in its place (0x0f1f440000). After compiling another kernel the problem went away. Later Eric Paris noticed the same type of issue. Luckily, he saved the vmlinux file that caused it. In the file we found a bunch of duplicate mcount call site records, which led us to the script. Perhaps this problem only happens to people named Eric. This patch changes the script to test whether the __mcount_loc section already exists in the object file, and if it does, it will print an error message and kill the compile. Reported-by: Eric Sesterhenn <snakebyte@gmx.de> Reported-by: Eric Paris <eparis@redhat.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
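A sketch of the duplicate-run check in Perl, assuming it is implemented by looking for an existing __mcount_loc section in "objdump -h" output:

    #!/usr/bin/perl -w
    use strict;

    # If the object already has a __mcount_loc section, recordmcount.pl has
    # been run on it before, and running it again would duplicate every
    # call-site entry.
    my $objdump   = $ENV{'OBJDUMP'} || "objdump";
    my $inputfile = shift @ARGV or die "usage: $0 <object file>\n";

    open(my $in, "-|", "$objdump -h $inputfile")
        or die "cannot run $objdump: $!";
    while (<$in>) {
        die "ERROR: $inputfile already has a __mcount_loc section.\n" .
            "       Rebuilding from clean should fix it.\n"
            if /__mcount_loc/;
    }
    close($in);
    print "$inputfile has not been processed yet\n";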
2009-01-14  ftrace, ia64: Add recordmcount for ia64  [Shaohua Li; 1 file changed, -0/+7]
Add recordmcount for ia64. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  ftrace, ia64: explicitly ignore a file in recordmcount.pl  [Shaohua Li; 1 file changed, -0/+5]
On IA64, a function pointer isn't an 'unsigned long' but a 'struct {unsigned long ip; unsigned long gp;}'. MCOUNT_ADDR is determined at link time, not compile time, so explicitly ignore kernel/trace/ftrace.o in recordmcount.pl. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  ftrace, ia64: make recordmcount distinct module compile  [Shaohua Li; 1 file changed, -3/+3]
On IA64, module builds and kernel builds use different options. Make recordmcount.pl differentiate between the two cases. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-12  tracing/function-graph-tracer: add a new .irqentry.text section  [Frederic Weisbecker; 1 file changed, -0/+1]
Impact: let the function-graph-tracer be aware of the irq entrypoints. Add a new .irqentry.text section to store the irq entrypoint functions inside the same section. This way, the tracer will be able to signal interrupts triggering in its output by recognizing these entrypoints. Also, make this section recordable for dynamic tracing. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-26  ftrace: adding other non-leaving .text sections  [Liming Wang; 1 file changed, -0/+2]
Impact: widen the scope of recordmcount.pl. Besides the .text section, there are three text sections that won't be freed after kernel booting: .sched.text, .spinlock.text and .kprobes.text, which contain functions we can trace. But the last section, ".kprobes.text", is special: it has been marked as "notrace", so we ignore it. Thus we add the other two sections. Signed-off-by: Liming Wang <liming.wang@windriver.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-23  ftrace: scripts/recordmcount.pl support for ARM  [Jim Radford; 1 file changed, -1/+7]
Impact: extend scripts/recordmcount.pl to ARM. ARM uses %progbits instead of @progbits and requires only 4-byte alignment. [ Thanks to Sam Ravnborg for mentioning that ARM uses %progbits ] Signed-off-by: Jim Radford <radford@galvanix.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-23  ftrace: specify $alignment for sh architecture  [Matt Fleming; 1 file changed, -1/+2]
Impact: extend scripts/recordmcount.pl with a default alignment for SH. Set $alignment=2 for the sh architecture so that a ".align 2" directive will be emitted for all __mcount_loc sections. Fix a whitespace error while I'm here (converted spaces to tabs). Signed-off-by: Matt Fleming <mjf@gentoo.org> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-23  ftrace: create default variables for archs in recordmcount.pl  [Steven Rostedt; 1 file changed, -16/+11]
Impact: cleanup of recordmcount.pl. Now that more architectures are being ported to the MCOUNT_RECORD method, there is no reason to have each declare its own arch-specific variables if most of them share the same value. This patch creates a set of default values for the arch-specific variables, based off of i386. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
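A sketch of the defaults-plus-overrides structure in Perl; the regexes and values are illustrative, not copied verbatim from the script:

    #!/usr/bin/perl -w
    use strict;

    # One set of i386-based defaults up front; each architecture then
    # overrides only what differs.
    my $nm_regex       = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)";
    my $section_regex  = "Disassembly of section\\s+(\\S+):";
    my $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
    my $mcount_regex   = "^\\s*([0-9a-fA-F]+):.*\\smcount\$";
    my $type           = ".long";
    my $alignment      = 4;

    my $arch = shift @ARGV || "i386";
    if ($arch eq "sh") {
        $alignment = 2;               # sh wants ".align 2" (see the sh entry above)
    } elsif ($arch eq "x86_64") {
        $type = ".quad";              # 64-bit entries in __mcount_loc
    }
    print "arch=$arch type=$type alignment=$alignment\n";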
2008-11-23  ftrace: add support for powerpc to recordmcount.pl script  [Steven Rostedt; 1 file changed, -2/+17]
Impact: Add PowerPC port to recordmcount.pl script This patch updates the recordmcount.pl script to process PowerPC. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-23  sh: dynamic ftrace support.  [Matt Fleming; 1 file changed, -0/+11]
First cut at dynamic ftrace support. [ Steven Rostedt - only updated the recordmcount.pl file. There are updates for PowerPC that will conflict with this, and we need to base off of these changes. ] Signed-off-by: Matt Fleming <mjf@gentoo.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-08  ftrace: align __mcount_loc sections  [Matt Fleming; 1 file changed, -0/+4]
Impact: add alignment option for recordmcount.pl script Align the __mcount_loc sections so that architectures with strict alignment requirements need not worry about performing unaligned accesses. This fixes an issue where I was seeing unaligned accesses, which are not supported on our architecture (the results of an unaligned access are undefined). Signed-off-by: Matt Fleming <matthew.fleming@imgtec.com> Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>