path: root/tools
2020-05-28  perf intel-pt: Refine kernel decoding only warning message  (Adrian Hunter, 1 file, -1/+2)
Stop the message displaying when user space is not being traced. Example: Prerequisites: sudo setcap "cap_sys_rawio,cap_sys_admin,cap_sys_ptrace,cap_syslog,cap_ipc_lock=ep" ~/bin/perf sudo chmod +r /proc/kcore Before: $ perf record --no-switch-events --kcore -a -e intel_pt//k -- sleep 0.001 Warning: Intel Processor Trace decoding will not be possible except for kernel tracing! [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.838 MB perf.data ] After: $ perf record --no-switch-events --kcore -a -e intel_pt//k -- sleep 0.001 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 1.068 MB perf.data ] $ sudo chmod go-r /proc/kcore $ sudo setcap -r ~/bin/perf Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Link: http://lore.kernel.org/lkml/20200528120859.21604-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf record: Respect --no-switch-events  (Adrian Hunter, 5 files, -5/+16)
Context switch events are added automatically by Intel PT and Coresight. Make it possible to suppress them. That is useful for tracing the scheduler without the disturbance that the switch event processing creates. Example: Prerequisites: $ which perf ~/bin/perf $ sudo setcap "cap_sys_rawio,cap_sys_admin,cap_sys_ptrace,cap_syslog,cap_ipc_lock=ep" ~/bin/perf $ sudo chmod +r /proc/kcore Before: $ perf record --no-switch-events --kcore -a -e intel_pt//k -- sleep 0.001 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.938 MB perf.data ] $ perf script -D | grep PERF_RECORD_SWITCH | wc -l 572 After: $ perf record --no-switch-events --kcore -a -e intel_pt//k -- sleep 0.001 Warning: Intel Processor Trace decoding will not be possible except for kernel tracing! [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.838 MB perf.data ] $ perf script -D | grep PERF_RECORD_SWITCH | wc -l 0 $ sudo chmod go-r /proc/kcore $ sudo setcap -r ~/bin/perf Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Link: http://lore.kernel.org/lkml/20200528120859.21604-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf script: Fix --call-trace for Intel PT  (Adrian Hunter, 1 file, -4/+15)
Make process_attr() respect -F-ip, noting also that the condition in process_attr() (callchain_param.record_mode != CALLCHAIN_NONE) is always true so test the sample type directly. Example: Before: $ perf record -e intel_pt//u uname Linux [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.033 MB perf.data ] $ perf script --call-trace | head -5 uname 30992 [006] 41758.313696574: cbr: 42 freq: 4219 MHz (156%) 0 [unknown] ([unknown] ) uname 30992 [006] 41758.313696907: _start 7f71792c4100 _start+0x0 (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) uname 30992 [006] 41758.313699574: _dl_start 7f71792c4103 _start+0x3 (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) uname 30992 [006] 41758.313699907: _dl_start 7f71792c4e18 _dl_start+0x28 (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) uname 30992 [006] 41758.313701574: _dl_start 7f71792c5128 _dl_start+0x338 (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) After: $ perf script --call-trace | head -5 uname 30992 [006] 41758.313696574: cbr: 42 freq: 4219 MHz (156%) uname 30992 [006] 41758.313696907: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _start uname 30992 [006] 41758.313699574: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start uname 30992 [006] 41758.313699907: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start uname 30992 [006] 41758.313701574: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start Fixes: f288e8e1aa4f ("perf script: Enable IP fields for callchains") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20200527180250.16723-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf evlist: Disable 'immediate' events last  (Adrian Hunter, 1 file, -10/+21)
Events marked as 'immediate' are started before other events to ensure that there is context at the start of the main tracing events. The same is true at the end of tracing, so disable 'immediate' events after other events. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20200512121922.8997-11-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf kcore_copy: Fix module map when there are no modules loaded  (Adrian Hunter, 1 file, -0/+7)
In the absence of any modules, no "modules" map is created, but there are other executable pages to map, due to eBPF JIT, kprobe or ftrace. Map them by recognizing that the first "module" symbol is not necessarily from a module, and adjust the map accordingly. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20200512121922.8997-10-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf jvmti: Fix demangling Java symbols  (Nick Gasson, 1 file, -6/+7)
For a Java method signature like: Ljava/lang/AbstractStringBuilder;appendChars(Ljava/lang/String;II)V The demangler produces: void class java.lang.AbstractStringBuilder.appendChars(class java.lang., shorttring., int, int) The arguments should be (java.lang.String, int, int) but the demangler interprets the "S" in String as the type code for "short". Correct this and two other minor things: - There is no "bool" type in Java, should be "boolean". - The demangler prepends "class" to every Java class name. This is not standard Java syntax and it wastes a lot of horizontal space if the signature is long. Remove this as there isn't any ambiguity between class names and primitives. Committer notes: This was split from a larger patch that also added a java demangler 'perf test' entry, that, before this patch shows the error being fixed by it: $ perf test java 65: Demangle Java : FAILED! $ perf test -v java Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc 65: Demangle Java : --- start --- test child forked, pid 307264 FAILED: Ljava/lang/StringLatin1;equals([B[B)Z: bool class java.lang.StringLatin1.equals(byte[], byte[]) != boolean java.lang.StringLatin1.equals(byte[], byte[]) FAILED: Ljava/util/zip/ZipUtils;CENSIZ([BI)J: long class java.util.zip.ZipUtils.CENSIZ(byte[], int) != long java.util.zip.ZipUtils.CENSIZ(byte[], int) FAILED: Ljava/util/regex/Pattern$BmpCharProperty;match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z: bool class java.util.regex.Pattern$BmpCharProperty.match(class java.util.regex.Matcher., int, class java.lang., charhar, shortequence) != boolean java.util.regex.Pattern$BmpCharProperty.match(java.util.regex.Matcher, int, java.lang.CharSequence) FAILED: Ljava/lang/AbstractStringBuilder;appendChars(Ljava/lang/String;II)V: void class java.lang.AbstractStringBuilder.appendChars(class java.lang., shorttring., int, int) != void java.lang.AbstractStringBuilder.appendChars(java.lang.String, int, int) FAILED: Ljava/lang/Object;<init>()V: void class java.lang.Object<init>() != void java.lang.Object<init>() test child finished with -1 ---- end ---- Demangle Java: FAILED! $ After applying this patch: $ perf test java 65: Demangle Java : Ok $ Signed-off-by: Nick Gasson <nick.gasson@arm.com> Reviewed-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200427061520.24905-4-nick.gasson@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
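For reference, the JVM type-descriptor codes involved here come from the JVM specification: 'Z' is boolean and 'S' is short, and a class reference is the whole "L<classname>;" token, which has to be consumed before any single-letter primitive code is considered. A minimal C sketch of that mapping (illustrative only, not the perf demangler itself):

  static const char *java_primitive_type(char code)
  {
          switch (code) {
          case 'Z': return "boolean";
          case 'B': return "byte";
          case 'C': return "char";
          case 'S': return "short";
          case 'I': return "int";
          case 'J': return "long";
          case 'F': return "float";
          case 'D': return "double";
          case 'V': return "void";
          default:  return 0;     /* 'L...;' and '[' need multi-character handling */
          }
  }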
2020-05-28  perf tests: Add test for the java demangler  (Nick Gasson, 4 files, -0/+48)
Split from a larger patch that was also fixing a problem with the java demangler, so, before applying that patch we see: $ perf test java 65: Demangle Java : FAILED! $ perf test -v java 65: Demangle Java : --- start --- test child forked, pid 307264 FAILED: Ljava/lang/StringLatin1;equals([B[B)Z: bool class java.lang.StringLatin1.equals(byte[], byte[]) != boolean java.lang.StringLatin1.equals(byte[], byte[]) FAILED: Ljava/util/zip/ZipUtils;CENSIZ([BI)J: long class java.util.zip.ZipUtils.CENSIZ(byte[], int) != long java.util.zip.ZipUtils.CENSIZ(byte[], int) FAILED: Ljava/util/regex/Pattern$BmpCharProperty;match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z: bool class java.util.regex.Pattern$BmpCharProperty.match(class java.util.regex.Matcher., int, class java.lang., charhar, shortequence) != boolean java.util.regex.Pattern$BmpCharProperty.match(java.util.regex.Matcher, int, java.lang.CharSequence) FAILED: Ljava/lang/AbstractStringBuilder;appendChars(Ljava/lang/String;II)V: void class java.lang.AbstractStringBuilder.appendChars(class java.lang., shorttring., int, int) != void java.lang.AbstractStringBuilder.appendChars(java.lang.String, int, int) FAILED: Ljava/lang/Object;<init>()V: void class java.lang.Object<init>() != void java.lang.Object<init>() test child finished with -1 ---- end ---- Demangle Java: FAILED! $ Next patch should fix this. Signed-off-by: Nick Gasson <nick.gasson@arm.com> Reviewed-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200427061520.24905-4-nick.gasson@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf jvmti: Do not report error when missing debug information  (Nick Gasson, 1 file, -2/+11)
If the Java sources are compiled with -g:none to disable debug information the perf JVMTI plugin reports a lot of errors like: java: GetLineNumberTable failed with JVMTI_ERROR_ABSENT_INFORMATION java: GetLineNumberTable failed with JVMTI_ERROR_ABSENT_INFORMATION java: GetLineNumberTable failed with JVMTI_ERROR_ABSENT_INFORMATION java: GetLineNumberTable failed with JVMTI_ERROR_ABSENT_INFORMATION java: GetLineNumberTable failed with JVMTI_ERROR_ABSENT_INFORMATION Instead if GetLineNumberTable returns JVMTI_ERROR_ABSENT_INFORMATION simply skip emitting line number information for that method. Unlike the previous patch these errors don't affect the jitdump generation, they just generate a lot of noise. Similarly for native methods which also don't have line tables. Signed-off-by: Nick Gasson <nick.gasson@arm.com> Reviewed-by: Ian Rogers <irogers@google.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200427061520.24905-3-nick.gasson@arm.com [ Moved || operator to the end of the line, not at the start of 2nd if condition ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
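A hedged sketch of the approach described above, using the standard JVMTI GetLineNumberTable() call (the helper is an assumed wrapper, not the exact perf jvmti code):

  #include <stddef.h>
  #include <jvmti.h>

  static jvmtiError get_line_table(jvmtiEnv *jvmti, jmethodID method,
                                   jint *nr_lines, jvmtiLineNumberEntry **tab)
  {
          jvmtiError ret = (*jvmti)->GetLineNumberTable(jvmti, method, nr_lines, tab);

          if (ret == JVMTI_ERROR_ABSENT_INFORMATION ||
              ret == JVMTI_ERROR_NATIVE_METHOD) {
                  /* No debug info (or a native method): emit the jitdump
                   * record without line numbers instead of reporting an error. */
                  *nr_lines = 0;
                  *tab = NULL;
                  return JVMTI_ERROR_NONE;
          }
          return ret;
  }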
2020-05-28  perf jvmti: Fix jitdump for methods without debug info  (Nick Gasson, 1 file, -11/+0)
If a Java class is compiled with -g:none to omit debug information, the JVMTI plugin won't write jitdump entries for any method in this class and prints a lot of errors like: java: GetSourceFileName failed with JVMTI_ERROR_ABSENT_INFORMATION The call to GetSourceFileName is used to derive the file name `fn`, but this value is not actually used since commit ca58d7e64bdf ("perf jvmti: Generate correct debug information for inlined code") which moved the file name lookup into fill_source_filenames(). So the call to GetSourceFileName and related code can be safely removed. Signed-off-by: Nick Gasson <nick.gasson@arm.com> Reviewed-by: Ian Rogers <irogers@google.com> Tested-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200427061520.24905-2-nick.gasson@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf symbols: Fix debuginfo search for Ubuntu  (Adrian Hunter, 4 files, -0/+20)
Reportedly, from 19.10 Ubuntu has begun mixing up the location of some debug symbol files, putting files expected to be in /usr/lib/debug/usr/lib into /usr/lib/debug/lib instead. Fix by adding another dso_binary_type. Example on Ubuntu 20.04 Before: $ perf record -e intel_pt//u uname Linux [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.030 MB perf.data ] $ perf script --call-trace | head -5 uname 14003 [005] 15321.764958566: cbr: 42 freq: 4219 MHz (156%) uname 14003 [005] 15321.764958566: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) 7f1e71cc4100 uname 14003 [005] 15321.764961566: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) 7f1e71cc4df0 uname 14003 [005] 15321.764961900: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) 7f1e71cc4e18 uname 14003 [005] 15321.764963233: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) 7f1e71cc5128 After: $ perf script --call-trace | head -5 uname 14003 [005] 15321.764958566: cbr: 42 freq: 4219 MHz (156%) uname 14003 [005] 15321.764958566: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _start uname 14003 [005] 15321.764961566: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start uname 14003 [005] 15321.764961900: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start uname 14003 [005] 15321.764963233: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start Reported-by: Travis Downs <travis.downs@gmail.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: stable@vger.kernel.org Link: http://lore.kernel.org/lkml/20200526155207.9172-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf parse: Add 'struct parse_events_state' pointer to scanner  (Jiri Olsa, 3 files, -10/+14)
We need to pass more data to the scanner, so let's start by having it take a pointer to a 'struct parse_events_state' object instead of just the start token.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200524224219.234847-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: Do not pass avg to generic_metric  (Jiri Olsa, 1 file, -8/+2)
There's no need to pass the given evsel's count to metric data, because it will be pushed again within the following metric_events loop. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200524224219.234847-3-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf tests: Consider subtests when searching for user specified tests  (Jiri Olsa, 1 file, -8/+26)
It's now possible to put subtest name as a test filter: $ perf test 'PMU event table sanity' 10: PMU events : 10.1: PMU event table sanity : Ok Committer testing: Before: $ perf test 'PMU event table sanity' $ After: $ perf test 'PMU event table sanity' 10: PMU events : 10.1: PMU event table sanity : Ok $ Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200524224219.234847-2-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf list: Add metrics to command line usage  (Ian Rogers, 1 file, -1/+1)
Before: Usage: perf list [<options>] [hw|sw|cache|tracepoint|pmu|sdt|event_glob] After: Usage: perf list [<options>] [hw|sw|cache|tracepoint|pmu|sdt|metric|metricgroup|event_glob] Committer testing: Before and after we get these outputs on a Lenovo t480s (i7-8650U): # perf list metricgroup List of pre-defined events (to be used in -e): Metric Groups: BrMispredicts BrMispredicts_SMT Branches Cache_Misses DSB FLOPS FLOPS_SMT Fetch_BW IcMiss Instruction_Type Memory_BW Memory_Bound Memory_Lat No_group PGO Pipeline Power Retire SMT Summary TLB TLB_SMT TopDownL1 TopDownL1_SMT TopdownL1 TopdownL1_SMT # # perf list metric | head -11 Metrics: Backend_Bound [This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend] Backend_Bound_SMT [This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU] Bad_Speculation [This category represents fraction of slots wasted due to incorrect speculations] Bad_Speculation_SMT [This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU] # Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200522064546.164259-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf script: Don't force less for non tty output with --xed  (Andi Kleen, 1 file, -1/+4)
--xed currently forces less. When piping the output to other scripts this can waste a lot of CPU time because less is rather slow. I've seen it using up a full core on its own in a pipeline. Only force less when the output is actually a terminal. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20200522020914.527564-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
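A hedged sketch of the check (assuming the libsubcmd setup_pager() helper that perf uses for its pager; the wrapper name is illustrative):

  #include <unistd.h>
  #include <subcmd/pager.h>

  /* Only force the pager for interactive use; when stdout is a pipe,
   * leave it alone so downstream scripts are not slowed down by 'less'. */
  static void setup_xed_pager(void)
  {
          if (isatty(STDOUT_FILENO))
                  setup_pager();
  }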
2020-05-28  perf metricgroup: Remove unnecessary ',' from events  (Ian Rogers, 1 file, -2/+7)
Remove unnecessary commas from events before they are parsed. This avoids ',' being echoed by parse-events.l. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-8-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
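A generic C sketch of the kind of clean-up described above (illustrative only; the helper name is made up and this is not the perf metricgroup code):

  /* Collapse repeated commas and drop leading/trailing ones, in place. */
  static void strip_stray_commas(char *s)
  {
          char *dst = s, *src = s;
          int prev_comma = 1;                     /* treat start-of-string like a comma */

          for (; *src; src++) {
                  if (*src == ',' && prev_comma)
                          continue;               /* skip ",," and a leading ',' */
                  prev_comma = (*src == ',');
                  *dst++ = *src;
          }
          if (dst > s && dst[-1] == ',')
                  dst--;                          /* drop a trailing ',' */
          *dst = '\0';
  }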
2020-05-28  perf metricgroup: Add options to not group or merge  (Ian Rogers, 5 files, -19/+87)
Add --metric-no-group that causes all events within metrics to not be grouped. This can allow the event to get more time when multiplexed, but may also lower accuracy. Add --metric-no-merge option. By default events in different metrics may be shared if the group of events for one metric is the same or larger than that of the second. Sharing may increase or lower accuracy and so is now configurable. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-7-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf metricgroup: Remove duped metric group events  (Ian Rogers, 1 file, -29/+62)
A metric group contains multiple metrics. These metrics may use the same events. If metrics use separate events then it leads to more multiplexing and overall metric counts fail to sum to 100%. Modify how metrics are associated with events so that if the events in an earlier group satisfy the current metric, the same events are used. A record of used events is kept and at the end of processing unnecessary events are eliminated. Before: $ perf stat -a -M TopDownL1 sleep 1 Performance counter stats for 'system wide': 920,211,343 uops_issued.any # 0.5 Backend_Bound (16.56%) 1,977,733,128 idq_uops_not_delivered.core (16.56%) 51,668,510 int_misc.recovery_cycles (16.56%) 732,305,692 uops_retired.retire_slots (16.56%) 1,497,621,849 cycles (16.56%) 721,098,274 uops_issued.any # 0.1 Bad_Speculation (16.79%) 1,332,681,791 cycles (16.79%) 552,475,482 uops_retired.retire_slots (16.79%) 47,708,340 int_misc.recovery_cycles (16.79%) 1,383,713,292 cycles # 0.4 Frontend_Bound (16.76%) 2,013,757,701 idq_uops_not_delivered.core (16.76%) 1,373,363,790 cycles # 0.1 Retiring (33.54%) 577,302,589 uops_retired.retire_slots (33.54%) 392,766,987 inst_retired.any # 0.3 IPC (50.24%) 1,351,873,350 cpu_clk_unhalted.thread (50.24%) 1,332,510,318 cycles # 5330041272.0 SLOTS (49.90%) 1.006336145 seconds time elapsed After: $ perf stat -a -M TopDownL1 sleep 1 Performance counter stats for 'system wide': 765,949,145 uops_issued.any # 0.1 Bad_Speculation # 0.5 Backend_Bound (50.09%) 1,883,830,591 idq_uops_not_delivered.core # 0.3 Frontend_Bound (50.09%) 48,237,080 int_misc.recovery_cycles (50.09%) 581,798,385 uops_retired.retire_slots # 0.1 Retiring (50.09%) 1,361,628,527 cycles # 5446514108.0 SLOTS (50.09%) 391,415,714 inst_retired.any # 0.3 IPC (49.91%) 1,336,486,781 cpu_clk_unhalted.thread (49.91%) 1.005469298 seconds time elapsed Note: Bad_Speculation + Backend_Bound + Frontend_Bound + Retiring = 100% after, where as before it is 110%. After there are 2 groups, whereas before there are 6. After the cycles event appears once, before it appeared 5 times. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-6-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf metricgroup: Order event groups by size  (Ian Rogers, 1 file, -1/+15)
When adding event groups to the group list, insert them in size order. This performs an insertion sort on the group list. By placing the largest groups at the front of the group list it is possible to see if a larger group contains the same events as a later group. This can make the later group redundant - it can reuse the events from the large group. A later patch will add this sharing. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-5-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
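A hedged sketch of the insertion step using kernel-style list helpers (the 'struct egroup' fields here are illustrative, not the actual perf definition):

  #include <linux/list.h>

  struct egroup {
          struct list_head nd;
          int nr_events;
          /* ... event ids, metric name, etc. ... */
  };

  /* Insert before the first group that has fewer events, keeping the list
   * sorted largest-first; fall back to the tail if none is smaller. */
  static void group_list_insert(struct list_head *groups, struct egroup *eg)
  {
          struct egroup *pos;

          list_for_each_entry(pos, groups, nd) {
                  if (eg->nr_events > pos->nr_events) {
                          list_add_tail(&eg->nd, &pos->nd);
                          return;
                  }
          }
          list_add_tail(&eg->nd, groups);
  }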
2020-05-28  perf metricgroup: Delay events string creation  (Ian Rogers, 1 file, -12/+21)
Currently event groups are placed into groups_list at the same time as the events string containing the events is built. Separate these two operations and build the groups_list first, then the event string from the groups_list. This adds an ability to reorder the groups_list that will be used in a later patch. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-4-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf metricgroup: Use early return in add_metric  (Ian Rogers, 1 file, -7/+15)
Use early return in metricgroup__add_metric and try to make the intent of the returns more intention revealing. Suggested-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-3-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf metricgroup: Always place duration_time last  (Ian Rogers, 1 file, -9/+9)
If a metric contains the duration_time event then the event is placed outside of the metric's group of events. Rather than split the group, make it so the duration_time is immediately after the group. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520182011.32236-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf metricgroup: Free metric_events on error  (Ian Rogers, 1 file, -0/+3)
Avoid a simple memory leak. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: kp singh <kpsingh@chromium.org> Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200508053629.210324-10-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf util: Fix potential SEGFAULT in put_tracepoints_path error path  (Li Bin, 1 file, -1/+1)
This patch fixes a potential segmentation fault triggered in put_tracepoints_path() when the address of the local variable 'path' is freed in the error path of record_saved_cmdline().

Signed-off-by: Li Bin <huawei.libin@huawei.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hongbo Yao <yaohongbo@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Link: http://lore.kernel.org/lkml/20200521133218.30150-5-liwei391@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf util: Fix memory leak of prefix_if_not_in  (Xie XiuQi, 1 file, -1/+1)
We need to free "str" before returning when asprintf() fails, to avoid a memory leak.

Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hongbo Yao <yaohongbo@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Li Bin <huawei.libin@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lore.kernel.org/lkml/20200521133218.30150-4-liwei391@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf ftrace: Detect workload failure  (Changbin Du, 1 file, -2/+11)
Currently there's no error message prompted if we failed to start workload. And we still get some trace which is confusing. Let's tell users what happened. Committer testing: Before: # perf ftrace nonsense |& head 5) | switch_mm_irqs_off() { 5) 0.400 us | load_new_mm_cr3(); 5) 3.261 us | } ------------------------------------------ 5) <idle>-0 => <...>-3494 ------------------------------------------ 5) | finish_task_switch() { 5) ==========> | 5) | smp_irq_work_interrupt() { # type nonsense -bash: type: nonsense: not found # After: # perf ftrace nonsense |& head workload failed: No such file or directory # type nonsense -bash: type: nonsense: not found # Signed-off-by: Changbin Du <changbin.du@gmail.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Link: http://lore.kernel.org/lkml/20200510150628.16610-3-changbin.du@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf ftrace: Trace system wide if no target is given  (Changbin Du, 1 file, -1/+1)
This aligns 'perf ftrace' with other perf sub-commands: if no target is specified, then we trace all functions.

Signed-off-by: Changbin Du <changbin.du@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: http://lore.kernel.org/lkml/20200510150628.16610-2-changbin.du@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf branch: Replace zero-length array with flexible-array  (Gustavo A. R. Silva, 1 file, -1/+1)
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that, dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] sizeof(flexible-array-member) triggers a warning because flexible array members have incomplete type[1]. There are some instances of code in which the sizeof operator is being incorrectly/erroneously applied to zero-length arrays and the result is zero. Such instances may be hiding some bugs. So, this work (flexible-array member conversions) will also help to get completely rid of those sorts of issues. This issue was found with the help of Coccinelle and audited _manually_. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Gustavo A. R. Silva <gustavo@embeddedor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520191613.GA26869@embeddedor Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
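A before/after illustration of the conversion (the type and field names here are generic, not the actual perf struct):

  struct item { unsigned long from, to; };

  struct items_old {
          unsigned int    nr;
          struct item     entries[0];     /* GNU zero-length array */
  };

  struct items_new {
          unsigned int    nr;
          struct item     entries[];      /* C99 flexible array member */
  };

  /* Allocation is unchanged either way:
   *      malloc(sizeof(struct items_new) + n * sizeof(struct item));
   * but sizeof(entries) is now a compile error rather than silently 0.
   */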
2020-05-28  perf config: Add stat.big-num support  (Paul A. Clarke, 5 files, -1/+29)
Add support for new "stat.big-num" boolean option. This allows a user to set a default for "--no-big-num" for "perf stat" commands. -- $ perf config stat.big-num $ perf stat --event cycles /bin/true Performance counter stats for '/bin/true': 778,849 cycles [...] $ perf config stat.big-num=false $ perf config stat.big-num stat.big-num=false $ perf stat --event cycles /bin/true Performance counter stats for '/bin/true': 769622 cycles [...] -- There is an interaction with "--field-separator" that must be accommodated, such that specifying "--big-num --field-separator={x}" still reports an invalid combination of options. Documentation for perf-config and perf-stat updated. Signed-off-by: Paul Clarke <pc@us.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: http://lore.kernel.org/lkml/1589991815-17951-1-git-send-email-pc@us.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf bpf-loader: Add missing '*' for key_scan_pos  (Wang ShaoBo, 1 file, -1/+1)
key_scan_pos is a pointer for getting the scan position in bpf__obj_config_map() for each BPF map configuration term, but it is misused when no error has happened.

Committer notes:

The point is that the only user of this is:

  tools/perf/util/parse-events.c

        err = bpf__config_obj(obj, term, parse_state->evlist, &error_pos);
        if (err)
                bpf__strerror_config_obj(obj, term, parse_state->evlist,
                                         &error_pos, err, errbuf,
                                         sizeof(errbuf));

And then:

  int bpf__strerror_config_obj(struct bpf_object *obj __maybe_unused,
                               struct parse_events_term *term __maybe_unused,
                               struct evlist *evlist __maybe_unused,
                               int *error_pos __maybe_unused, int err,
                               char *buf, size_t size)
  {
        bpf__strerror_head(err, buf, size);
        bpf__strerror_entry(BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE,
                            "Can't use this config term with this map type");
        bpf__strerror_end(buf, size);
        return 0;
  }

So this is infrastructure that Wang Nan put in place for providing better error messages but that he ended up not using, so I'll apply the fix; it's correct even though it doesn't fix any real problem at this time.

Fixes: 066dacbf2a32 ("perf bpf: Add API to set values to map entries in a bpf object")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Cheng Jian <cj.chengjian@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Li Bin <huawei.libin@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: bpf@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200520033216.48310-1-bobo.shaobowang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: Report summary for interval mode  (Jin Yao, 3 files, -5/+29)
Currently 'perf stat' supports to print counts at regular interval (-I), but it's not very easy for user to get the overall statistics. The patch uses 'evsel->prev_raw_counts' to get counts for summary. Copy the counts to 'evsel->counts' after printing the interval results. Next, we just follow the non-interval processing. Let's see some examples, root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2 # time counts unit events 1.000412064 2,281,114 cycles 2.001383658 2,547,880 cycles Performance counter stats for 'system wide': 4,828,994 cycles 2.002860349 seconds time elapsed root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2 # time counts unit events 1.000389902 1,536,093 cycles 1.000389902 420,226 instructions # 0.27 insn per cycle 2.001433453 2,213,952 cycles 2.001433453 735,465 instructions # 0.33 insn per cycle Performance counter stats for 'system wide': 3,750,045 cycles 1,155,691 instructions # 0.31 insn per cycle 2.003023361 seconds time elapsed root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2 # time counts unit events 1.000435121 905,303 inst_retired.any # 2.9 CPI 1.000435121 2,663,333 cycles 1.000435121 914,702 inst_retired.any # 0.3 IPC 1.000435121 2,676,559 cpu_clk_unhalted.thread 2.001615941 1,951,092 inst_retired.any # 1.8 CPI 2.001615941 3,551,357 cycles 2.001615941 1,950,837 inst_retired.any # 0.5 IPC 2.001615941 3,551,044 cpu_clk_unhalted.thread Performance counter stats for 'system wide': 2,856,395 inst_retired.any # 2.2 CPI 6,214,690 cycles 2,865,539 inst_retired.any # 0.5 IPC 6,227,603 cpu_clk_unhalted.thread 2.003403078 seconds time elapsed Committer testing: Before: # perf stat -e cycles -I1000 --interval-count 2 # time counts unit events 1.000618627 26,877,408 cycles 2.001417968 233,672,829 cycles # After: # perf stat -e cycles -I1000 --interval-count 2 # time counts unit events 1.001531815 5,341,388,792 cycles 2.002936530 100,073,912 cycles Performance counter stats for 'system wide': 5,441,462,704 cycles 2.004893794 seconds time elapsed # Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: Save aggr value to first member of prev_raw_counts  (Jin Yao, 2 files, -0/+21)
To collect the overall statistics for interval mode, we copy the counts from evsel->prev_raw_counts to evsel->counts. For AGGR_GLOBAL mode, because the perf_stat_process_counter creates aggr values from per cpu values, but the per cpu values are 0, so the calculated aggr values will be always 0. This patch uses a trick that saves the previous aggr value to the first member of perf_counts, then aggr calculation in process_counter_values can work correctly for AGGR_GLOBAL. v6: --- Add comments in perf_evlist__save_aggr_prev_raw_counts. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520042737.24160-5-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: Copy counts from prev_raw_counts to evsel->counts  (Jin Yao, 2 files, -0/+25)
It would be useful to support the overall statistics for perf-stat interval mode. For example, report the summary at the end of "perf-stat -I" output. But since perf-stat can support many aggregation modes, such as --per-thread, --per-socket, -M and etc, we need a solution which doesn't bring much complexity. The idea is to use 'evsel->prev_raw_counts' which is updated in each interval and it's saved with the latest counts. Before reporting the summary, we copy the counts from evsel->prev_raw_counts to evsel->counts, and next we just follow non-interval processing. v5: --- Don't save the previous aggr value to the member of [cpu0,thread0] in perf_counts. Originally that was a trick because the perf_stat_process_counter would create aggr values from per cpu values. But we don't need to do that all the time. We will handle it in next patch. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520042737.24160-4-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf counts: Reset prev_raw_counts counts  (Jin Yao, 3 files, -6/+6)
When we want to reset the evsel->prev_raw_counts, zeroing the aggr is not enough, we need to reset the perf_counts too. The perf_counts__reset zeros the perf_counts, and it should zero the aggr too. This patch changes perf_counts__reset to non-static, and calls it in evsel__reset_prev_raw_counts to reset the prev_raw_counts. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520042737.24160-3-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: Fix wrong per-thread runtime stat for interval mode  (Jin Yao, 1 file, -29/+41)
root@kbl-ppc:~# perf stat --per-thread -e cycles,instructions -I1000 --interval-count 2
     1.004171683              perf-3696          8,747,311      cycles
     ...
     1.004171683              perf-3696            691,730      instructions     #  0.08 insn per cycle
     ...
     2.006490373              perf-3696          1,749,936      cycles
     ...
     2.006490373              perf-3696          1,484,582      instructions     #  0.28 insn per cycle
     ...

Let's see interval 2.006490373:

     perf-3696          1,749,936      cycles
     perf-3696          1,484,582      instructions     #  0.28 insn per cycle

insn per cycle = 1,484,582 / 1,749,936 = 0.85. But now it's 0.28, that's not correct.

stat_config.stats[] records the per-thread runtime stat. But for interval mode, it should be reset for each interval.

So now, with this patch:

root@kbl-ppc:~# perf stat --per-thread -e cycles,instructions -I1000 --interval-count 2
     1.005818121              perf-8633          9,898,045      cycles
     ...
     1.005818121              perf-8633            693,298      instructions     #  0.07 insn per cycle
     ...
     2.007863743              perf-8633          1,551,619      cycles
     ...
     2.007863743              perf-8633          1,317,514      instructions     #  0.85 insn per cycle
     ...

Let's check interval 2.007863743. insn per cycle = 1,317,514 / 1,551,619 = 0.85. It's correct.

This patch creates runtime_stat_reset, places it next to runtime_stat_new/runtime_stat_delete and moves all runtime_stat functions before process_interval.

Committer testing:

After the patch:

  # perf stat --per-thread -e cycles,instructions -I1000 --interval-count 2 |& grep sssd_nss-1130
     2.011309774          sssd_nss-1130             56,585      cycles
     2.011309774          sssd_nss-1130             13,121      instructions     #  0.23 insn per cycle
  # python
  >>> 13121.0 / 56585
  0.23188124061146947
  >>>

Fixes: 14e72a21c783 ("perf stat: Update or print per-thread stats")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-2-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf expr: Allow numbers to be followed by a dot  (Ian Rogers, 2 files, -1/+2)
Metrics like UNC_M_POWER_SELF_REFRESH encode 100 as "100." and consequently the 100 is treated as a symbol. Alter the regular expression to allow the dot to be before or after the number. Note, this passed the pmu-events test as that tests the validity of a number using strtod rather than lex code. strtod allows the dot after. Add a test for this behavior. Fixes: 26226a97724d (perf expr: Move expr lexer to flex) Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
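A small, self-contained C illustration of why the pmu-events test did not catch this: strtod() accepts a trailing dot, while the old lexer pattern did not:

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          char *end;
          double v = strtod("100.", &end);        /* as in UNC_M_POWER_SELF_REFRESH */

          printf("value=%g leftover=\"%s\"\n", v, end);   /* value=100 leftover="" */
          return 0;
  }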
2020-05-28  perf metricgroup: Make 'evlist_used' variable a bitmap instead of array of bools  (Ian Rogers, 1 file, -10/+8)
Use a bitmap rather than an array of bools. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Clarke <pc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bpf@vger.kernel.org Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200520072814.128267-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
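A generic C sketch of the space trade-off (plain C, not the kernel bitmap helpers perf actually uses): one bit per evlist entry instead of one bool per entry:

  #include <limits.h>
  #include <stdlib.h>

  #define BITS_PER_LONG   (CHAR_BIT * sizeof(unsigned long))

  static unsigned long *used_map_alloc(int nr)
  {
          return calloc((nr + BITS_PER_LONG - 1) / BITS_PER_LONG,
                        sizeof(unsigned long));
  }

  static void used_map_set(unsigned long *map, int idx)
  {
          map[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
  }

  static int used_map_test(const unsigned long *map, int idx)
  {
          return (map[idx / BITS_PER_LONG] >> (idx % BITS_PER_LONG)) & 1;
  }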
2020-05-28  perf stat: Fail on extra comma while parsing events  (Jiri Olsa, 2 files, -1/+4)
Ian reported that we allow to parse following: $ perf stat -e ,cycles true which is wrong and we should fail, like we do with this fix: $ perf stat -e ,cycles true event syntax error: ',cycles' \___ parser error The reason is that we don't have rule for ',' in 'event' start condition and it's matched and accepted by default rule. Add scanner debug support (that Ian already added for expr code), which was really useful for finding this. It's enabled together with bison debug via 'make PARSER_DEBUG=1'. Reported-by: Ian Rogers <irogers@google.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200520074050.156988-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf script: Better align register values in dump  (Paul A. Clarke, 1 file, -1/+1)
Before: $ perf script --dump-raw-trace [...] 2492031077254920 0x1e08 [0x308]: PERF_RECORD_SAMPLE(IP, 0x1): 47557/47557: 0xc00000000012eeb0 period: 1 addr: 0 ... user regs: mask 0x1fffffffffff ABI 64-bit .... r0 0xb .... r1 0x7ffff3b90fa0 .... r2 0x7fffbabf7300 .... r3 0x7ffff3b9ed60 .... r4 0x7ffff3b95cc0 .... r5 0x1000c5a2940 .... r6 0xfefefefefefefeff .... r7 0x7f7f7f7f7f7f7f7f .... r8 0x7ffff3b9ed60 .... r9 0x0 [...] After: [...] 2492031077254920 0x1e08 [0x308]: PERF_RECORD_SAMPLE(IP, 0x1): 47557/47557: 0xc00000000012eeb0 period: 1 addr: 0 ... user regs: mask 0x1fffffffffff ABI 64-bit .... r0 0x000000000000000b .... r1 0x00007ffff3b90fa0 .... r2 0x00007fffbabf7300 .... r3 0x00007ffff3b9ed60 .... r4 0x00007ffff3b95cc0 .... r5 0x000001000c5a2940 .... r6 0xfefefefefefefeff .... r7 0x7f7f7f7f7f7f7f7f .... r8 0x00007ffff3b9ed60 .... r9 0x0000000000000000 [...] Committer testing: Full set of instructions, testing on x86_64: # perf record -I ^C[ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 2.855 MB perf.data (4902 samples) ] # perf evlist -v cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ID|CPU|PERIOD|REGS_INTR, read_format: ID, disabled: 1, inherit: 1, freq: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, sample_regs_intr: 0xff0fff dummy:HG: type: 1, size: 120, config: 0x9, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ID|CPU|PERIOD|REGS_INTR, read_format: ID, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, sample_id_all: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, sample_regs_intr: 0xff0fff # Before: # perf script --dump-raw-trace [...] 0 1542674658099675 0x1cb700 [0xe0]: PERF_RECORD_SAMPLE(IP, 0x4001): 1825/1825: 0xffffffff9506e544 period: 1 addr: 0 ... intr regs: mask 0xff0fff ABI 64-bit .... AX 0xf .... BX 0xffff96e1064125a0 .... CX 0x38f .... DX 0x7 .... SI 0xf .... DI 0x38f .... BP 0x1 .... SP 0xfffffe000000bdf0 .... IP 0xffffffff9506e544 .... FLAGS 0xa .... CS 0x10 .... SS 0x18 .... R8 0x0 .... R9 0x0 .... R10 0xfffffe00000260c8 .... R11 0xfffffe000000bef8 .... R12 0x1 .... R13 0x64 .... R14 0x390 .... R15 0xffff96e1064125a0 ... thread: perf:1825 ...... dso: /proc/kcore perf 1825 [000] 1542674.658099: 1 cycles: ffffffff9506e544 native_write_msr+0x4 (vmlinux [...] After: # perf script --dump-raw-trace [...] 0 1542674658096068 0x1cb620 [0xe0]: PERF_RECORD_SAMPLE(IP, 0x4001): 1825/1825: 0xffffffff9506e544 period: 1 addr: 0 ... intr regs: mask 0xff0fff ABI 64-bit .... AX 0x000000000000000f .... BX 0xffff96e1064125a0 .... CX 0x000000000000038f .... DX 0x0000000000000007 .... SI 0x000000000000000f .... DI 0x000000000000038f .... BP 0x0000000000000000 .... SP 0xffffb3e788fb7c20 .... IP 0xffffffff9506e544 .... FLAGS 0x000000000000000a .... CS 0x0000000000000010 .... SS 0x0000000000000018 .... R8 0x00057b0deeffdfe3 .... R9 0xffff96e106432480 .... R10 0x0000000000000000 .... R11 0xffff96e106412cc0 .... R12 0xffffb3e788fb7d00 .... R13 0xffff96e106432408 .... R14 0xffff96e106432400 .... R15 0xffff96e0e09a4800 ... thread: perf:1825 ...... dso: /proc/kcore perf 1825 [000] 1542674.658096: 1 cycles: ffffffff9506e544 native_write_msr+0x4 (vmlinux) [...] 
Signed-off-by: Paul Clarke <pc@us.ibm.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> LPU-Reference: 1589911102-9460-1-git-send-email-pc@us.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf stat: POWER9 metrics: expand "ICT" acronym  (Paul A. Clarke, 1 file, -10/+10)
Uses of "ICT" and "Ict" are expanded to "Instruction Completion Table". Signed-off-by: Paul Clarke <pc@us.ibm.com> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Ian Rogers <irogers@google.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Link: http://lore.kernel.org/lkml/1589915886-22992-1-git-send-email-pc@us.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28  perf tools: Replace zero-length array with flexible-array  (Gustavo A. R. Silva, 14 files, -18/+18)
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that, dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] sizeof(flexible-array-member) triggers a warning because flexible array members have incomplete type[1]. There are some instances of code in which the sizeof operator is being incorrectly/erroneously applied to zero-length arrays and the result is zero. Such instances may be hiding some bugs. So, this work (flexible-array member conversions) will also help to get completely rid of those sorts of issues. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Gustavo A. R. Silva <gustavo@embeddedor.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200515172926.GA31976@embeddedor Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28perf intel-pt: Use allocated branch stack for PEBS sampleAdrian Hunter1-18/+13
To avoid having struct branch_stack as a non-last structure member, use allocated branch stack for PEBS sample. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/2540ed9a-89f1-6d59-10c9-a66cc90db5d2@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
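A sketch of the before/after pattern, with the field layout simplified and names assumed rather than taken from intel-pt.c:

  #include <stdlib.h>

  /* Simplified stand-ins for the perf structures. */
  struct branch_entry { unsigned long long from, to; };
  struct branch_stack {
      unsigned long long  nr;
      struct branch_entry entries[];      /* flexible array member */
  };

  /* Before (no longer valid): a struct embedded branch_stack as a non-last
   * member, relying on a trailing entries array placed right after it.
   * After: allocate the header plus the single PEBS entry in one block. */
  static struct branch_stack *alloc_pebs_branch_stack(void)
  {
      struct branch_stack *bs;

      bs = calloc(1, sizeof(*bs) + sizeof(struct branch_entry));
      if (bs)
          bs->nr = 1;
      return bs;
  }

  int main(void)
  {
      struct branch_stack *bs = alloc_pebs_branch_stack();

      free(bs);
      return 0;
  }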
2020-05-28perf docs: Introduce security.txt file to document related issuesAlexey Budankov1-0/+237
Publish instructions on how to apply LSM hooks for access control to the perf_event_open() syscall on a Fedora distro with the Targeted SELinux policy, and then how to manage access to the syscall. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-security-module@vger.kernel.org Cc: selinux@vger.kernel.org Link: http://lore.kernel.org/lkml/290ded0a-c422-3749-5180-918fed1ee30f@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28perf tool: Make perf tool aware of SELinux access controlAlexey Budankov2-17/+26
Implement an SELinux sysfs check to see whether the system is in enforcing mode, and print a warning message with a pointer to check the audit logs. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-security-module@vger.kernel.org Cc: selinux@vger.kernel.org Link: http://lore.kernel.org/lkml/819338ce-d160-4a2f-f1aa-d756a2e7c6fc@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
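A minimal sketch of such a check, assuming the usual selinuxfs mount point at /sys/fs/selinux; this is illustrative and not the exact code added to perf:

  #include <stdio.h>

  /* Return 1 if SELinux reports enforcing mode, 0 otherwise (including when
   * the node cannot be read, e.g. SELinux disabled or selinuxfs not mounted). */
  static int selinux_enforcing(void)
  {
      FILE *f = fopen("/sys/fs/selinux/enforce", "r");
      int enforce = 0;

      if (!f)
          return 0;
      if (fscanf(f, "%d", &enforce) != 1)
          enforce = 0;
      fclose(f);
      return enforce == 1;
  }

  int main(void)
  {
      if (selinux_enforcing())
          fprintf(stderr, "Warning: SELinux is in enforcing mode; if perf_event_open() fails, check the audit logs for denials.\n");
      return 0;
  }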
2020-05-28perf docs: Extend CAP_SYS_ADMIN with CAP_PERFMON where neededAlexey Budankov1-1/+1
Extend CAP_SYS_ADMIN with CAP_PERFMON in the docs. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-security-module@vger.kernel.org Cc: selinux@vger.kernel.org Link: http://lore.kernel.org/lkml/3b19cf79-f02d-04b4-b8b1-0039ac023b2c@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-28perf expr: Migrate expr ids table to a hashmapIan Rogers7-190/+200
Use a hashmap between a char* string and a double* value. While bpf's hashmap entries are size_t in size, we can't guarantee sizeof(size_t) >= sizeof(double). Avoid a memory allocation when gathering ids by making 0.0 a special value encoded as NULL. Original map suggestion by Andi Kleen: https://lore.kernel.org/lkml/20200224210308.GQ160988@tassilo.jf.intel.com/ and seconded by Jiri Olsa: https://lore.kernel.org/lkml/20200423112915.GH1136647@krava/ Committer notes: There are fixes that need to land upstream before we can use libbpf's headers; for now, use our copy unconditionally, since the data structures at this point are exactly the same, so there is no problem. When the fixes for libbpf's hashmap land upstream, we can fix this up. Testing it: Building with LIBBPF=1, i.e. the default: $ perf -vv | grep -i bpf bpf: [ on ] # HAVE_LIBBPF_SUPPORT $ nm ~/bin/perf | grep -i libbpf_ | wc -l 39 $ nm ~/bin/perf | grep -i hashmap_ | wc -l 17 $ Explicitly building without LIBBPF: $ perf -vv | grep -i bpf bpf: [ OFF ] # HAVE_LIBBPF_SUPPORT $ $ nm ~/bin/perf | grep -i libbpf_ | wc -l 0 $ nm ~/bin/perf | grep -i hashmap_ | wc -l 9 $ Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: kp singh <kpsingh@chromium.org> Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200515221732.44078-8-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
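The 0.0-encoded-as-NULL trick can be sketched on its own, independent of the hashmap API itself; the helper names below are made up for illustration and are not perf code:

  #include <stdio.h>
  #include <stdlib.h>

  /* The map stores double* values; 0.0 is represented by a NULL pointer so the
   * common "id seen, value still zero" case needs no allocation at all. */
  static void *encode_value(double v)
  {
      double *p;

      if (v == 0.0)
          return NULL;              /* special case: no allocation */
      p = malloc(sizeof(*p));
      if (p)                        /* error handling elided in this sketch */
          *p = v;
      return p;
  }

  static double decode_value(const void *stored)
  {
      return stored ? *(const double *)stored : 0.0;
  }

  int main(void)
  {
      void *a = encode_value(0.0);  /* NULL, no memory used */
      void *b = encode_value(2.5);

      printf("a=%g b=%g\n", decode_value(a), decode_value(b));
      free(b);
      return 0;
  }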
2020-05-28perf tools: Grab a copy of libbpf's hashmapIan Rogers4-0/+422
Allow use of hashmap in perf. Modify perf's check-headers.sh script to check that the files are kept in sync, in the same way kernel headers are checked. This will warn if they are out of sync at the start of a perf build. Committer note: This starts out of sync as a fix went through the bpf tree, namely the one removing the needless libbpf_internal.h include in hashmap.h. There is also another change related to __WORDSIZE that, as it is in tools/lib/bpf/hashmap.h, causes the tools/perf/ build to fail on systems such as Alpine Linux, which uses the musl libc, so we need an alternative way of having __WORDSIZE available; use the one used by tools/include/linux/bitops.h, which builds on all the systems I have build containers for. These differences will be resolved at some point, so keep the warning in check-headers.sh as a reminder. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: John Garry <john.garry@huawei.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kim Phillips <kim.phillips@amd.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: kp singh <kpsingh@chromium.org> Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20200515221732.44078-5-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
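For reference, the kind of __WORDSIZE fallback referred to above derives the value from the compiler rather than the libc; the exact macro used in tools/include/linux/bitops.h is an assumption here, shown as a standalone sketch:

  #include <stdio.h>

  /* Fallback for libcs (such as musl) that do not expose __WORDSIZE:
   * derive it from the compiler's __SIZEOF_LONG__ builtin macro. */
  #ifndef __WORDSIZE
  #define __WORDSIZE (__SIZEOF_LONG__ * 8)
  #endif

  int main(void)
  {
      printf("__WORDSIZE = %d\n", __WORDSIZE);
      return 0;
  }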
2020-05-28perf stat: Fix duration_time value for higher intervalsJiri Olsa1-1/+1
Joakim reported a wrong duration_time value for intervals bigger than 4000 [1]. The problem is in the interval value we pass to the update_stats() function, which is typed as 'unsigned int' and overflows when we get over 2^32 (this happens between intervals 4000 and 5000). Retype the passed value to unsigned long long. [1] https://www.spinics.net/lists/linux-perf-users/msg11777.html Fixes: b90f1333ef08 ("perf stat: Update walltime_nsecs_stats in interval mode") Reported-by: Joakim Zhang <qiangqing.zhang@nxp.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200518131445.3745083-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
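The overflow is easy to reproduce standalone; the exact expression in builtin-stat.c is an assumption, but the arithmetic is the point: the interval in milliseconds is scaled to nanoseconds, and a 32-bit unsigned product wraps once it exceeds 2^32 ns (about 4.29 s):

  #include <stdio.h>

  int main(void)
  {
      unsigned int ms = 5000;       /* e.g. perf stat -I 5000 */

      unsigned int wrong = ms * 1000000U;                            /* wraps past 2^32 */
      unsigned long long right = (unsigned long long)ms * 1000000ULL;

      printf("32-bit: %u ns, 64-bit: %llu ns\n", wrong, right);
      return 0;
  }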
2020-05-28perf trace: Fix compilation error for make NO_LIBBPF=1 DEBUG=1Jiri Olsa2-20/+36
The perf compilation fails for NO_LIBBPF=1 DEBUG=1 with: $ make NO_LIBBPF=1 DEBUG=1 BUILD: Doing 'make -j8' parallel build CC builtin-trace.o LD perf-in.o LINK perf /usr/bin/ld: perf-in.o: in function `trace__find_bpf_map_by_name': /home/jolsa/kernel/linux-perf/tools/perf/builtin-trace.c:4608: undefined reference to `bpf_object__find_map_by_name' collect2: error: ld returned 1 exit status make[2]: *** [Makefile.perf:631: perf] Error 1 make[1]: *** [Makefile.perf:225: sub-make] Error 2 make: *** [Makefile:70: all] Error 2 Move the trace__find_bpf_map_by_name() calls under the HAVE_LIBBPF_SUPPORT ifdef and add a make test for this. Committer notes: Add missing: run += make_no_libbpf_DEBUG Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200518141027.3765877-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
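A standalone sketch of the ifdef pattern described above, with simplified types and names (the real change guards trace__find_bpf_map_by_name() in builtin-trace.c); built without -DHAVE_LIBBPF_SUPPORT, the stub is used and nothing references the libbpf symbol:

  #include <stdio.h>

  struct fake_map { const char *name; };

  #ifdef HAVE_LIBBPF_SUPPORT
  static struct fake_map *find_map_by_name(const char *name)
  {
      /* Real code would call into libbpf here. */
      static struct fake_map m = { "dummy" };
      return name ? &m : NULL;
  }
  #else
  /* libbpf compiled out: the stub avoids any undefined reference at link time. */
  static struct fake_map *find_map_by_name(const char *name)
  {
      (void)name;
      return NULL;
  }
  #endif

  int main(void)
  {
      printf("map %sfound\n", find_map_by_name("syscalls") ? "" : "not ");
      return 0;
  }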
2020-05-28perf beauty: Allow the CC used in the arch errno names script to accept CFLAGSIan Rogers1-1/+1
Allow the CC compiler to accept a CFLAGS environment variable. This doesn't change the code generated but makes it easier to integrate running the shell script in build systems like bazel. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexios Zavras <alexios.zavras@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Igor Lubashev <ilubashe@akamai.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wei Li <liwei391@huawei.com> Link: http://lore.kernel.org/lkml/20200306071110.130202-4-irogers@google.com [ split from a larger patch ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>