- 06 May 2020, 17 commits
-
-
Committed by Adrian Hunter
Change Intel PT's branch stack support to use thread stacks. The advantages of using branch stack support from the thread-stack are:

1. the branches are accumulated separately for each thread
2. the branch stack is cleared only in between continuous traces

This helps pave the way for adding branch stacks to regular events, not just synthesized events as at present.

While the 2 approaches are not identical, in simple cases the results can be identical, e.g.

Before:

  # perf record --kcore -e intel_pt// uname
  # perf script --itrace=i10usl -F+brstacksym,+addr,+flags > cmp1.txt

After:

  # perf script --itrace=i10usl -F+brstacksym,+addr,+flags > cmp2.txt
  # diff -s cmp1.txt cmp2.txt
  Files cmp1.txt and cmp2.txt are identical

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200429150751.12570-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Adrian Hunter
The components of the condition do not change, so consolidate them in one variable.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200429150751.12570-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Adrian Hunter
Intel PT already has support for creating branch stacks for each context (per-cpu or per-thread). In the more common per-cpu case, the branch stack is not separated for different threads, instead being cleared in between each sample. That approach will not work very well for adding branch stacks to regular events. The branch stacks really need to be accumulated separately for each thread. As a start to accomplishing that, this patch adds support for putting branch stack support into the thread-stack. The advantages are: 1. the branches are accumulated separately for each thread 2. the branch stack is cleared only in between continuous traces This helps pave the way for adding branch stacks to regular events, not just synthesized events as at present. Signed-off-by: NAdrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20200429150751.12570-2-adrian.hunter@intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Konstantin Khlebnikov
SMT can now be disabled via "/sys/devices/system/cpu/smt/control". The status is shown in "/sys/devices/system/cpu/smt/active" simply as "0" / "1". If this knob isn't present, fall back to checking the topology as before.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/158817741394.748034.9273604089138009552.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
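The logic amounts to "prefer the sysfs knob, fall back to topology". A minimal stand-alone C sketch of that idea (the helper name is made up; this is not the perf code):

    #include <stdio.h>
    #include <stdbool.h>

    /* Read the SMT knob; return false if the knob is missing so the
     * caller can fall back to scanning the CPU topology instead. */
    static bool smt_active_from_sysfs(bool *active)
    {
        FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
        int val;

        if (!f)
            return false;
        if (fscanf(f, "%d", &val) != 1)
            val = 0;
        fclose(f);
        *active = val != 0;
        return true;
    }

    int main(void)
    {
        bool active;

        if (smt_active_from_sysfs(&active))
            printf("SMT %s\n", active ? "active" : "inactive");
        else
            printf("knob not present, would fall back to CPU topology\n");
        return 0;
    }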
-
Committed by Konstantin Khlebnikov
The code checks access("devices/system/cpu/cpu%d/topology/core_cpus", F_OK), but that relative path fails unless the current directory is "/sys". Simply try to read this file first.

Fixes: 0ccdb840 ("perf tools: Apply new CPU topology sysfs attributes")
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/158817718710.747528.11009278875028211991.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Ian Rogers
Fix another memory leak found by applying LLVM's libfuzzer on parse_events(). Signed-off-by: NIan Rogers <irogers@google.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: clang-built-linux@googlegroups.com Link: http://lore.kernel.org/lkml/20200319023101.82458-1-irogers@google.com [ split from a larger patch ] Signed-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Ian Rogers
free_list_evsel() deals with tools/perf/ evsels, not with libperf perf_evsels, use the right destructor and avoid a leak, as evsel__delete() will delete something perf_evsel__delete() doesn't. Signed-off-by: NIan Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: clang-built-linux@googlegroups.com Link: http://lore.kernel.org/lkml/20200319023101.82458-1-irogers@google.com [ split from a larger patch ] Signed-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Ian Rogers
Fix a memory leak found by applying LLVM's libfuzzer on parse_events(). Signed-off-by: NIan Rogers <irogers@google.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: clang-built-linux@googlegroups.com Link: http://lore.kernel.org/lkml/20200319023101.82458-1-irogers@google.com [ split from a larger patch, use zfree() ] Signed-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
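The zfree() mentioned in the note above is the tools/perf free-and-clear idiom: release the buffer and NULL the pointer so a later error path cannot free it twice. A simplified stand-alone equivalent (an illustrative macro, not the real helper):

    #include <stdlib.h>
    #include <string.h>

    /* Simplified version of the free-and-clear idiom used in tools/perf. */
    #define zfree_ptr(pp) do { free(*(pp)); *(pp) = NULL; } while (0)

    int main(void)
    {
        char *name = strdup("cycles");

        /* error path: release the string and make the pointer safe */
        zfree_ptr(&name);
        free(name);    /* now a harmless free(NULL), not a double free */
        return 0;
    }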
-
Committed by Arnaldo Carvalho de Melo
It is quite big by now, move that code to a separate record__setup_sb_evlist() routine.

Suggested-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-9-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
Now we can use it with --overwrite to have a flight recorder mode that gets snapshot requests from arbitrary events that are processed in the side band thread together with the PERF_RECORD_BPF_EVENT processing. Example: To collect scheduler events until a recvmmsg syscall happens, system wide: [root@five a]# rm -f perf.data.2020042717* [root@five a]# perf record --overwrite -e sched:*switch,syscalls:*recvmmsg --switch-output-event syscalls:sys_enter_recvmmsg [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042717585458 ] [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042717590235 ] [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042717590398 ] ^C[ perf record: Woken up 1 times to write data ] [ perf record: Dump perf.data.2020042717590511 ] [ perf record: Captured and wrote 7.244 MB perf.data.<timestamp> ] So in the above case we had 3 snapshots, the fourth was forced by control+C: [root@five a]# ls -la total 20440 drwxr-xr-x. 2 root root 4096 Apr 27 17:59 . dr-xr-x---. 12 root root 4096 Apr 27 17:46 .. -rw-------. 1 root root 3936125 Apr 27 17:58 perf.data.2020042717585458 -rw-------. 1 root root 5074869 Apr 27 17:59 perf.data.2020042717590235 -rw-------. 1 root root 4291037 Apr 27 17:59 perf.data.2020042717590398 -rw-------. 1 root root 7617037 Apr 27 17:59 perf.data.2020042717590511 [root@five a]# One can make this more precise by adding the switch output event to the main -e events list, as since this is done asynchronously, a few events after the signal event will appear in the snapshots, as can be seen with: [root@five a]# rm -f perf.data.20200427175* [root@five a]# perf record --overwrite -e sched:*switch,syscalls:*recvmmsg --switch-output-event syscalls:sys_enter_recvmmsg [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042718024203 ] [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042718024301 ] [ perf record: dump data: Woken up 1 times ] [ perf record: Dump perf.data.2020042718024484 ] ^C[ perf record: Woken up 1 times to write data ] [ perf record: Dump perf.data.2020042718024562 ] [ perf record: Captured and wrote 7.337 MB perf.data.<timestamp> ] [root@five a]# perf script -i perf.data.2020042718024203 | tail -15 PacerThread 148586 [005] 122.830729: sched:sched_switch: prev_comm=PacerThread prev_pid=148586... swapper 0 [000] 122.833588: sched:sched_switch: prev_comm=swapper/0 prev_pid=... NetworkManager 1251 [000] 122.833619: syscalls:sys_enter_recvmmsg: fd: 0x0000001c, mmsg: 0x7ffe83054a1... swapper 0 [002] 122.833624: sched:sched_switch: prev_comm=swapper/2 prev_pid=... swapper 0 [003] 122.833624: sched:sched_switch: prev_comm=swapper/3 prev_pid=... NetworkManager 1251 [000] 122.833626: syscalls:sys_exit_recvmmsg: 0x1 kworker/3:3-eve 158946 [003] 122.833628: sched:sched_switch: prev_comm=kworker/3:3 prev_pid=15894... swapper 0 [004] 122.833641: sched:sched_switch: prev_comm=swapper/4 prev_pid=... NetworkManager 1251 [000] 122.833642: sched:sched_switch: prev_comm=NetworkManage... perf 228273 [002] 122.833645: sched:sched_switch: prev_comm=perf prev_pid=22827... swapper 0 [011] 122.833646: sched:sched_switch: prev_comm=swapper/1... swapper 0 [002] 122.833648: sched:sched_switch: prev_comm=swapper/... kworker/0:2-eve 207387 [000] 122.833648: sched:sched_switch: prev_comm=kworker/0:2 prev_pid=20738... kworker/2:3-eve 232038 [002] 122.833652: sched:sched_switch: prev_comm=kworker/2:3 prev_pid=23203... 
perf 235825 [003] 122.833653: sched:sched_switch: prev_comm=perf prev_pid=23582... [root@five a]# Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lore.kernel.org/lkml/20200429131106.27974-8-acme@kernel.orgSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
I.e. so far we had just one event in that side band thread, a dummy one with attr.bpf_event set, so that 'perf record' can go ahead and ask the kernel for further information about BPF programs being loaded. Allow for more than one event to be there, so that we can use it as well for the upcoming --switch-output-event feature.

Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-6-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
To avoid dragging more stuff into the perf python binding in the following csets.

Reported-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
For the upcoming --switch-output-event option we want to create the side band event, populate it with the specified events and then, if the option is present multiple times, go on adding to it; then, if BPF tracking is required, use the first event and set its attr.bpf_event so we also get those PERF_RECORD_BPF_EVENT metadata events.

Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-5-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
Rename bpf_event__add_sb_event() to evlist__add_sb_event() and require that the evlist be allocated beforehand. This will allow the same side band thread and evlist to be used for multiple purposes, in addition to reacting to PERF_RECORD_BPF_EVENT soon after they are generated.

Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-4-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
Where state related to a 'perf top' session is grouped.

Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-3-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
Where state related to a 'perf record' session is grouped.

Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: http://lore.kernel.org/lkml/20200429131106.27974-2-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Arnaldo Carvalho de Melo
Trying to disentangle this a bit further; unfortunately it uses parse_events(), but it's interesting to have it separated anyway, so do it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 30 April 2020, 12 commits
-
-
Committed by Kajol Jain
The hv_24×7 feature in IBM® POWER9™ processor-based servers provide the facility to continuously collect large numbers of hardware performance metrics efficiently and accurately. This patch adds hv_24x7 metric file for different Socket/chip resources. Result: power9 platform: command:# ./perf stat --metric-only -M Memory_RD_BW_Chip -C 0 -I 1000 1.000096188 0.9 0.3 2.000285720 0.5 0.1 3.000424990 0.4 0.1 command:# ./perf stat --metric-only -M PowerBUS_Frequency -C 0 -I 1000 1.000097981 2.3 2.3 2.000291713 2.3 2.3 3.000421719 2.3 2.3 4.000550912 2.3 2.3 Signed-off-by: NKajol Jain <kjain@linux.ibm.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lore.kernel.org/lkml/20200401203340.31402-8-kjain@linux.ibm.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Kajol Jain
Commit 54b50916 ("perf stat: Implement --metric-only mode") added function 'valid_only_metric()' which drops "Hz" or "hz", if it is part of "ScaleUnit". This patch enable it since hv_24x7 supports couple of frequency events. Signed-off-by: NKajol Jain <kjain@linux.ibm.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lore.kernel.org/lkml/20200401203340.31402-7-kjain@linux.ibm.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Kajol Jain
Added test case for parsing "?" in metric expression. Signed-off-by: NKajol Jain <kjain@linux.ibm.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lore.kernel.org/lkml/20200401203340.31402-6-kjain@linux.ibm.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Kajol Jain
Patch enhances current metric infrastructure to handle "?" in the metric expression. The "?" can be use for parameters whose value not known while creating metric events and which can be replace later at runtime to the proper value. It also add flexibility to create multiple events out of single metric event added in JSON file. Patch adds function 'arch_get_runtimeparam' which is a arch specific function, returns the count of metric events need to be created. By default it return 1. This infrastructure needed for hv_24x7 socket/chip level events. "hv_24x7" chip level events needs specific chip-id to which the data is requested. Function 'arch_get_runtimeparam' implemented in header.c which extract number of sockets from sysfs file "sockets" under "/sys/devices/hv_24x7/interface/". With this patch basically we are trying to create as many metric events as define by runtime_param. For that one loop is added in function 'metricgroup__add_metric', which create multiple events at run time depend on return value of 'arch_get_runtimeparam' and merge that event in 'group_list'. To achieve that we are actually passing this parameter value as part of `expr__find_other` function and changing "?" present in metric expression with this value. As in our JSON file, there gonna be single metric event, and out of which we are creating multiple events. To understand which data count belongs to which parameter value, we also printing param value in generic_metric function. For example, command:# ./perf stat -M PowerBUS_Frequency -C 0 -I 1000 1.000101867 9,356,933 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0 1.000101867 9,366,134 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1 2.000314878 9,365,868 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0 2.000314878 9,366,092 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1 So, here _0 and _1 after PowerBUS_Frequency specify parameter value. Signed-off-by: NKajol Jain <kjain@linux.ibm.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lore.kernel.org/lkml/20200401203340.31402-5-kjain@linux.ibm.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
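The core transformation described above is essentially textual: every "?" in the metric expression is replaced by a runtime parameter value, producing one event per value. A rough stand-alone sketch of that expansion (not the perf metricgroup code; names and the buffer size are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Expand one metric expression containing '?' into one expression per
     * runtime parameter value (e.g. per chip). Illustration only. */
    static void expand_metric(const char *expr, int nr_params)
    {
        for (int p = 0; p < nr_params; p++) {
            char out[256];
            size_t o = 0;

            for (const char *c = expr; *c && o < sizeof(out) - 16; c++) {
                if (*c == '?')
                    o += snprintf(out + o, sizeof(out) - o, "%d", p);
                else
                    out[o++] = *c;
            }
            out[o] = '\0';
            printf("event %d: %s\n", p, out);
        }
    }

    int main(void)
    {
        /* one JSON metric, two chips -> two events */
        expand_metric("hv_24x7/pm_pb_cyc,chip=?/", 2);
        return 0;
    }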
-
Committed by Shaokun Zhang
get_cpuid_str() is used in tools/perf/arch/xxx/util/header.c, so fix the name in the comment.

Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lore.kernel.org/lkml/1588141992-48382-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Zou Wei
Fixes coccicheck warning:

  tools/perf/builtin-report.c:1403:2-34: WARNING: Assignment of 0/1 to bool variable

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/1587904683-3510-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
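The warning is purely about spelling: a bool should be assigned true/false rather than 0/1. A trivial before/after example, with a made-up variable name:

    #include <stdbool.h>

    static bool report_enabled;    /* illustrative, not the perf variable */

    static void set_flag(void)
    {
        report_enabled = 1;       /* coccicheck: assignment of 0/1 to bool */
        report_enabled = true;    /* preferred spelling */
    }

    int main(void)
    {
        set_flag();
        return report_enabled ? 0 : 1;
    }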
-
Committed by Zou Wei
Fixes coccicheck warnings:

  tools/perf/builtin-diff.c:1565:2-3: Unneeded semicolon
  tools/perf/builtin-lock.c:778:2-3: Unneeded semicolon
  tools/perf/builtin-mem.c:126:2-3: Unneeded semicolon
  tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c:555:2-3: Unneeded semicolon
  tools/perf/util/ordered-events.c:317:2-3: Unneeded semicolon
  tools/perf/util/synthetic-events.c:1131:2-3: Unneeded semicolon
  tools/perf/util/trace-event-read.c:78:2-3: Unneeded semicolon

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/1588065523-71423-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
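"Unneeded semicolon" flags a stray ';' after the closing brace of a block, typically a switch statement. A tiny example of the cleaned-up form:

    #include <stdio.h>

    int main(void)
    {
        int type = 1;

        switch (type) {
        case 1:
            printf("one\n");
            break;
        default:
            break;
        }   /* no ';' after this brace; that stray semicolon is what coccicheck flags */

        return 0;
    }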
-
Committed by Zou Wei
Fixes coccicheck warnings:

  tools/perf/builtin-c2c.c:1712:2-3: Unneeded semicolon
  tools/perf/builtin-c2c.c:1928:2-3: Unneeded semicolon
  tools/perf/builtin-c2c.c:2962:2-3: Unneeded semicolon

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/1588064336-70456-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Stephane Eranian
When printing iregs, there was a double newline printed because perf_sample__fprintf_regs() was printing its own and then at the end of all fields, perf script was adding one. This was causing blank line in the output: Before: $ perf script -Fip,iregs 401b8d ABI:2 DX:0x100 SI:0x4a8340 DI:0x4a9340 401b8d ABI:2 DX:0x100 SI:0x4a9340 DI:0x4a8340 401b8d ABI:2 DX:0x100 SI:0x4a8340 DI:0x4a9340 401b8d ABI:2 DX:0x100 SI:0x4a9340 DI:0x4a8340 After: $ perf script -Fip,iregs 401b8d ABI:2 DX:0x100 SI:0x4a8340 DI:0x4a9340 401b8d ABI:2 DX:0x100 SI:0x4a9340 DI:0x4a8340 401b8d ABI:2 DX:0x100 SI:0x4a8340 DI:0x4a9340 Committer testing: First we need to figure out how to request that registers be recorded, so we use: # perf record -h reg Usage: perf record [<options>] [<command>] or: perf record [<options>] -- <command> [<options>] -I, --intr-regs[=<any register>] sample selected machine registers on interrupt, use '-I?' to list register names --buildid-all Record build-id of all DSOs regardless of hits --user-regs[=<any register>] sample selected machine registers on interrupt, use '--user-regs=?' to list register names # Ok, now lets ask for them all: # perf record -a --intr-regs --user-regs sleep 1 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 4.105 MB perf.data (2760 samples) ] # Lets look at the first 6 output lines: # perf script -Fip,iregs | head -6 ffffffff8a06f2f4 ABI:2 AX:0xffffd168fee0a980 BX:0xffff8a23b087f000 CX:0xfffeb69aaeb25d73 DX:0xffff8a253e8310f0 SI:0xfffffff9bafe7359 DI:0xffffb1690204fb10 BP:0xffffd168fee0a950 SP:0xffffb1690204fb88 IP:0xffffffff8a06f2f4 FLAGS:0x4e CS:0x10 SS:0x18 R8:0x1495f0a91129a R9:0xffff8a23b087f000 R10:0x1 R11:0xffffffff R12:0x0 R13:0xffff8a253e827e00 R14:0xffffd168fee0aa5c R15:0xffffd168fee0a980 ffffffff8a06f2f4 ABI:2 AX:0x0 BX:0xffffd168fee0a950 CX:0x5684cc1118491900 DX:0x0 SI:0xffffd168fee0a9d0 DI:0x202 BP:0xffffb1690204fd70 SP:0xffffb1690204fd20 IP:0xffffffff8a06f2f4 FLAGS:0x24e CS:0x10 SS:0x18 R8:0x0 R9:0xffffd168fee0a9d0 R10:0x1 R11:0xffffffff R12:0xffffffff8a23e480 R13:0xffff8a23b087f240 R14:0xffff8a23b087f000 R15:0xffffd168fee0a950 ffffffff8a06f2f4 ABI:2 AX:0x0 BX:0x0 CX:0x7f25f334335b DX:0x0 SI:0x2400 DI:0x4 BP:0x7fff5f264570 SP:0x7fff5f264538 IP:0xffffffff8a06f2f4 FLAGS:0x24e CS:0x10 SS:0x2b R8:0x0 R9:0x2312d20 R10:0x0 R11:0x246 R12:0x22cc0e0 R13:0x0 R14:0x0 R15:0x22d0780 # Reproduced, apply the patch and: [root@five ~]# perf script -Fip,iregs | head -6 ffffffff8a06f2f4 ABI:2 AX:0xffffd168fee0a980 BX:0xffff8a23b087f000 CX:0xfffeb69aaeb25d73 DX:0xffff8a253e8310f0 SI:0xfffffff9bafe7359 DI:0xffffb1690204fb10 BP:0xffffd168fee0a950 SP:0xffffb1690204fb88 IP:0xffffffff8a06f2f4 FLAGS:0x4e CS:0x10 SS:0x18 R8:0x1495f0a91129a R9:0xffff8a23b087f000 R10:0x1 R11:0xffffffff R12:0x0 R13:0xffff8a253e827e00 R14:0xffffd168fee0aa5c R15:0xffffd168fee0a980 ffffffff8a06f2f4 ABI:2 AX:0x0 BX:0xffffd168fee0a950 CX:0x5684cc1118491900 DX:0x0 SI:0xffffd168fee0a9d0 DI:0x202 BP:0xffffb1690204fd70 SP:0xffffb1690204fd20 IP:0xffffffff8a06f2f4 FLAGS:0x24e CS:0x10 SS:0x18 R8:0x0 R9:0xffffd168fee0a9d0 R10:0x1 R11:0xffffffff R12:0xffffffff8a23e480 R13:0xffff8a23b087f240 R14:0xffff8a23b087f000 R15:0xffffd168fee0a950 ffffffff8a06f2f4 ABI:2 AX:0x0 BX:0x0 CX:0x7f25f334335b DX:0x0 SI:0x2400 DI:0x4 BP:0x7fff5f264570 SP:0x7fff5f264538 IP:0xffffffff8a06f2f4 FLAGS:0x24e CS:0x10 SS:0x2b R8:0x0 R9:0x2312d20 R10:0x0 R11:0x246 R12:0x22cc0e0 R13:0x0 R14:0x0 R15:0x22d0780 ffffffff8a24074b ABI:2 AX:0xcb BX:0xcb CX:0x0 DX:0x0 SI:0xffffb1690204ff58 DI:0xcb BP:0xffffb1690204ff58 
SP:0xffffb1690204ff40 IP:0xffffffff8a24074b FLAGS:0x24e CS:0x10 SS:0x18 R8:0x0 R9:0x0 R10:0x0 R11:0x0 R12:0x0 R13:0x0 R14:0x0 R15:0x0 ffffffff8a310600 ABI:2 AX:0x0 BX:0xffffffff8b8c39a0 CX:0x0 DX:0xffff8a2503890300 SI:0xffffb1690204ff20 DI:0xffff8a23e4080000 BP:0xffff8a23e4080000 SP:0xffffb1690204fec0 IP:0xffffffff8a310600 FLAGS:0x28e CS:0x10 SS:0x18 R8:0x0 R9:0x0 R10:0x0 R11:0x0 R12:0xffffffffffffffea R13:0xffff8a23e4080020 R14:0x0 R15:0x0 ffffffff8a11b688 ABI:2 AX:0x0 BX:0xffff8a237b7c8800 CX:0xffffb1690204fae0 DX:0x78 SI:0xffff8a237b7c8800 DI:0xffffb1690204fa10 BP:0xffffb1690204fb00 SP:0xffffb1690204fa00 IP:0xffffffff8a11b688 FLAGS:0x8a CS:0x10 SS:0x18 R8:0x1495f0a917eba R9:0xffffd168fde19a48 R10:0xffffb1690204fd98 R11:0xffff8a253e82afb0 R12:0xffff8a237b7c8800 R13:0xffffb1690204fb00 R14:0x0 R15:0xffff8a237b7c8800 [root@five ~]# To see it more clearly, lets get just two of those registers by sample: # perf record -a --intr-regs=ax,bx --user-regs=cx,dx sleep 1 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 3.502 MB perf.data (1653 samples) ] # Extra info, lets see what gets setup in that 'struct perf_event_attr': # perf evlist -v cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|REGS_USER|REGS_INTR, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 2, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, sample_regs_user: 0xc, sample_regs_intr: 0x3 # Cook, some PERF_SAMPLE_REGS_USER|PERF_SAMPLE_REGS_INTR + attr.sample_regs_user and attr.sample_regs_intr register masks, now lets see if those newlines are gone in a more compact fashion: # perf script -Fip,iregs,uregs ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a29b78d ABI:2 AX:0x2a20ffcd6000 BX:0x2ec7d9000 ABI:2 CX:0x7f204460e49b DX:0xf42920 # And where was that? 
# perf script -Fip,iregs,uregs,sym,dso ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0xffff8a25137b6028 BX:0xffff8a2502f18000 ABI:2 CX:0x7f204460e49b DX:0xf42920 ffffffff8a29b78d __vma_link_rb (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2 AX:0x2a20ffcd6000 BX:0x2ec7d9000 ABI:2 CX:0x7f204460e49b DX:0xf42920 # Signed-off-by: NStephane Eranian <eranian@google.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200418231908.152212-1-eranian@google.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Ian Rogers
The synthesize benchmark, run on a single process and thread, shows perf_event__synthesize_mmap_events as the hottest function with fgets and sscanf taking the majority of execution time. fscanf performs similarly well. Replace the scanf call with manual reading of each field of the /proc/pid/maps line, and remove some unnecessary buffering. This change also addresses potential, but unlikely, buffer overruns for the string values read by scanf. Performance before is: $ sudo perf bench internals synthesize -m 16 -M 16 -s -t \# Running 'internals/synthesize' benchmark: Computing performance of single threaded perf event synthesis by synthesizing events on the perf process itself: Average synthesis took: 102.810 usec (+- 0.027 usec) Average num. events: 17.000 (+- 0.000) Average time per event 6.048 usec Average data synthesis took: 106.325 usec (+- 0.018 usec) Average num. events: 89.000 (+- 0.000) Average time per event 1.195 usec Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 16 Average synthesis took: 68103.100 usec (+- 441.234 usec) Average num. events: 30703.000 (+- 0.730) Average time per event 2.218 usec And after is: $ sudo perf bench internals synthesize -m 16 -M 16 -s -t \# Running 'internals/synthesize' benchmark: Computing performance of single threaded perf event synthesis by synthesizing events on the perf process itself: Average synthesis took: 50.388 usec (+- 0.031 usec) Average num. events: 17.000 (+- 0.000) Average time per event 2.964 usec Average data synthesis took: 52.693 usec (+- 0.020 usec) Average num. events: 89.000 (+- 0.000) Average time per event 0.592 usec Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 16 Average synthesis took: 45022.400 usec (+- 552.740 usec) Average num. events: 30624.200 (+- 10.037) Average time per event 1.470 usec On a Intel Xeon 6154 compiling with Debian gcc 9.2.1. Committer testing: On a AMD Ryzen 5 3600X 6-Core Processor: Before: # perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt # Running 'internals/synthesize' benchmark: Computing performance of single threaded perf event synthesis by synthesizing events on the perf process itself: Average synthesis took: 267.491 usec (+- 0.176 usec) Average num. events: 56.000 (+- 0.000) Average time per event 4.777 usec Average data synthesis took: 277.257 usec (+- 0.169 usec) Average num. events: 287.000 (+- 0.000) Average time per event 0.966 usec Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 12 Average synthesis took: 81599.500 usec (+- 346.315 usec) Average num. events: 36096.100 (+- 2.523) Average time per event 2.261 usec # After: # perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt # Running 'internals/synthesize' benchmark: Computing performance of single threaded perf event synthesis by synthesizing events on the perf process itself: Average synthesis took: 110.125 usec (+- 0.080 usec) Average num. events: 56.000 (+- 0.000) Average time per event 1.967 usec Average data synthesis took: 118.518 usec (+- 0.057 usec) Average num. events: 287.000 (+- 0.000) Average time per event 0.413 usec Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 12 Average synthesis took: 43490.700 usec (+- 284.527 usec) Average num. 
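The gist of the change is to walk each /proc/<pid>/maps line with strtoull() and pointer arithmetic rather than sscanf(). A rough stand-alone sketch under that assumption (it parses far fewer fields than the real perf_event__synthesize_mmap_events()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <inttypes.h>

    int main(void)
    {
        char line[1024];
        FILE *f = fopen("/proc/self/maps", "r");

        if (!f)
            return 1;

        while (fgets(line, sizeof(line), f)) {
            char *p = line;
            uint64_t start = strtoull(p, &p, 16);      /* "start-..."   */
            uint64_t end   = strtoull(p + 1, &p, 16);  /* "-end ..."    */
            char *perms = p + 1;                       /* "rwxp"        */
            char *path  = strchr(line, '/');           /* may be absent */

            printf("%#" PRIx64 "-%#" PRIx64 " %.4s %s",
                   start, end, perms, path ? path : "[anon]\n");
        }
        fclose(f);
        return 0;
    }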
events: 37028.500 (+- 0.563) Average time per event 1.175 usec # Signed-off-by: NIan Rogers <irogers@google.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Acked-by: NNamhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrey Zhizhikin <andrey.z@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lore.kernel.org/lkml/20200415054050.31645-4-irogers@google.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Ian Rogers
The synthesize benchmark shows the majority of execution time going to fgets and sscanf, necessary to parse /proc/pid/maps. Add a new buffered reading library that will be used to replace these calls in a follow-up CL. Add tests for the library to perf test. Committer tests: $ perf test api 63: Test api io : Ok $ Signed-off-by: NIan Rogers <irogers@google.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Acked-by: NNamhyung Kim <namhyung@kernel.org> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrey Zhizhikin <andrey.z@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lore.kernel.org/lkml/20200415054050.31645-3-irogers@google.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
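In spirit, such a library wraps a file descriptor with a small user-space buffer and hands out one character at a time. A minimal stand-alone sketch of that idea (the struct and function names are illustrative, not the actual tools/lib/api io interface):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Minimal buffered byte reader; illustration only. */
    struct reader {
        int fd;
        char buf[4096];
        int len, pos;
    };

    static int reader_getc(struct reader *r)
    {
        if (r->pos >= r->len) {
            r->len = read(r->fd, r->buf, sizeof(r->buf));
            r->pos = 0;
            if (r->len <= 0)
                return -1;    /* EOF or error */
        }
        return (unsigned char)r->buf[r->pos++];
    }

    int main(void)
    {
        struct reader r = { .fd = open("/proc/self/maps", O_RDONLY) };
        int c, lines = 0;

        if (r.fd < 0)
            return 1;
        while ((c = reader_getc(&r)) != -1)
            if (c == '\n')
                lines++;
        close(r.fd);
        printf("%d lines\n", lines);
        return 0;
    }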
-
Committed by Ian Rogers
By default this isn't run as it reads /proc and may not have access. For consistency, modify the single threaded benchmark to compute an average time per event. Committer testing: $ grep -m1 "model name" /proc/cpuinfo model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz $ grep "model name" /proc/cpuinfo | wc -l 8 $ $ perf bench internals synthesize -h # Running 'internals/synthesize' benchmark: Usage: perf bench internals synthesize <options> -I, --multi-iterations <n> Number of iterations used to compute multi-threaded average -i, --single-iterations <n> Number of iterations used to compute single-threaded average -M, --max-threads <n> Maximum number of threads in multithreaded bench -m, --min-threads <n> Minimum number of threads in multithreaded bench -s, --st Run single threaded benchmark -t, --mt Run multi-threaded benchmark $ $ perf bench internals synthesize -t # Running 'internals/synthesize' benchmark: Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 1 Average synthesis took: 65449.000 usec (+- 586.442 usec) Average num. events: 9405.400 (+- 0.306) Average time per event 6.959 usec Number of synthesis threads: 2 Average synthesis took: 37838.300 usec (+- 130.259 usec) Average num. events: 9501.800 (+- 20.469) Average time per event 3.982 usec Number of synthesis threads: 3 Average synthesis took: 48551.400 usec (+- 225.686 usec) Average num. events: 9544.000 (+- 0.000) Average time per event 5.087 usec Number of synthesis threads: 4 Average synthesis took: 29632.500 usec (+- 50.808 usec) Average num. events: 9544.000 (+- 0.000) Average time per event 3.105 usec Number of synthesis threads: 5 Average synthesis took: 33920.400 usec (+- 284.509 usec) Average num. events: 9544.000 (+- 0.000) Average time per event 3.554 usec Number of synthesis threads: 6 Average synthesis took: 27604.100 usec (+- 72.344 usec) Average num. events: 9548.000 (+- 0.000) Average time per event 2.891 usec Number of synthesis threads: 7 Average synthesis took: 25406.300 usec (+- 933.371 usec) Average num. events: 9545.500 (+- 0.167) Average time per event 2.662 usec Number of synthesis threads: 8 Average synthesis took: 24110.400 usec (+- 73.229 usec) Average num. events: 9551.000 (+- 0.000) Average time per event 2.524 usec $ Signed-off-by: NIan Rogers <irogers@google.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Acked-by: NNamhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrey Zhizhikin <andrey.z@gmail.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lore.kernel.org/lkml/20200415054050.31645-2-irogers@google.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
- 23 April 2020, 4 commits
-
-
Committed by Stephane Eranian
To control degree of parallelism of the synthesize_mmap() code which is scanning /proc/PID/task/PID/maps and can be time consuming. Mimic perf top way of handling the option. If not specified will default to 1 thread, i.e. default behavior before this option. On a desktop computer the processing of /proc/PID/task/PID/maps isn't slow enough to warrant parallel processing and the thread creation has some cost - hence the default of 1. On a loaded server with >100 cores it is possible to see synthesis times in the order of seconds and in this case having the option is desirable. As the processing is a synchronization point, it is legitimate to worry if Amdahl's law will apply to this patch. Profiling with this patch in place: https://lore.kernel.org/lkml/20200415054050.31645-4-irogers@google.com/ shows: ... - 32.59% __perf_event__synthesize_threads - 32.54% __event__synthesize_thread + 22.13% perf_event__synthesize_mmap_events + 6.68% perf_event__get_comm_ids.constprop.0 + 1.49% process_synthesized_event + 1.29% __GI___readdir64 + 0.60% __opendir ... That is the processing is 1.49% of execution time and there is plenty to make parallel. This is shown in the benchmark in this patch: https://lore.kernel.org/lkml/20200415054050.31645-2-irogers@google.com/ Computing performance of multi threaded perf event synthesis by synthesizing events on CPU 0: Number of synthesis threads: 1 Average synthesis took: 127729.000 usec (+- 3372.880 usec) Average num. events: 21548.600 (+- 0.306) Average time per event 5.927 usec Number of synthesis threads: 2 Average synthesis took: 88863.500 usec (+- 385.168 usec) Average num. events: 21552.800 (+- 0.327) Average time per event 4.123 usec Number of synthesis threads: 3 Average synthesis took: 83257.400 usec (+- 348.617 usec) Average num. events: 21553.200 (+- 0.327) Average time per event 3.863 usec Number of synthesis threads: 4 Average synthesis took: 75093.000 usec (+- 422.978 usec) Average num. events: 21554.200 (+- 0.200) Average time per event 3.484 usec Number of synthesis threads: 5 Average synthesis took: 64896.600 usec (+- 353.348 usec) Average num. events: 21558.000 (+- 0.000) Average time per event 3.010 usec Number of synthesis threads: 6 Average synthesis took: 59210.200 usec (+- 342.890 usec) Average num. events: 21560.000 (+- 0.000) Average time per event 2.746 usec Number of synthesis threads: 7 Average synthesis took: 54093.900 usec (+- 306.247 usec) Average num. events: 21562.000 (+- 0.000) Average time per event 2.509 usec Number of synthesis threads: 8 Average synthesis took: 48938.700 usec (+- 341.732 usec) Average num. events: 21564.000 (+- 0.000) Average time per event 2.269 usec Where average time per synthesized event goes from 5.927 usec with 1 thread to 2.269 usec with 8. This isn't a linear speed up as not all of synthesize code has been made parallel. If the synthesis time was about 10 seconds then using 8 threads may bring this down to less than 4. 
Signed-off-by: NStephane Eranian <eranian@google.com> Reviewed-by: NIan Rogers <irogers@google.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tony Jones <tonyj@suse.de> Cc: yuzhoujian <yuzhoujian@didichuxing.com> Link: http://lore.kernel.org/lkml/20200422155038.9380-1-irogers@google.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Tommi Rantala
Commit 2d4f2799 ("perf data: Add global path holder") missed path conversion in tests/topology.c, causing the "Session topology" testcase to "hang" (waits forever for input from stdin) when doing "ssh $VM perf test". Can be reproduced by running "cat | perf test topo", and crashed by replacing cat with true: $ true | perf test -v topo 40: Session topology : --- start --- test child forked, pid 3638 templ file: /tmp/perf-test-QPvAch incompatible file format incompatible file format (rerun with -v to learn more) free(): invalid pointer test child interrupted ---- end ---- Session topology: FAILED! Committer testing: Reproduced the above result before the patch and after it is back working: # true | perf test -v topo 41: Session topology : --- start --- test child forked, pid 19374 templ file: /tmp/perf-test-YOTEQg CPU 0, core 0, socket 0 CPU 1, core 1, socket 0 CPU 2, core 2, socket 0 CPU 3, core 3, socket 0 CPU 4, core 0, socket 0 CPU 5, core 1, socket 0 CPU 6, core 2, socket 0 CPU 7, core 3, socket 0 test child finished with 0 ---- end ---- Session topology: Ok # Fixes: 2d4f2799 ("perf data: Add global path holder") Signed-off-by: NTommi Rantala <tommi.t.rantala@nokia.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Link: http://lore.kernel.org/lkml/20200423115341.562782-1-tommi.t.rantala@nokia.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Jin Yao
For interval mode, the metric is printed after the '#' character if it exists. But it's not calculated by the counts generated in this interval. See the following examples: root@kbl-ppc:~# perf stat -M CPI -I1000 --interval-count 2 # time counts unit events 1.000422803 764,809 inst_retired.any # 2.9 CPI 1.000422803 2,234,932 cycles 2.001464585 1,960,061 inst_retired.any # 1.6 CPI 2.001464585 4,022,591 cycles The second CPI should not be 1.6 (4,022,591/1,960,061 is 2.1) root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2 # time counts unit events 1.000429493 2,869,311 cycles 1.000429493 816,875 instructions # 0.28 insn per cycle 2.001516426 9,260,973 cycles 2.001516426 5,250,634 instructions # 0.87 insn per cycle The second 'insn per cycle' should not be 0.87 (5,250,634/9,260,973 is 0.57). The current code uses a global variable 'rt_stat' for tracking and updating the std dev of runtime stat. Unlike the counts, 'rt_stat' is not reset for interval. While the counts are reset for interval. perf_stat_process_counter() { if (config->interval) init_stats(ps->res_stats); } So for interval mode, the 'rt_stat' variable should be reset too. This patch resets 'rt_stat' before read_counters(), so the runtime stat is only calculated by the counts generated in this interval. With this patch: root@kbl-ppc:~# perf stat -M CPI -I1000 --interval-count 2 # time counts unit events 1.000420924 2,408,818 inst_retired.any # 2.1 CPI 1.000420924 5,010,111 cycles 2.001448579 2,798,407 inst_retired.any # 1.6 CPI 2.001448579 4,599,861 cycles root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2 # time counts unit events 1.000428555 2,769,714 cycles 1.000428555 774,462 instructions # 0.28 insn per cycle 2.001471562 3,595,904 cycles 2.001471562 1,243,703 instructions # 0.35 insn per cycle Now the second 'insn per cycle' and CPI are calculated by the counts generated in this interval. Signed-off-by: NJin Yao <yao.jin@linux.intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Tested-By: NKajol Jain <kjain@linux.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200420145417.6864-1-yao.jin@linux.intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Jin Yao
As the code comments in perf_stat_process_counter() say, we calculate the counter's data every interval, and the display code shows the ps->res_stats avg value. We need to zero the stats for interval mode. But the current code only zeros res_stats[0]; it doesn't zero res_stats[1] and res_stats[2], which hold the counter's ena and run values. This patch zeros the whole res_stats[] for interval mode.

Fixes: 51fd2df1 ("perf stat: Fix interval output values")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200409070755.17261-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
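A simplified sketch of the per-interval reset, assuming a much-reduced running-stats struct (the real init_stats() and res_stats[] live in tools/perf/util/stat.*):

    #include <string.h>
    #include <stdio.h>

    /* Reduced stand-in for perf's running statistics. */
    struct stats { double mean, M2; unsigned long n; };

    static void init_stats(struct stats *s)
    {
        memset(s, 0, sizeof(*s));
    }

    int main(void)
    {
        struct stats res_stats[3];    /* [0]=counts, [1]=ena, [2]=run */

        /* per interval: reset all three slots, otherwise ena/run keep accumulating */
        for (int i = 0; i < 3; i++)
            init_stats(&res_stats[i]);

        printf("n=%lu\n", res_stats[1].n);
        return 0;
    }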
-
- 22 April 2020, 3 commits
-
-
Committed by Ian Rogers
al->sym may be NULL given current if conditions and may cause a segv.

Fixes: d2bedb78 ("perf script: Allow --symbol to accept hexadecimal addresses")
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200421004329.43109-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
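The guard is the usual NULL check before dereferencing the symbol. A stand-alone illustration with simplified struct definitions (not the real perf types):

    #include <stdio.h>
    #include <string.h>

    struct symbol { const char *name; };
    struct addr_location { struct symbol *sym; };

    /* Only touch al->sym when the address actually resolved to a symbol. */
    static int symbol_matches(struct addr_location *al, const char *want)
    {
        return al->sym && strcmp(al->sym->name, want) == 0;
    }

    int main(void)
    {
        struct addr_location al = { .sym = NULL };

        printf("%d\n", symbol_matches(&al, "main"));    /* 0, no segfault */
        return 0;
    }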
-
Committed by Tommi Rantala
Fix div-by-zero if runtime is zero:

  $ perf bench futex hash --runtime=0
  # Running 'futex/hash' benchmark:
  Run summary [PID 12090]: 4 threads, each operating on 1024 [private] futexes for 0 secs.
  Floating point exception (core dumped)

Signed-off-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Darren Hart <dvhart@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200417132330.119407-4-tommi.t.rantala@nokia.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
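The fix boils down to guarding the division by the elapsed runtime. A small illustrative sketch (the function and its arguments are made up, not the benchmark code):

    #include <stdio.h>

    /* Avoid dividing by a zero runtime (e.g. --runtime=0). */
    static unsigned long ops_per_sec(unsigned long ops, unsigned int nsecs)
    {
        if (!nsecs)
            return 0;    /* report nothing rather than trap on the division */
        return ops / nsecs;
    }

    int main(void)
    {
        printf("%lu ops/sec\n", ops_per_sec(123456, 0));
        printf("%lu ops/sec\n", ops_per_sec(123456, 4));
        return 0;
    }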
-
Committed by Tommi Rantala
Do not bother with close() if fd is not valid, just to silence valgrind:

  $ valgrind ./perf script
  ==59169== Memcheck, a memory error detector
  ==59169== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
  ==59169== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
  ==59169== Command: ./perf script
  ==59169==
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()
  ==59169== Warning: invalid file descriptor -1 in syscall close()

Signed-off-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200417132330.119407-1-tommi.t.rantala@nokia.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
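The pattern is simply to close only descriptors that were actually opened. A tiny stand-alone example:

    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        int fd = open("/nonexistent", O_RDONLY);    /* likely -1 */

        /* Only close what was actually opened; this silences valgrind's
         * "invalid file descriptor -1 in syscall close()" warning. */
        if (fd >= 0)
            close(fd);
        return 0;
    }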
-
- 18 April 2020, 4 commits
-
-
Committed by Kan Liang
Perf checks the duplicate entries in a callchain before adding an entry. However the check is very slow especially with deeper call stack. Almost ~50% elapsed time of perf report is spent on the check when the call stack is always depth of 32. The hist_entry__cmp() is used to compare the new entry with the old entries. It will go through all the available sorts in the sort_list, and call the specific cmp of each sort, which is very slow. Actually, for most cases, there are no duplicate entries in callchain. The symbols are usually different. It's much faster to do a quick check for symbols first. Only do the full cmp when the symbols are exactly the same. The quick check is only to check symbols, not dso. Export _sort__sym_cmp. $ perf record --call-graph lbr ./tchain_edit_64 Without the patch $time perf report --stdio real 0m21.142s user 0m21.110s sys 0m0.033s With the patch $time perf report --stdio real 0m10.977s user 0m10.948s sys 0m0.027s Signed-off-by: NKan Liang <kan.liang@linux.intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pavel Gerasimov <pavel.gerasimov@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com> Link: http://lore.kernel.org/lkml/20200319202517.23423-18-kan.liang@linux.intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
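The optimization is the classic "cheap key first" pattern: reject on the symbol comparison and only run the full multi-key compare when the symbols match. A stand-alone sketch with made-up types (not the actual hist_entry code):

    #include <stdio.h>
    #include <string.h>

    struct entry {
        const char *sym;    /* cheap key: symbol name */
        char other[64];     /* stands in for all the other sort keys */
    };

    /* Expensive path: would walk every sort key, like hist_entry__cmp(). */
    static int full_cmp(const struct entry *a, const struct entry *b)
    {
        int ret = strcmp(a->sym, b->sym);

        return ret ? ret : strcmp(a->other, b->other);
    }

    /* Fast path first: if the symbols differ, the entries cannot be duplicates. */
    static int is_duplicate(const struct entry *a, const struct entry *b)
    {
        if (strcmp(a->sym, b->sym))    /* quick reject, the common case */
            return 0;
        return full_cmp(a, b) == 0;
    }

    int main(void)
    {
        struct entry a = { "f1", "x" }, b = { "f2", "x" };

        printf("%d\n", is_duplicate(&a, &b));    /* 0 without the full compare */
        return 0;
    }

The win comes from the quick reject being hit almost every time: the full compare, which is the expensive part, only runs for entries that could actually be duplicates.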
-
Committed by Kan Liang
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handing such as setjmp/longjmp. Also, it may impact the processing time especially when the number of samples with stitched LBRs are huge. Add an option to enable the approach. Signed-off-by: NKan Liang <kan.liang@linux.intel.com> Reviewed-by: NAndi Kleen <ak@linux.intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pavel Gerasimov <pavel.gerasimov@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com> Link: http://lore.kernel.org/lkml/20200319202517.23423-17-kan.liang@linux.intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Kan Liang
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handing such as setjmp/longjmp. Also, it may impact the processing time especially when the number of samples with stitched LBRs are huge. Add an option to enable the approach. The option must be used with --call-graph lbr. Signed-off-by: NKan Liang <kan.liang@linux.intel.com> Reviewed-by: NAndi Kleen <ak@linux.intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pavel Gerasimov <pavel.gerasimov@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com> Link: http://lore.kernel.org/lkml/20200319202517.23423-16-kan.liang@linux.intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-
Committed by Kan Liang
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handing such as setjmp/longjmp. Also, it may impact the processing time especially when the number of samples with stitched LBRs are huge. Add an option to enable the approach. Committer testing: Using the same perf.data as with the latest cset committer testing section: $ perf script --stitch-lbr <SNIP> tchain_edit 11131 15164.984292: 437491 cycles:u: 401106 f43+0x0 (/wb/tchain_edit) 40114c f42+0x18 (/wb/tchain_edit) 401172 f41+0xe (/wb/tchain_edit) 401194 f40+0x0 (/wb/tchain_edit) 40119b f39+0x0 (/wb/tchain_edit) 4011a2 f38+0x0 (/wb/tchain_edit) 4011a9 f37+0x0 (/wb/tchain_edit) 4011b0 f36+0x0 (/wb/tchain_edit) 4011b7 f35+0x0 (/wb/tchain_edit) 4011be f34+0x0 (/wb/tchain_edit) 4011c5 f33+0x0 (/wb/tchain_edit) 4011cc f32+0x0 (/wb/tchain_edit) 401207 f31+0x34 (/wb/tchain_edit) 401212 f30+0x0 (/wb/tchain_edit) 401219 f29+0x0 (/wb/tchain_edit) 401220 f28+0x0 (/wb/tchain_edit) 401227 f27+0x0 (/wb/tchain_edit) 40122e f26+0x0 (/wb/tchain_edit) 401235 f25+0x0 (/wb/tchain_edit) 40123c f24+0x0 (/wb/tchain_edit) 401243 f23+0x0 (/wb/tchain_edit) 40124a f22+0x0 (/wb/tchain_edit) 401251 f21+0x0 (/wb/tchain_edit) 401258 f20+0x0 (/wb/tchain_edit) 40125f f19+0x0 (/wb/tchain_edit) 401266 f18+0x0 (/wb/tchain_edit) 40126d f17+0x0 (/wb/tchain_edit) 401274 f16+0x0 (/wb/tchain_edit) 40127b f15+0x0 (/wb/tchain_edit) 401282 f14+0x0 (/wb/tchain_edit) 401289 f13+0x0 (/wb/tchain_edit) 401290 f12+0x0 (/wb/tchain_edit) 401297 f11+0x0 (/wb/tchain_edit) 40129e f10+0x0 (/wb/tchain_edit) 4012a5 f9+0x0 (/wb/tchain_edit) 4012ac f8+0x0 (/wb/tchain_edit) 4012b3 f7+0x0 (/wb/tchain_edit) 4012ba f6+0x0 (/wb/tchain_edit) 4012c1 f5+0x0 (/wb/tchain_edit) 4012c8 f4+0x0 (/wb/tchain_edit) 4012cf f3+0x0 (/wb/tchain_edit) 4012d6 f2+0x0 (/wb/tchain_edit) 4012dd f1+0x0 (/wb/tchain_edit) 4012e4 main+0x0 (/wb/tchain_edit) 7f41a5016f41 __libc_start_main+0xf1 (/usr/lib64/libc-2.29.so) <SNIP> $ Signed-off-by: NKan Liang <kan.liang@linux.intel.com> Reviewed-by: NAndi Kleen <ak@linux.intel.com> Acked-by: NJiri Olsa <jolsa@redhat.com> Tested-by: NArnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pavel Gerasimov <pavel.gerasimov@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com> Link: http://lore.kernel.org/lkml/20200319202517.23423-15-kan.liang@linux.intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com>
-