1. 28 May 2020, 3 commits
    • perf stat: Report summary for interval mode · c7e5b328
      Committed by Jin Yao
      Currently 'perf stat' supports printing counts at regular intervals
      (-I), but it is not easy for the user to get the overall statistics.
      
      The patch uses 'evsel->prev_raw_counts' to get the counts for the
      summary.  The counts are copied to 'evsel->counts' after printing the
      interval results, and from there we simply follow the non-interval
      processing path.
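
      Below is a deliberately simplified sketch of that flow; the struct and
      function names are invented for illustration and are not the actual
      builtin-stat.c code, while the cumulative values mirror the first
      example below:

       /* summary.c: hypothetical model of "print each interval, keep the
        * latest raw counts, then reuse the non-interval summary path" */
       #include <stdio.h>

       struct counts { unsigned long long val; };

       static void print_interval(double ts, unsigned long long delta)
       {
           printf("%17.9f %18llu cycles\n", ts, delta);
       }

       static void print_summary(unsigned long long total)
       {
           printf("\n Performance counter stats for 'system wide':\n\n");
           printf(" %17llu cycles\n", total);
       }

       int main(void)
       {
           /* fake cumulative readings of one counter, one per interval */
           unsigned long long raw[] = { 2281114, 4828994 };
           struct counts prev_raw_counts = { 0 }, counts = { 0 };

           for (int i = 0; i < 2; i++) {
               print_interval(i + 1.0, raw[i] - prev_raw_counts.val);
               prev_raw_counts.val = raw[i];   /* keep the latest totals */
           }

           counts = prev_raw_counts;  /* copy prev_raw_counts -> counts ...  */
           print_summary(counts.val); /* ... then follow the non-interval path */
           return 0;
       }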
      
      Let's see some examples,
      
       root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
       #           time             counts unit events
            1.000412064          2,281,114      cycles
            2.001383658          2,547,880      cycles
      
        Performance counter stats for 'system wide':
      
                4,828,994      cycles
      
              2.002860349 seconds time elapsed
      
       root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
       #           time             counts unit events
            1.000389902          1,536,093      cycles
            1.000389902            420,226      instructions              #    0.27  insn per cycle
            2.001433453          2,213,952      cycles
            2.001433453            735,465      instructions              #    0.33  insn per cycle
      
        Performance counter stats for 'system wide':
      
                3,750,045      cycles
                1,155,691      instructions              #    0.31  insn per cycle
      
              2.003023361 seconds time elapsed
      
       root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
       #           time             counts unit events
            1.000435121            905,303      inst_retired.any          #      2.9 CPI
            1.000435121          2,663,333      cycles
            1.000435121            914,702      inst_retired.any          #      0.3 IPC
            1.000435121          2,676,559      cpu_clk_unhalted.thread
            2.001615941          1,951,092      inst_retired.any          #      1.8 CPI
            2.001615941          3,551,357      cycles
            2.001615941          1,950,837      inst_retired.any          #      0.5 IPC
            2.001615941          3,551,044      cpu_clk_unhalted.thread
      
        Performance counter stats for 'system wide':
      
                2,856,395      inst_retired.any          #      2.2 CPI
                6,214,690      cycles
                2,865,539      inst_retired.any          #      0.5 IPC
                6,227,603      cpu_clk_unhalted.thread
      
              2.003403078 seconds time elapsed
      
      Committer testing:
      
      Before:
      
        # perf stat -e cycles -I1000 --interval-count 2
        #           time             counts unit events
             1.000618627         26,877,408      cycles
             2.001417968        233,672,829      cycles
        #
      
      After:
      
        # perf stat -e cycles -I1000 --interval-count 2
        #           time             counts unit events
             1.001531815      5,341,388,792      cycles
             2.002936530        100,073,912      cycles
      
         Performance counter stats for 'system wide':
      
             5,441,462,704      cycles
      
               2.004893794 seconds time elapsed
      
        #
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Reviewed-by: Jiri Olsa <jolsa@redhat.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Save aggr value to first member of prev_raw_counts · 905365f4
      Committed by Jin Yao
      To collect the overall statistics for interval mode, we copy the counts
      from evsel->prev_raw_counts to evsel->counts.
      
      For AGGR_GLOBAL mode, perf_stat_process_counter creates the aggregated
      values from the per-CPU values, but since the per-CPU values are 0, the
      calculated aggregated values will always be 0.
      
      This patch uses a trick: it saves the previous aggr value to the first
      member of perf_counts, so that the aggr calculation in
      process_counter_values works correctly for AGGR_GLOBAL.
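
      A toy illustration of the trick, with a plain array standing in for
      perf_counts (the names and layout here are assumptions, not perf's
      real data structures):

       /* aggr-trick.c */
       #include <stdio.h>

       #define NR_CPUS 4

       int main(void)
       {
           /* the per-CPU values from prev_raw_counts are all 0 ... */
           unsigned long long prev_raw[NR_CPUS] = { 0 };
           /* ... only the previously aggregated total is known */
           unsigned long long prev_aggr = 4828994;

           /* the trick: save the aggr value into the first member */
           prev_raw[0] = prev_aggr;

           /* a process_counter_values-style sum over CPUs now
            * reproduces the correct AGGR_GLOBAL total */
           unsigned long long aggr = 0;
           for (int cpu = 0; cpu < NR_CPUS; cpu++)
               aggr += prev_raw[cpu];

           printf("AGGR_GLOBAL total: %llu\n", aggr);
           return 0;
       }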
      
       v6:
       ---
       Add comments in perf_evlist__save_aggr_prev_raw_counts.
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Reviewed-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200520042737.24160-5-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Copy counts from prev_raw_counts to evsel->counts · 297767ac
      Committed by Jin Yao
      It would be useful to support the overall statistics for perf-stat
      interval mode. For example, report the summary at the end of "perf-stat
      -I" output.
      
      But since perf-stat supports many aggregation modes, such as
      --per-thread, --per-socket, -M, etc., we need a solution that doesn't
      add much complexity.
      
      The idea is to use 'evsel->prev_raw_counts', which is updated in each
      interval and holds the latest counts. Before reporting the summary, we
      copy the counts from evsel->prev_raw_counts to evsel->counts and then
      simply follow the non-interval processing path.
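
      A rough sketch of that copy step, with invented types standing in for
      perf's evsel and perf_counts structures:

       /* copy-counts.c */
       #include <stdio.h>

       #define NR_CPUS    2
       #define NR_THREADS 1

       struct fake_counts {
           unsigned long long val[NR_CPUS][NR_THREADS];
       };

       struct fake_evsel {
           struct fake_counts counts;          /* what the printers read   */
           struct fake_counts prev_raw_counts; /* latest per-interval data */
       };

       static void copy_prev_raw_counts(struct fake_evsel *evsel)
       {
           for (int cpu = 0; cpu < NR_CPUS; cpu++)
               for (int thread = 0; thread < NR_THREADS; thread++)
                   evsel->counts.val[cpu][thread] =
                       evsel->prev_raw_counts.val[cpu][thread];
       }

       int main(void)
       {
           struct fake_evsel evsel = {
               .prev_raw_counts = { .val = { { 100 }, { 200 } } },
           };

           copy_prev_raw_counts(&evsel);   /* then print the summary */
           printf("cpu0=%llu cpu1=%llu\n",
                  evsel.counts.val[0][0], evsel.counts.val[1][0]);
           return 0;
       }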
      
       v5:
       ---
       Don't save the previous aggr value to the [cpu0, thread0] member of
       perf_counts. Originally that was a trick because
       perf_stat_process_counter would create aggr values from per-CPU
       values, but we don't need to do that all the time. We will handle it
       in the next patch.
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Reviewed-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200520042737.24160-4-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  2. 04 March 2020, 1 commit
    • perf stat: Show percore counts in per CPU output · 1af62ce6
      Committed by Jin Yao
      We already support the event modifier "percore", which sums up the
      event counts for all hardware threads in a core and shows the counts
      per core.
      
      For example,
      
       # perf stat -e cpu/event=cpu-cycles,percore/ -a -A -- sleep 1
      
        Performance counter stats for 'system wide':
      
       S0-D0-C0                395,072      cpu/event=cpu-cycles,percore/
       S0-D0-C1                851,248      cpu/event=cpu-cycles,percore/
       S0-D0-C2                954,226      cpu/event=cpu-cycles,percore/
       S0-D0-C3              1,233,659      cpu/event=cpu-cycles,percore/
      
      This patch adds a new option, "--percore-show-thread". It is used
      together with the event modifier "percore" to sum up the event counts
      for all hardware threads in a core, but show the counts per hardware
      thread.
      
      This is essentially a replacement for the any bit (which is gone in
      Icelake). Per core counts are useful for some formulas, e.g. CoreIPC.
      The original percore version was inconvenient to post process. This
      variant matches the output of the any bit.
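
      A hand-rolled sketch of the two output modes follows, assuming a
      hypothetical 4-core/8-thread topology where CPU n and CPU n+4 are
      siblings; the counts are arbitrary and this is not perf's aggregation
      code:

       /* percore-show-thread.c */
       #include <stdio.h>

       #define NR_CPUS  8
       #define NR_CORES 4

       static int core_of(int cpu) { return cpu % NR_CORES; } /* assumed */

       int main(void)
       {
           unsigned long long cpu_counts[NR_CPUS] =
               { 100, 200, 300, 400, 110, 210, 310, 410 };
           unsigned long long core_sum[NR_CORES] = { 0 };

           for (int cpu = 0; cpu < NR_CPUS; cpu++)
               core_sum[core_of(cpu)] += cpu_counts[cpu];

           /* plain "percore": one line per core */
           for (int core = 0; core < NR_CORES; core++)
               printf("S0-D0-C%d %15llu cycles\n", core, core_sum[core]);

           /* --percore-show-thread: same sums, one line per CPU, so each
            * core total appears once per sibling thread */
           for (int cpu = 0; cpu < NR_CPUS; cpu++)
               printf("CPU%d %19llu cycles\n", cpu, core_sum[core_of(cpu)]);

           return 0;
       }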
      
      With this patch, for example,
      
       # perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread  -- sleep 1
      
        Performance counter stats for 'system wide':
      
       CPU0               2,453,061      cpu/event=cpu-cycles,percore/
       CPU1               1,823,921      cpu/event=cpu-cycles,percore/
       CPU2               1,383,166      cpu/event=cpu-cycles,percore/
       CPU3               1,102,652      cpu/event=cpu-cycles,percore/
       CPU4               2,453,061      cpu/event=cpu-cycles,percore/
       CPU5               1,823,921      cpu/event=cpu-cycles,percore/
       CPU6               1,383,166      cpu/event=cpu-cycles,percore/
       CPU7               1,102,652      cpu/event=cpu-cycles,percore/
      
      We can see counts are duplicated in CPU pairs (CPU0/CPU4, CPU1/CPU5,
      CPU2/CPU6, CPU3/CPU7).
      
      The interval mode also works. For example,
      
       # perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread  -I 1000
       #           time CPU                    counts unit events
            1.000425421 CPU0                 925,032      cpu/event=cpu-cycles,percore/
            1.000425421 CPU1                 430,202      cpu/event=cpu-cycles,percore/
            1.000425421 CPU2                 436,843      cpu/event=cpu-cycles,percore/
            1.000425421 CPU3               1,192,504      cpu/event=cpu-cycles,percore/
            1.000425421 CPU4                 925,032      cpu/event=cpu-cycles,percore/
            1.000425421 CPU5                 430,202      cpu/event=cpu-cycles,percore/
            1.000425421 CPU6                 436,843      cpu/event=cpu-cycles,percore/
            1.000425421 CPU7               1,192,504      cpu/event=cpu-cycles,percore/
      
      If we offline CPU5, the result is:
      
       # perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
      
        Performance counter stats for 'system wide':
      
       CPU0               2,752,148      cpu/event=cpu-cycles,percore/
       CPU1               1,009,312      cpu/event=cpu-cycles,percore/
       CPU2               2,784,072      cpu/event=cpu-cycles,percore/
       CPU3               2,427,922      cpu/event=cpu-cycles,percore/
       CPU4               2,752,148      cpu/event=cpu-cycles,percore/
       CPU6               2,784,072      cpu/event=cpu-cycles,percore/
       CPU7               2,427,922      cpu/event=cpu-cycles,percore/
      
              1.001416041 seconds time elapsed
      
       v4:
       ---
       Ravi Bangoria reported an issue in v3: once we offline a CPU, the
       output is not correct. The issue is that we should use the cpu idx in
       print_percore_thread rather than the cpu value.
      
       v3:
       ---
       1. Fix the interval mode output error
       2. Use cpu value (not cpu index) in config->aggr_get_id().
       3. Refine the code according to Jiri's comments.
      
       v2:
       ---
       Add the explanation to the change log. This is essentially a
       replacement for the any bit. No code change.
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200214080452.26402-1-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 29 November 2019, 1 commit
  4. 07 November 2019, 1 commit
    • perf stat: Add --per-node agregation support · 86895b48
      Committed by Jiri Olsa
      Add a new --per-node option to aggregate counts per NUMA node for
      system-wide measurements.
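
      A toy model of the node-level aggregation idea is sketched below; the
      cpu_to_node() mapping is hard-coded for illustration, whereas the real
      tool reads the topology from sysfs:

       /* per-node.c */
       #include <stdio.h>

       #define NR_CPUS  8
       #define NR_NODES 2

       static int cpu_to_node(int cpu) { return cpu < 4 ? 0 : 1; } /* assumed */

       int main(void)
       {
           unsigned long long cpu_counts[NR_CPUS] =
               { 100, 200, 300, 400, 10, 20, 30, 40 };
           unsigned long long node_counts[NR_NODES] = { 0 };
           int node_cpus[NR_NODES] = { 0 };

           for (int cpu = 0; cpu < NR_CPUS; cpu++) {
               int node = cpu_to_node(cpu);
               node_counts[node] += cpu_counts[cpu];
               node_cpus[node]++;
           }

           for (int node = 0; node < NR_NODES; node++)
               printf("N%d %4d %15llu cycles\n",
                      node, node_cpus[node], node_counts[node]);
           return 0;
       }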
      
      You can specify --per-node in live mode:
      
        # perf stat  -a -I 1000 -e cycles --per-node
        #           time node   cpus             counts unit events
             1.000542550 N0       20          6,202,097      cycles
             1.000542550 N1       20            639,559      cycles
             2.002040063 N0       20          7,412,495      cycles
             2.002040063 N1       20          2,185,577      cycles
             3.003451699 N0       20          6,508,917      cycles
             3.003451699 N1       20            765,607      cycles
        ...
      
      Or in the record/report stat session:
      
        # perf stat record -a -I 1000 -e cycles
        #           time             counts unit events
             1.000536937         10,008,468      cycles
             2.002090152          9,578,539      cycles
             3.003625233          7,647,869      cycles
             4.005135036          7,032,086      cycles
        ^C     4.340902364          3,923,893      cycles
      
        # perf stat report --per-node
        #           time node   cpus             counts unit events
             1.000536937 N0       20          9,355,086      cycles
             1.000536937 N1       20            653,382      cycles
             2.002090152 N0       20          7,712,838      cycles
             2.002090152 N1       20          1,865,701      cycles
             3.003625233 N0       20          6,604,441      cycles
             3.003625233 N1       20          1,043,428      cycles
             4.005135036 N0       20          6,350,522      cycles
             4.005135036 N1       20            681,564      cycles
             4.340902364 N0       20          3,403,188      cycles
             4.340902364 N1       20            520,705      cycles
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  5. 15 October 2019, 1 commit
  6. 20 September 2019, 2 commits
    • perf stat: Reset previous counts on repeat with interval · b63fd11c
      Committed by Srikar Dronamraju
      When using 'perf stat' with the repeat and interval options, it shows
      wrong values for events.
      
      The wrong values will be shown for the first interval on the second and
      subsequent repetitions.
      
      Without the fix:
      
        # perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
      
           2.000282489                 53      faults
           2.000282489                513      sched:sched_switch
           4.005478208              3,721      faults
           4.005478208              2,666      sched:sched_switch
           5.025470933                395      faults
           5.025470933              1,307      sched:sched_switch
           2.009602825 1,84,46,74,40,73,70,95,47,520      faults 		<------
           2.009602825 1,84,46,74,40,73,70,95,49,568      sched:sched_switch  <------
           4.019612206              4,730      faults
           4.019612206              2,746      sched:sched_switch
           5.039615484              3,953      faults
           5.039615484              1,496      sched:sched_switch
           2.000274620 1,84,46,74,40,73,70,95,47,520      faults		<------
           2.000274620 1,84,46,74,40,73,70,95,47,520      sched:sched_switch	<------
           4.000480342              4,282      faults
           4.000480342              2,303      sched:sched_switch
           5.000916811              1,322      faults
           5.000916811              1,064      sched:sched_switch
        #
      
      prev_raw_counts is allocated when intervals are used. It is used to
      calculate the difference in the event counts between intervals.

      The current counts are stored in prev_raw_counts so that the
      differences can be calculated in the next iteration.
      
      On the first interval of the second and subsequent repetitions,
      prev_raw_counts holds the values stored during the last interval of the
      previous repetition, while the current counts are only for the first
      interval of the current repetition.

      Hence the events can show up as huge numbers.
      
      Fix this by resetting prev_raw_counts whenever perf stat repeats the
      command.
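
      A minimal model of the bug and the fix (illustrative only, not perf's
      code):

       /* reset-prev.c */
       #include <stdio.h>
       #include <string.h>

       int main(void)
       {
           unsigned long long prev_raw_counts = 0;

           for (int run = 0; run < 2; run++) {
               /* the fix: clear stale state before every repetition */
               memset(&prev_raw_counts, 0, sizeof(prev_raw_counts));

               unsigned long long cumulative = 0;
               for (int interval = 1; interval <= 3; interval++) {
                   cumulative += 1000;            /* fake counter reads */
                   unsigned long long delta = cumulative - prev_raw_counts;
                   printf("run %d, interval %d: %llu\n", run, interval, delta);
                   prev_raw_counts = cumulative;  /* saved for the next delta */
               }
           }
           return 0;
       }

      With the memset in place every interval prints 1000; dropping it makes
      the first interval of the second run wrap around to a value on the
      order of 1.8e19, the same ballpark as the bogus numbers shown above.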
      
      With the fix:
      
        # perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
      
           2.019349347              2,597      faults
           2.019349347              2,753      sched:sched_switch
           4.019577372              3,098      faults
           4.019577372              2,532      sched:sched_switch
           5.019415481              1,879      faults
           5.019415481              1,356      sched:sched_switch
           2.000178813              8,468      faults
           2.000178813              2,254      sched:sched_switch
           4.000404621              7,440      faults
           4.000404621              1,266      sched:sched_switch
           5.040196079              2,458      faults
           5.040196079                556      sched:sched_switch
           2.000191939              6,870      faults
           2.000191939              1,170      sched:sched_switch
           4.000414103                541      faults
           4.000414103                902      sched:sched_switch
           5.000809863                450      faults
           5.000809863                364      sched:sched_switch
        #
      
      Committer notes:
      
      This was broken since the cset introducing the --interval feature,
      i.e. --repeat + --interval wasn't tested at that point; add the Fixes
      tag so that automatic scripts can pick this up.
      
      Fixes: 13370a9b ("perf stat: Add interval printing")
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: stable@vger.kernel.org # v3.9+
      Link: http://lore.kernel.org/lkml/20190904094738.9558-2-srikar@linux.vnet.ibm.com
      [ Fixed up conflicts with libperf, i.e. some perf_{evsel,evlist} lost the 'perf' prefix ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Move perf_stat_synthesize_config() to event.h · b251892d
      Committed by Arnaldo Carvalho de Melo
      Move it together with the other synthesizers, and rename it to
      perf_event__synthesize_stat_events().
      
      This allows us to stop including event.h in util/stat.h.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Link: https://lkml.kernel.org/n/tip-q5ebhrp44txboobs86htu5r9@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  7. 26 August 2019, 2 commits
  8. 30 July 2019, 4 commits
  9. 11 June 2019, 1 commit
    • perf stat: Support per-die aggregation · db5742b6
      Committed by Kan Liang
      It is useful to aggregate counts per die. E.g., uncore becomes
      die-scoped on Xeon Cascade Lake-AP.
      
      Introduce a new option "--per-die" to support per-die aggregation.
      
      The global id for each core has been changed to socket + die id + core
      id. The global id for each die is socket + die id.
      
      Add die information to per-core aggregation. The output of per-core
      aggregation will change from "S0-C0" to "S0-D0-C0". Any scripts that
      rely on the output format of per-core aggregation will probably be
      broken.
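
      A sketch of the widened aggregation ids follows; the bit layout is an
      arbitrary choice for this illustration, not the encoding perf actually
      uses:

       /* aggr-id.c */
       #include <stdio.h>

       struct topo { int socket, die, core; };

       static int die_aggr_id(struct topo t)
       {
           return (t.socket << 8) | t.die;                  /* socket + die */
       }

       static int core_aggr_id(struct topo t)
       {
           return (t.socket << 16) | (t.die << 8) | t.core; /* ... + core  */
       }

       int main(void)
       {
           struct topo cpu = { .socket = 1, .die = 0, .core = 3 };

           printf("S%d-D%d-C%d: die id 0x%x, core id 0x%x\n",
                  cpu.socket, cpu.die, cpu.core,
                  die_aggr_id(cpu), core_aggr_id(cpu));
           return 0;
       }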
      
      For 'perf stat record/report', there is no die information when
      processing an old perf.data file, so the per-die result will be the
      same as the per-socket result.
      
      Committer notes:
      
      Renamed the 'die' variable to 'die_id' to fix the build on some systems:
      
          CC       /tmp/build/perf/builtin-script.o
        cc1: warnings being treated as errors
        builtin-stat.c: In function 'perf_env__get_die':
        builtin-stat.c:963: error: declaration of 'die' shadows a global declaration
        util/util.h:19: error: shadowed declaration is here
        mv: cannot stat `/tmp/build/perf/.builtin-stat.o.tmp': No such file or directory
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: https://lkml.kernel.org/n/tip-bsnhx7vgsuu6ei307mw60mbj@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  10. 19 September 2018, 1 commit
  11. 31 August 2018, 23 commits