1. 16 Jul 2021 (1 commit)
  2. 02 Jul 2021 (1 commit)
    • perf session: Add missing evlist__delete when deleting a session · cf96b8e4
      Authored by Riccardo Mancini
      ASan reports a memory leak caused by the evlist not being deleted on
      exit in perf-report, perf-script and perf-data.

      The problem is that session->evlist is never deleted; it is allocated
      in perf_session__read_header, which is called from perf_session__new
      when the perf_data is in read mode. In write mode, session->evlist is
      filled in by the caller instead.

      This patch solves the problem by calling evlist__delete from
      perf_session__delete when the perf_data is in read mode.
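
      A minimal sketch of the cleanup described above, using the existing
      perf_data__is_read() and evlist__delete() helpers (illustrative only,
      not the verbatim patch):

        void perf_session__delete(struct perf_session *session)
        {
                if (session == NULL)
                        return;
                /* ... existing teardown of machines, env, etc. ... */
                if (session->data && perf_data__is_read(session->data))
                        /* free the evlist allocated by perf_session__read_header() */
                        evlist__delete(session->evlist);
                free(session);
        }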
      
      Changes in v2:
       - call evlist__delete from within perf_session__delete
      
      v1: https://lore.kernel.org/lkml/20210621234317.235545-1-rickyman7@gmail.com/
      
      ASan report follows:
      
      $ ./perf script report flamegraph
      =================================================================
      ==227640==ERROR: LeakSanitizer: detected memory leaks
      
      <SNIP unrelated>
      
      Indirect leak of 2704 byte(s) in 1 object(s) allocated from:
          #0 0x4f4137 in calloc (/home/user/linux/tools/perf/perf+0x4f4137)
          #1 0xbe3d56 in zalloc /home/user/linux/tools/lib/perf/../../lib/zalloc.c:8:9
          #2 0x7f999e in evlist__new /home/user/linux/tools/perf/util/evlist.c:77:26
          #3 0x8ad938 in perf_session__read_header /home/user/linux/tools/perf/util/header.c:3797:20
          #4 0x8ec714 in perf_session__open /home/user/linux/tools/perf/util/session.c:109:6
          #5 0x8ebe83 in perf_session__new /home/user/linux/tools/perf/util/session.c:213:10
          #6 0x60c6de in cmd_script /home/user/linux/tools/perf/builtin-script.c:3856:12
          #7 0x7b2930 in run_builtin /home/user/linux/tools/perf/perf.c:313:11
          #8 0x7b120f in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8
          #9 0x7b2493 in run_argv /home/user/linux/tools/perf/perf.c:409:2
          #10 0x7b0c89 in main /home/user/linux/tools/perf/perf.c:539:3
          #11 0x7f5260654b74  (/lib64/libc.so.6+0x27b74)
      
      Indirect leak of 568 byte(s) in 1 object(s) allocated from:
          #0 0x4f4137 in calloc (/home/user/linux/tools/perf/perf+0x4f4137)
          #1 0xbe3d56 in zalloc /home/user/linux/tools/lib/perf/../../lib/zalloc.c:8:9
          #2 0x80ce88 in evsel__new_idx /home/user/linux/tools/perf/util/evsel.c:268:24
          #3 0x8aed93 in evsel__new /home/user/linux/tools/perf/util/evsel.h:210:9
          #4 0x8ae07e in perf_session__read_header /home/user/linux/tools/perf/util/header.c:3853:11
          #5 0x8ec714 in perf_session__open /home/user/linux/tools/perf/util/session.c:109:6
          #6 0x8ebe83 in perf_session__new /home/user/linux/tools/perf/util/session.c:213:10
          #7 0x60c6de in cmd_script /home/user/linux/tools/perf/builtin-script.c:3856:12
          #8 0x7b2930 in run_builtin /home/user/linux/tools/perf/perf.c:313:11
          #9 0x7b120f in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8
          #10 0x7b2493 in run_argv /home/user/linux/tools/perf/perf.c:409:2
          #11 0x7b0c89 in main /home/user/linux/tools/perf/perf.c:539:3
          #12 0x7f5260654b74  (/lib64/libc.so.6+0x27b74)
      
      Indirect leak of 264 byte(s) in 1 object(s) allocated from:
          #0 0x4f4137 in calloc (/home/user/linux/tools/perf/perf+0x4f4137)
          #1 0xbe3d56 in zalloc /home/user/linux/tools/lib/perf/../../lib/zalloc.c:8:9
          #2 0xbe3e70 in xyarray__new /home/user/linux/tools/lib/perf/xyarray.c:10:23
          #3 0xbd7754 in perf_evsel__alloc_id /home/user/linux/tools/lib/perf/evsel.c:361:21
          #4 0x8ae201 in perf_session__read_header /home/user/linux/tools/perf/util/header.c:3871:7
          #5 0x8ec714 in perf_session__open /home/user/linux/tools/perf/util/session.c:109:6
          #6 0x8ebe83 in perf_session__new /home/user/linux/tools/perf/util/session.c:213:10
          #7 0x60c6de in cmd_script /home/user/linux/tools/perf/builtin-script.c:3856:12
          #8 0x7b2930 in run_builtin /home/user/linux/tools/perf/perf.c:313:11
          #9 0x7b120f in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8
          #10 0x7b2493 in run_argv /home/user/linux/tools/perf/perf.c:409:2
          #11 0x7b0c89 in main /home/user/linux/tools/perf/perf.c:539:3
          #12 0x7f5260654b74  (/lib64/libc.so.6+0x27b74)
      
      Indirect leak of 32 byte(s) in 1 object(s) allocated from:
          #0 0x4f4137 in calloc (/home/user/linux/tools/perf/perf+0x4f4137)
          #1 0xbe3d56 in zalloc /home/user/linux/tools/lib/perf/../../lib/zalloc.c:8:9
          #2 0xbd77e0 in perf_evsel__alloc_id /home/user/linux/tools/lib/perf/evsel.c:365:14
          #3 0x8ae201 in perf_session__read_header /home/user/linux/tools/perf/util/header.c:3871:7
          #4 0x8ec714 in perf_session__open /home/user/linux/tools/perf/util/session.c:109:6
          #5 0x8ebe83 in perf_session__new /home/user/linux/tools/perf/util/session.c:213:10
          #6 0x60c6de in cmd_script /home/user/linux/tools/perf/builtin-script.c:3856:12
          #7 0x7b2930 in run_builtin /home/user/linux/tools/perf/perf.c:313:11
          #8 0x7b120f in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8
          #9 0x7b2493 in run_argv /home/user/linux/tools/perf/perf.c:409:2
          #10 0x7b0c89 in main /home/user/linux/tools/perf/perf.c:539:3
          #11 0x7f5260654b74  (/lib64/libc.so.6+0x27b74)
      
      Indirect leak of 7 byte(s) in 1 object(s) allocated from:
          #0 0x4b8207 in strdup (/home/user/linux/tools/perf/perf+0x4b8207)
          #1 0x8b4459 in evlist__set_event_name /home/user/linux/tools/perf/util/header.c:2292:16
          #2 0x89d862 in process_event_desc /home/user/linux/tools/perf/util/header.c:2313:3
          #3 0x8af319 in perf_file_section__process /home/user/linux/tools/perf/util/header.c:3651:9
          #4 0x8aa6e9 in perf_header__process_sections /home/user/linux/tools/perf/util/header.c:3427:9
          #5 0x8ae3e7 in perf_session__read_header /home/user/linux/tools/perf/util/header.c:3886:2
          #6 0x8ec714 in perf_session__open /home/user/linux/tools/perf/util/session.c:109:6
          #7 0x8ebe83 in perf_session__new /home/user/linux/tools/perf/util/session.c:213:10
          #8 0x60c6de in cmd_script /home/user/linux/tools/perf/builtin-script.c:3856:12
          #9 0x7b2930 in run_builtin /home/user/linux/tools/perf/perf.c:313:11
          #10 0x7b120f in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8
          #11 0x7b2493 in run_argv /home/user/linux/tools/perf/perf.c:409:2
          #12 0x7b0c89 in main /home/user/linux/tools/perf/perf.c:539:3
          #13 0x7f5260654b74  (/lib64/libc.so.6+0x27b74)
      
      SUMMARY: AddressSanitizer: 3728 byte(s) leaked in 7 allocation(s).
      Signed-off-by: Riccardo Mancini <rickyman7@gmail.com>
      Acked-by: Ian Rogers <irogers@google.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Leo Yan <leo.yan@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20210624231926.212208-1-rickyman7@gmail.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 11 Jun 2021 (1 commit)
    • perf session: Correct buffer copying when peeking events · 197eecb6
      Authored by Leo Yan
      When peeking an event there are a short path and a long path.  The
      short path uses the session pointer "one_mmap_addr" to fetch the event
      directly; the long path has to read the event header and the following
      event data from the file and fill them into the buffer passed in
      through the argument "buf".

      The issue is in the long path: it copies the event header and the
      event data to the same destination address, the pointer "buf", which
      means the event header is overwritten.  We have just been lucky to hit
      the short path in most cases, so the issue in the long path went
      unnoticed.

      This patch adds the offset "hdr_sz" to the pointer "buf" when copying
      the event data, so that the event header is preserved and can be used
      properly by the caller.
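
      A sketch of the fixed long path (readn() is perf's read helper; the
      surrounding function name and locals are illustrative):

        /* buf must be able to hold at least event->header.size bytes */
        static int peek_event_long_path(int fd, size_t hdr_sz, char *buf)
        {
                union perf_event *event;
                size_t rest;

                if (readn(fd, buf, hdr_sz) != (ssize_t)hdr_sz)
                        return -1;
                event = (union perf_event *)buf;
                rest = event->header.size - hdr_sz;
                /* copy the payload after the header instead of on top of it */
                if (readn(fd, buf + hdr_sz, rest) != (ssize_t)rest)
                        return -1;
                return 0;
        }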
      
      Fixes: 5a52f33a ("perf session: Add perf_session__peek_event()")
      Signed-off-by: Leo Yan <leo.yan@linaro.org>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20210605052957.1070720-1-leo.yan@linaro.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  4. 12 May 2021 (1 commit)
  5. 10 May 2021 (1 commit)
  6. 29 Apr 2021 (3 commits)
    • perf session: Dump PERF_RECORD_TIME_CONV event · 81e70d7e
      Authored by Leo Yan
      Currently the perf tool uses the common stub function
      process_event_op2_stub() for dumping the TIME_CONV event, so it
      doesn't output the clock parameters contained in the event.

      This patch adds a callback function that dumps the hardware clock
      parameters carried in the TIME_CONV event.
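
      A sketch of such a dump callback, assuming the field names of struct
      perf_record_time_conv from libperf (illustrative, not the exact
      patch); the before/after output follows:

        static void dump_time_conv(union perf_event *event)
        {
                struct perf_record_time_conv *tc = &event->time_conv;

                printf("... Time Shift      %llu\n", (unsigned long long)tc->time_shift);
                printf("... Time Multiplier %llu\n", (unsigned long long)tc->time_mult);
                printf("... Time Zero       %llu\n", (unsigned long long)tc->time_zero);
                /* the fields below only exist in the extended event format */
                printf("... Time Cycles     %llu\n", (unsigned long long)tc->time_cycles);
                printf("... Time Mask       %#llx\n", (unsigned long long)tc->time_mask);
                printf("... Cap Time Zero   %d\n", !!tc->cap_user_time_zero);
                printf("... Cap Time Short  %d\n", !!tc->cap_user_time_short);
        }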
      
      Before:
      
        # perf report -D
      
        0x978 [0x38]: event: 79
        .
        . ... raw event: size 56 bytes
        .  0000:  4f 00 00 00 00 00 38 00 15 00 00 00 00 00 00 00  O.....8.........
        .  0010:  00 00 40 01 00 00 00 00 86 89 0b bf df ff ff ff  ..@........<BF><DF><FF><FF><FF>
        .  0020:  d1 c1 b2 39 03 00 00 00 ff ff ff ff ff ff ff 00  <D1><C1><B2>9....<FF><FF><FF><FF><FF><FF><FF>.
        .  0030:  01 01 00 00 00 00 00 00                          ........
      
        0 0 0x978 [0x38]: PERF_RECORD_TIME_CONV
        : unhandled!
      
        [...]
      
      After:
      
        # perf report -D
      
        0x978 [0x38]: event: 79
        .
        . ... raw event: size 56 bytes
        .  0000:  4f 00 00 00 00 00 38 00 15 00 00 00 00 00 00 00  O.....8.........
        .  0010:  00 00 40 01 00 00 00 00 86 89 0b bf df ff ff ff  ..@........<BF><DF><FF><FF><FF>
        .  0020:  d1 c1 b2 39 03 00 00 00 ff ff ff ff ff ff ff 00  <D1><C1><B2>9....<FF><FF><FF><FF><FF><FF><FF>.
        .  0030:  01 01 00 00 00 00 00 00                          ........
      
        0 0 0x978 [0x38]: PERF_RECORD_TIME_CONV
        ... Time Shift      21
        ... Time Muliplier  20971520
        ... Time Zero       18446743935180835206
        ... Time Cycles     13852918225
        ... Time Mask       0xffffffffffffff
        ... Cap Time Zero   1
        ... Cap Time Short  1
        : unhandled!
      
        [...]
      Signed-off-by: Leo Yan <leo.yan@linaro.org>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steve MacLean <Steve.MacLean@Microsoft.com>
      Cc: Yonatan Goldschmidt <yonatan.goldschmidt@granulate.io>
      Link: https://lore.kernel.org/r/20210428120915.7123-5-leo.yan@linaro.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf session: Add swap operation for event TIME_CONV · 050ffc44
      Authored by Leo Yan
      Since commit d110162c ("perf tsc: Support cap_user_time_short for
      event TIME_CONV"), the PERF_RECORD_TIME_CONV event carries an extended
      data structure for the clock parameters.

      To be backwards compatible, this patch adds a dedicated swap operation
      for the PERF_RECORD_TIME_CONV event.  By checking whether the event
      contains the "time_cycles" field, it can handle both the old and the
      new event formats.
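
      A sketch of such a format-aware swap; the event_contains() size check
      used here to detect the presence of "time_cycles" is an assumption for
      illustration:

        static void perf_event__time_conv_swap(union perf_event *event)
        {
                event->time_conv.time_shift = bswap_64(event->time_conv.time_shift);
                event->time_conv.time_mult  = bswap_64(event->time_conv.time_mult);
                event->time_conv.time_zero  = bswap_64(event->time_conv.time_zero);

                /* only swap the extra fields if the event is large enough to hold them */
                if (event_contains(event->time_conv, time_cycles)) {
                        event->time_conv.time_cycles = bswap_64(event->time_conv.time_cycles);
                        event->time_conv.time_mask   = bswap_64(event->time_conv.time_mask);
                }
        }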
      
      Fixes: d110162c ("perf tsc: Support cap_user_time_short for event TIME_CONV")
      Signed-off-by: Leo Yan <leo.yan@linaro.org>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steve MacLean <Steve.MacLean@Microsoft.com>
      Cc: Yonatan Goldschmidt <yonatan.goldschmidt@granulate.io>
      Link: https://lore.kernel.org/r/20210428120915.7123-4-leo.yan@linaro.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf report: Add --skip-empty option to suppress 0 event stat · 2775de0b
      Authored by Namhyung Kim
      To make the output more readable, I think it's better to remove the 0
      counts from the output.  Also, the dummy event has no event stats, so
      it just wastes space.  Let's use the --skip-empty option to suppress
      it.
      
        $ perf report --stat --skip-empty
      
        Aggregated stats:
                   TOTAL events:      16530
                    MMAP events:        226
                    COMM events:       1596
                    EXIT events:          2
                THROTTLE events:        121
              UNTHROTTLE events:        117
                    FORK events:       1595
                  SAMPLE events:        719
                   MMAP2 events:      12147
                  CGROUP events:          2
          FINISHED_ROUND events:          2
              THREAD_MAP events:          1
                 CPU_MAP events:          1
               TIME_CONV events:          1
        cycles stats:
                  SAMPLE events:        719
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20210427013717.1651674-5-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  7. 26 Mar 2021 (1 commit)
    • perf tools: Support pipeline stage cycles for powerpc · 06e5ca74
      Authored by Athira Rajeev
      Pipeline stage cycle details can be recorded on powerpc from the
      contents of the Performance Monitor Unit (PMU) registers. On ISA v3.1
      platforms, the sampling registers expose the cycles spent in different
      pipeline stages. This patch adds perf tools support to present two of
      these cycle counters along with the memory latency (weight).

      Re-use the field 'ins_lat' to store the first pipeline stage cycles,
      which come from the 'var2_w' field of 'perf_sample_weight'.

      Add a new field 'p_stage_cyc' to store the second pipeline stage
      cycles, which come from the 'var3_w' field of 'perf_sample_weight'.

      Add a new sort function, 'Pipeline Stage Cycle', and include it in
      default_mem_sort_order[]. This sort function may be used to denote
      some other pipeline stage on another architecture, so add it to the
      list of sort entries that can have a dynamic header string.
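
      A sketch of how the two counters map onto the weight union described
      above (assuming the var2_w/var3_w layout of union perf_sample_weight;
      the sample-side field names follow the description):

        union perf_sample_weight weight;

        weight.full = sample->weight;        /* 64-bit PERF_SAMPLE_WEIGHT_STRUCT value */
        sample->ins_lat     = weight.var2_w; /* first pipeline stage cycles (reused field) */
        sample->p_stage_cyc = weight.var3_w; /* second pipeline stage cycles (new field) */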
      Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
      Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kajol Jain <kjain@linux.ibm.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Link: https://lore.kernel.org/r/1616425047-1666-5-git-send-email-atrajeev@linux.vnet.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  8. 24 Mar 2021 (1 commit)
  9. 19 Feb 2021 (2 commits)
  10. 09 Feb 2021 (2 commits)
    • perf report: Support instruction latency · 590db42d
      Authored by Kan Liang
      Instruction latency information can be recorded on some platforms,
      e.g. the Intel Sapphire Rapids server. With both the memory latency
      (weight) and the new instruction latency information, users can easily
      locate expensive load instructions and understand the time spent in
      the different stages, and can then optimize their applications for the
      relevant pipeline stages.

      The 'weight' field is shared among different architectures, so reusing
      it may impact other architectures. Add a new field to store the
      instruction latency instead.

      Like the 'weight' support, introduce 'ins_lat' for the global
      instruction latency and 'local_ins_lat' for the local instruction
      latency version.

      Add new sort functions, INSTR Latency and Local INSTR Latency,
      accordingly.

      Add local_ins_lat to the default_mem_sort_order[].
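
      An example invocation using the new sort keys (the key names
      local_ins_lat/ins_lat follow the description above; the workload is a
      placeholder):

        # record memory loads with weights, then sort on the latency columns
        $ perf mem record -- ./workload
        $ perf report --mem-mode --sort=local_weight,local_ins_lat,sym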
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lore.kernel.org/lkml/1612296553-21962-7-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf tools: Support PERF_SAMPLE_WEIGHT_STRUCT · ea8d0ed6
      Authored by Kan Liang
      The new sample type, PERF_SAMPLE_WEIGHT_STRUCT, is an alternative to
      the PERF_SAMPLE_WEIGHT sample type. Users can apply either the
      PERF_SAMPLE_WEIGHT sample type or the PERF_SAMPLE_WEIGHT_STRUCT sample
      type to retrieve the sample weight, but they cannot apply both sample
      types simultaneously.

      The new sample type shares the same space as the PERF_SAMPLE_WEIGHT
      sample type. The lower 32 bits are exactly the same for both sample
      types; the upper 32 bits may differ between architectures.

      Add an arch specific arch_evsel__set_sample_weight() to set the new
      sample type for x86. Only store the lower 32 bits in sample->weight
      when the new sample type is applied; in practice no memory access can
      take longer than 4G cycles, so no data is lost.

      If the kernel doesn't support the new sample type, fall back to the
      PERF_SAMPLE_WEIGHT sample type.

      There is no impact on other architectures.
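
      A sketch of the arch override and the fallback described above
      (evsel__set_sample_bit()/evsel__reset_sample_bit() are existing perf
      helpers; where exactly the fallback hooks in is an assumption):

        /* util/evsel.c: weak default, keep using PERF_SAMPLE_WEIGHT */
        void __weak arch_evsel__set_sample_weight(struct evsel *evsel)
        {
                evsel__set_sample_bit(evsel, WEIGHT);
        }

        /* arch/x86/util/evsel.c: ask for the structured weight instead */
        void arch_evsel__set_sample_weight(struct evsel *evsel)
        {
                evsel__set_sample_bit(evsel, WEIGHT_STRUCT);
        }

        /* in the perf_event_open() fallback path, on an older kernel: */
        if (evsel->core.attr.sample_type & PERF_SAMPLE_WEIGHT_STRUCT) {
                evsel__reset_sample_bit(evsel, WEIGHT_STRUCT);
                evsel__set_sample_bit(evsel, WEIGHT);
                /* ... retry the syscall ... */
        }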
      
      Committer notes:
      
      Fixup related to PERF_SAMPLE_CODE_PAGE_SIZE, present in acme/perf/core
      but not upstream yet.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lore.kernel.org/lkml/1612296553-21962-6-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  11. 21 Jan 2021 (1 commit)
  12. 16 Jan 2021 (1 commit)
  13. 28 Dec 2020 (1 commit)
  14. 20 Dec 2020 (1 commit)
  15. 01 Dec 2020 (6 commits)
  16. 30 Nov 2020 (1 commit)
  17. 17 Nov 2020 (1 commit)
    • perf data: Allow to use stdio functions for pipe mode · 60136667
      Authored by Namhyung Kim
      When the perf data is in a pipe, each event is read separately with
      the read(2) syscall.  This is a huge performance bottleneck when
      processing large data, as in perf inject.  Also, perf inject needs to
      use the write(2) syscall for the output.

      So convert it to use the buffered I/O functions of the stdio library
      for pipe data.  This makes the inject-build-id bench time drop from
      20ms to 8ms.
      
        $ perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.074 msec (+- 0.013 msec)
          Average time per event: 0.792 usec (+- 0.001 usec)
          Average memory usage: 8328 KB (+- 0 KB)
          Average build-id-all injection took: 5.490 msec (+- 0.008 msec)
          Average time per event: 0.538 usec (+- 0.001 usec)
          Average memory usage: 7563 KB (+- 0 KB)
      
      This patch enables it just for perf inject when used with a pipe (its
      default behavior).  Maybe we could do it for perf record and/or report
      later.
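
      A sketch of the idea, wrapping the pipe fd in a buffered stdio stream
      (the use_stdio flag, the fptr member and the read helper are
      illustrative assumptions, not the verbatim patch):

        /* open side: attach a FILE stream to the existing pipe fd */
        if (data->use_stdio) {
                data->file.fptr = fdopen(data->file.fd,
                                         perf_data__is_read(data) ? "r" : "w");
                if (data->file.fptr == NULL)
                        return -errno;
        }

        /* read side: go through the buffered stream instead of raw read(2) */
        static ssize_t buffered_read(struct perf_data *data, void *buf, size_t size)
        {
                if (data->use_stdio)
                        return fread(buf, 1, size, data->file.fptr);
                return readn(data->file.fd, buf, size);
        }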
      
      Committer testing:
      
      Before:
      
        $ perf stat -r 5 perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.605 msec (+- 0.064 msec)
          Average time per event: 1.334 usec (+- 0.006 usec)
          Average memory usage: 12220 KB (+- 7 KB)
          Average build-id-all injection took: 11.458 msec (+- 0.058 msec)
          Average time per event: 1.123 usec (+- 0.006 usec)
          Average memory usage: 11546 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.673 msec (+- 0.057 msec)
          Average time per event: 1.341 usec (+- 0.006 usec)
          Average memory usage: 12508 KB (+- 8 KB)
          Average build-id-all injection took: 11.437 msec (+- 0.046 msec)
          Average time per event: 1.121 usec (+- 0.004 usec)
          Average memory usage: 11812 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.641 msec (+- 0.069 msec)
          Average time per event: 1.337 usec (+- 0.007 usec)
          Average memory usage: 12302 KB (+- 8 KB)
          Average build-id-all injection took: 10.820 msec (+- 0.106 msec)
          Average time per event: 1.061 usec (+- 0.010 usec)
          Average memory usage: 11616 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.379 msec (+- 0.074 msec)
          Average time per event: 1.312 usec (+- 0.007 usec)
          Average memory usage: 12334 KB (+- 8 KB)
          Average build-id-all injection took: 11.288 msec (+- 0.071 msec)
          Average time per event: 1.107 usec (+- 0.007 usec)
          Average memory usage: 11657 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.534 msec (+- 0.058 msec)
          Average time per event: 1.327 usec (+- 0.006 usec)
          Average memory usage: 12264 KB (+- 8 KB)
          Average build-id-all injection took: 11.557 msec (+- 0.076 msec)
          Average time per event: 1.133 usec (+- 0.007 usec)
          Average memory usage: 11593 KB (+- 8 KB)
      
         Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
      
                  4,060.05 msec task-clock:u              #    1.566 CPUs utilized            ( +-  0.65% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                   101,888      page-faults:u             #    0.025 M/sec                    ( +-  0.12% )
             3,745,833,163      cycles:u                  #    0.923 GHz                      ( +-  0.10% )  (83.22%)
               194,346,613      stalled-cycles-frontend:u #    5.19% frontend cycles idle     ( +-  0.57% )  (83.30%)
               708,495,034      stalled-cycles-backend:u  #   18.91% backend cycles idle      ( +-  0.48% )  (83.48%)
             5,629,328,628      instructions:u            #    1.50  insn per cycle
                                                          #    0.13  stalled cycles per insn  ( +-  0.21% )  (83.57%)
             1,236,697,927      branches:u                #  304.602 M/sec                    ( +-  0.16% )  (83.44%)
                17,564,877      branch-misses:u           #    1.42% of all branches          ( +-  0.23% )  (82.99%)
      
                    2.5934 +- 0.0128 seconds time elapsed  ( +-  0.49% )
      
        $
      
      After:
      
        $ perf stat -r 5 perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.560 msec (+- 0.125 msec)
          Average time per event: 0.839 usec (+- 0.012 usec)
          Average memory usage: 12520 KB (+- 8 KB)
          Average build-id-all injection took: 5.789 msec (+- 0.054 msec)
          Average time per event: 0.568 usec (+- 0.005 usec)
          Average memory usage: 11919 KB (+- 9 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.639 msec (+- 0.111 msec)
          Average time per event: 0.847 usec (+- 0.011 usec)
          Average memory usage: 12732 KB (+- 8 KB)
          Average build-id-all injection took: 5.647 msec (+- 0.069 msec)
          Average time per event: 0.554 usec (+- 0.007 usec)
          Average memory usage: 12093 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.551 msec (+- 0.096 msec)
          Average time per event: 0.838 usec (+- 0.009 usec)
          Average memory usage: 12739 KB (+- 8 KB)
          Average build-id-all injection took: 5.617 msec (+- 0.061 msec)
          Average time per event: 0.551 usec (+- 0.006 usec)
          Average memory usage: 12105 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.403 msec (+- 0.097 msec)
          Average time per event: 0.824 usec (+- 0.010 usec)
          Average memory usage: 12770 KB (+- 8 KB)
          Average build-id-all injection took: 5.611 msec (+- 0.085 msec)
          Average time per event: 0.550 usec (+- 0.008 usec)
          Average memory usage: 12134 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.518 msec (+- 0.102 msec)
          Average time per event: 0.835 usec (+- 0.010 usec)
          Average memory usage: 12518 KB (+- 10 KB)
          Average build-id-all injection took: 5.503 msec (+- 0.073 msec)
          Average time per event: 0.540 usec (+- 0.007 usec)
          Average memory usage: 11882 KB (+- 8 KB)
      
         Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
      
                  2,394.88 msec task-clock:u              #    1.577 CPUs utilized            ( +-  0.83% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                   103,181      page-faults:u             #    0.043 M/sec                    ( +-  0.11% )
             3,548,172,030      cycles:u                  #    1.482 GHz                      ( +-  0.30% )  (83.26%)
                81,537,700      stalled-cycles-frontend:u #    2.30% frontend cycles idle     ( +-  1.54% )  (83.24%)
               876,631,544      stalled-cycles-backend:u  #   24.71% backend cycles idle      ( +-  1.14% )  (83.45%)
             5,960,361,707      instructions:u            #    1.68  insn per cycle
                                                          #    0.15  stalled cycles per insn  ( +-  0.27% )  (83.26%)
             1,269,413,491      branches:u                #  530.054 M/sec                    ( +-  0.10% )  (83.48%)
                11,372,453      branch-misses:u           #    0.90% of all branches          ( +-  0.52% )  (83.31%)
      
                   1.51874 +- 0.00642 seconds time elapsed  ( +-  0.42% )
      
        $
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lore.kernel.org/lkml/20201030054742.87740-1-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  18. 03 Nov 2020 (2 commits)
  19. 01 Sep 2020 (1 commit)
  20. 10 Jul 2020 (1 commit)
  21. 23 Jun 2020 (2 commits)
  22. 02 Jun 2020 (1 commit)
  23. 28 May 2020 (2 commits)
    • perf script: Better align register values in dump · 498ef715
      Authored by Paul A. Clarke
      Before:
      
        $ perf script --dump-raw-trace
        [...]
        2492031077254920 0x1e08 [0x308]: PERF_RECORD_SAMPLE(IP, 0x1): 47557/47557: 0xc00000000012eeb0 period: 1 addr: 0
        ... user regs: mask 0x1fffffffffff ABI 64-bit
        .... r0    0xb
        .... r1    0x7ffff3b90fa0
        .... r2    0x7fffbabf7300
        .... r3    0x7ffff3b9ed60
        .... r4    0x7ffff3b95cc0
        .... r5    0x1000c5a2940
        .... r6    0xfefefefefefefeff
        .... r7    0x7f7f7f7f7f7f7f7f
        .... r8    0x7ffff3b9ed60
        .... r9    0x0
        [...]
      
      After:
      
        [...]
        2492031077254920 0x1e08 [0x308]: PERF_RECORD_SAMPLE(IP, 0x1): 47557/47557: 0xc00000000012eeb0 period: 1 addr: 0
        ... user regs: mask 0x1fffffffffff ABI 64-bit
        .... r0    0x000000000000000b
        .... r1    0x00007ffff3b90fa0
        .... r2    0x00007fffbabf7300
        .... r3    0x00007ffff3b9ed60
        .... r4    0x00007ffff3b95cc0
        .... r5    0x000001000c5a2940
        .... r6    0xfefefefefefefeff
        .... r7    0x7f7f7f7f7f7f7f7f
        .... r8    0x00007ffff3b9ed60
        .... r9    0x0000000000000000
        [...]
      
      Committer testing:
      
      Full set of instructions, testing on x86_64:
      
        # perf record -I
        ^C[ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 2.855 MB perf.data (4902 samples) ]
        # perf evlist -v
        cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ID|CPU|PERIOD|REGS_INTR, read_format: ID, disabled: 1, inherit: 1, freq: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, sample_regs_intr: 0xff0fff
        dummy:HG: type: 1, size: 120, config: 0x9, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|ID|CPU|PERIOD|REGS_INTR, read_format: ID, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, sample_id_all: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, sample_regs_intr: 0xff0fff
        #
      
      Before:
      
        # perf script --dump-raw-trace
        [...]
        0 1542674658099675 0x1cb700 [0xe0]: PERF_RECORD_SAMPLE(IP, 0x4001): 1825/1825: 0xffffffff9506e544 period: 1 addr: 0
        ... intr regs: mask 0xff0fff ABI 64-bit
        .... AX    0xf
        .... BX    0xffff96e1064125a0
        .... CX    0x38f
        .... DX    0x7
        .... SI    0xf
        .... DI    0x38f
        .... BP    0x1
        .... SP    0xfffffe000000bdf0
        .... IP    0xffffffff9506e544
        .... FLAGS 0xa
        .... CS    0x10
        .... SS    0x18
        .... R8    0x0
        .... R9    0x0
        .... R10   0xfffffe00000260c8
        .... R11   0xfffffe000000bef8
        .... R12   0x1
        .... R13   0x64
        .... R14   0x390
        .... R15   0xffff96e1064125a0
         ... thread: perf:1825
         ...... dso: /proc/kcore
                    perf  1825 [000] 1542674.658099:          1   cycles:  ffffffff9506e544 native_write_msr+0x4 (vmlinux
        [...]
      
      After:
      
        # perf script --dump-raw-trace
        [...]
        0 1542674658096068 0x1cb620 [0xe0]: PERF_RECORD_SAMPLE(IP, 0x4001): 1825/1825: 0xffffffff9506e544 period: 1 addr: 0
        ... intr regs: mask 0xff0fff ABI 64-bit
        .... AX    0x000000000000000f
        .... BX    0xffff96e1064125a0
        .... CX    0x000000000000038f
        .... DX    0x0000000000000007
        .... SI    0x000000000000000f
        .... DI    0x000000000000038f
        .... BP    0x0000000000000000
        .... SP    0xffffb3e788fb7c20
        .... IP    0xffffffff9506e544
        .... FLAGS 0x000000000000000a
        .... CS    0x0000000000000010
        .... SS    0x0000000000000018
        .... R8    0x00057b0deeffdfe3
        .... R9    0xffff96e106432480
        .... R10   0x0000000000000000
        .... R11   0xffff96e106412cc0
        .... R12   0xffffb3e788fb7d00
        .... R13   0xffff96e106432408
        .... R14   0xffff96e106432400
        .... R15   0xffff96e0e09a4800
         ... thread: perf:1825
         ...... dso: /proc/kcore
                    perf  1825 [000] 1542674.658096:          1   cycles:  ffffffff9506e544 native_write_msr+0x4 (vmlinux)
        [...]
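
      The change boils down to zero-padding the dumped register values to
      the full 64-bit width; a sketch of the format-string difference
      (variable names are illustrative):

        /* before: width varies with the value */
        printf(".... %-5s 0x%" PRIx64 "\n", reg_name, val);
        /* after: always print 16 hex digits so the columns line up */
        printf(".... %-5s 0x%016" PRIx64 "\n", reg_name, val);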
      Signed-off-by: Paul Clarke <pc@us.ibm.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      LPU-Reference: 1589911102-9460-1-git-send-email-pc@us.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf tools: Do not seek in pipe fd during tracing data processing · b491198d
      Authored by Jiri Olsa
      There's no need to set the 'fd' position in pipe mode, the file
      descriptor is already in the proper place. Moreover, lseek will fail
      on a pipe descriptor, which is the only reason this has kept working
      properly.

      I was tempted to remove the lseek calls completely, because it seems
      that the tracing data event was always synthesized only in pipe mode,
      so there's no need for the 'file' mode handling. But I guess there was
      a reason behind it, and there might (however unlikely) be a perf.data
      whose processing we could break.
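
      A sketch of the guarded seek described above (perf_data__is_pipe() is
      an existing perf helper; the offset expression is illustrative):

        /* only reposition the fd when reading from a real file, never a pipe */
        if (!perf_data__is_pipe(session->data))
                lseek(fd, offset + sizeof(struct perf_record_header_tracing_data),
                      SEEK_SET);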
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Khuong <pvk@pvk.ca>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200507095024.2789147-3-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  24. 06 May 2020 (3 commits)
  25. 03 Apr 2020 (1 commit)
  26. 10 Mar 2020 (1 commit)
    • perf tools: Add hw_idx in struct branch_stack · 42bbabed
      Authored by Kan Liang
      The low level index of the raw branch records for the most recent
      branch can be recorded in a sample with the PERF_SAMPLE_BRANCH_HW_INDEX
      branch_sample_type. Extend struct branch_stack to support it.

      However, if PERF_SAMPLE_BRANCH_HW_INDEX is not applied, only nr and
      entries[] will be output by the kernel. The pointer to entries[] could
      then be wrong, since that output format differs from the new struct
      branch_stack.  Add a variable no_hw_idx to struct perf_sample to
      indicate whether the hw_idx is output, and add get_branch_entry() to
      return the corresponding pointer to entries[0].

      To make the dummy branch sample consistent with the new branch sample,
      add hw_idx to struct dummy_branch_stack for cs-etm and intel-pt.

      Apply the new struct branch_stack to synthetic events as well.

      Extend the sample-parsing test case to support the new struct
      branch_stack.
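
      A sketch of the accessor (shown with the perf_sample__branch_entries()
      name from the committer notes below); it assumes the layout nr,
      optional hw_idx, then entries[]:

        static inline struct branch_entry *perf_sample__branch_entries(struct perf_sample *sample)
        {
                u64 *entry = (u64 *)sample->branch_stack;

                /* skip 'nr', and also 'hw_idx' when the kernel emitted it */
                entry += sample->no_hw_idx ? 1 : 2;
                return (struct branch_entry *)entry;
        }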
      
      Committer notes:
      
      Renamed get_branch_entries() to perf_sample__branch_entries() to have
      proper namespacing and pave the way for this to be moved to libperf,
      eventually.
      
      Add 'static' to that inline as it is in a header.
      
      Add 'hw_idx' to 'struct dummy_branch_stack' in cs-etm.c to fix the build
      on arm64.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Pavel Gerasimov <pavel.gerasimov@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com>
      Link: http://lore.kernel.org/lkml/20200228163011.19358-2-kan.liang@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>