1. 24 March 2021, 1 commit
  2. 07 March 2021, 1 commit
  3. 18 February 2021, 1 commit
  4. 16 January 2021, 1 commit
  5. 01 December 2020, 2 commits
  6. 30 November 2020, 2 commits
  7. 17 November 2020, 1 commit
    • perf data: Allow to use stdio functions for pipe mode · 60136667
      Committed by Namhyung Kim
      When perf data is in a pipe, it reads each event separately using the
      read(2) syscall.  This is a huge performance bottleneck when
      processing large data, as in perf inject.  Also, perf inject needs to
      use the write(2) syscall for the output.
      
      So convert it to use the buffered I/O functions of the stdio library
      for pipe data.  This makes the inject-build-id bench time drop from
      20ms to 8ms.
      
        $ perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.074 msec (+- 0.013 msec)
          Average time per event: 0.792 usec (+- 0.001 usec)
          Average memory usage: 8328 KB (+- 0 KB)
          Average build-id-all injection took: 5.490 msec (+- 0.008 msec)
          Average time per event: 0.538 usec (+- 0.001 usec)
          Average memory usage: 7563 KB (+- 0 KB)
      
      This patch enables it just for perf inject when used with a pipe
      (which is the default behavior).  Maybe we could do it for perf
      record and/or perf report later (a sketch of the buffered-I/O idea
      follows this entry).
      
      Committer testing:
      
      Before:
      
        $ perf stat -r 5 perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.605 msec (+- 0.064 msec)
          Average time per event: 1.334 usec (+- 0.006 usec)
          Average memory usage: 12220 KB (+- 7 KB)
          Average build-id-all injection took: 11.458 msec (+- 0.058 msec)
          Average time per event: 1.123 usec (+- 0.006 usec)
          Average memory usage: 11546 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.673 msec (+- 0.057 msec)
          Average time per event: 1.341 usec (+- 0.006 usec)
          Average memory usage: 12508 KB (+- 8 KB)
          Average build-id-all injection took: 11.437 msec (+- 0.046 msec)
          Average time per event: 1.121 usec (+- 0.004 usec)
          Average memory usage: 11812 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.641 msec (+- 0.069 msec)
          Average time per event: 1.337 usec (+- 0.007 usec)
          Average memory usage: 12302 KB (+- 8 KB)
          Average build-id-all injection took: 10.820 msec (+- 0.106 msec)
          Average time per event: 1.061 usec (+- 0.010 usec)
          Average memory usage: 11616 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.379 msec (+- 0.074 msec)
          Average time per event: 1.312 usec (+- 0.007 usec)
          Average memory usage: 12334 KB (+- 8 KB)
          Average build-id-all injection took: 11.288 msec (+- 0.071 msec)
          Average time per event: 1.107 usec (+- 0.007 usec)
          Average memory usage: 11657 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 13.534 msec (+- 0.058 msec)
          Average time per event: 1.327 usec (+- 0.006 usec)
          Average memory usage: 12264 KB (+- 8 KB)
          Average build-id-all injection took: 11.557 msec (+- 0.076 msec)
          Average time per event: 1.133 usec (+- 0.007 usec)
          Average memory usage: 11593 KB (+- 8 KB)
      
         Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
      
                  4,060.05 msec task-clock:u              #    1.566 CPUs utilized            ( +-  0.65% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                   101,888      page-faults:u             #    0.025 M/sec                    ( +-  0.12% )
             3,745,833,163      cycles:u                  #    0.923 GHz                      ( +-  0.10% )  (83.22%)
               194,346,613      stalled-cycles-frontend:u #    5.19% frontend cycles idle     ( +-  0.57% )  (83.30%)
               708,495,034      stalled-cycles-backend:u  #   18.91% backend cycles idle      ( +-  0.48% )  (83.48%)
             5,629,328,628      instructions:u            #    1.50  insn per cycle
                                                          #    0.13  stalled cycles per insn  ( +-  0.21% )  (83.57%)
             1,236,697,927      branches:u                #  304.602 M/sec                    ( +-  0.16% )  (83.44%)
                17,564,877      branch-misses:u           #    1.42% of all branches          ( +-  0.23% )  (82.99%)
      
                    2.5934 +- 0.0128 seconds time elapsed  ( +-  0.49% )
      
        $
      
      After:
      
        $ perf stat -r 5 perf bench internals inject-build-id
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.560 msec (+- 0.125 msec)
          Average time per event: 0.839 usec (+- 0.012 usec)
          Average memory usage: 12520 KB (+- 8 KB)
          Average build-id-all injection took: 5.789 msec (+- 0.054 msec)
          Average time per event: 0.568 usec (+- 0.005 usec)
          Average memory usage: 11919 KB (+- 9 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.639 msec (+- 0.111 msec)
          Average time per event: 0.847 usec (+- 0.011 usec)
          Average memory usage: 12732 KB (+- 8 KB)
          Average build-id-all injection took: 5.647 msec (+- 0.069 msec)
          Average time per event: 0.554 usec (+- 0.007 usec)
          Average memory usage: 12093 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.551 msec (+- 0.096 msec)
          Average time per event: 0.838 usec (+- 0.009 usec)
          Average memory usage: 12739 KB (+- 8 KB)
          Average build-id-all injection took: 5.617 msec (+- 0.061 msec)
          Average time per event: 0.551 usec (+- 0.006 usec)
          Average memory usage: 12105 KB (+- 7 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.403 msec (+- 0.097 msec)
          Average time per event: 0.824 usec (+- 0.010 usec)
          Average memory usage: 12770 KB (+- 8 KB)
          Average build-id-all injection took: 5.611 msec (+- 0.085 msec)
          Average time per event: 0.550 usec (+- 0.008 usec)
          Average memory usage: 12134 KB (+- 8 KB)
        # Running 'internals/inject-build-id' benchmark:
          Average build-id injection took: 8.518 msec (+- 0.102 msec)
          Average time per event: 0.835 usec (+- 0.010 usec)
          Average memory usage: 12518 KB (+- 10 KB)
          Average build-id-all injection took: 5.503 msec (+- 0.073 msec)
          Average time per event: 0.540 usec (+- 0.007 usec)
          Average memory usage: 11882 KB (+- 8 KB)
      
         Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
      
                  2,394.88 msec task-clock:u              #    1.577 CPUs utilized            ( +-  0.83% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                   103,181      page-faults:u             #    0.043 M/sec                    ( +-  0.11% )
             3,548,172,030      cycles:u                  #    1.482 GHz                      ( +-  0.30% )  (83.26%)
                81,537,700      stalled-cycles-frontend:u #    2.30% frontend cycles idle     ( +-  1.54% )  (83.24%)
               876,631,544      stalled-cycles-backend:u  #   24.71% backend cycles idle      ( +-  1.14% )  (83.45%)
             5,960,361,707      instructions:u            #    1.68  insn per cycle
                                                          #    0.15  stalled cycles per insn  ( +-  0.27% )  (83.26%)
             1,269,413,491      branches:u                #  530.054 M/sec                    ( +-  0.10% )  (83.48%)
                11,372,453      branch-misses:u           #    0.90% of all branches          ( +-  0.52% )  (83.31%)
      
                   1.51874 +- 0.00642 seconds time elapsed  ( +-  0.42% )
      
        $
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lore.kernel.org/lkml/20201030054742.87740-1-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      60136667
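      Below is a minimal C sketch of the buffered-I/O idea described above.
      It is illustrative only, not the perf sources; the pipe_io type and
      helper names are invented for the example.  The point is that fread()
      and fwrite() on a stream wrapped around the pipe descriptor serve most
      per-event transfers from a user-space buffer instead of issuing one
      read(2)/write(2) syscall per event.

        #include <stdio.h>

        struct pipe_io {
                FILE *in;       /* buffered stream over the input pipe fd  */
                FILE *out;      /* buffered stream over the output pipe fd */
        };

        static int pipe_io__open(struct pipe_io *io, int in_fd, int out_fd)
        {
                io->in  = fdopen(in_fd,  "r");
                io->out = fdopen(out_fd, "w");
                if (!io->in || !io->out)
                        return -1;
                /* Larger buffers further reduce the number of syscalls. */
                setvbuf(io->in,  NULL, _IOFBF, 64 * 1024);
                setvbuf(io->out, NULL, _IOFBF, 64 * 1024);
                return 0;
        }

        /* One event: fread() usually copies from the buffer, no syscall. */
        static int pipe_io__read_event(struct pipe_io *io, void *buf, size_t size)
        {
                return fread(buf, 1, size, io->in) == size ? 0 : -1;
        }

        static int pipe_io__write_event(struct pipe_io *io, const void *buf, size_t size)
        {
                return fwrite(buf, 1, size, io->out) == size ? 0 : -1;
        }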
  8. 04 November 2020, 1 commit
  9. 14 October 2020, 4 commits
  10. 13 August 2020, 2 commits
    • perf tools: Fix module symbol processing · b2fe96a3
      Committed by Jiri Olsa
      The 'dso->kernel' condition is now true for kernel modules as well,
      and there are several places that were omitted by the initial change
      (see the sketch after this entry):
      
        - we need to identify modules separately in dso__process_kernel_symbol
        - we need to set 'dso->kernel' for modules from the buildid table
        - there's no need to use 'dso->kernel || kmodule' in one condition
      
      Committer testing:
      
      Before:
      
        # perf test -v object
        <SNIP>
        Objdump command is: objdump -z -d --start-address=0xffffffff813e682f --stop-address=0xffffffff813e68af /usr/lib/debug/lib/modules/5.7.14-200.fc32.x86_64/vmlinux
        Bytes read match those read by objdump
        Reading object code for memory address: 0xffffffffc02dc257
        File is: /lib/modules/5.7.14-200.fc32.x86_64/kernel/arch/x86/crypto/crc32c-intel.ko.xz
        On file address is: 0xffffffffc02dc2e7
        dso__data_read_offset failed
        test child finished with -1
        ---- end ----
        Object code reading: FAILED!
        #
      
      After:
      
        # perf test object
        26: Object code reading                                   : Ok
        # perf test object
        26: Object code reading                                   : Ok
        # perf test object
        26: Object code reading                                   : Ok
        # perf test object
        26: Object code reading                                   : Ok
        # perf test object
        26: Object code reading                                   : Ok
        #
      
      Fixes: 02213cec ("perf maps: Mark module DSOs with kernel type")
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      b2fe96a3
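      A hedged C illustration of the bullets above, not the perf source; the
      'is_module' flag is a hypothetical stand-in for however modules are
      actually marked.  Once dso->kernel is set for modules too, the old
      "dso->kernel || kmodule" test is redundant, and modules have to be
      told apart from the core kernel image by some other property.

        struct dso {
                int kernel;     /* non-zero for the kernel image and for modules */
                int is_module;  /* hypothetical marker set when loading a .ko    */
        };

        /* Replaces the old "dso->kernel || kmodule" style of condition. */
        static int dso__is_kernel_space(const struct dso *dso)
        {
                return dso->kernel != 0;
        }

        /* Modules now need their own, separate identification. */
        static int dso__is_module(const struct dso *dso)
        {
                return dso->kernel && dso->is_module;
        }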
    • perf tools: Rename 'enum dso_kernel_type' to 'enum dso_space_type' · 1c695c88
      Committed by Jiri Olsa
      Rename enum dso_kernel_type to enum dso_space_type, which seems like
      a better fit.
      
      Committer notes:
      
      This is used with 'struct dso'->kernel, which once was a boolean, so
      DSO_SPACE__USER is zero and any non-zero value means some sort of
      kernel space, be it the host kernel space or a guest kernel space
      (see the sketch after this entry).
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      1c695c88
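      For reference, roughly what the renamed enumeration conveys (a sketch;
      the exact definition in tools/perf/util/dso.h may differ slightly).
      Because DSO_SPACE__USER is zero, older boolean-style checks of
      dso->kernel keep their meaning: zero is user space, non-zero is host
      or guest kernel space.

        enum dso_space_type {
                DSO_SPACE__USER = 0,
                DSO_SPACE__KERNEL,
                DSO_SPACE__KERNEL_GUEST,
        };

        static const char *dso_space__name(enum dso_space_type t)
        {
                switch (t) {
                case DSO_SPACE__USER:           return "user";
                case DSO_SPACE__KERNEL:         return "host kernel";
                case DSO_SPACE__KERNEL_GUEST:   return "guest kernel";
                default:                        return "unknown";
                }
        }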
  11. 06 August 2020, 2 commits
    • perf tools: Move clockid_res_ns under clock struct · 9d88a1a1
      Committed by Jiri Olsa
      Move the clockid_res_ns struct member to the clock struct, so we have
      the clock-related stuff in one place (a sketch follows this entry).
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Geneviève Bastien <gbastien@versatic.net>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Jeremie Galarneau <jgalar@efficios.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lore.kernel.org/lkml/20200805093444.314999-5-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      9d88a1a1
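      A rough sketch of the shape of this refactor, with assumed field names
      rather than the exact perf structures: the clock-related recording
      state, including the moved clockid_res_ns member, lives in one nested
      struct instead of being scattered across the session state.

        #include <stdbool.h>
        #include <stdint.h>

        struct record_clock {
                bool     set;             /* was -k/--clockid given?             */
                int      clockid;         /* clockid recorded into the header    */
                uint64_t clockid_res_ns;  /* clock resolution, now kept here too */
        };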
    • perf header: Store clock references for -k/--clockid option · d1e325cf
      Committed by Jiri Olsa
      Add a new CLOCK_DATA feature that stores reference times when the
      -k/--clockid option is specified.
      
      It contains the clock id and its reference time together with the wall
      clock time taken at the 'same time'; both values are in nanoseconds.
      
      The format of data is as below:
      
        struct {
             u32 version;  /* version = 1 */
             u32 clockid;
             u64 wall_clock_ns;
             u64 clockid_time_ns;
        };
      
      These clock reference times will be used in following changes to
      display wall clock time for perf events (a sketch of the conversion
      follows this entry).
      
      It's available only for recording with a clockid specified, because
      that's the only case where we can relate the reference time to wall
      clock time.  We can't do that with the perf clock yet.
      
      Committer testing:
      
        $ perf record -h -k
      
         Usage: perf record [<options>] [<command>]
            or: perf record [<options>] -- <command> [<options>]
      
            -k, --clockid <clockid>
                                  clockid to use for events, see clock_gettime()
      
        $ perf record -k monotonic sleep 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.017 MB perf.data (8 samples) ]
        $ perf report --header-only | grep clockid -A1
        # event : name = cycles:u, , id = { 88815, 88816, 88817, 88818, 88819, 88820, 88821, 88822 }, size = 120, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|PERIOD, read_format = ID, disabled = 1, inherit = 1, exclude_kernel = 1, mmap = 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1, use_clockid = 1, ksymbol = 1, bpf_event = 1, clockid = 1
        # CPU_TOPOLOGY info available, use -I to display
        --
        # clockid frequency: 1000 MHz
        # cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
        # clockid: monotonic (1)
        # reference time: 2020-08-06 09:40:21.619290 = 1596717621.619290 (TOD) = 21931.077673635 (monotonic)
        $
      Original-patch-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Geneviève Bastien <gbastien@versatic.net>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Jeremie Galarneau <jgalar@efficios.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lore.kernel.org/lkml/20200805093444.314999-4-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d1e325cf
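      A small C sketch of how such a reference can be used; this is an
      assumption about the follow-up changes, not the actual perf code.  An
      event timestamp taken with the recording clockid is shifted by the
      difference between the two stored reference times to obtain a
      wall-clock (TOD) timestamp in nanoseconds.

        #include <stdint.h>

        struct clock_data {
                uint32_t version;          /* version = 1 */
                uint32_t clockid;
                uint64_t wall_clock_ns;    /* TOD clock at the reference point  */
                uint64_t clockid_time_ns;  /* recording clock at the same point */
        };

        static uint64_t event_time_to_wall_clock_ns(const struct clock_data *cd,
                                                    uint64_t event_time_ns)
        {
                /* The event happened (event_time_ns - clockid_time_ns) ns
                 * after the reference was taken. */
                return cd->wall_clock_ns + (event_time_ns - cd->clockid_time_ns);
        }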
  12. 28 May 2020, 2 commits
    • perf session: Try to read pipe data from file · 14d3d540
      Committed by Jiri Olsa
      Ian came up with the idea of supporting reading the pipe data from a
      file as well.  Currently, pipe-mode files fail like:
      
        $ perf record -o - sleep 1 > /tmp/perf.pipe.data
        $ perf report -i /tmp/perf.pipe.data
        incompatible file format (rerun with -v to learn more)
      
      This patch adds support for that by trying the pipe header first and,
      if it's successfully detected, switching the perf data to pipe mode
      (a sketch follows this entry).
      
      Committer testing:
      
        # ls
        # perf record -a -o - sleep 1 > /tmp/perf.pipe.data
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.000 MB - ]
        # ls
        # perf report -i /tmp/perf.pipe.data | head -25
        # To display the perf.data header info, please use --header/--header-only options.
        #
        #
        # Total Lost Samples: 0
        #
        # Samples: 511  of event 'cycles'
        # Event count (approx.): 178447276
        #
        # Overhead  Command   Shared Object      Symbol
        # ........  ........  .................  ...........................................................................................
        #
            65.49%  swapper   [kernel.kallsyms]  [k] native_safe_halt
             6.45%  chromium  libblink_core.so   [.] blink::SelectorChecker::CheckOne
             4.08%  chromium  libblink_core.so   [.] blink::SelectorQuery::ExecuteForTraverseRoot<blink::AllElementsSelectorQueryTrait>
             2.25%  chromium  libblink_core.so   [.] blink::SelectorQuery::FindTraverseRootsAndExecute<blink::AllElementsSelectorQueryTrait>
             2.11%  chromium  libblink_core.so   [.] blink::SelectorChecker::MatchSelector
             1.91%  chromium  libblink_core.so   [.] blink::Node::OwnerShadowHost
             1.31%  chromium  libblink_core.so   [.] blink::Node::parentNode@plt
             1.22%  chromium  libblink_core.so   [.] blink::Node::parentNode
             0.59%  chromium  libblink_core.so   [.] blink::AnyAttributeMatches
             0.58%  chromium  libv8.so           [.] v8::internal::GlobalHandles::Create
             0.58%  chromium  libblink_core.so   [.] blink::NodeTraversal::NextAncestorSibling
             0.55%  chromium  libv8.so           [.] v8::internal::RegExpGlobalCache::RegExpGlobalCache
             0.55%  chromium  libblink_core.so   [.] blink::Node::ContainingShadowRoot
             0.55%  chromium  libblink_core.so   [.] blink::NodeTraversal::NextAncestorSibling@plt
        #
      Original-patch-by: Ian Rogers <irogers@google.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Khuong <pvk@pvk.ca>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200507095024.2789147-4-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      14d3d540
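      A hedged sketch of the detection idea; the helper name and the exact
      header check are illustrative, not the perf code.  Read the small
      pipe-style header first; if it looks valid, process the file in pipe
      mode, otherwise rewind and parse the regular perf.data header.

        #include <stdint.h>
        #include <string.h>
        #include <unistd.h>

        struct pipe_file_header {
                uint64_t magic;   /* "PERFILE2" */
                uint64_t size;    /* pipe-mode header covers only itself */
        };

        static int looks_like_pipe_data(int fd)
        {
                struct pipe_file_header hdr;

                if (read(fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
                        return 0;

                /* Rewind so the caller can re-parse whichever header applies;
                 * this works here precisely because fd is a regular file. */
                if (lseek(fd, 0, SEEK_SET) < 0)
                        return 0;

                return memcmp(&hdr.magic, "PERFILE2", 8) == 0 &&
                       hdr.size == sizeof(hdr);
        }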
    • perf tools: Do not seek in pipe fd during tracing data processing · b491198d
      Committed by Jiri Olsa
      There's no need to set the 'fd' position in pipe mode; the file
      descriptor is already in the proper place.  Moreover, the lseek will
      fail on a pipe descriptor, which is why this has been working properly
      so far (a short demonstration follows this entry).
      
      I was tempted to remove the lseek calls completely, because it seems
      that the tracing data event was always synthesized only in pipe mode,
      so there's no need for 'file' mode handling.  But I guess there was a
      reason behind this, and there might (however unlikely) be a perf.data
      file whose processing we could break.
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Khuong <pvk@pvk.ca>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200507095024.2789147-3-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b491198d
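      A short standalone demonstration of the point above (not perf code):
      lseek(2) on a pipe descriptor always fails with ESPIPE, which is why
      the ignored seek never moved anything; the fix amounts to performing
      the seek only for regular files.

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
                int fds[2];

                if (pipe(fds))
                        return 1;

                if (lseek(fds[0], 0, SEEK_SET) < 0)
                        printf("lseek on a pipe fails: %s\n", strerror(errno)); /* ESPIPE */

                close(fds[0]);
                close(fds[1]);
                return 0;
        }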
  13. 06 May 2020, 2 commits
  14. 18 April 2020, 1 commit
    • perf header: Support CPU PMU capabilities · 6f91ea28
      Committed by Kan Liang
      To stitch the LBR call stack, the max LBR information is required, so
      the CPU PMU capabilities information has to be stored in the perf
      header.
      
      Add a new feature, HEADER_CPU_PMU_CAPS, for the CPU PMU capabilities.
      Retrieve all CPU PMU capabilities, not just the max LBR information.
      
      Add the variable max_branches to facilitate future usage (a sketch of
      reading the sysfs caps directory follows this entry).
      
      Committer testing:
      
        # ls -la /sys/devices/cpu/caps/
        total 0
        drwxr-xr-x. 2 root root    0 Apr 17 10:53 .
        drwxr-xr-x. 6 root root    0 Apr 17 07:02 ..
        -r--r--r--. 1 root root 4096 Apr 17 10:53 max_precise
        #
        # cat /sys/devices/cpu/caps/max_precise
        0
        # perf record sleep 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.033 MB perf.data (7 samples) ]
        #
        # perf report --header-only | egrep 'cpu(desc|.*capabilities)'
        # cpudesc : AMD Ryzen 5 3600X 6-Core Processor
        # cpu pmu capabilities: max_precise=0
        #
      
      And then on an Intel machine:
      
        $ ls -la /sys/devices/cpu/caps/
        total 0
        drwxr-xr-x. 2 root root    0 Apr 17 10:51 .
        drwxr-xr-x. 6 root root    0 Apr 17 10:04 ..
        -r--r--r--. 1 root root 4096 Apr 17 11:37 branches
        -r--r--r--. 1 root root 4096 Apr 17 10:51 max_precise
        -r--r--r--. 1 root root 4096 Apr 17 11:37 pmu_name
        $ cat /sys/devices/cpu/caps/max_precise
        3
        $ cat /sys/devices/cpu/caps/branches
        32
        $ cat /sys/devices/cpu/caps/pmu_name
        skylake
        $ perf record sleep 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.001 MB perf.data (8 samples) ]
        $ perf report --header-only | egrep 'cpu(desc|.*capabilities)'
        # cpudesc : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
        # cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
        $
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Pavel Gerasimov <pavel.gerasimov@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com>
      Link: http://lore.kernel.org/lkml/20200319202517.23423-3-kan.liang@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      6f91ea28
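      A minimal C sketch of collecting the capability strings shown above by
      walking /sys/devices/cpu/caps/.  This is illustrative only; the real
      HEADER_CPU_PMU_CAPS code stores the strings in the perf.data header
      and prints them in a stable order.

        #include <dirent.h>
        #include <stdio.h>

        int main(void)
        {
                const char *caps_dir = "/sys/devices/cpu/caps";
                char path[512], value[128];
                struct dirent *ent;
                DIR *dir = opendir(caps_dir);
                int first = 1;

                if (!dir)
                        return 1;

                printf("# cpu pmu capabilities: ");
                while ((ent = readdir(dir)) != NULL) {
                        FILE *f;

                        if (ent->d_name[0] == '.')
                                continue;
                        snprintf(path, sizeof(path), "%s/%s", caps_dir, ent->d_name);
                        f = fopen(path, "r");
                        if (!f)
                                continue;
                        if (fscanf(f, "%127s", value) == 1) {
                                printf("%s%s=%s", first ? "" : ", ", ent->d_name, value);
                                first = 0;
                        }
                        fclose(f);
                }
                printf("\n");
                closedir(dir);
                return 0;
        }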
  15. 10 March 2020, 1 commit
    • perf header: Add check for unexpected use of reserved members in event attr · 277ce1ef
      Committed by Kan Liang
      The perf.data file may be generated by a newer version of the perf
      tool, which supports new input bits in attr, e.g. a new bit for
      branch_sample_type.
      
      The perf.data file may later be parsed by an older version of the perf
      tool, which may parse it incorrectly.  There is no warning message for
      this case.
      
      The current perf header code never checks for unknown input bits in
      attr.
      
      When reading the event desc from the header, check the stored event
      attr.  The reserved bits, sample type, read format and branch sample
      type will be checked (a sketch follows this entry).
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Pavel Gerasimov <pavel.gerasimov@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com>
      Link: http://lkml.kernel.org/r/20200228163011.19358-4-kan.liang@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      277ce1ef
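      A sketch of the kind of check described above, using the UAPI *_MAX
      enumerators as the boundary of the bits this tool knows about.  This
      is an assumption about the approach, not the exact perf
      implementation.

        #include <linux/perf_event.h>
        #include <stdbool.h>
        #include <stdio.h>

        static bool attr_has_unknown_bits(const struct perf_event_attr *attr)
        {
                return (attr->sample_type        & ~(PERF_SAMPLE_MAX - 1))        ||
                       (attr->read_format        & ~(PERF_FORMAT_MAX - 1))        ||
                       (attr->branch_sample_type & ~(PERF_SAMPLE_BRANCH_MAX - 1));
        }

        static void check_attr(const struct perf_event_attr *attr)
        {
                if (attr_has_unknown_bits(attr))
                        fprintf(stderr,
                                "Warning: perf.data was written by a newer perf tool; "
                                "some attr bits are unknown to this version.\n");
        }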
  16. 15 January 2020, 1 commit
    • perf header: Use last modification time for timestamp · 8af19d66
      Committed by Michael Petlan
      Using .st_ctime clobbers the timestamp information in the perf report
      header whenever any operation is done on the file.  Even tar-ing and
      untar-ing the perf.data file (which preserves the file's last
      modification timestamp) doesn't prevent that:
      
          [Michael@Diego tmp]$ ls -l perf.data
      ->	-rw-------. 1 Michael Michael 169888 Dec  2 15:23 perf.data
      
      	[Michael@Diego tmp]$ perf report --header-only
      	# ========
      ->	# captured on    : Mon Dec  2 15:23:42 2019
      	 [...]
      
      	[Michael@Diego tmp]$ tar c perf.data | xz > perf.data.tar.xz
      	[Michael@Diego tmp]$ mkdir aaa
      	[Michael@Diego tmp]$ cd aaa
      	[Michael@Diego aaa]$ xzcat ../perf.data.tar.xz | tar x
      	[Michael@Diego aaa]$ ls -l -a
      	total 172
      	drwxrwxr-x. 2 Michael Michael     23 Jan 14 11:26 .
      	drwxrwxr-x. 6 Michael Michael   4096 Jan 14 11:26 ..
      ->	-rw-------. 1 Michael Michael 169888 Dec  2 15:23 perf.data
      
      	[Michael@Diego aaa]$ perf report --header-only
      	# ========
      ->	# captured on    : Tue Jan 14 11:26:16 2020
      	 [...]
      
      When using .st_mtime instead, the correct information is printed (a
      sketch follows this entry):
      
      	[Michael@Diego aaa]$ ~/acme/tools/perf/perf report --header-only
      	# ========
      ->	# captured on    : Mon Dec  2 15:23:42 2019
      	 [...]
      Signed-off-by: Michael Petlan <mpetlan@redhat.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      LPU-Reference: 20200114104236.31555-1-mpetlan@redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      8af19d66
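      A small illustration of the fix's effect (not the perf source): derive
      the "captured on" line from st_mtime, which tar/untar and
      metadata-only operations preserve, instead of st_ctime, which any
      inode change updates.

        #include <stdio.h>
        #include <sys/stat.h>
        #include <time.h>

        static void print_captured_on(const char *path)
        {
                struct stat st;
                char buf[64];

                if (stat(path, &st))
                        return;

                /* Before: localtime(&st.st_ctime), clobbered by chmod, tar -x, ... */
                strftime(buf, sizeof(buf), "%a %b %e %H:%M:%S %Y",
                         localtime(&st.st_mtime));
                printf("# captured on    : %s\n", buf);
        }

        int main(int argc, char **argv)
        {
                print_captured_on(argc > 1 ? argv[1] : "perf.data");
                return 0;
        }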
  17. 11 December 2019, 2 commits
  18. 15 October 2019, 1 commit
  19. 26 September 2019, 1 commit
  20. 25 September 2019, 5 commits
  21. 21 September 2019, 1 commit
    • perf tools: Fix segfault in cpu_cache_level__read() · 0216234c
      Committed by Jiri Olsa
      We release the wrong pointer on the error path in the
      cpu_cache_level__read() function, leading to a segfault:
      
        (gdb) r record ls
        Starting program: /root/perf/tools/perf/perf record ls
        ...
        [ perf record: Woken up 1 times to write data ]
        double free or corruption (out)
      
        Thread 1 "perf" received signal SIGABRT, Aborted.
        0x00007ffff7463798 in raise () from /lib64/power9/libc.so.6
        (gdb) bt
        #0  0x00007ffff7463798 in raise () from /lib64/power9/libc.so.6
        #1  0x00007ffff7443bac in abort () from /lib64/power9/libc.so.6
        #2  0x00007ffff74af8bc in __libc_message () from /lib64/power9/libc.so.6
        #3  0x00007ffff74b92b8 in malloc_printerr () from /lib64/power9/libc.so.6
        #4  0x00007ffff74bb874 in _int_free () from /lib64/power9/libc.so.6
        #5  0x0000000010271260 in __zfree (ptr=0x7fffffffa0b0) at ../../lib/zalloc..
        #6  0x0000000010139340 in cpu_cache_level__read (cache=0x7fffffffa090, cac..
        #7  0x0000000010143c90 in build_caches (cntp=0x7fffffffa118, size=<optimiz..
        ...
      
      Release the proper pointer instead (an illustration of the pattern
      follows this entry).
      
      Fixes: 720e98b5 ("perf tools: Add perf data cache feature")
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org # v4.6+
      Link: http://lore.kernel.org/lkml/20190912105235.10689-1-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      0216234c
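      A generic C illustration of this bug class, not the actual
      cpu_cache_level__read() code: an error path must release exactly the
      allocations the function itself made; freeing a pointer the caller
      also owns leads to the 'double free or corruption' abort seen above.

        #include <stdlib.h>
        #include <string.h>

        struct cache_level {
                char *type;
                char *map;
        };

        static int cache_level__read(struct cache_level *c, const char *type)
        {
                c->type = strdup(type);
                if (!c->type)
                        return -1;

                c->map = strdup("0-7");
                if (!c->map) {
                        /* Correct: free only what this function allocated.
                         * The buggy variant freed a different pointer that the
                         * caller also releases, causing the double free. */
                        free(c->type);
                        c->type = NULL;
                        return -1;
                }
                return 0;
        }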
  22. 20 September 2019, 3 commits
  23. 01 September 2019, 1 commit
  24. 30 August 2019, 1 commit