1. 24 Aug, 2016 1 commit
  2. 23 Jun, 2016 1 commit
  3. 22 Jun, 2016 2 commits
  4. 15 Jun, 2016 3 commits
  5. 20 May, 2016 1 commit
  6. 06 May, 2016 1 commit
  7. 23 Mar, 2016 1 commit
  8. 27 Feb, 2016 1 commit
  9. 08 Jan, 2016 1 commit
  10. 07 Jan, 2016 2 commits
  11. 13 Aug, 2015 1 commit
    • perf callchain: Allow disabling call graphs per event · f9db0d0f
      Authored by Kan Liang
      This patch introduces "call-graph=no" to disable callgraphs per event.
      
      Here is an example.
      
        perf record -e 'cpu/cpu-cycles,call-graph=fp/,cpu/instructions,call-graph=no/' sleep 1
      
        perf report --stdio
      
        # To display the perf.data header info, please use --header/--header-only options.
        #
        #
        # Total Lost Samples: 0
        #
        # Samples: 6  of event 'cpu/cpu-cycles,call-graph=fp/'
        # Event count (approx.): 774218
        #
        # Children      Self  Command  Shared Object     Symbol
        # ........  ........  .......  ................  ........................................
        #
          61.94%     0.00%  sleep    [kernel.vmlinux]  [k] entry_SYSCALL_64_fastpath
                    |
                    ---entry_SYSCALL_64_fastpath
                       |
                       |--97.30%-- __brk
                       |
                        --2.70%-- mmap64
                                  _dl_check_map_versions
                                  _dl_check_all_versions
      
          61.94%     0.00%  sleep    [kernel.vmlinux]  [k] perf_event_mmap
                    |
                    ---perf_event_mmap
                       |
                       |--97.30%-- do_brk
                       |          sys_brk
                       |          entry_SYSCALL_64_fastpath
                       |          __brk
                       |
                        --2.70%-- mmap_region
                                  do_mmap_pgoff
                                  vm_mmap_pgoff
                                  sys_mmap_pgoff
                                  sys_mmap
                                  entry_SYSCALL_64_fastpath
                                  mmap64
                                  _dl_check_map_versions
                                  _dl_check_all_versions
        ......
      
        # Samples: 6  of event 'cpu/instructions,call-graph=no/'
        # Event count (approx.): 359692
        #
        # Children      Self  Command  Shared Object     Symbol
        # ........  ........  .......  ................  .................................
        #
           89.03%     0.00%  sleep    [unknown]         [.] 0xffff6598ffff6598
           89.03%     0.00%  sleep    ld-2.17.so        [.] _dl_resolve_conflicts
           89.03%     0.00%  sleep    [kernel.vmlinux]  [k] page_fault
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Link: http://lkml.kernel.org/r/1439289050-40510-2-git-send-email-kan.liang@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f9db0d0f
  12. 09 May, 2015 1 commit
    • perf machine: Protect the machine->threads with a rwlock · b91fc39f
      Authored by Arnaldo Carvalho de Melo
      In addition to using refcounts for struct thread lifetime management,
      we need to protect machine->threads from concurrent access.
      
      That happens in 'perf top', where one thread processes events, inserting
      and deleting entries from that rb_tree, while another thread decays
      hist_entries, which ends up dropping references and ultimately deleting
      threads from the rb_tree and releasing their resources when no further
      hist_entry (or other data structure, like in 'perf sched') references
      them.
      
      So the rule is the same as for refcounts + protected trees in the
      kernel: get the tree lock, find the object, bump the refcount, drop the
      tree lock, return, use the object, then drop the refcount if no more use
      of it is needed, or keep it if storing it in some other data structure
      and drop it when releasing that data structure.
      
      I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
      "perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
      
      The addr_location__put() one is there because, as we return references
      to several data structures, we may end up adding more reference counts
      for those other data structures, and then we'll drop them at
      addr_location__put() time.
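
      The lookup pattern above can be illustrated with a small generic sketch
      (plain C with pthreads and a hypothetical 'object' type; this is not the
      actual perf code, where the container is an rb_tree and the refcount is
      an atomic_t):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdlib.h>

        /* Hypothetical refcounted object kept in a shared container. */
        struct object {
            int id;
            atomic_int refcnt;
            struct object *next;
        };

        static pthread_rwlock_t container_lock = PTHREAD_RWLOCK_INITIALIZER;
        static struct object *container;   /* a list standing in for the rb_tree */

        /* Lookup: take the tree lock, bump the refcount, drop the lock, return. */
        static struct object *object__findnew(int id)
        {
            struct object *obj;

            pthread_rwlock_wrlock(&container_lock);   /* write lock: we may insert */
            for (obj = container; obj; obj = obj->next)
                if (obj->id == id)
                    break;
            if (!obj) {
                obj = calloc(1, sizeof(*obj));
                if (!obj)
                    goto out;
                obj->id = id;
                atomic_init(&obj->refcnt, 1);          /* the container's reference */
                obj->next = container;
                container = obj;
            }
            atomic_fetch_add(&obj->refcnt, 1);         /* the caller's reference */
        out:
            pthread_rwlock_unlock(&container_lock);
            return obj;
        }

        /* Drop a reference; the memory goes away only when the last user is done. */
        static void object__put(struct object *obj)
        {
            if (obj && atomic_fetch_sub(&obj->refcnt, 1) == 1)
                free(obj);
        }

      Each object__findnew() is then paired with an object__put(), mirroring
      the machine__findnew_thread()/thread__put() pairing above; removal of
      the object from the container, which drops the container's own
      reference, is omitted for brevity.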
      Acked-by: David Ahern <dsahern@gmail.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b91fc39f
  13. 25 Mar, 2015 1 commit
  14. 21 Mar, 2015 1 commit
  15. 11 Mar, 2015 1 commit
  16. 22 Jan, 2015 7 commits
    • perf diff: Fix -o/--order option behavior · 566b5cfb
      Authored by Namhyung Kim
      The prior change fixes the default output ordering of each column, but
      it breaks the -o/--order option.  This patch prepends a new hpp fmt
      struct to the sort list but not to the output field list, so that it can
      affect ordering without adding a new output column.
      
      The new hpp fmt uses its own compare functions, which treat dummy
      entries (those without a baseline) a little differently: the delta field
      can be computed without a baseline, but the others (ratio and wdiff)
      cannot.
      
      The new output looks like this:
      
        $ perf diff -o 2 perf.data.{old,cur,new}
        ...
        # Baseline/0  Delta/1  Delta/2  Shared Object      Symbol
        # ..........  .......  .......  .................  ..........................................
              22.98%   +0.51%   +0.52%  libc-2.20.so       [.] _int_malloc
               5.70%   +0.28%   +0.30%  libc-2.20.so       [.] free
               4.38%   -0.21%   +0.25%  a.out              [.] main
               1.32%   -0.15%   +0.05%  a.out              [.] free@plt
                                +0.01%  [kernel.kallsyms]  [k] intel_pstate_timer_func
                                +0.01%  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
                                +0.01%  [kernel.kallsyms]  [k] timekeeping_update.constprop.8
                       +0.01%   +0.01%  [kernel.kallsyms]  [k] apic_timer_interrupt
               0.01%            -0.00%  [kernel.kallsyms]  [k] native_read_msr_safe
               0.01%   -0.01%   -0.01%  [kernel.kallsyms]  [k] native_write_msr_safe
               1.31%   +0.03%   -0.06%  a.out              [.] malloc@plt
              31.50%   -0.74%   -0.23%  libc-2.20.so       [.] _int_free
              32.75%   +0.28%   -0.83%  libc-2.20.so       [.] malloc
               0.01%                    [kernel.kallsyms]  [k] scheduler_tick
                       +0.01%           [kernel.kallsyms]  [k] read_tsc
                       +0.01%           [kernel.kallsyms]  [k] perf_adjust_freq_unthr_context.part.82
      
      In the above example, the output is sorted by the 'Delta/2' column
      first, then by 'Baseline/0' and finally by 'Delta/1'.
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1420677949-6719-8-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      566b5cfb
    • perf diff: Fix output ordering to honor next column · 56495a8a
      Authored by Namhyung Kim
      When perf diff prints its output, it sorts the entries by the baseline
      field by default, but entries that have no baseline are not sorted
      properly.  This patch makes them be sorted by the values of the next
      column; a generic sketch of that fallback comparison follows the
      "After" example below.
      
      Before:
      
        # Baseline/0  Delta/1  Delta/2  Shared Object      Symbol
        # ..........  .......  .......  .................  ..........................................
        #
              32.75%   +0.28%   -0.83%  libc-2.20.so       [.] malloc
              31.50%   -0.74%   -0.23%  libc-2.20.so       [.] _int_free
              22.98%   +0.51%   +0.52%  libc-2.20.so       [.] _int_malloc
               5.70%   +0.28%   +0.30%  libc-2.20.so       [.] free
               4.38%   -0.21%   +0.25%  a.out              [.] main
               1.32%   -0.15%   +0.05%  a.out              [.] free@plt
               1.31%   +0.03%   -0.06%  a.out              [.] malloc@plt
               0.01%   -0.01%   -0.01%  [kernel.kallsyms]  [k] native_write_msr_safe
               0.01%                    [kernel.kallsyms]  [k] scheduler_tick
               0.01%            -0.00%  [kernel.kallsyms]  [k] native_read_msr_safe
                                +0.01%  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
                       +0.01%   +0.01%  [kernel.kallsyms]  [k] apic_timer_interrupt
                                +0.01%  [kernel.kallsyms]  [k] intel_pstate_timer_func
                       +0.01%           [kernel.kallsyms]  [k] perf_adjust_freq_unthr_context.part.82
                       +0.01%           [kernel.kallsyms]  [k] read_tsc
                                +0.01%  [kernel.kallsyms]  [k] timekeeping_update.constprop.8
      
      After:
      
        # Baseline/0  Delta/1  Delta/2  Shared Object      Symbol
        # ..........  .......  .......  .................  ..........................................
        #
              32.75%   +0.28%   -0.83%  libc-2.20.so       [.] malloc
              31.50%   -0.74%   -0.23%  libc-2.20.so       [.] _int_free
              22.98%   +0.51%   +0.52%  libc-2.20.so       [.] _int_malloc
               5.70%   +0.28%   +0.30%  libc-2.20.so       [.] free
               4.38%   -0.21%   +0.25%  a.out              [.] main
               1.32%   -0.15%   +0.05%  a.out              [.] free@plt
               1.31%   +0.03%   -0.06%  a.out              [.] malloc@plt
               0.01%   -0.01%   -0.01%  [kernel.kallsyms]  [k] native_write_msr_safe
               0.01%                    [kernel.kallsyms]  [k] scheduler_tick
               0.01%            -0.00%  [kernel.kallsyms]  [k] native_read_msr_safe
                       +0.01%   +0.01%  [kernel.kallsyms]  [k] apic_timer_interrupt
                       +0.01%           [kernel.kallsyms]  [k] read_tsc
                       +0.01%           [kernel.kallsyms]  [k] perf_adjust_freq_unthr_context.part.82
                                +0.01%  [kernel.kallsyms]  [k] intel_pstate_timer_func
                                +0.01%  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
                                +0.01%  [kernel.kallsyms]  [k] timekeeping_update.constprop.8
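
      The fallback can be pictured with a generic comparator sketch
      (hypothetical types, not the perf hpp code): an empty cell in the
      current column defers the decision to the next column, and empty cells
      sort after filled ones.

        #include <math.h>
        #include <stdlib.h>

        #define NR_COLS 3                 /* Baseline/0, Delta/1, Delta/2 */

        struct diff_row {
            double col[NR_COLS];          /* NAN marks an empty cell */
        };

        /* qsort() comparator: first column decides, empty cells fall through. */
        static int cmp_fallback(const void *pa, const void *pb)
        {
            const struct diff_row *a = pa, *b = pb;
            int i;

            for (i = 0; i < NR_COLS; i++) {
                double va = a->col[i], vb = b->col[i];

                if (isnan(va) && isnan(vb))
                    continue;                     /* both empty: try the next column */
                if (isnan(va) || isnan(vb))
                    return isnan(va) ? 1 : -1;    /* empty cells sort last */
                if (va != vb)
                    return va > vb ? -1 : 1;      /* descending, as perf diff prints */
            }
            return 0;
        }

      Sorting an array of such rows with qsort(rows, n, sizeof(*rows),
      cmp_fallback) reproduces the ordering shown in "After": rows without a
      baseline are ordered by Delta/1, and rows with neither by Delta/2.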
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1420677949-6719-7-git-send-email-namhyung@kernel.org
      [ Fixed up hist_entry__cmp_ method signatures, fallout from making previous cset buildable ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      56495a8a
    • perf tools: Pass struct perf_hpp_fmt to its callbacks · 87bbdf76
      Authored by Namhyung Kim
      Currently the ->cmp, ->collapse and ->sort callbacks don't receive the
      corresponding fmt, but it will be needed by upcoming changes in the
      perf diff command.
      Suggested-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1420677949-6719-6-git-send-email-namhyung@kernel.org
      [ fix build by passing perf_hpp_fmt pointer to hist_entry__cmp_ methods ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      87bbdf76
    • perf diff: Introduce fmt_to_data_file() helper · ff21cef6
      Authored by Namhyung Kim
      The fmt_to_data_file() helper retrieves the struct data__file from a
      perf_hpp_fmt that is embedded in a diff_hpp_fmt.  It'll be used by the
      sort callback functions later.
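
      The pattern behind such a helper, recovering the embedding structure
      from a pointer to one of its members, can be sketched generically with
      container_of (the struct layouts here are illustrative, not the actual
      perf definitions):

        #include <stddef.h>

        #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

        struct hpp_fmt {                  /* stands in for struct perf_hpp_fmt */
            const char *name;
        };

        struct data_file {                /* stands in for struct data__file */
            const char *path;
        };

        struct diff_hpp_fmt {             /* the fmt is embedded in a larger struct */
            struct hpp_fmt fmt;
            struct data_file *file;
        };

        /* Given only the embedded fmt, recover the data file it belongs to. */
        static struct data_file *fmt_to_file(struct hpp_fmt *fmt)
        {
            struct diff_hpp_fmt *dfmt = container_of(fmt, struct diff_hpp_fmt, fmt);

            return dfmt->file;
        }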
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1420677949-6719-5-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      ff21cef6
    • perf diff: Print diff result more precisely · ec3d07cb
      Authored by Namhyung Kim
      The current perf diff output is somewhat confusing since it sometimes
      hides small results and sometimes shows no result at all.  So do not
      hide small results (less than 0.01%) and print "N/A" if the baseline is
      not recorded (for ratio and wdiff only).  A blank means the baseline is
      available but its pairs are not; a generic sketch of this follows the
      "After" example below.
      
      Before:
      
        # Baseline    Delta  Shared Object      Symbol
        # ........  .......  .................  .........................
        #
             ...
             0.01%   -0.01%  [kernel.kallsyms]  [k] native_write_msr_safe
             0.01%           [kernel.kallsyms]  [k] scheduler_tick
             0.01%           [kernel.kallsyms]  [k] native_read_msr_safe
             0.00%           [kernel.kallsyms]  [k] __rcu_read_unlock
                             [kernel.kallsyms]  [k] _raw_spin_lock
                     +0.01%  [kernel.kallsyms]  [k] apic_timer_interrupt
                             [kernel.kallsyms]  [k] read_tsc
      
      After:
      
        # Baseline    Delta  Shared Object      Symbol
        # ........  .......  .................  .........................
        #
             ...
             0.01%   -0.01%  [kernel.kallsyms]  [k] native_write_msr_safe
             0.01%           [kernel.kallsyms]  [k] scheduler_tick
             0.01%           [kernel.kallsyms]  [k] native_read_msr_safe
             0.00%           [kernel.kallsyms]  [k] __rcu_read_unlock
                     +0.01%  [kernel.kallsyms]  [k] _raw_spin_lock
                     +0.01%  [kernel.kallsyms]  [k] apic_timer_interrupt
                     +0.01%  [kernel.kallsyms]  [k] read_tsc
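
      The three cases can be pictured with a small hypothetical formatter (not
      the perf hpp code): the value is printed whenever it can be computed,
      "N/A" when there is no baseline to compute it from, and a blank when the
      baseline exists but the other data file has no matching entry.

        #include <stdio.h>

        /* Print one ratio cell: a value, "N/A", or a blank, 8 characters wide. */
        static void print_ratio_cell(int has_baseline, int has_pair, double ratio)
        {
            if (!has_baseline)
                printf("%8s", "N/A");       /* nothing to compute the ratio from */
            else if (!has_pair)
                printf("%8s", "");          /* baseline exists, its pair does not */
            else
                printf("%8.6f", ratio);     /* always shown, even if very small */
        }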
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1419656793-32756-3-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      ec3d07cb
    • perf diff: Get rid of hists__compute_resort() · 38259a17
      Authored by Namhyung Kim
      The hists__compute_resort() function sorts the output based on the given
      field/criteria.  This was done without the sort list, but now that we
      have added the field to the sort list, we can do it with the normal
      hists__output_resort() using the ->sort callback.
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1419656793-32756-2-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      38259a17
    • perf hists: Rename hist_entry__free to __delete · 6733d1bf
      Authored by Arnaldo Carvalho de Melo
      No logic changes, just to be consistent.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-f7n5y0mvk6gew5185h6fg316@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      6733d1bf
  17. 03 Jan, 2015 1 commit
    • perf diff: Fix to sort by baseline field by default · e7024fc3
      Authored by Namhyung Kim
      Currently perf diff doesn't add the baseline and delta (or other
      compute) fields to the sort list, so the output ends up sorted by other
      fields, like the alphabetical order of DSO or symbol, as in the example
      below.

      Fix it by adding hpp formats for the fields and providing default
      compare functions.
      
      Before:
      
        $ perf diff
        # Event 'cycles'
        #
        # Baseline    Delta  Shared Object       Symbol
        # ........  .......  ..................  ...............................
        #
                             [bridge]            [k] ip_sabotage_in
                             [btrfs]             [k] __etree_search.constprop.47
             0.01%           [btrfs]             [k] btrfs_file_mmap
             0.01%   -0.01%  [btrfs]             [k] btrfs_getattr
                             [e1000e]            [k] e1000_watchdog
             0.00%           [kernel.vmlinux]    [k] PageHuge
             0.00%           [kernel.vmlinux]    [k] __acct_update_integrals
             0.00%           [kernel.vmlinux]    [k] __activate_page
                             [kernel.vmlinux]    [k] __alloc_fd
             0.02%   +0.02%  [kernel.vmlinux]    [k] __alloc_pages_nodemask
             ...
      
      After:
      
        # Baseline    Delta  Shared Object       Symbol
        # ........  .......  ..................  ................................
        #
            24.73%   -4.62%  perf                [.] append_chain_children
             7.96%   -1.29%  perf                [.] dso__find_symbol
             6.97%   -2.07%  libc-2.20.so        [.] vfprintf
             4.61%   +0.88%  libc-2.20.so        [.] __fprintf_chk
             4.41%   +2.43%  perf                [.] sort__comm_cmp
             4.10%   -0.16%  perf                [.] comm__str
             4.03%   -0.93%  perf                [.] machine__findnew_thread_time
             3.82%   +3.09%  perf                [.] __hists__add_entry
             2.95%   -0.18%  perf                [.] sort__dso_cmp
             ...
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1419656793-32756-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      e7024fc3
  18. 23 Dec, 2014 1 commit
  19. 19 Nov, 2014 1 commit
  20. 23 Oct, 2014 1 commit
  21. 10 Oct, 2014 1 commit
    • perf evsel: Add hists helper · 4ea062ed
      Authored by Arnaldo Carvalho de Melo
      Not all tools need a hists instance per perf_evsel, so let's pave the
      way to remove evsel->hists while leaving a way to access the hists from
      a specially allocated evsel, one that comes with space at the end where
      the hists lives.
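
      The allocation trick can be sketched with stand-in types (not the real
      perf structures): the evsel and its hists come from a single allocation,
      and a helper recovers the hists from the evsel pointer.

        #include <stdlib.h>

        struct evsel { int idx; };                   /* stands in for perf_evsel */
        struct hists { unsigned long nr_entries; };  /* stands in for hists */

        /* One allocation: the hists lives in the space right after the evsel. */
        struct hists_evsel {
            struct evsel evsel;
            struct hists hists;
        };

        static struct evsel *evsel__new_with_hists(void)
        {
            struct hists_evsel *he = calloc(1, sizeof(*he));

            return he ? &he->evsel : NULL;
        }

        /* Only valid for evsels allocated as part of a hists_evsel. */
        static struct hists *evsel__hists(struct evsel *evsel)
        {
            return &((struct hists_evsel *)evsel)->hists;
        }

      Tools that never need a hists can then keep allocating a plain evsel and
      pay nothing extra for it.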
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jean Pihet <jean.pihet@linaro.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-qlktkhe31w4mgtbd84035sr2@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      4ea062ed
  22. 26 Sep, 2014 1 commit
  23. 14 Aug, 2014 1 commit
    • perf tools: Check recorded kernel version when finding vmlinux · 0a7e6d1b
      Authored by Namhyung Kim
      Currently vmlinux_path__init() only tries to find the vmlinux file in
      the current directory, /boot and some canonical directories, using the
      version number of the running kernel.  This can be a problem when
      reporting old data that was recorded on a kernel version other than the
      one currently running.

      We can use the --symfs option for this, but it's annoying for the user
      to do it every time.  As we already have the info in the perf.data
      file, the search can be changed to use it automatically.
      
      Before:
      
        $ perf report
        ...
        # Samples: 4K of event 'cpu-clock'
        # Event count (approx.): 1067250000
        #
        # Overhead  Command     Shared Object      Symbol
        # ........  ..........  .................  ..............................
            71.87%     swapper  [kernel.kallsyms]  [k] recover_probed_instruction
      
      After:
      
        # Overhead  Command     Shared Object      Symbol
        # ........  ..........  .................  ....................
            71.87%     swapper  [kernel.kallsyms]  [k] native_safe_halt
      
      This requires changing the signature of symbol__init() to receive a
      struct perf_session_env *.
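
      The idea can be sketched as building the candidate paths from a release
      string read from the perf.data header rather than from uname(2); the
      directory templates below are illustrative, not the exact list perf
      uses.

        #include <stdio.h>

        /* Fill 'paths' with vmlinux candidates for the recorded kernel release. */
        static int vmlinux_candidates(const char *release, char paths[][256], int max)
        {
            static const char *const tmpl[] = {
                "/boot/vmlinux-%s",
                "/usr/lib/debug/boot/vmlinux-%s",
                "/lib/modules/%s/build/vmlinux",
                "/usr/lib/debug/lib/modules/%s/vmlinux",
            };
            int i, n = 0;

            for (i = 0; i < (int)(sizeof(tmpl) / sizeof(tmpl[0])) && n < max; i++)
                snprintf(paths[n++], sizeof(paths[0]), tmpl[i], release);

            return n;
        }

      With the release string taken from the recorded header instead of the
      running kernel's, the same report works for data captured on another
      kernel without having to pass --symfs every time.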
      Reported-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1407825645-24586-14-git-send-email-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      0a7e6d1b
  24. 12 Aug, 2014 1 commit
  25. 01 Jun, 2014 1 commit
  26. 21 May, 2014 2 commits
  27. 24 Apr, 2014 3 commits