1. 18 Dec, 2018 1 commit
  2. 09 Aug, 2018 6 commits
  3. 25 Jul, 2018 1 commit
  4. 08 Jun, 2018 1 commit
  5. 06 Jun, 2018 1 commit
    • A
      perf hists: Check if a hist_entry has callchains before using them · fabd37b8
      Authored by Arnaldo Carvalho de Melo

      So far, using 'perf record -g' sets symbol_conf.use_callchain to
      'true' and the logic assumes that all events have callchains
      enabled. But ever since we added the possibility of setting up
      callchains for some events (e.g.: -e cycles/call-graph=dwarf/)
      while not for others, looking at that symbol_conf.use_callchain
      global boolean limits usage scenarios; we had better look at each
      event's attributes.
      
      On the road to that we need to check whether a hist_entry has
      callchains, that is, to go from hist_entry->hists to the evsel that
      contains it, and then look at evsel->sample_type for
      PERF_SAMPLE_CALLCHAIN.
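
      The walk described above can be sketched as follows. The struct
      layouts are trimmed stand-ins for the real perf internals (in the
      actual tree, hists is embedded in evsel and recovered via
      container_of(), so the explicit back-pointer here is an assumption
      for illustration); the PERF_SAMPLE_CALLCHAIN value is the real
      perf_event ABI bit:

```c
#include <stdbool.h>
#include <stdint.h>

#define PERF_SAMPLE_CALLCHAIN (1U << 5) /* value from the perf_event ABI */

/* Trimmed stand-ins for the real perf structs, kept only to show the walk. */
struct evsel { uint64_t sample_type; };
struct hists { struct evsel *evsel; };
struct hist_entry { struct hists *hists; };

/* Does the event that produced this hist_entry record callchains? */
static bool hist_entry__has_callchains(struct hist_entry *he)
{
	struct evsel *evsel = he->hists->evsel;

	return (evsel->sample_type & PERF_SAMPLE_CALLCHAIN) != 0;
}
```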
      
      The next step is to add a symbol_conf.ignore_callchains global, to use
      in the places where what we really want to know is if callchains should
      be ignored, even if present.
      
      Then -g will mean just to select a callchain mode to be applied to all
      events not explicitly setting some other callchain mode, i.e. a default
      callchain mode, and --no-call-graph will set
      symbol_conf.ignore_callchains with that clear intention.
      
      That too will at some point become a per-evsel thing, that tools can set
      for all or just a few of their evsels.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-0sas5cm4dsw2obn75g7ruz69@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      fabd37b8
  6. 04 Jun, 2018 4 commits
  7. 19 May, 2018 1 commit
    • J
      perf annotate: Create hotkey 'c' to show min/max cycles · 3e71fc03
      Authored by Jin Yao
      In the 'perf annotate' view, a new hotkey 'c' is created for showing the
      min/max cycles.
      
      For example, when pressing 'c', the annotate view is:
      
        Percent│ IPC     Cycle(min/max)
               │
               │
               │                             Disassembly of section .text:
               │
               │                             000000000003aab0 <random@@GLIBC_2.2.5>:
          8.22 │3.92                           sub    $0x18,%rsp
               │3.92                           mov    $0x1,%esi
               │3.92                           xor    %eax,%eax
               │3.92                           cmpl   $0x0,argp_program_version_hook@@G
               │3.92             1(2/1)      ↓ je     20
               │                               lock   cmpxchg %esi,__abort_msg@@GLIBC_P
               │                             ↓ jne    29
               │                             ↓ jmp    43
               │1.10                     20:   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+
          8.93 │1.10             1(5/1)      ↓ je     43
      
      When pressing 'c' again, the annotate view switches back:
      
        Percent│ IPC Cycle
               │
               │
               │                Disassembly of section .text:
               │
               │                000000000003aab0 <random@@GLIBC_2.2.5>:
          8.22 │3.92              sub    $0x18,%rsp
               │3.92              mov    $0x1,%esi
               │3.92              xor    %eax,%eax
               │3.92              cmpl   $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x
               │3.92     1      ↓ je     20
               │                  lock   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
               │                ↓ jne    29
               │                ↓ jmp    43
               │1.10        20:   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
          8.93 │1.10     1      ↓ je     43
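      The toggled cycles column can be sketched as a small formatter.
      The struct and function names here are assumptions, not the actual
      perf annotate code (which renders IPC and cycles together); the
      sketch only shows the two output shapes the 'c' hotkey flips
      between:

```c
#include <stdio.h>
#include <stdbool.h>

/* Per-instruction cycle statistics (simplified stand-in). */
struct cycles_stat {
	unsigned long avg, min, max;
};

/* Format the cycles column; the 'c' hotkey flips show_minmax. */
static int cycles__scnprintf(char *buf, size_t size,
			     const struct cycles_stat *cs, bool show_minmax)
{
	if (show_minmax)
		return snprintf(buf, size, "%lu(%lu/%lu)",
				cs->avg, cs->min, cs->max);
	return snprintf(buf, size, "%lu", cs->avg);
}
```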
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1526569118-14217-3-git-send-email-yao.jin@linux.intel.com
      [ Rename all maxmin to minmax ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      3e71fc03
  8. 27 Apr, 2018 2 commits
    • A
      perf symbols: Unify symbol maps · 3183f8ca
      Authored by Arnaldo Carvalho de Melo
      Remove the split of symbol tables for data (MAP__VARIABLE) and for
      functions (MAP__FUNCTION); it's unneeded, and there were various
      places doing two lookups to find a symbol, so simplify this.
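
      The simplification can be pictured as collapsing the per-type
      lookup into a single one. The linear scan below is an illustrative
      stand-in for perf's rbtree-based symbol search; only the
      one-table-instead-of-two shape is the point:

```c
#include <stddef.h>

struct symbol { unsigned long start, end; };

/*
 * With the MAP__FUNCTION/MAP__VARIABLE split gone there is a single
 * symbol table per DSO, so callers do one lookup instead of probing
 * both typed tables for the same address.
 */
static struct symbol *symbols__find(struct symbol *syms, size_t nr,
				    unsigned long addr)
{
	for (size_t i = 0; i < nr; i++) {
		if (addr >= syms[i].start && addr < syms[i].end)
			return &syms[i];
	}
	return NULL;
}
```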
      
      We still will consider only the symbols that matched the filters in
      place, i.e. see the (elf_(sec,sym)|symbol_type)__filter() routines in
      the patch, just so that we consider only the same symbols as before,
      to reduce the possibility of regressions.
      
      All the tests on 50-something build environments, in various
      versions of lots of distros and cross-build environments, were
      performed without build regressions; as usual with all pull
      requests, the other tests were also performed: 'perf test' and
      'make -C tools/perf build-test'.
      
      Also this was done at a fine granularity so that regressions can be
      bisected more easily.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-hiq0fy2rsleupnqqwuojo1ne@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      3183f8ca
    • A
      perf ui stdio: Use map_groups__fprintf() · b0867f0c
      Authored by Arnaldo Carvalho de Melo
      Instead of the variant that allows asking for just a specific map_type,
      because that map_type split will go away.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-eya0jvmu26qvro0nxxd49xia@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b0867f0c
  9. 19 Apr, 2018 1 commit
  10. 12 Apr, 2018 1 commit
    • A
      perf annotate browser: Allow showing offsets in more than just jump targets · 51f39603
      Authored by Arnaldo Carvalho de Melo
      Jesper wanted to see offsets at callq sites when doing some
      performance investigation related to retpolines, so save him some
      time by providing an 'O' hotkey to allow showing offsets from
      function start at call instructions or in all instructions; just go
      on pressing 'O' till the offsets you need appear.
      
      Example:
      
      Starts with:
      
      Samples: 64  of event 'cycles:ppp', 100000 Hz, Event count (approx.): 318963
      ixgbe_read_reg  /proc/kcore
      Percent│    ↑ je     2a
             │   ┌──cmp    $0xffffffff,%r13d
             │   ├──je     d0
             │   │  mov    $0x53e3,%edi
             │   │→ callq  __const_udelay
             │   │  sub    $0x1,%r15d
             │   │↑ jne    83
             │   │  mov    0x8(%rbp),%rax
             │   │  testb  $0x20,0x1799(%rax)
             │   │↑ je     2a
             │   │  mov    0x200(%rax),%rdi
             │   │  mov    %r13d,%edx
             │   │  mov    $0xffffffffc02595d8,%rsi
             │   │→ callq  netdev_warn
             │   │↑ jmpq   2a
             │d0:└─→mov    0x8(%rbp),%rsi
             │      mov    %rbp,%rdi
             │      mov    %eax,0x4(%rsp)
             │    → callq  ixgbe_remove_adapter.isra.77
             │      mov    0x4(%rsp),%eax
      Press 'h' for help on key bindings
      ============================================================================
      
      Press 'O':
      
      Samples: 64  of event 'cycles:ppp', 100000 Hz, Event count (approx.): 318963
      ixgbe_read_reg  /proc/kcore
      Percent│    ↑ je     2a
             │   ┌──cmp    $0xffffffff,%r13d
             │   ├──je     d0
             │   │  mov    $0x53e3,%edi
             │99:│→ callq  __const_udelay
             │   │  sub    $0x1,%r15d
             │   │↑ jne    83
             │   │  mov    0x8(%rbp),%rax
             │   │  testb  $0x20,0x1799(%rax)
             │   │↑ je     2a
             │   │  mov    0x200(%rax),%rdi
             │   │  mov    %r13d,%edx
             │   │  mov    $0xffffffffc02595d8,%rsi
             │c6:│→ callq  netdev_warn
             │   │↑ jmpq   2a
             │d0:└─→mov    0x8(%rbp),%rsi
             │      mov    %rbp,%rdi
             │      mov    %eax,0x4(%rsp)
             │db: → callq  ixgbe_remove_adapter.isra.77
             │      mov    0x4(%rsp),%eax
      Press 'h' for help on key bindings
      ============================================================================
      
      Press 'O' again:
      
      Samples: 64  of event 'cycles:ppp', 100000 Hz, Event count (approx.): 318963
      ixgbe_read_reg  /proc/kcore
      Percent│8c: ↑ je     2a
             │8e:┌──cmp    $0xffffffff,%r13d
             │92:├──je     d0
             │94:│  mov    $0x53e3,%edi
             │99:│→ callq  __const_udelay
             │9e:│  sub    $0x1,%r15d
             │a2:│↑ jne    83
             │a4:│  mov    0x8(%rbp),%rax
             │a8:│  testb  $0x20,0x1799(%rax)
             │af:│↑ je     2a
             │b5:│  mov    0x200(%rax),%rdi
             │bc:│  mov    %r13d,%edx
             │bf:│  mov    $0xffffffffc02595d8,%rsi
             │c6:│→ callq  netdev_warn
             │cb:│↑ jmpq   2a
             │d0:└─→mov    0x8(%rbp),%rsi
             │d4:   mov    %rbp,%rdi
             │d7:   mov    %eax,0x4(%rsp)
             │db: → callq  ixgbe_remove_adapter.isra.77
             │e0:   mov    0x4(%rsp),%eax
      Press 'h' for help on key bindings
      ============================================================================
      
      Press 'O' again and it will show just jump target offsets.
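      So 'O' rotates through three display states and wraps around. A
      minimal sketch of that rotation (the enum and function names are
      assumptions, not the actual browser code):

```c
/*
 * Offset display level, advanced by the 'O' hotkey:
 *   0 = offsets at jump targets only,
 *   1 = also at call instructions,
 *   2 = at every instruction.
 */
enum offset_level {
	OFFSET_JUMP_TARGETS,
	OFFSET_CALL_SITES,
	OFFSET_ALL_INSNS,
	OFFSET_NR_LEVELS,	/* number of states, not a state itself */
};

/* Each press of 'O' moves to the next level, wrapping to the first. */
static enum offset_level offset_level__toggle(enum offset_level cur)
{
	return (cur + 1) % OFFSET_NR_LEVELS;
}
```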
      Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Liška <mliska@suse.cz>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-upp6pfdetwlsx18ec2uf1od4@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      51f39603
  11. 06 Apr, 2018 2 commits
    • A
      perf hists browser: Remove leftover from row returned from refresh · 94e87a8b
      Authored by Arnaldo Carvalho de Melo
      The per-browser screen refresh routine (ui_browser->refresh()) should
      return the first row that should be cleaned after the rows just printed,
      in case not all rows available on the screen get filled.

      When moving the extra title lines logic from the hists browser to the
      generic ui_browser class, one piece of that logic remained in the hists
      browser. Then, when going back from the annotate browser to the hists
      browser in a case where fewer lines were displayed in the hists browser,
      for instance when filtering the entries per substring, one line of the
      annotate browser would remain on the screen; fix that.
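
      The refresh() contract can be modelled as below; the names and the
      two-argument shape are assumptions chosen to show the arithmetic,
      not the real ui_browser interface:

```c
/*
 * The generic ui_browser calls the per-browser refresh() and then
 * blanks every row from the returned index down to the bottom of the
 * screen.  refresh() must therefore return the index of the first row
 * still holding stale content: the title area plus however many entry
 * rows were just painted.  If that index is off by one (the bug fixed
 * in this cset), one stale row from a previous, taller view survives.
 */
static unsigned int hist_browser__refresh(unsigned int extra_title_lines,
					  unsigned int nr_rows_printed)
{
	return extra_title_lines + nr_rows_printed;
}
```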
      
      Example of the screen artifact:
      
      ================================================================================
      Samples: 73K of event 'cycles:ppp', 4000 Hz, Event count (approx.): 45172901394
      Overhead  Shared O  Symbol
         0.30%  [kernel]  [k] __indirect_thunk_start
         0.09%  [kernel]  [k] __x86_indirect_thunk_r10
             │      lfence
      ================================================================================
      
      Here from 'perf top' the view was zoomed with '/thunk' to functions
      having that substring, then the first was annotated, and from the
      annotate browser ESC was pressed; the first lines were overwritten,
      but the 'lfence' line remained due to the off-by-one bug fixed in
      this cset.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Fixes: ef9ff601 ("perf ui browser: Move the extra title lines from the hists browser")
      Link: https://lkml.kernel.org/n/tip-odryfso74eaarm0z3e4v9owx@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      94e87a8b
    • A
      perf hists browser: Show extra_title_lines in the 'D' debug hotkey · fdae6400
      Authored by Arnaldo Carvalho de Melo
      To help in fixing problems in the browser.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-uj0n76yqh5bf98i0edckd47t@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      fdae6400
  12. 05 Apr, 2018 2 commits
  13. 04 Apr, 2018 1 commit
    • A
      perf annotate browser: Show extra title line with event information · 6920e285
      Authored by Arnaldo Carvalho de Melo
      So at the top we'll have two lines, like this, from 'perf report':
      
        # perf report --group --ignore-vmlinux
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Percent              │      nop
                           │      push   %rbx
        0.00  14.29   0.00 │      pushfq
        9.09   0.00   0.00 │      pop    %rax
        9.09   0.00  20.00 │      nop
                           │      mov    %rax,%rbx
                           │      cli
        4.55   7.14   0.00 │      nop
                           │      xor    %eax,%eax
                           │      mov    $0x1,%edx
                           │      lock   cmpxchg %edx,(%rdi)
       77.27  78.57  70.00 │      test   %eax,%eax
                           │    ↓ jne    2b
                           │      mov    %rbx,%rax
        0.00   0.00  10.00 │      pop    %rbx
                           │    ← retq
                           │2b:   mov    %eax,%esi
                           │    → callq  queued_spin_lock_slowpath
                           │      mov    %rbx,%rax
                           │      pop    %rbx
      Press 'h' for help on│key bindings
      =====================================================================================================
      
       9.09 + 9.09 + 4.55 + 77.27 = 100
      14.29 + 7.14 + 78.57 = 100
      20 + 70 + 10 = 100
      
      We can do the math by using 't' to toggle from 'percent' to 'period':
      
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Period                              │      nop
                                          │      push   %rbx
                0       79273           0 │      pushfq
           190455           0           0 │      pop    %rax
           198038           0        3045 │      nop
                                          │      mov    %rax,%rbx
                                          │      cli
           217233       32562           0 │      nop
                                          │      xor    %eax,%eax
                                          │      mov    $0x1,%edx
                                          │      lock   cmpxchg %edx,(%rdi)
          3421649      979174       28273 │      test   %eax,%eax
                                          │    ↓ jne    2b
                                          │      mov    %rbx,%rax
                0           0        5193 │      pop    %rbx
                                          │    ← retq
                                          │2b:   mov    %eax,%esi
                                          │    → callq  queued_spin_lock_slowpath
                                          │      mov    %rbx,%rax
                                          │      pop    %rbx
      Press 'h' for help on│key bindings
      =====================================================================================================
      
      79273 + 190455 + 198038 + 3045 + 217233 + 32562 + 3421649 + 979174 + 28273 + 5193 = 5154895
      
      Or number of samples:
      
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Samples              │      nop
                           │      push   %rbx
           0      2      0 │      pushfq
           2      0      0 │      pop    %rax
           2      0      2 │      nop
                           │      mov    %rax,%rbx
                           │      cli
           1      1      0 │      nop
                           │      xor    %eax,%eax
                           │      mov    $0x1,%edx
                           │      lock   cmpxchg %edx,(%rdi)
          17     11      7 │      test   %eax,%eax
                           │    ↓ jne    2b
                           │      mov    %rbx,%rax
           0      0      1 │      pop    %rbx
                           │    ← retq
                           │2b:   mov    %eax,%esi
                           │    → callq  queued_spin_lock_slowpath
                           │      mov    %rbx,%rax
                           │      pop    %rbx
      Press 'h' for help on key bindings
      =====================================================================================================
      
      2 + 2 + 2 + 2 + 1 + 1 + 17 + 11 + 7 + 1 = 46
      Suggested-by: Martin Liška <mliska@suse.cz>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196935
      Link: https://lkml.kernel.org/n/tip-ezccyxld50wtwyt66np6aomo@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      6920e285
  14. 03 Apr, 2018 4 commits
  15. 24 Mar, 2018 2 commits
    • A
      perf annotate: Support jumping from one function to another · e4cc91b8
      Authored by Arnaldo Carvalho de Melo
      For instance:
      
        entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
          5.50 │     → callq  do_syscall_64
         14.56 │       mov    0x58(%rsp),%rcx
          7.44 │       mov    0x80(%rsp),%r11
          0.32 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       shl    $0x10,%rcx
          0.32 │       sar    $0x10,%rcx
          3.24 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          2.27 │       cmpq   $0x33,0x88(%rsp)
          1.29 │     → jne    swapgs_restore_regs_and_return_to_usermode
               │       mov    0x30(%rsp),%r11
          8.74 │       cmp    %r11,0x90(%rsp)
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       test   $0x10100,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       cmpq   $0x2b,0xa0(%rsp)
          0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
      
      It'll behave just like a "call" instruction, i.e. press enter or right
      arrow over one such line and the browser will navigate to the annotated
      disassembly of that function, which when exited, via left arrow or esc,
      will come back to the calling function.
      
      Now to support jump to an offset on a different function...
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-78o508mqvr8inhj63ddtw7mo@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      e4cc91b8
    • A
      perf annotate: Add "_local" to jump/offset validation routines · 2eff0611
      Authored by Arnaldo Carvalho de Melo
      Because they all really check whether we can access data
      structures/visual constructs where a "jump" instruction targets code
      in the same function, i.e. things like:
      
        __pthread_mutex_lock  /usr/lib64/libpthread-2.26.so
        1.95 │       mov    __pthread_force_elision,%ecx
             │    ┌──test   %ecx,%ecx
        0.07 │    ├──je     60
             │    │  test   $0x300,%esi
             │    │↓ jne    60
             │    │  or     $0x100,%esi
             │    │  mov    %esi,0x10(%rdi)
             │ 42:│  mov    %esi,%edx
             │    │  lea    0x16(%r8),%rsi
             │    │  mov    %r8,%rdi
             │    │  and    $0x80,%edx
             │    │  add    $0x8,%rsp
             │    │→ jmpq   __lll_lock_elision
             │    │  nop
        0.29 │ 60:└─→and    $0x80,%esi
        0.07 │       mov    $0x1,%edi
        0.29 │       xor    %eax,%eax
        2.53 │       lock   cmpxchg %edi,(%r8)
      
      And not things like that "jmpq __lll_lock_elision", which instead
      should behave like a "call" instruction and "jump" to the
      disassembly of "__lll_lock_elision".
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      2eff0611
  16. 21 Mar, 2018 10 commits