1. 09 April 2018 · 4 commits
    • perf tests clang: Fix function name for clang IR test · fcbd8fa4
      Sandipan Das authored
      As stated in tests/llvm-src-base.c, the name of the bpf function should
      be "bpf_func__SyS_epoll_pwait", but this clang test fails as it tries to
      look up "bpf_func__SyS_epoll_wait" instead.
      
      Before applying patch:
      
      55: builtin clang support                                 :
      55.1: builtin clang compile C source to IR                : FAILED!
      55.2: builtin clang compile C source to ELF object        : Skip
      
      After applying patch:
      
      55: builtin clang support                                 :
      55.1: builtin clang compile C source to IR                : Ok
      55.2: builtin clang compile C source to ELF object        : Ok
      Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Fixes: e67d52d4 ("perf clang: Update test case to use real BPF script")
      Link: http://lkml.kernel.org/r/20180404180419.19056-3-sandipan@linux.vnet.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      fcbd8fa4
    • perf clang: Add support for recent clang versions · 7854e499
      Sandipan Das authored
      The clang API calls used by perf have changed in recent releases, so the
      build currently succeeds only with libclang-3.9. This introduces
      compatibility with libclang-4.0 and above.
      
      Without this patch, we will see the following compilation errors with
      libclang-4.0+:
      
       util/c++/clang.cpp: In function ‘clang::CompilerInvocation* perf::createCompilerInvocation(llvm::opt::ArgStringList, llvm::StringRef&, clang::DiagnosticsEngine&)’:
       util/c++/clang.cpp:62:33: error: ‘IK_C’ was not declared in this scope
         Opts.Inputs.emplace_back(Path, IK_C);
                                        ^~~~
       util/c++/clang.cpp: In function ‘std::unique_ptr<llvm::Module> perf::getModuleFromSource(llvm::opt::ArgStringList, llvm::StringRef, llvm::IntrusiveRefCntPtr<clang::vfs::FileSystem>)’:
       util/c++/clang.cpp:75:26: error: no matching function for call to ‘clang::CompilerInstance::setInvocation(clang::CompilerInvocation*)’
         Clang.setInvocation(&*CI);
                                 ^
       In file included from util/c++/clang.cpp:14:0:
       /usr/include/clang/Frontend/CompilerInstance.h:231:8: note: candidate: void clang::CompilerInstance::setInvocation(std::shared_ptr<clang::CompilerInvocation>)
          void setInvocation(std::shared_ptr<CompilerInvocation> Value);
               ^~~~~~~~~~~~~
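      
      For reference, the second error boils down to a version guard like the
      sketch below. This is illustrative only, not the actual patch hunk: it
      assumes CI is kept as a std::shared_ptr<clang::CompilerInvocation>, the
      names Clang and CI follow the error output above, and CLANG_VERSION_MAJOR
      comes from clang/Basic/Version.h.
      
        #include "clang/Basic/Version.h"
        #include <utility>
      
        #if CLANG_VERSION_MAJOR < 4
        	/* libclang-3.9: setInvocation() takes a raw pointer */
        	Clang.setInvocation(&*CI);
        #else
        	/* libclang-4.0+: setInvocation() takes a std::shared_ptr, matching
        	 * the candidate shown in the build error above; the IK_C input
        	 * kind needs a similar guard. */
        	Clang.setInvocation(std::move(CI));
        #endif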
      
      Committer testing:
      
      Tested on Fedora 27 after installing the clang-devel and llvm-devel
      packages, versions:
      
        # rpm -qa | egrep llvm\|clang
        llvm-5.0.1-6.fc27.x86_64
        clang-libs-5.0.1-5.fc27.x86_64
        clang-5.0.1-5.fc27.x86_64
        clang-tools-extra-5.0.1-5.fc27.x86_64
        llvm-libs-5.0.1-6.fc27.x86_64
        llvm-devel-5.0.1-6.fc27.x86_64
        clang-devel-5.0.1-5.fc27.x86_64
        #
      
      Make sure you don't have some older version lying around in /usr/local,
      etc, then:
      
        $ make LIBCLANGLLVM=1 -C tools/perf install-bin
      
      And in the end perf will be linked against these libraries:
      
        # ldd ~/bin/perf | egrep -i llvm\|clang
      	libclangAST.so.5 => /lib64/libclangAST.so.5 (0x00007f8bb2eb4000)
      	libclangBasic.so.5 => /lib64/libclangBasic.so.5 (0x00007f8bb29e3000)
      	libclangCodeGen.so.5 => /lib64/libclangCodeGen.so.5 (0x00007f8bb23f7000)
      	libclangDriver.so.5 => /lib64/libclangDriver.so.5 (0x00007f8bb2060000)
      	libclangFrontend.so.5 => /lib64/libclangFrontend.so.5 (0x00007f8bb1d06000)
      	libclangLex.so.5 => /lib64/libclangLex.so.5 (0x00007f8bb1a3e000)
      	libclangTooling.so.5 => /lib64/libclangTooling.so.5 (0x00007f8bb17d4000)
      	libclangEdit.so.5 => /lib64/libclangEdit.so.5 (0x00007f8bb15c5000)
      	libclangSema.so.5 => /lib64/libclangSema.so.5 (0x00007f8bb0cc9000)
      	libclangAnalysis.so.5 => /lib64/libclangAnalysis.so.5 (0x00007f8bb0a23000)
      	libclangParse.so.5 => /lib64/libclangParse.so.5 (0x00007f8bb0725000)
      	libclangSerialization.so.5 => /lib64/libclangSerialization.so.5 (0x00007f8bb039a000)
      	libLLVM-5.0.so => /lib64/libLLVM-5.0.so (0x00007f8bace98000)
      	libclangASTMatchers.so.5 => /lib64/../lib64/libclangASTMatchers.so.5 (0x00007f8bab735000)
      	libclangFormat.so.5 => /lib64/../lib64/libclangFormat.so.5 (0x00007f8bab4b2000)
      	libclangRewrite.so.5 => /lib64/../lib64/libclangRewrite.so.5 (0x00007f8bab2a1000)
      	libclangToolingCore.so.5 => /lib64/../lib64/libclangToolingCore.so.5 (0x00007f8bab08e000)
        #
      Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Fixes: 00b86691 ("perf clang: Add builtin clang support ant test case")
      Link: http://lkml.kernel.org/r/20180404180419.19056-2-sandipan@linux.vnet.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      7854e499
    • perf tools: Fix perf builds with clang support · c2fb54a1
      Sandipan Das authored
      For libclang, some distro packages provide static libraries (.a) while
      others provide shared libraries (.so). Currently, perf can only be linked
      with the static libraries. This makes the perf build work in both cases.
      Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Fixes: d58ac0bf ("perf build: Add clang and llvm compile and linking support")
      Link: http://lkml.kernel.org/r/20180404180419.19056-1-sandipan@linux.vnet.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      c2fb54a1
    • perf tools: No need to include namespaces.h in util.h · ad0902e0
      Arnaldo Carvalho de Melo authored
      The only thing that is needed there is a forward declaration for 'struct
      nsinfo', so disentangle this, which in turn allows built-in clang
      builds, i.e. 'make LIBCLANGLLVM=1 -C tools/perf'.
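      
      The shape of the change, roughly (an illustrative sketch, not the exact
      util.h hunk; copyfile_ns() is shown just as one pointer-only user):
      
        /* was: #include "namespaces.h" */
        struct nsinfo;   /* forward declaration is enough, util.h only passes pointers */
      
        int copyfile_ns(const char *from, const char *to, struct nsinfo *nsi);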
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Sandipan Das <sandipan@linux.vnet.ibm.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-vq26rsuwq1cqylpcyvq89c84@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      ad0902e0
  2. 06 April 2018 · 4 commits
  3. 05 April 2018 · 4 commits
  4. 04 April 2018 · 4 commits
    • perf trace: Remove redundant ')' · 51125a29
      Changbin Du authored
      There is a redundant ')' at the tail of each event. So remove it.
      
      $ sudo perf trace --no-syscalls -e 'kmem:*' -a
         899.342 kmem:kfree:(vfs_writev+0xb9) call_site=ffffffff9c453979 ptr=(nil))
         899.344 kmem:kfree:(___sys_recvmsg+0x188) call_site=ffffffff9c9b8b88 ptr=(nil))
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1520937601-24952-1-git-send-email-changbin.du@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      51125a29
    • perf annotate stdio2: Print more descriptive event information header · 520d3f01
      Arnaldo Carvalho de Melo authored
      To match the event header information recently added to --tui, e.g.:
      
        # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
        Samples: 128  of event 'cycles:ppp', 4000 Hz, Event count (approx.): 48617682
        _raw_spin_lock_irqsave() /proc/kcore
          0.78        nop
          7.03        push   %rbx
          3.12        pushfq
          6.25        pop    %rax
                      nop
                      mov    %rax,%rbx
          3.12        cli
                      nop
                      xor    %eax,%eax
                      mov    $0x1,%edx
         79.69        lock   cmpxchg %edx,(%rdi)
                      test   %eax,%eax
                    ↓ jne    2b
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
                2b:   mov    %eax,%esi
                    → callq  *ffffffffb30eaed0
                      mov    %rbx,%rax
                      pop    %rbx
                    ← retq
        #
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Martin Liška <mliska@suse.cz>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-ujy46x7cldyhyxelyf2b9quy@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      520d3f01
    • perf annotate browser: Show extra title line with event information · 6920e285
      Arnaldo Carvalho de Melo authored
      So at the top we'll have two lines, like this, from 'perf report':
      
        # perf report --group --ignore-vmlinux
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Percent              │      nop
                           │      push   %rbx
        0.00  14.29   0.00 │      pushfq
        9.09   0.00   0.00 │      pop    %rax
        9.09   0.00  20.00 │      nop
                           │      mov    %rax,%rbx
                           │      cli
        4.55   7.14   0.00 │      nop
                           │      xor    %eax,%eax
                           │      mov    $0x1,%edx
                           │      lock   cmpxchg %edx,(%rdi)
       77.27  78.57  70.00 │      test   %eax,%eax
                           │    ↓ jne    2b
                           │      mov    %rbx,%rax
        0.00   0.00  10.00 │      pop    %rbx
                           │    ← retq
                           │2b:   mov    %eax,%esi
                           │    → callq  queued_spin_lock_slowpath
                           │      mov    %rbx,%rax
                           │      pop    %rbx
      Press 'h' for help on│key bindings
      =====================================================================================================
      
       9.09 + 9.09 + 4.55 + 77.27 = 100
      14.29 + 7.14 + 78.57 = 100
      20 + 70 + 10 = 100
      
      We can do the math by using 't' to toggle from 'percent' to nr
      
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Period                              │      nop
                                          │      push   %rbx
                0       79273           0 │      pushfq
           190455           0           0 │      pop    %rax
           198038           0        3045 │      nop
                                          │      mov    %rax,%rbx
                                          │      cli
           217233       32562           0 │      nop
                                          │      xor    %eax,%eax
                                          │      mov    $0x1,%edx
                                          │      lock   cmpxchg %edx,(%rdi)
          3421649      979174       28273 │      test   %eax,%eax
                                          │    ↓ jne    2b
                                          │      mov    %rbx,%rax
                0           0        5193 │      pop    %rbx
                                          │    ← retq
                                          │2b:   mov    %eax,%esi
                                          │    → callq  queued_spin_lock_slowpath
                                          │      mov    %rbx,%rax
                                          │      pop    %rbx
      Press 'h' for help on│key bindings
      =====================================================================================================
      
      79273 + 190455 + 198038 + 3045 + 217233 + 32562 + 3421649 + 979174 + 28273 + 5193 = 5154895
      
      Or number of samples:
      
      =====================================================================================================
      Samples: 46  of events 'cycles', 4000 Hz, Event count (approx.): 5154895
      _raw_spin_lock_irqsave  /proc/kcore
      Samples              │      nop
                           │      push   %rbx
           0      2      0 │      pushfq
           2      0      0 │      pop    %rax
           2      0      2 │      nop
                           │      mov    %rax,%rbx
                           │      cli
           1      1      0 │      nop
                           │      xor    %eax,%eax
                           │      mov    $0x1,%edx
                           │      lock   cmpxchg %edx,(%rdi)
          17     11      7 │      test   %eax,%eax
                           │    ↓ jne    2b
                           │      mov    %rbx,%rax
           0      0      1 │      pop    %rbx
                           │    ← retq
                           │2b:   mov    %eax,%esi
                           │    → callq  queued_spin_lock_slowpath
                           │      mov    %rbx,%rax
                           │      pop    %rbx
      Press 'h' for help on key bindings
      =====================================================================================================
      
      2 + 2 + 2 + 2 + 1 + 1 + 17 + 11 + 7 + 1 = 46
      Suggested-by: Martin Liška <mliska@suse.cz>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196935
      Link: https://lkml.kernel.org/n/tip-ezccyxld50wtwyt66np6aomo@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      6920e285
    • perf annotate: Introduce annotation__scnprintf_samples_period() method · b213eac2
      Arnaldo Carvalho de Melo authored
      To print a string using the total period (nr_events) and the number of
      samples for a given annotation, i.e. for a given symbol. It is the
      counterpart to hists__scnprintf_samples_period(), which covers all the
      samples in a session (be it a live session, as in 'perf top', or a
      perf.data file, as in 'perf report').
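      
      A rough, self-contained sketch of the kind of header such a formatter
      produces (the name and argument list here are assumptions, not the real
      prototype; the output follows the "Samples: ... Event count (approx.): ..."
      lines in the annotate output above):
      
        #include <inttypes.h>
        #include <stdio.h>
      
        /* Illustrative only: not the real annotation__scnprintf_samples_period() */
        static int scnprintf_samples_period(char *bf, size_t size, uint64_t nr_samples,
        				    uint64_t period, const char *evname)
        {
        	return snprintf(bf, size,
        			"Samples: %" PRIu64 "  of event '%s', Event count (approx.): %" PRIu64,
        			nr_samples, evname, period);
        }
      
        int main(void)
        {
        	char bf[128];
      
        	scnprintf_samples_period(bf, sizeof(bf), 128, 48617682ULL, "cycles:ppp");
        	puts(bf);  /* Samples: 128  of event 'cycles:ppp', Event count (approx.): 48617682 */
        	return 0;
        }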
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Martin Liška <mliska@suse.cz>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196935
      Link: https://lkml.kernel.org/n/tip-goj2wu4fxutc8vd46mw3yg14@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b213eac2
  5. 03 April 2018 · 9 commits
  6. 02 April 2018 · 2 commits
    • perf trace: Show only failing syscalls · 0a6545bd
      Arnaldo Carvalho de Melo authored
      For instance:
      
        # perf probe "vfs_getname=getname_flags:72 pathname=result->name:string"
        Added new event:
          probe:vfs_getname    (on getname_flags:72 with pathname=result->name:string)
      
        You can now use it in all perf tools, such as:
      
      	  perf record -e probe:vfs_getname -aR sleep 1
      
        # perf trace --failure sleep 1
           0.043 ( 0.010 ms): sleep/10978 access(filename: /etc/ld.so.preload, mode: R) = -1 ENOENT No such file or directory
      
      For reference, here are all the syscalls in this case:
      
        # perf trace sleep 1
               ? (         ): sleep/10976  ... [continued]: execve()) = 0
             0.027 ( 0.001 ms): sleep/10976 brk() = 0x55bdc2d04000
             0.044 ( 0.010 ms): sleep/10976 access(filename: /etc/ld.so.preload, mode: R) = -1 ENOENT No such file or directory
             0.057 ( 0.006 ms): sleep/10976 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
             0.064 ( 0.002 ms): sleep/10976 fstat(fd: 3, statbuf: 0x7fffac22b370) = 0
             0.067 ( 0.003 ms): sleep/10976 mmap(len: 111457, prot: READ, flags: PRIVATE, fd: 3) = 0x7feec8615000
             0.071 ( 0.001 ms): sleep/10976 close(fd: 3) = 0
             0.080 ( 0.007 ms): sleep/10976 openat(dfd: CWD, filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
             0.088 ( 0.002 ms): sleep/10976 read(fd: 3, buf: 0x7fffac22b538, count: 832) = 832
             0.092 ( 0.001 ms): sleep/10976 fstat(fd: 3, statbuf: 0x7fffac22b3d0) = 0
             0.094 ( 0.002 ms): sleep/10976 mmap(len: 8192, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS) = 0x7feec8613000
             0.099 ( 0.004 ms): sleep/10976 mmap(len: 3889792, prot: EXEC|READ, flags: PRIVATE|DENYWRITE, fd: 3) = 0x7feec8057000
             0.104 ( 0.007 ms): sleep/10976 mprotect(start: 0x7feec8203000, len: 2097152) = 0
             0.112 ( 0.005 ms): sleep/10976 mmap(addr: 0x7feec8403000, len: 24576, prot: READ|WRITE, flags: PRIVATE|DENYWRITE|FIXED, fd: 3, off: 1753088) = 0x7feec8403000
             0.120 ( 0.003 ms): sleep/10976 mmap(addr: 0x7feec8409000, len: 14976, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS|FIXED) = 0x7feec8409000
             0.128 ( 0.001 ms): sleep/10976 close(fd: 3) = 0
             0.139 ( 0.001 ms): sleep/10976 arch_prctl(option: 4098, arg2: 140663540761856) = 0
             0.186 ( 0.004 ms): sleep/10976 mprotect(start: 0x7feec8403000, len: 16384, prot: READ) = 0
             0.204 ( 0.003 ms): sleep/10976 mprotect(start: 0x55bdc0ec3000, len: 4096, prot: READ) = 0
             0.209 ( 0.004 ms): sleep/10976 mprotect(start: 0x7feec8631000, len: 4096, prot: READ) = 0
             0.214 ( 0.010 ms): sleep/10976 munmap(addr: 0x7feec8615000, len: 111457) = 0
             0.269 ( 0.001 ms): sleep/10976 brk() = 0x55bdc2d04000
             0.271 ( 0.002 ms): sleep/10976 brk(brk: 0x55bdc2d25000) = 0x55bdc2d25000
             0.274 ( 0.001 ms): sleep/10976 brk() = 0x55bdc2d25000
             0.278 ( 0.007 ms): sleep/10976 open(filename: /usr/lib/locale/locale-archive, flags: CLOEXEC) = 3
             0.288 ( 0.001 ms): sleep/10976 fstat(fd: 3</usr/lib/locale/locale-archive>, statbuf: 0x7feec8408aa0) = 0
             0.290 ( 0.003 ms): sleep/10976 mmap(len: 113045344, prot: READ, flags: PRIVATE, fd: 3) = 0x7feec1488000
             0.297 ( 0.001 ms): sleep/10976 close(fd: 3</usr/lib/locale/locale-archive>) = 0
             0.325 (1000.193 ms): sleep/10976 nanosleep(rqtp: 0x7fffac22c0b0) = 0
          1000.560 ( 0.006 ms): sleep/10976 close(fd: 1) = 0
          1000.573 ( 0.005 ms): sleep/10976 close(fd: 2) = 0
          1000.596 (         ): sleep/10976 exit_group()
        #
      
      And this can be done system-wide, etc, with backtraces:
      
        # perf trace --max-stack=16 --failure sleep 1
           0.048 ( 0.015 ms): sleep/11092 access(filename: /etc/ld.so.preload, mode: R) = -1 ENOENT No such file or directory
                                             __access (inlined)
                                             dl_main (/usr/lib64/ld-2.26.so)
        #
      
      Or for some specific syscalls:
      
        # perf trace --max-stack=16 -e openat --failure cat /tmp/rien
        cat: /tmp/rien: No such file or directory
             0.251 ( 0.012 ms): cat/11106 openat(dfd: CWD, filename: /tmp/rien) = -1 ENOENT No such file or directory
                                               __libc_open64 (inlined)
                                               main (/usr/bin/cat)
                                               __libc_start_main (/usr/lib64/libc-2.26.so)
                                               _start (/usr/bin/cat)
        #
      
      Look for inotify* syscalls that fail, system wide, for 2 seconds, with backtraces:
      
        # perf trace -a --max-stack=16 --failure -e inotify* sleep 2
         819.165 ( 0.058 ms): gmain/1724 inotify_add_watch(fd: 8<anon_inode:inotify>, pathname: /home/acme/~, mask: 16789454) = -1 ENOENT No such file or directory
                                             __GI_inotify_add_watch (inlined)
                                             _ik_watch (/usr/lib64/libgio-2.0.so.0.5400.3)
                                             _ip_start_watching (/usr/lib64/libgio-2.0.so.0.5400.3)
                                             im_scan_missing (/usr/lib64/libgio-2.0.so.0.5400.3)
                                             g_timeout_dispatch (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             g_main_context_dispatch (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             g_main_context_iterate.isra.23 (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             g_main_context_iteration (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             glib_worker_main (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             g_thread_proxy (/usr/lib64/libglib-2.0.so.0.5400.3)
                                             start_thread (/usr/lib64/libpthread-2.26.so)
                                             __GI___clone (inlined)
        #
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-8f7d3mngaxvi7tlzloz3n7cs@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      0a6545bd
    • perf tools: Add a "dso_size" sort order · b74d12d5
      Kim Phillips authored
      Add DSO size to perf report/top sort output list.
      
      This includes adding a map__size() function to map.h; the resulting map
      size (end - start) is approximately equal to the DSO data file_size:
      
        DSO				file size	map (end-start)	file / (end-start)
        libwebkit2gtk-4.0.so.37.24.9	43260072	41295872	95%
        libglib-2.0.so.0.5400.1		 1125680	 1118208	99%
        libc-2.26.so			 1960656 	 1925120	101%
        libdbus-1.so.3.14.13		  309456 	  303104	102%
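      
      A minimal sketch of such a helper (the struct here is trimmed to the two
      fields the helper needs; the real struct map in tools/perf has many more
      members):
      
        #include <stdint.h>
      
        struct map {
        	uint64_t start;
        	uint64_t end;
        };
      
        static inline uint64_t map__size(const struct map *map)
        {
        	return map->end - map->start;  /* the "map (end-start)" column above */
        }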
      
      Sample output:
      
        $ ./perf report -s dso_size,dso
        Samples: 2K of event 'cycles:uppp', Event count (approx.): 128373340
        Overhead  DSO size  Shared Object
          90.62%   unknown  [unknown]
           2.87%   1118208  libglib-2.0.so.0.5400.1
           1.92%    303104  libdbus-1.so.3.14.13
           1.42%   1925120  libc-2.26.so
           0.77%  41295872  libwebkit2gtk-4.0.so.37.24.9
           0.61%    335872  libgobject-2.0.so.0.5400.1
           0.41%   1052672  libgdk-3.so.0.2200.25
           0.36%    106496  libpthread-2.26.so
           0.29%    221184  dbus-daemon
           0.17%    159744  ld-2.26.so
           0.13%     49152  libwayland-client.so.0.3.0
           0.12%   1642496  libgio-2.0.so.0.5400.1
           0.09%   73277443  libgtk-3.so.0.2200.25
           0.09%  12324864  libmozjs-52.so.0.0.0
           0.05%   4796416  perf
           0.04%    843776  libgjs.so.0.0.0
           0.03%   1409024  libmutter-clutter-1.so
      
      Committer testing:
      
      To sort by DSO size, use:
      
        # perf report -F dso_size,dso,overhead -s dso_size
        <SNIP>
           3465216  libdns-export.so.174.0.1   0.00%
           3522560  libgc.so.1.0.3             0.00%
           3538944  libbfd-2.29-13.fc27.so     0.59%
           3670016  libunistring.so.2.1.0      0.00%
           3723264  libguile-2.0.so.22.8.1     0.00%
           3776512  libgio-2.0.so.0.5400.3     0.00%
           3891200  libc-2.26.so               0.96%
           3944448  libmozjs-17.0.so           0.00%
           4218880  libperl.so.5.26.1          0.18%
           4452352  libpython2.7.so.1.0        0.02%
           4472832  perf                       0.02%
           4603904  git                        0.01%
           4751360  libcrypto.so.1.1.0g        0.00%
           5005312  libslang.so.2.3.1          0.00%
           7315456  libgtk-3.so.0.2200.26      0.09%
           8818688  i965_dri.so                2.46%
           8818688  i965_dri.so (deleted)      1.26%
          12414976  libmozjs-52.so.0.0.0       0.03%
          23642112  cc1                        2.02%
          27889664  [kernel.kallsyms]         25.41%
          80834560  libxul.so (deleted)       15.68%
          98078720  chrome                    32.03%
        1056964608  [kernel.kallsyms]          1.59%
        #
      Signed-off-by: Kim Phillips <kim.phillips@arm.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Maxim Kuvyrkov <maxim.kuvyrkov@linaro.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20180327060956.1c01ebe67a2a941bb4468c6f@arm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b74d12d5
  7. 28 March 2018 · 8 commits
  8. 24 March 2018 · 5 commits
    • perf annotate: Use absolute addresses to calculate jump target offsets · 980b68ec
      Arnaldo Carvalho de Melo authored
      These types of jumps were confusing the annotate browser:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
        Percent│ffffffff81a00020:   swapgs
        <SNIP>
               │ffffffff81a00128: ↓ jae    ffffffff81a00139 <syscall_return_via_sysret+0x53>
        <SNIP>
               │ffffffff81a00155: → jmpq   *0x825d2d(%rip)   # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      I.e. the syscall_return_via_sysret function is actually "inside" the
      entry_SYSCALL_64 function, and the offsets in jumps like these (+0x53)
      are relative to syscall_return_via_sysret, not to entry_SYSCALL_64.
      
      Or this may be some artifact in how the assembler marks the start and
      end of a function and how this ends up in the ELF symtab for vmlinux,
      i.e. syscall_return_via_sysret() isn't "inside" entry_SYSCALL_64, but
      just right after it.
      
      From readelf -sw vmlinux:
      
       80267: ffffffff81a00020   315 NOTYPE  GLOBAL DEFAULT    1 entry_SYSCALL_64
         316: ffffffff81a000e6     0 NOTYPE  LOCAL  DEFAULT    1 syscall_return_via_sysret
      
       0xffffffff81a00020 + 315 > 0xffffffff81a000e6
      
      So instead of looking for offsets after that last '+' sign, calculate
      offsets for jump target addresses that are inside the function being
      disassembled from the absolute address, 0xffffffff81a00139 in this case,
      subtracting from it the objdump address for the start of the function
      being disassembled, entry_SYSCALL_64() in this case.
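      
      The arithmetic, using the addresses from this example (a self-contained
      illustration; the variable names are not from the patch):
      
        #include <inttypes.h>
        #include <stdio.h>
      
        int main(void)
        {
        	uint64_t function_start = 0xffffffff81a00020ULL; /* entry_SYSCALL_64, per readelf */
        	uint64_t target_addr    = 0xffffffff81a00139ULL; /* absolute jump target from objdump */
      
        	/* Offset relative to the function being disassembled, instead of the
        	 * "+0x53" suffix objdump prints relative to syscall_return_via_sysret. */
        	uint64_t target_offset = target_addr - function_start;
      
        	printf("%" PRIx64 "\n", target_offset); /* prints 119, the label in the "After" listing */
        	return 0;
        }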
      
      So, before this patch:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
      Percent│       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↑ jmp    6c
             │       mov    %cr3,%rdi
             │     ↑ jmp    62
             │       mov    %rdi,%rax
             │       and    $0x7ff,%rdi
             │       bt     %rdi,%gs:0x2219a
             │     ↑ jae    53
             │       btr    %rdi,%gs:0x2219a
             │       mov    %rax,%rdi
             │     ↑ jmp    5b
      
      After:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
        0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
             │       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↓ jmp    132
             │       mov    %cr3,%rdi
             │    ┌──jmp    128
             │    │  mov    %rdi,%rax
             │    │  and    $0x7ff,%rdi
             │    │  bt     %rdi,%gs:0x2219a
             │    │↓ jae    119
             │    │  btr    %rdi,%gs:0x2219a
             │    │  mov    %rax,%rdi
             │    │↓ jmp    121
             │119:│  mov    %rax,%rdi
             │    │  bts    $0x3f,%rdi
             │121:│  or     $0x800,%rdi
             │128:└─→or     $0x1000,%rdi
             │       mov    %rdi,%cr3
             │132:   pop    %rax
             │       pop    %rdi
             │       pop    %rsp
             │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      With those at least navigating to the right destination, an improvement
      for these cases seems to be to somehow mark those inner functions,
      which in this case could be:
      
      entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
             │syscall_return_via_sysret:
             │       pop    %r15
             │       pop    %r14
             │       pop    %r13
             │       pop    %r12
             │       pop    %rbp
             │       pop    %rbx
             │       pop    %rsi
             │       pop    %r10
             │       pop    %r9
             │       pop    %r8
             │       pop    %rax
             │       pop    %rsi
             │       pop    %rdx
             │       pop    %rsi
             │       mov    %rsp,%rdi
             │       mov    %gs:0x5004,%rsp
             │       pushq  0x28(%rdi)
             │       pushq  (%rdi)
             │       push   %rax
             │     ↓ jmp    132
             │       mov    %cr3,%rdi
             │    ┌──jmp    128
             │    │  mov    %rdi,%rax
             │    │  and    $0x7ff,%rdi
             │    │  bt     %rdi,%gs:0x2219a
             │    │↓ jae    119
             │    │  btr    %rdi,%gs:0x2219a
             │    │  mov    %rax,%rdi
             │    │↓ jmp    121
             │119:│  mov    %rax,%rdi
             │    │  bts    $0x3f,%rdi
             │121:│  or     $0x800,%rdi
             │128:└─→or     $0x1000,%rdi
             │       mov    %rdi,%cr3
             │132:   pop    %rax
             │       pop    %rdi
             │       pop    %rsp
             │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>
      
      This all looks much better if one uses 'perf report --ignore-vmlinux',
      forcing the use of /proc/kcore + /proc/kallsyms, where the above
      actually boils down to:
      
        # perf report --ignore-vmlinux
        ## do '/64', will show the function names containing '64',
        ## navigate to /entry_SYSCALL_64_after_hwframe.annotation,
        ## press 'A' to annotate, then 'P' to print that annotation
        ## to a file
        ## From another xterm (or see on screen, this 'P' thing is for
        ## getting rid of those right side scroll bars/spaces):
        # cat /entry_SYSCALL_64_after_hwframe.annotation
        entry_SYSCALL_64_after_hwframe() /proc/kcore
        Event: cycles:ppp
      
        Percent
                    Disassembly of section load0:
      
                    ffffffff9aa00044 <load0>:
         11.97        push   %rax
          4.85        push   %rdi
                      push   %rsi
          2.59        push   %rdx
          2.27        push   %rcx
          0.32        pushq  $0xffffffffffffffda
          1.29        push   %r8
                      xor    %r8d,%r8d
          1.62        push   %r9
          0.65        xor    %r9d,%r9d
          1.62        push   %r10
                      xor    %r10d,%r10d
          5.50        push   %r11
                      xor    %r11d,%r11d
          3.56        push   %rbx
                      xor    %ebx,%ebx
          4.21        push   %rbp
                      xor    %ebp,%ebp
          2.59        push   %r12
          0.97        xor    %r12d,%r12d
          3.24        push   %r13
                      xor    %r13d,%r13d
          2.27        push   %r14
                      xor    %r14d,%r14d
          4.21        push   %r15
                      xor    %r15d,%r15d
          0.97        mov    %rsp,%rdi
          5.50      → callq  do_syscall_64
         14.56        mov    0x58(%rsp),%rcx
          7.44        mov    0x80(%rsp),%r11
          0.32        cmp    %rcx,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        shl    $0x10,%rcx
          0.32        sar    $0x10,%rcx
          3.24        cmp    %rcx,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          2.27        cmpq   $0x33,0x88(%rsp)
          1.29      → jne    swapgs_restore_regs_and_return_to_usermode
                      mov    0x30(%rsp),%r11
          8.74        cmp    %r11,0x90(%rsp)
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        test   $0x10100,%r11
                    → jne    swapgs_restore_regs_and_return_to_usermode
          0.32        cmpq   $0x2b,0xa0(%rsp)
          0.65      → jne    swapgs_restore_regs_and_return_to_usermode
      
      I.e. using kallsyms makes the function start/end be determined differently
      from what is in the vmlinux ELF symtab, and the hits actually go to
      entry_SYSCALL_64_after_hwframe, which is a GLOBAL() after the
      start of entry_SYSCALL_64:
      
        ENTRY(entry_SYSCALL_64)
                UNWIND_HINT_EMPTY
        <SNIP>
                pushq   $__USER_CS                      /* pt_regs->cs */
                pushq   %rcx                            /* pt_regs->ip */
        GLOBAL(entry_SYSCALL_64_after_hwframe)
                pushq   %rax                            /* pt_regs->orig_ax */
      
                PUSH_AND_CLEAR_REGS rax=$-ENOSYS
      
      And it goes and ends at:
      
                cmpq    $__USER_DS, SS(%rsp)            /* SS must match SYSRET */
                jne     swapgs_restore_regs_and_return_to_usermode
      
                /*
                 * We win! This label is here just for ease of understanding
                 * perf profiles. Nothing jumps here.
                 */
        syscall_return_via_sysret:
                /* rcx and r11 are already restored (see code above) */
                UNWIND_HINT_EMPTY
                POP_REGS pop_rdi=0 skip_r11rcx=1
      
      So perhaps some people should really just play with '--ignore-vmlinux'
      to force /proc/kcore + kallsyms.
      
      One idea is to do both, i.e. have a vmlinux annotation and a
      kcore+kallsyms one, when possible, and even show the patched location,
      etc.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-r11knxv8voesav31xokjiuo6@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      980b68ec
    • perf annotate: Defer searching for comma in raw line till it is needed · c448234c
      Arnaldo Carvalho de Melo authored
      That strchr() in jump__scnprintf() needs to be nuked somehow, as it,
      IIRC, is already done in jump__parse() and, if needed at scnprintf()
      time, should be stashed in the struct filled in at parse() time.
      
      For now just defer it to just before where it is used.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-j0t5hagnphoz9xw07bh3ha3g@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      c448234c
    • perf annotate: Support jumping from one function to another · e4cc91b8
      Arnaldo Carvalho de Melo authored
      For instance:
      
        entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux
          5.50 │     → callq  do_syscall_64
         14.56 │       mov    0x58(%rsp),%rcx
          7.44 │       mov    0x80(%rsp),%r11
          0.32 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       shl    $0x10,%rcx
          0.32 │       sar    $0x10,%rcx
          3.24 │       cmp    %rcx,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          2.27 │       cmpq   $0x33,0x88(%rsp)
          1.29 │     → jne    swapgs_restore_regs_and_return_to_usermode
               │       mov    0x30(%rsp),%r11
          8.74 │       cmp    %r11,0x90(%rsp)
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       test   $0x10100,%r11
               │     → jne    swapgs_restore_regs_and_return_to_usermode
          0.32 │       cmpq   $0x2b,0xa0(%rsp)
          0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
      
      It'll behave just like a "call" instruction, i.e. press Enter or the right
      arrow over one such line and the browser will navigate to the annotated
      disassembly of that function, which, when exited via the left arrow or Esc,
      will come back to the calling function.
      
      Now to support jump to an offset on a different function...
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-78o508mqvr8inhj63ddtw7mo@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      e4cc91b8
    • perf annotate: Add "_local" to jump/offset validation routines · 2eff0611
      Arnaldo Carvalho de Melo authored
      Because they all really check if we can access data structures/visual
      constructs where a "jump" instruction targets code in the same function,
      i.e. things like:
      
        __pthread_mutex_lock  /usr/lib64/libpthread-2.26.so
        1.95 │       mov    __pthread_force_elision,%ecx
             │    ┌──test   %ecx,%ecx
        0.07 │    ├──je     60
             │    │  test   $0x300,%esi
             │    │↓ jne    60
             │    │  or     $0x100,%esi
             │    │  mov    %esi,0x10(%rdi)
             │ 42:│  mov    %esi,%edx
             │    │  lea    0x16(%r8),%rsi
             │    │  mov    %r8,%rdi
             │    │  and    $0x80,%edx
             │    │  add    $0x8,%rsp
             │    │→ jmpq   __lll_lock_elision
             │    │  nop
        0.29 │ 60:└─→and    $0x80,%esi
        0.07 │       mov    $0x1,%edi
        0.29 │       xor    %eax,%eax
        2.53 │       lock   cmpxchg %edi,(%r8)
      
      And not things like that "jmpq __lll_lock_elision", which instead should behave
      like a "call" instruction and "jump" to the disassembly of "__lll_lock_elision".
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      2eff0611
    • perf python: Reference Py_None before returning it · 83428f2f
      Petr Machata authored
      Python None objects are handled just like all the other objects with
      respect to their reference counting. Before returning Py_None, its
      reference count thus needs to be bumped.
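      
      In Python C API terms the pattern is simply (an illustrative, made-up
      method name, not the actual util/python.c hunk; Py_RETURN_NONE expands
      to the same two statements):
      
        #include <Python.h>
      
        static PyObject *pyrf_example(PyObject *self, PyObject *args)
        {
        	Py_INCREF(Py_None);   /* bump None's refcount like any other object */
        	return Py_None;
        }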
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Machata <petrm@mellanox.com>
      Link: http://lkml.kernel.org/r/b1e565ecccf68064d8d54f37db5d028dda8fa522.1521675563.git.petrm@mellanox.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      83428f2f