1. 24 Jun, 2015 (5 commits)
  2. 20 Jun, 2015 (6 commits)
  3. 18 Jun, 2015 (5 commits)
    • perf evlist: Add toggle_enable() method · 2b56bcfb
      Committed by Arnaldo Carvalho de Melo
      For an upcoming feature in 'perf top' we will have a hotkey to
      enable/disable events, so remember whether the events in the list are
      enabled or disabled and allow toggling this state using a new method
      (a minimal sketch follows this entry).
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-64c4jvdl5feg2zhimxvokqka@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      2b56bcfb
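
      A minimal sketch of the remembered enable/disable state and the toggle
      described above; the struct and helpers here are simplified stand-ins,
      not perf's real evlist API:

        #include <stdbool.h>
        #include <stdio.h>

        /* stand-in for struct perf_evlist: only the remembered state */
        struct evlist {
                bool enabled;
        };

        static void evlist_enable(struct evlist *e)  { e->enabled = true;  }
        static void evlist_disable(struct evlist *e) { e->enabled = false; }

        /* flip the remembered state, enabling or disabling all events */
        static void evlist_toggle_enable(struct evlist *e)
        {
                (e->enabled ? evlist_disable : evlist_enable)(e);
        }

        int main(void)
        {
                struct evlist e = { .enabled = true };

                evlist_toggle_enable(&e);   /* now disabled */
                evlist_toggle_enable(&e);   /* enabled again */
                printf("enabled: %d\n", e.enabled);
                return 0;
        }
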
    • perf trace: Fix race condition at the end of started workloads · 7951722d
      Committed by Sukadev Bhattiprolu
      I get the following crash on multiple systems and across several
      releases (at least since v3.18).
      
      	Core was generated by `/tmp/perf trace sleep 0.2 '.
      	Program terminated with signal SIGSEGV, Segmentation fault.
      	#0  perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
      	195		u64 head = ACCESS_ONCE(pc->data_head);
      	(gdb) bt
      	#0  perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
      	#1  perf_evlist__mmap_read (evlist=0x10027f11910, idx=<optimized out>)
      	    at util/evlist.c:637
      	#2  0x000000001003ce4c in trace__run (argv=<optimized out>,
      	    argc=<optimized out>, trace=0x3fffd7b28288) at builtin-trace.c:2259
      	#3  cmd_trace (argc=<optimized out>, argv=<optimized out>,
      	    prefix=<optimized out>) at builtin-trace.c:2799
      	#4  0x00000000100657b8 in run_builtin (p=0x10176798 <commands+480>, argc=3,
      	    argv=0x3fffd7b2b550) at perf.c:370
      	#5  0x00000000100063e8 in handle_internal_command (argv=0x3fffd7b2b550, argc=3)
      	    at perf.c:429
      	#6  run_argv (argv=0x3fffd7b2af70, argcp=0x3fffd7b2af7c) at perf.c:473
      	#7  main (argc=3, argv=0x3fffd7b2b550) at perf.c:588
      
      The problem seems to be a race condition when the application has just
      exited.  Some/all fds associated with the perf events (tracepoints) go
      into a POLLHUP/POLLERR state and the mmap regions associated with those
      events are unmapped (in perf_evlist__filter_pollfd()).

      But we go back and do a perf_evlist__mmap_read(), which assumes that the
      mmaps are still valid, and we hit the crash.

      If the mapping for an event is released, its refcnt is 0 (and ->base
      is NULL), so ensure we have a non-zero refcount before accessing the map
      (see the sketch after this entry).

      Note that perf record has similar logic, but unlike perf trace,
      record__mmap_read_all() checks evlist->mmap[i].base before accessing
      the map.
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Li Zhang <zhlcindy@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/20150612060003.GA19913@us.ibm.com
      [ Fixed it up to use atomic_read() ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      7951722d
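
      A sketch of the guard described above, with a hypothetical, simplified
      mmap struct; the point is only that the reader checks a non-zero
      refcount (and a non-NULL base) before touching a ring buffer that may
      already have been unmapped:

        #include <stddef.h>
        #include <stdio.h>

        /* hypothetical stand-in for perf's per-cpu mmap bookkeeping */
        struct mmap_slot {
                void *base;   /* NULL once the region has been munmap'ed */
                int   refcnt; /* 0 when the map has been released        */
        };

        /* return NULL instead of dereferencing a released map */
        static void *mmap_read(struct mmap_slot *m)
        {
                if (m->refcnt == 0 || m->base == NULL)
                        return NULL;   /* workload exited, map already gone */
                return m->base;        /* safe to read the ring buffer head */
        }

        int main(void)
        {
                struct mmap_slot released = { .base = NULL, .refcnt = 0 };

                printf("%p\n", mmap_read(&released));   /* prints (nil) */
                return 0;
        }
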
    • perf probe: Speed up perf probe --list by caching debuginfo · 7737af01
      Committed by Masami Hiramatsu
      Speed up "perf probe --list" by caching the last used debuginfo.
      perf probe --list always opens and loads debuginfo for each entry of the
      probe list, which takes a very long time.
      
      E.g. with vfs_* events (total 96 probes)
      
        [root@localhost perf]# time  ./perf probe -l &> /dev/null
      
        real    0m25.376s
        user    0m24.381s
        sys     0m1.012s
      
      To solve this issue, this adds debuginfo_cache to cache the
      last used debuginfo in memory (a sketch of the idea follows this entry).

      With this fix, perf probe --list becomes significantly faster.
      
        [root@localhost perf]#  time  ./perf probe -l &> /dev/null
      
        real    0m0.161s
        user    0m0.136s
        sys     0m0.025s
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naohiro Aota <naota@elisp.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150617145854.19715.15314.stgit@localhost.localdomain
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      7737af01
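
      A sketch of the single-entry cache idea described above, with
      hypothetical names: reuse the previously opened handle when the
      requested module is the same, otherwise drop it and open a new one.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* hypothetical stand-in for a debuginfo handle keyed by module name */
        struct dinfo {
                char module[64];
        };

        static struct dinfo *dinfo_cache;   /* last used handle */

        static struct dinfo *dinfo_open(const char *module)
        {
                struct dinfo *d = calloc(1, sizeof(*d));

                snprintf(d->module, sizeof(d->module), "%s", module);
                return d;   /* the expensive open/load would happen here */
        }

        static struct dinfo *dinfo_cache_get(const char *module)
        {
                if (dinfo_cache && !strcmp(dinfo_cache->module, module))
                        return dinfo_cache;          /* cache hit: no reload */

                free(dinfo_cache);                   /* drop the stale entry */
                dinfo_cache = dinfo_open(module);
                return dinfo_cache;
        }

        int main(void)
        {
                /* 96 probes on the same object load debuginfo only once */
                for (int i = 0; i < 96; i++)
                        dinfo_cache_get("kernel");
                puts(dinfo_cache->module);
                free(dinfo_cache);
                return 0;
        }
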
    • perf probe: Show usage even if the last event is skipped · d350bd57
      Committed by Masami Hiramatsu
      When the last part of the converted events is blacklisted or out of
      .text, those events are skipped and perf probe doesn't show the usage
      examples.  This fixes it to show the example even if the last part of
      the event list is skipped (a sketch of the control flow follows this
      entry).
      
      E.g. without this patch, events are added but the output suddenly ends:
      
        # perf probe vfs_*
        vfs_caches_init_early is out of .text, skip it.
        vfs_caches_init is out of .text, skip it.
        Added new events:
          probe:vfs_fallocate  (on vfs_*)
          probe:vfs_open       (on vfs_*)
        ...
          probe:vfs_dentry_acceptable (on vfs_*)
          probe:vfs_load_quota_inode (on vfs_*)
        #
      
      With this fix:
      
        # perf probe vfs_*
        vfs_caches_init_early is out of .text, skip it.
        vfs_caches_init is out of .text, skip it.
        Added new events:
          probe:vfs_fallocate  (on vfs_*)
        ...
          probe:vfs_load_quota_inode (on vfs_*)
      
        You can now use it in all perf tools, such as:
      
      	perf record -e probe:vfs_load_quota_inode -aR sleep 1
      
      Note that this can be reproduced ONLY IF vfs_caches_init* is the
      last part of the matched symbol list. I've checked that this happens on
      the "3.19.0-generic #18-Ubuntu" kernel binary.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naohiro Aota <naota@elisp.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150616115057.19906.5502.stgit@localhost.localdomain
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d350bd57
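
      A sketch of the control-flow change described above, with hypothetical
      names: remember the last event that was actually added and print the
      usage hint from that, even when the final entries were skipped.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                const char *events[] = { "vfs_open", "vfs_load_quota_inode",
                                         "vfs_caches_init" /* skipped */ };
                const char *last_added = NULL;

                for (size_t i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
                        if (!strncmp(events[i], "vfs_caches_init", 15))
                                continue;          /* out of .text, skip it */
                        printf("  probe:%s\n", events[i]);
                        last_added = events[i];    /* remember the last success */
                }

                /* usage comes from the last added event, not the last entry */
                if (last_added)
                        printf("\n\tperf record -e probe:%s -aR sleep 1\n",
                               last_added);
                return 0;
        }
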
    • perf tools: Fix a problem when opening old perf.data with different byte order · b30b6172
      Committed by Wang Nan
      The following error occurs when trying to use 'perf report' on x86_64 to
      cross-analyze a perf.data file generated by an old perf on a big-endian
      machine:
      
       # perf report
       *** Error in `/home/w00229757/perf': free(): invalid next size (fast): 0x00000000032c99f0 ***
       ======= Backtrace: =========
       /lib64/libc.so.6(+0x6eeef)[0x7ff6ff7e2eef]
       /lib64/libc.so.6(+0x78cae)[0x7ff6ff7eccae]
       /lib64/libc.so.6(+0x79987)[0x7ff6ff7ed987]
       /path/to/perf[0x4ac734]
       /path/to/perf[0x4ac829]
       /path/to/perf(perf_header__process_sections+0x129)[0x4ad2c9]
       /path/to/perf(perf_session__read_header+0x2e1)[0x4ad9e1]
       /path/to/perf(perf_session__new+0x168)[0x4bd458]
       /path/to/perf(cmd_report+0xfa0)[0x43eb70]
       /path/to/perf[0x47adc3]
       /path/to/perf(main+0x5f6)[0x42fd06]
       /lib64/libc.so.6(__libc_start_main+0xf5)[0x7ff6ff795bd5]
       /path/to/perf[0x42fe35]
       ======= Memory map: ========
       [SNIP]
      
      The bug is in perf_event__attr_swap(). It swaps all fields in 'struct
      perf_event_attr' without checking whether the swapped fields exist or
      not. In addition, read_event_desc() allocates memory for the attr
      according to the size read from perf.data.

      Therefore, if the perf.data was collected by an old perf (without
      aux_watermark, for example), when perf_event__attr_swap() swaps
      attr->aux_watermark it destroys malloc's metadata.

      This patch introduces boundary checking in perf_event__attr_swap(). It
      adds the macros bswap_field_64 and bswap_field_32 to
      perf_event__attr_swap() so that it only swaps fields that actually
      exist (a sketch follows this entry).
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1434534999-85347-1-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      b30b6172
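
      A sketch of the boundary-checked swap: only byte-swap a field if it lies
      entirely within the attr size actually read from the file. The struct
      and macro below are simplified stand-ins for the ones added to
      perf_event__attr_swap(), not the real perf code.

        #include <byteswap.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* simplified stand-in for struct perf_event_attr */
        struct attr {
                uint32_t size;          /* size written by the old perf  */
                uint64_t sample_period; /* present in every version      */
                uint64_t aux_watermark; /* newer field, may be absent    */
        };

        /* swap a 64-bit field only if it fits inside the size read from disk */
        #define bswap_field_64(a, f)                                          \
                do {                                                          \
                        if ((a)->size >= offsetof(struct attr, f) + 8)        \
                                (a)->f = bswap_64((a)->f);                    \
                } while (0)

        int main(void)
        {
                /* pretend this attr came from an old big-endian perf.data */
                struct attr a = {
                        .size          = offsetof(struct attr, aux_watermark),
                        .sample_period = bswap_64(20003ULL),
                };

                bswap_field_64(&a, sample_period);  /* swapped              */
                bswap_field_64(&a, aux_watermark);  /* skipped: not in file */
                printf("%llu\n", (unsigned long long)a.sample_period);
                return 0;
        }
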
  4. 16 Jun, 2015 (8 commits)
  5. 13 Jun, 2015 (1 commit)
    • perf probe: Cut off the gcc optimization postfixes from function name · 35a23ff9
      Committed by Masami Hiramatsu
      Cut off the postfixes which gcc adds to optimized routines from the
      event name automatically generated from the symbol name, since
      *probe-events doesn't accept them (a sketch follows this entry).  Those
      symbols will be used if we don't use debuginfo to find the target
      functions.
      
      E.g. without this fix:
        -----
        # perf probe -va alloc_buf.isra.23
        probe-definition(0): alloc_buf.isra.23
        symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
        [...]
        Opening /sys/kernel/debug/tracing/kprobe_events write=1
        Added new event:
        Writing event: p:probe/alloc_buf.isra.23 _text+4869328
        Failed to write event: Invalid argument
          Error: Failed to add events. Reason: Invalid argument (Code: -22)
        -----
      With this fix:
        -----
        perf probe -va alloc_buf.isra.23
        probe-definition(0): alloc_buf.isra.23
        symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
        [...]
        Opening /sys/kernel/debug/tracing/kprobe_events write=1
        Added new event:
        Writing event: p:probe/alloc_buf _text+4869328
          probe:alloc_buf      (on alloc_buf.isra.23)
      
        You can now use it in all perf tools, such as:
      
        	perf record -e probe:alloc_buf -aR sleep 1
      
        -----
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naohiro Aota <naota@elisp.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150612050820.20548.41625.stgit@localhost.localdomain
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      35a23ff9
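
      A sketch of the trimming described above: cut the generated event name
      at the first '.', so 'alloc_buf.isra.23' becomes 'alloc_buf'. The helper
      below is a hypothetical stand-in, not perf's actual routine.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* drop the ".isra.N" / ".constprop.N" style suffix gcc appends */
        static char *event_name_from_symbol(const char *sym)
        {
                char *name = strdup(sym);
                char *dot  = strchr(name, '.');

                if (dot)
                        *dot = '\0';   /* probe-events reject dots in names */
                return name;
        }

        int main(void)
        {
                char *name = event_name_from_symbol("alloc_buf.isra.23");

                /* prints: p:probe/alloc_buf _text+4869328 */
                printf("p:probe/%s _text+4869328\n", name);
                free(name);
                return 0;
        }
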
  6. 11 Jun, 2015 (2 commits)
  7. 10 Jun, 2015 (1 commit)
    • perf tools: Avoid possible race condition in copyfile() · d7c72606
      Committed by Milos Vyletel
      Use unique temporary files when copying to the buildid dir to prevent
      races in case multiple instances are trying to copy the same file. This
      is done by (see the sketch after this entry):

      - creating a template of the form <path>/.<filename>.XXXXXX, where the
        suffix is used by mkstemp() to create a unique file
      - changing the file mode
      - copying the content
      - if successful, linking the temp file to the target file
      - unlinking the temp file
      
      At this point the only file left at the target path should be the
      desired one, either created by us or by another instance if we raced.
      This should also prevent not-yet-fully-copied files from being visible
      to other perf instances that could try to parse them.
      
      On top of that, slow_copyfile no longer needs to deal with the file mode
      when creating the file, since the temporary file is already created and
      its mode is set.
      
      Successfully tested by myself by running perf record, archive and
      reading the data on another system, and by running perf buildid-cache
      on the perf binary itself. I also reverted the fix from 0635b0f7 to
      expose the previously fixed race with EEXIST, and the recreator test
      passed successfully.
      Signed-off-by: Milos Vyletel <milos@redhat.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1433775018-19868-1-git-send-email-milos@redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d7c72606
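
      A sketch of the copy-then-link scheme from the list above, assuming the
      standard mkstemp()/link()/unlink() calls are enough to show the idea;
      the helper name, the simplified temp-file template and the trimmed
      error handling are not perf's actual code.

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* copy 'from' to 'to' without ever exposing a half-written target */
        static int copyfile_atomic(const char *from, const char *to, mode_t mode)
        {
                char tmp[4096];
                char buf[8192];
                ssize_t n;
                int ret = -1;

                /* unique temp file next to the target; the real code hides it
                 * as <dir>/.<name>.XXXXXX */
                snprintf(tmp, sizeof(tmp), "%s.XXXXXX", to);
                int tmpfd = mkstemp(tmp);
                if (tmpfd < 0)
                        return -1;
                fchmod(tmpfd, mode);            /* set the final mode up front */

                int src = open(from, O_RDONLY);
                if (src >= 0) {
                        while ((n = read(src, buf, sizeof(buf))) > 0)
                                if (write(tmpfd, buf, n) != n)
                                        break;
                        ret = (n == 0) ? 0 : -1;
                        close(src);
                }
                close(tmpfd);

                if (!ret && link(tmp, to) && errno != EEXIST)
                        ret = -1;  /* EEXIST: another instance won the race */
                unlink(tmp);       /* only the target path is left behind  */
                return ret;
        }

        int main(int argc, char **argv)
        {
                return (argc == 3 &&
                        !copyfile_atomic(argv[1], argv[2], 0644)) ? 0 : 1;
        }
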
  8. 08 Jun, 2015 (6 commits)
  9. 07 Jun, 2015 (2 commits)
    • perf tools: handle PERF_RECORD_LOST_SAMPLES · c4937a91
      Committed by Kan Liang
      This patch modifies the perf tool to handle the new RECORD type,
      PERF_RECORD_LOST_SAMPLES.
      
      The number of lost-sample events is stored in
      .nr_events[PERF_RECORD_LOST_SAMPLES]. The exact number of samples
      which the kernel dropped is stored in total_lost_samples.
      
      When the percentage of dropped samples is greater than 5%, a warning
      is printed (a sketch of the check follows this entry).
      
      Here are some examples:
      
      Eg 1, Recording different frequently-occurring events is safe with the
            patch. Only a very low drop rate is associated with such actions.
      
      $ perf record -e '{cycles:p,instructions:p}' -c 20003 --no-time ~/tchain ~/tchain
      
      $ perf report -D | tail
                SAMPLE events:     120243
                 MMAP2 events:          5
          LOST_SAMPLES events:         24
        FINISHED_ROUND events:         15
      cycles:p stats:
                 TOTAL events:      59348
                SAMPLE events:      59348
      instructions:p stats:
                 TOTAL events:      60895
                SAMPLE events:      60895
      
      $ perf report --stdio --group
       # To display the perf.data header info, please use --header/--header-only options.
       #
       #
       # Total Lost Samples: 24
       #
       # Samples: 120K of event 'anon group { cycles:p, instructions:p }'
       # Event count (approx.): 24048600000
       #
       #         Overhead  Command      Shared Object     Symbol
       # ................  ...........  ................
       ..................................
       #
          99.74%  99.86%  tchain_edit  tchain_edit       [.] f3
           0.09%   0.02%  tchain_edit  tchain_edit       [.] f2
           0.04%   0.00%  tchain_edit  [kernel.vmlinux]  [k] ixgbe_read_reg
      
      Eg 2: Recording the same thing multiple times can lead to a high drop
            rate, but it is not a useful configuration.
      
      $ perf record -e '{cycles:p,cycles:p}' -c 20003 --no-time ~/tchain
      Warning: Processed 600592 samples and lost 99.73% samples!
      [perf record: Woken up 148 times to write data]
      [perf record: Captured and wrote 36.922 MB perf.data (1206322 samples)]
      [perf record: Woken up 1 times to write data]
      [perf record: Captured and wrote 0.121 MB perf.data (1629 samples)]
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1431285195-14269-9-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4937a91
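
      A sketch of the 5% warning check described above, with hypothetical
      counter names; the exact accounting inside perf may differ:

        #include <stdint.h>
        #include <stdio.h>

        /* warn when more than 5% of samples were dropped by the kernel */
        static void warn_lost_samples(uint64_t samples, uint64_t lost_samples)
        {
                uint64_t total = samples + lost_samples;
                double drop;

                if (total == 0)
                        return;

                drop = 100.0 * lost_samples / total;
                if (drop > 5.0)
                        fprintf(stderr,
                                "Warning: Processed %llu samples and lost %.2f%% samples!\n",
                                (unsigned long long)total, drop);
        }

        int main(void)
        {
                warn_lost_samples(120243, 24);     /* low rate: stays silent      */
                warn_lost_samples(1629, 598963);   /* ~99.73%: prints the warning */
                return 0;
        }
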
    • perf record: Add support for sampling indirect jumps · 5b68164d
      Committed by Stephane Eranian
      This patch adds support for a new branch sampling type, indirect jumps:

        perf record -j ind_jmp .......

      It enables analysis of indirect jump targets. It requires kernel and
      possibly hardware support to operate correctly (a sketch of the
      requested attr bit follows this entry).
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ Fixup against: f00898f4 (perf tools: Move branch option parsing to own file) ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@redhat.com
      Cc: dsahern@gmail.com
      Cc: jolsa@redhat.com
      Cc: kan.liang@intel.com
      Cc: namhyung@kernel.org
      Link: http://lkml.kernel.org/r/1431637800-31061-4-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5b68164d
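
      A sketch of what '-j ind_jmp' ultimately asks the kernel for: the
      PERF_SAMPLE_BRANCH_IND_JUMP bit in attr.branch_sample_type. This
      assumes a uapi perf_event.h new enough to define that constant;
      perf's own option-table plumbing is not shown.

        #include <linux/perf_event.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                struct perf_event_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.size        = sizeof(attr);
                attr.type        = PERF_TYPE_HARDWARE;
                attr.config      = PERF_COUNT_HW_CPU_CYCLES;
                attr.sample_type = PERF_SAMPLE_BRANCH_STACK;

                /* sample only indirect jumps; needs kernel and PMU support */
                attr.branch_sample_type = PERF_SAMPLE_BRANCH_IND_JUMP |
                                          PERF_SAMPLE_BRANCH_USER;

                printf("branch_sample_type = 0x%llx\n",
                       (unsigned long long)attr.branch_sample_type);
                return 0;
        }
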
  10. 03 Jun, 2015 (4 commits)
    • perf tools: Deal with kernel module names in '[]' correctly · 1f121b03
      Committed by Wang Nan
      Before patch ba92732e ('perf kmaps: Check kmaps to make code more
      robust'), 'perf report' and 'perf annotate' would segfault if the trace
      data contained kernel module information like this:
      
       # perf report -D -i ./perf.data
       ...
       0 0 0x188 [0x50]: PERF_RECORD_MMAP -1/0: [0xffffffbff1018000(0xf068000) @ 0]: x [test_module]
       ...
      
       # perf report -i ./perf.data --objdump=/path/to/objdump --kallsyms=/path/to/kallsyms
      
       perf: Segmentation fault
       -------- backtrace --------
       /path/to/perf[0x503478]
       /lib64/libc.so.6(+0x3545f)[0x7fb201f3745f]
       /path/to/perf[0x499b56]
       /path/to/perf(dso__load_kallsyms+0x13c)[0x49b56c]
       /path/to/perf(dso__load+0x72e)[0x49c21e]
       /path/to/perf(map__load+0x6e)[0x4ae9ee]
       /path/to/perf(thread__find_addr_map+0x24c)[0x47deec]
       /path/to/perf(perf_event__preprocess_sample+0x88)[0x47e238]
       /path/to/perf[0x43ad02]
       /path/to/perf[0x4b55bc]
       /path/to/perf(ordered_events__flush+0xca)[0x4b57ea]
       /path/to/perf[0x4b1a01]
       /path/to/perf(perf_session__process_events+0x3be)[0x4b428e]
       /path/to/perf(cmd_report+0xf11)[0x43bfc1]
       /path/to/perf[0x474702]
       /path/to/perf(main+0x5f5)[0x42de95]
       /lib64/libc.so.6(__libc_start_main+0xf4)[0x7fb201f23bd4]
       /path/to/perf[0x42dfc4]
      
      This is because __kmod_path__parse treats names with a leading '[' as
      the kernel name instead of as kernel module names.

      If perf.data contains build information and the buildid of such modules
      can be found, their dso->kernel will be set to DSO_TYPE_KERNEL by
      __event_process_build_id(), not to a kernel module.

      It will then be passed to dso__load() -> dso__load_kernel_sym() ->
      dso__load_kcore() if --kallsyms is provided.

      The referred-to patch adds a NULL pointer check to avoid the segfault.
      However, such kernel modules are still processed incorrectly.

      This patch fixes __kmod_path__parse so that it treats names like
      '[test_module]' as kernel modules (a sketch of the naming rule follows
      this entry).

      kmod-path.c is also updated to reflect the above changes.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/r/1433321541-170245-1-git-send-email-wangnan0@huawei.com
      [ Fixed the merge with 0443f36b ("perf machine: Fix the search
        for the kernel DSO on the unified list") ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      1f121b03
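
      A sketch of the naming rule the fix implements, with a hypothetical
      classifier rather than the real __kmod_path__parse: a bracketed name is
      a kernel module unless it is one of the special kernel/runtime maps.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        static bool is_kernel_module_name(const char *name)
        {
                static const char *special[] = {
                        "[kernel.kallsyms]", "[vdso]", "[vsyscall]",
                };

                if (name[0] != '[')
                        return strstr(name, ".ko") != NULL;

                for (size_t i = 0; i < sizeof(special) / sizeof(special[0]); i++)
                        if (strcmp(name, special[i]) == 0)
                                return false;   /* the kernel itself, not a module */

                return true;                    /* e.g. "[test_module]" */
        }

        int main(void)
        {
                printf("%d\n", is_kernel_module_name("[test_module]"));      /* 1 */
                printf("%d\n", is_kernel_module_name("[kernel.kallsyms]"));  /* 0 */
                return 0;
        }
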
    • tools: Move tools/perf/util/include/linux/{list.h,poison.h} to tools/include · 4fc62a89
      Committed by Wang Nan
      This patch moves list.h from tools/perf/util/include/linux/list.h to
      tools/include/linux/list.h so that other libraries can use the macros in
      it, like libbpf, which will be introduced by further patches. Since
      list.h depends on poison.h, poison.h is also moved.

      Both files use relative paths, so one '..' is removed from each header
      to suit the new directory.
      
      MANIFEST is also updated for 'make perf-*-src-pkg'.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kaixu Xia <xiakaixu@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1433144296-74992-3-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      4fc62a89
    • perf tools: Move linux/kernel.h to tools/include · 37fbe0a4
      Committed by Wang Nan
      This patch moves kernel.h from tools/perf/util/include/linux/kernel.h
      to tools/include/linux/kernel.h so that other libraries can use the
      macros in it, like libbpf, which will be introduced by further patches.
      
      MANIFEST is also updated for 'make perf-*-src-pkg'.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kaixu Xia <xiakaixu@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1433144296-74992-2-git-send-email-wangnan0@huawei.com
      [ Fixed up the ifdef guard to match other entries in tools/include/linux ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      37fbe0a4
    • perf machine: Fix the search for the kernel DSO on the unified list · 0443f36b
      Committed by Arnaldo Carvalho de Melo
      When unifying the user_dsos and kernel_dsos lists, a bug was introduced
      by inverting the check for dso->kernel; fix it (a sketch of the intended
      check follows this entry).
      
      Fixes: 3d39ac53 ("perf machine: No need to have two DSOs lists")
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Link: http://lkml.kernel.org/n/tip-xnrnq0kams3s2z9ek1wjb506@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      0443f36b
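
      A sketch of the kind of inversion being fixed, with hypothetical types:
      when searching the unified DSO list for the kernel DSO, the predicate
      must select entries where dso->kernel is set, not skip them.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* hypothetical, minimal stand-in for perf's struct dso */
        struct dso {
                const char *name;
                bool        kernel;
        };

        /* correct: return the first DSO marked as kernel; the regression
         * effectively tested !dsos[i].kernel here */
        static struct dso *find_kernel_dso(struct dso *dsos, int n)
        {
                for (int i = 0; i < n; i++)
                        if (dsos[i].kernel)
                                return &dsos[i];
                return NULL;
        }

        int main(void)
        {
                struct dso list[] = {
                        { "libc.so",           false },
                        { "[kernel.kallsyms]", true  },
                };
                struct dso *k = find_kernel_dso(list, 2);

                printf("%s\n", k ? k->name : "(none)");
                return 0;
        }
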