1. 22 Sep 2016, 3 commits
  2. 21 Sep 2016, 6 commits
    • perf hists: Use bigger buffer for stdio headers · d5278220
      Jiri Olsa committed
      With the node column on servers with many CPUs we can run out of stdio
      header space quite quickly. Enlarge the header buffer.
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1474290610-23241-5-git-send-email-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d5278220
    • perf evsel: Remove superfluous initialization of weight · 82deb8a2
      Jiri Olsa committed
      Remove the superfluous initialization of weight; it is already set to 0 by
      the memset.
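
      As an illustration of why the store is redundant, a minimal standalone C
      sketch (the struct and field names are hypothetical, not the real perf
      ones): memset() already zeroes every member, so assigning 0 afterwards
      changes nothing.

        #include <stdio.h>
        #include <string.h>

        struct sample {                 /* hypothetical stand-in for the perf sample */
            unsigned long long weight;
            unsigned long long period;
        };

        int main(void)
        {
            struct sample s;

            memset(&s, 0, sizeof(s));   /* zeroes weight and period */
            /* s.weight = 0;  <-- superfluous, memset() already did it */

            printf("weight=%llu period=%llu\n", s.weight, s.period);
            return 0;
        }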
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1474290610-23241-3-git-send-email-jolsa@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      82deb8a2
    • Merge tag 'perf-core-for-mingo-20160920' of... · 89f1c2c5
      Ingo Molnar committed
      Merge tag 'perf-core-for-mingo-20160920' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      User visible changes:
      
      - Support event group view with hierarchy mode in 'perf top' and 'perf report'
        (Namhyung Kim)
      
        e.g.:
      
        $ perf record -e '{cycles,instructions}' make
        $ perf report --hierarchy --stdio
        ...
        #               Overhead  Command / Shared Object / Symbol
        # ......................  ..................................
        ...
            25.74%  27.18%        sh
               19.96%  24.14%        libc-2.24.so
                  9.55%  14.64%        [.] __strcmp_sse2
                  1.54%   0.00%        [.] __tfind
                  1.07%   1.13%        [.] _int_malloc
                  0.95%   0.00%        [.] __strchr_sse2
                  0.89%   1.39%        [.] __tsearch
                  0.76%   0.00%        [.] strlen
      
      - Fix the dwarf regs table for x86_64, adding a missing % to the "%di"
        register, noticed with a failing 'perf test bpf' (Arnaldo Carvalho de Melo)
      
      - Fix handling of mmap parameters in the 'perf trace' beautifier on
        architectures that don't have the same mappings as x86_64 (Wang Nan)
      
      - Handle hugetlb mappings on older systems running new kernels (Wang Nan)
      
      - Resolve 'call' operands in 'annotate', that when using /proc/kcore
        were appearing just as hexadecimal addresses, to function names
        (Arnaldo Carvalho de Melo)
      
      - Fix width computation for srcline sort entry (Jiri Olsa)
      
      - Do not ignore call instruction with indirect target in 'annotate'
        (Ravi Bangoria)
      
      - Handle MADV_FREE in the madvise 'trace' beautifier (Wang Nan)
      
      - Fix build of 'perf trace' mman beautifier in !x86_64 (Wang Nan)
      
      Infrastructure changes:
      
      - Add infrastructure for PMU-specific configuration, allowing config
        variables to be passed directly to the kernel PMU driver by prefixing
        them with '@'; part of a larger series to support CoreSight (Mathieu Poirier)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      89f1c2c5
    • perf symbols: Do not open device files · 3c028a0c
      Jiri Olsa committed
      dso__read_binary_type_filename() gets the dso's file name to open. We need
      to check that it is a regular file before trying to open it, otherwise we
      might get stuck on a device file.
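
      A minimal userspace sketch of the check being described, using stat(2)
      and S_ISREG() before open(2); the file name below is just for
      demonstration:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Open 'filename' only if it is a regular file; refuse devices, fifos, etc. */
        static int open_regular_file(const char *filename)
        {
            struct stat st;

            if (stat(filename, &st) < 0)
                return -1;
            if (!S_ISREG(st.st_mode)) {
                fprintf(stderr, "%s: not a regular file, skipping\n", filename);
                return -1;
            }
            return open(filename, O_RDONLY);
        }

        int main(void)
        {
            int fd = open_regular_file("/dev/zero");    /* would be refused */

            if (fd >= 0)
                close(fd);
            return 0;
        }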
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20160920161245.GA8995@krava
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      3c028a0c
    • perf hists: Factor out hists__reset_column_width() · e3b60bc9
      Namhyung Kim committed
      The stdio and TUI code have the same code to reset the hpp format column
      width. Factor it out into a new function.
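
      A hedged sketch of the factoring: both front ends run the same reset
      loop, so it moves into one shared helper (the types below are
      illustrative stand-ins, not the real perf structures):

        #include <stddef.h>

        struct fmt {                    /* illustrative stand-in for a hpp format entry */
            int width;
            struct fmt *next;
        };

        /* Shared helper: reset the column width of every format in the list. */
        static void hists__reset_column_width(struct fmt *list)
        {
            for (struct fmt *f = list; f; f = f->next)
                f->width = 0;
        }

        /* The stdio and TUI front ends now both call the helper instead of
         * open-coding the same loop. */
        static void stdio_setup(struct fmt *list) { hists__reset_column_width(list); }
        static void tui_setup(struct fmt *list)   { hists__reset_column_width(list); }

        int main(void)
        {
            struct fmt b = { 8, NULL }, a = { 12, &b };

            stdio_setup(&a);
            tui_setup(&a);
            return 0;
        }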
      Suggested-and-Acked-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20160920053025.13989-2-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      e3b60bc9
    • perf ui/tui: Reset output width for hierarchy · 5ff3e7a2
      Namhyung Kim committed
      When the --hierarchy option is used, each entry has its own hpp_list to show
      the result, but it missed updating the width of each column.
      
      Before:
      
        - 46.29% 48.12%        netctl-auto
           + 31.44% 29.25%        [kernel.vmlinux]
           + 8.52% 11.55%        libc-2.22.so
           + 5.19% 6.91%        bash
        + 10.75% 11.83%        wpa_cli
        + 8.25% 2.23%        swapper
        + 6.45% 5.40%        tr
        + 4.81% 8.09%        awk
        + 4.15% 2.85%        firefox
        + 3.86% 2.53%        sh
      
      After:
      
        -  46.29%  48.12%        netctl-auto
            +  31.44%  29.25%        [kernel.vmlinux]
            +   8.52%  11.55%        libc-2.22.so
            +   5.19%   6.91%        bash
        +  10.75%  11.83%        wpa_cli
        +   8.25%   2.23%        swapper
        +   6.45%   5.40%        tr
        +   4.81%   8.09%        awk
        +   4.15%   2.85%        firefox
        +   3.86%   2.53%        sh
      
      Committer note:
      
      Full testing instructions:
      
      1) Record with an event group:
      
        $ perf record -e '{cycles,instructions}' make -j4
      
      2) Use report in hierarchy mode, to get a few expanded trees on
         the same screen, use --percent-limit:
      
        $ perf report --hierarchy --percent-limit 0.5
      
      Samples: 103K of event 'anon group { cycles:u, instructions:u }',
      Event count (approx.): 57317631725
               Overhead        Command / Shared Object / Symbol        ◆
      -  58.89%  55.12%        cc1                                     ▒
         -  50.26%  48.10%        cc1                                  ▒
                3.61%   5.13%        [.] _cpp_lex_token                ▒
                2.58%   0.78%        [.] ht_lookup_with_hash           ▒
                1.31%   1.30%        [.] ggc_internal_alloc            ▒
                1.08%   2.25%        [.] get_combined_adhoc_loc        ▒
                1.01%   1.95%        [.] ira_init                      ▒
                0.96%   1.78%        [.] linemap_position_for_column   ▒
                0.65%   1.01%        [.] cpp_get_token_with_location   ▒
         -   7.52%   6.58%        libc-2.23.so                         ▒
                1.70%   1.78%        [.] _int_malloc                   ▒
                0.69%   0.75%        [.] _int_free                     ▒
                0.67%   0.42%        [.] malloc_consolidate            ▒
         -   0.58%   0.42%        ld-2.23.so                           ▒
                                     no entry >= 0.50%                 ▒
         -   0.52%   0.03%        [kernel.vmlinux]                     ▒
                                     no entry >= 0.50%                 ▒
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: 1b2dbbf4 ("perf hists: Use own hpp_list for hierarchy mode")
      Link: http://lkml.kernel.org/r/20160920053025.13989-1-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      5ff3e7a2
  3. 20 Sep 2016, 5 commits
    • perf annotate: Resolve 'call' operands to function names · 5f62d4fd
      Arnaldo Carvalho de Melo committed
      Before this patch the '_raw_spin_lock_irqsave' and 'update_rq_clock' operands
      appeared only as hexadecimal numbers:
      
        update_blocked_averages  /proc/kcore
             │       push   %r12
             │       push   %rbx
             │       and    $0xfffffffffffffff0,%rsp
             │       sub    $0x40,%rsp
             │       add    -0x662cac00(,%rdi,8),%rax
             │       mov    %rax,%rbx
             │       mov    %rax,%rdi
             │       mov    %rax,0x38(%rsp)
             │     → callq  _raw_spin_lock_irqsave
             │       mov    %rbx,%rdi
             │       mov    %rax,0x30(%rsp)
             │     → callq  update_rq_clock
             │       mov    0x8d0(%rbx),%rax
             │       lea    0x8d0(%rbx),%r11
      
      To check that all is right one can always use the 'o' hotkey and see
      the original objdump -dS output, which for this case is:
      
        update_blocked_averages  /proc/kcore
             │ffffffff990d5489:   push   %r12
             │ffffffff990d548b:   push   %rbx
             │ffffffff990d548c:   and    $0xfffffffffffffff0,%rsp
             │ffffffff990d5490:   sub    $0x40,%rsp
             │ffffffff990d5494:   add    -0x662cac00(,%rdi,8),%rax
             │ffffffff990d549c:   mov    %rax,%rbx
             │ffffffff990d549f:   mov    %rax,%rdi
             │ffffffff990d54a2:   mov    %rax,0x38(%rsp)
             │ffffffff990d54a7: → callq  0xffffffff997eb7a0
             │ffffffff990d54ac:   mov    %rbx,%rdi
             │ffffffff990d54af:   mov    %rax,0x30(%rsp)
             │ffffffff990d54b4: → callq  0xffffffff990c7720
             │ffffffff990d54b9:   mov    0x8d0(%rbx),%rax
             │ffffffff990d54c0:   lea    0x8d0(%rbx),%r11
      
      Use the 'h' hotkey to see a list of available hotkeys.
      
      More work is needed to cover the operands of other instructions, such as
      'mov', where variable names can be resolved, etc.
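
      A hedged sketch of the idea: take the target address parsed from a
      'callq' and look it up in a small kallsyms-like table, printing the name
      when found and falling back to the raw address otherwise (the table and
      lookup below are simplified stand-ins for what perf does with the
      symbol's map):

        #include <stdio.h>

        struct sym { unsigned long long addr, size; const char *name; };

        /* Tiny stand-in for a kallsyms-like table. */
        static const struct sym syms[] = {
            { 0xffffffff990c7720ULL, 0x200, "update_rq_clock" },
            { 0xffffffff997eb7a0ULL, 0x100, "_raw_spin_lock_irqsave" },
        };

        static const char *resolve(unsigned long long addr)
        {
            for (size_t i = 0; i < sizeof(syms) / sizeof(syms[0]); i++)
                if (addr >= syms[i].addr && addr < syms[i].addr + syms[i].size)
                    return syms[i].name;
            return NULL;
        }

        int main(void)
        {
            unsigned long long target = 0xffffffff997eb7a0ULL; /* parsed from 'callq' */
            const char *name = resolve(target);

            if (name)
                printf("callq  %s\n", name);
            else
                printf("callq  %#llx\n", target);    /* fall back to the raw address */
            return 0;
        }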
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Chris Riyder <chris.ryder@arm.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Pawel Moll <pawel.moll@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Taeung Song <treeze.taeung@gmail.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-xqgtw9mzmzcjgwkis9kiiv1p@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      5f62d4fd
    • perf annotate: Pass the symbol's map/dso to the instruction parsers · bff5c306
      Arnaldo Carvalho de Melo committed
      So that things like:
      
             → callq  0xffffffff993e3230
      
      found while disassembling /proc/kcore can be beautified by later
      patches, which will resolve that address to a function by looking it up in
      /proc/kallsyms.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Chris Riyder <chris.ryder@arm.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Pawel Moll <pawel.moll@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Taeung Song <treeze.taeung@gmail.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-p76myuke4j7gplg54amaklxk@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      bff5c306
    • perf annotate: Do not ignore call instruction with indirect target · 88a7fcf9
      Ravi Bangoria committed
      Do not ignore a call instruction with an indirect target when it is already
      identified as a call. This is an extension of commit e8ea1561 ("perf
      annotate: Use raw form for register indirect call instructions") to
      generalize annotation for all instructions with indirect calls.
      
      This is needed for certain powerpc call instructions that use address in
      a register (such as bctrl, btarl, ...).
      
      Apart from that, when kcore is used to disassemble a function, all call
      instructions were ignored. This patch fixes that as a side effect by
      not ignoring them. For example:
      
      Before (with kcore):
             mov    %r13,%rdi
             callq  0xffffffff811a7e70
           ^ jmpq   64
             mov    %gs:0x7ef41a6e(%rip),%al
      
      After (with kcore):
             mov    %r13,%rdi
           > callq  0xffffffff811a7e70
           ^ jmpq   64
             mov    %gs:0x7ef41a6e(%rip),%al
      Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
      [Suggested about 'bctrl' instruction]
      Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Chris Riyder <chris.ryder@arm.com>
      Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Pawel Moll <pawel.moll@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Taeung Song <treeze.taeung@gmail.com>
      Link: http://lkml.kernel.org/r/1471611578-11255-5-git-send-email-ravi.bangoria@linux.vnet.ibm.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      88a7fcf9
    • perf hists: Fix width computation for srcline sort entry · f666ac0d
      Jiri Olsa committed
      Add the header size to the width computation for the srcline sort entry,
      because it is possible to get empty data with ':0', which sets a width
      of 2, lower than the width needed to display the column header.
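
      A minimal sketch of the computation being fixed: the column must be at
      least as wide as its header, so empty ':0' data (width 2) cannot shrink
      it below the header length (names are illustrative):

        #include <stdio.h>
        #include <string.h>

        static int column_width(const char *header, int data_width)
        {
            int header_width = (int)strlen(header);

            return data_width > header_width ? data_width : header_width;
        }

        int main(void)
        {
            /* ':0' alone would give a data width of 2, narrower than the header. */
            printf("width = %d\n", column_width("Source:Line", (int)strlen(":0")));
            return 0;
        }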
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1474290610-23241-62-git-send-email-jolsa@kernel.org
      [ Added declaration to sort.h ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f666ac0d
    • perf/x86/intel/pt: Add support for PTWRITE and power event tracing · 8ee83b2a
      Alexander Shishkin committed
      The Intel PT facility grew some new functionality:
      
        * PTWRITE packet carries the payload of the new PTWRITE instruction
          that can be used to instrument Intel PT traces with user-supplied
          data. Packets of this type are only generated if 'ptwrite' capability
          is set and PTWEn bit is set in the event attribute's config. Flow
          update packets (FUP) can be generated on PTWRITE packets if FUPonPTW
          config bit is set. Setting these bits is not allowed if 'ptwrite'
          capability is not set.
      
        * PWRE, PWRX, MWAIT, EXSTOP packets communicate core power management
          events. These depend on 'power_event_tracing' capability and are
          enabled by setting PwrEvtEn bit in the event attribute.
      
      Extend the driver capabilities and provide the proper sanity checks in the
      event validation function.
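
      A hedged sketch of the kind of sanity check described, in plain C with
      made-up capability and config bits (the real driver uses its own
      capability enumeration and attribute encoding):

        #include <errno.h>
        #include <stdio.h>

        #define CAP_PTWRITE        (1u << 0)    /* hypothetical capability bits */
        #define CAP_POWER_EVENT    (1u << 1)

        #define CFG_PTW            (1ull << 0)  /* hypothetical attr.config bits */
        #define CFG_FUP_ON_PTW     (1ull << 1)
        #define CFG_PWR_EVT        (1ull << 2)

        static int validate_config(unsigned int caps, unsigned long long config)
        {
            /* PTWRITE bits are only allowed if the 'ptwrite' capability is set. */
            if ((config & (CFG_PTW | CFG_FUP_ON_PTW)) && !(caps & CAP_PTWRITE))
                return -EINVAL;

            /* Power event packets depend on the 'power_event_tracing' capability. */
            if ((config & CFG_PWR_EVT) && !(caps & CAP_POWER_EVENT))
                return -EINVAL;

            return 0;
        }

        int main(void)
        {
            printf("%d\n", validate_config(CAP_PTWRITE, CFG_PWR_EVT)); /* rejected */
            return 0;
        }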
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: vince@deater.net
      Cc: eranian@google.com
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Link: http://lkml.kernel.org/r/20160916134819.1978-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8ee83b2a
  4. 19 Sep 2016, 2 commits
    • tools include: Add mman macros needed by perf for all arch · f82b7746
      Wang Nan committed
      Some macros required by tools/perf/trace/beauty/mmap.c are not supported
      on all architectures. For example, MAP_32BIT is defined on x86 only,
      and alpha doesn't define MADV_HWPOISON or MADV_SOFT_OFFLINE.
      
      This patch regenerates mman.h for each arch and defines these missing
      macros for perf. For missing MADV_* values, fall back to asm-generic/mman-common
      because they are used in a 'case ...' statement. For missing flags, define them to 0.
      
      Following is the script to generate this patch:
      
       macros=`cat $0 | awk 'V==1 {print}; /^# start macro list/ {V=1}'`
       rm `find ./tools/arch/ -name mman.h`
       for arch in `ls tools/arch`
       do
         [ -d tools/arch/$arch/include/uapi/asm ] || mkdir -p tools/arch/$arch/include/uapi/asm
         src=arch/$arch/include/uapi/asm/mman.h
         target=tools/arch/$arch/include/uapi/asm/mman.h.tmp
         real_target=tools/arch/$arch/include/uapi/asm/mman.h
         guard="TOOLS_ARCH_"`echo $arch | awk '{print toupper($0)}'`_UAPI_ASM_MMAN_FIX_H
         rm -f $target
      
         [ -f $src ] &&
         for m in $macros
         do
           if grep '#define[ \t]*'$m $src > /dev/null 2>&1
           then
             grep -h '#define[ \t]*'$m $src | sed 's/[ \t]*\/\*.*$//g' >> $target
           fi
         done
      
         if [ -f $src ]
         then
            grep '#include <asm-generic' $src >> $target
         else
            echo "#include <asm-generic/mman.h>" >> $target
         fi
      
         touch $real_target
         for m in $macros
         do
           if cat << EOF | gcc -Itools/arch/$arch/include -Itools/arch/$arch/include/uapi -Iinclude/ -Iinclude/uapi -E - | grep $m > /dev/null 2>&1
       #include <uapi/asm/mman.h.tmp>
       #include <uapi/linux/mman.h>
       $m
       EOF
         then
           echo "Fixing $m for $arch"
           echo "/* $m is undefined on $arch, fix it for perf */" >> $target
           if echo $m | grep '^MADV_' > /dev/null 2>&1
           then
             grep -h '#define[ \t]*'$m include/uapi/asm-generic/mman-common.h | sed 's/[ \t]*\/\*.*$//g' >> $target
           else
             echo "#define $m	0" >> $target
           fi
         fi
         done
      
         real_target=tools/arch/$arch/include/uapi/asm/mman.h
         echo '#ifndef '$guard > $real_target
         echo '#define '$guard >> $real_target
         cat $target | sed 's|asm-generic|uapi/asm-generic|g' >> $real_target
         echo '#endif' >> $real_target
         rm $target
         echo "$real_target"
       done
      
       exit 0
       # Following macros are extracted from:
       # tools/perf/trace/beauty/mmap.c
       #
       # start macro list
       MADV_DODUMP
       MADV_DOFORK
       MADV_DONTDUMP
       MADV_DONTFORK
       MADV_DONTNEED
       MADV_FREE
       MADV_HUGEPAGE
       MADV_HWPOISON
       MADV_MERGEABLE
       MADV_NOHUGEPAGE
       MADV_NORMAL
       MADV_RANDOM
       MADV_REMOVE
       MADV_SEQUENTIAL
       MADV_SOFT_OFFLINE
       MADV_UNMERGEABLE
       MADV_WILLNEED
       MAP_32BIT
       MAP_ANONYMOUS
       MAP_DENYWRITE
       MAP_EXECUTABLE
       MAP_FILE
       MAP_FIXED
       MAP_GROWSDOWN
       MAP_HUGETLB
       MAP_LOCKED
       MAP_NONBLOCK
       MAP_NORESERVE
       MAP_POPULATE
       MAP_PRIVATE
       MAP_SHARED
       MAP_STACK
       MAP_UNINITIALIZED
       MREMAP_FIXED
       MREMAP_MAYMOVE
       PROT_EXEC
       PROT_GROWSDOWN
       PROT_GROWSUP
       PROT_NONE
       PROT_READ
       PROT_SEM
       PROT_WRITE
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Tested-by: Kim Phillips <kim.phillips@arm.com>
      Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Fixes: 277cf08f ("perf trace beauty mmap: Fix defines for non !x86_64")
      Link: http://lkml.kernel.org/r/1473850649-83389-3-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f82b7746
    • perf trace beauty mmap: Add missing MADV_FREE · f752e90e
      Wang Nan committed
      tools/perf/trace/beauty/mmap.c forgets to handle MADV_FREE.
      This patch fixes it.
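
      A hedged sketch of what the missing handling amounts to: mapping the
      MADV_FREE constant to its name in a madvise behaviour table (the switch
      below is a simplified stand-in for the real beautifier):

        #include <stdio.h>
        #include <sys/mman.h>

        #ifndef MADV_FREE
        #define MADV_FREE 8    /* fallback for older headers; value from asm-generic/mman-common.h */
        #endif

        static const char *madv_behavior_name(int behavior)
        {
            switch (behavior) {
            case MADV_DONTNEED: return "DONTNEED";
            case MADV_WILLNEED: return "WILLNEED";
            case MADV_FREE:     return "FREE";      /* the previously missing entry */
            default:            return "UNKNOWN";
            }
        }

        int main(void)
        {
            printf("madvise behaviour %d -> MADV_%s\n",
                   MADV_FREE, madv_behavior_name(MADV_FREE));
            return 0;
        }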
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1473850649-83389-2-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f752e90e
  5. 14 Sep 2016, 11 commits
  6. 12 Sep 2016, 2 commits
    • perf hists browser: Fix event group display · d9ea48bc
      Namhyung Kim committed
      Milian reported that event group display on the TUI shows duplicated
      overhead.  This was due to a bug in calculating the hpp->buf position.
      hpp_advance() was called from __hpp__slsmg_color_printf() on the TUI, but
      it is already called from the hpp__call_print_fn macro in __hpp__fmt().
      The end result is that the print function returns the number of bytes it
      printed, but the buffer advanced by twice that length.
      
      This is generally not a problem since the buffer doesn't need to be
      accessed again.  But with an event group, the overhead needs to be printed
      multiple times, and hist_entry__snprintf_alignment() tries to fill the
      space in the buffer after it printed.  So it (brokenly) showed the last
      overhead again.
      
      The bug was there from the beginning, but I think it was only revealed
      when the alignment function was added.
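
      A standalone sketch of this bug class (names are illustrative): the print
      helper advances the buffer itself, and the caller advances it again by
      the returned length, so the buffer ends up advanced by twice what was
      printed:

        #include <stdio.h>

        struct hpp { char *buf; size_t size; };

        static void advance(struct hpp *hpp, int len)
        {
            hpp->buf += len;
            hpp->size -= len;
        }

        /* Buggy helper: prints and already advances the buffer... */
        static int print_and_advance(struct hpp *hpp, const char *s)
        {
            int len = snprintf(hpp->buf, hpp->size, "%s", s);

            advance(hpp, len);
            return len;
        }

        int main(void)
        {
            char out[64];
            struct hpp hpp = { out, sizeof(out) };

            /* ...and the caller advances again by the returned length,
             * leaving a gap of unused bytes between the two fields. */
            advance(&hpp, print_and_advance(&hpp, "25.74%  "));
            advance(&hpp, print_and_advance(&hpp, "27.18%  "));

            printf("advanced %zu bytes for 16 characters of output\n",
                   (size_t)(hpp.buf - out));
            return 0;
        }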
      Reported-by: Milian Wolff <milian.wolff@kdab.com>
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Fixes: 89fee709 ("perf hists: Do column alignment on the format iterator")
      Link: http://lkml.kernel.org/r/20160912061958.16656-2-namhyung@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d9ea48bc
    • perf probe: Fix dwarf regs table for x86_64 · 7a023fd2
      Arnaldo Carvalho de Melo committed
      In 293d5b43 ("perf probe: Support probing on offline cross-arch binary")
      DWARF register tables were introduced for many architectures, with the one for
      the "dx" register being broken for x86_64, which got noticed by the 'perf test
      bpf' testcase; this is the difference between a successful run and one that
      fails with the aforementioned patch:
      
        -Writing event: p:perf_bpf_probe/func _text+5197232 f_mode=+68(%di):x32 offset=%si:s64 orig=dx:s32
        -Failed to write event: Invalid argument
        -bpf_probe: failed to apply perf probe eventsFailed to add events selected by BPF
        +Writing event: p:perf_bpf_probe/func _text+5197232 f_mode=+68(%di):x32 offset=%si:s64 orig=%dx:s32
      
      Add the missing '%' to '%dx' to fix this.
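
      A hedged sketch of the shape of such a table and why the '%' matters:
      the probe event string is built straight from the table entry, so "dx"
      produces an argument the kernel rejects while "%dx" is accepted (the
      array below is illustrative, not the real perf table):

        #include <stdio.h>

        /* DWARF register number -> name as used in the probe event string. */
        static const char * const x86_64_dwarf_regs[] = {
            "%ax", "%dx" /* was "dx", missing the '%' */, "%cx", "%bx",
            "%si", "%di", "%bp", "%sp",
        };

        int main(void)
        {
            /* DWARF regnum 1 is %dx on x86_64. */
            printf("orig=%s:s32\n", x86_64_dwarf_regs[1]);
            return 0;
        }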
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Fixes: 293d5b43 ("perf probe: Support probing on offline cross-arch binary")
      Link: https://lkml.kernel.org/r/20160909145955.GC32585@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      7a023fd2
  7. 10 Sep 2016, 10 commits
    • perf/x86/intel/uncore: Add Skylake server uncore support · cd34cd97
      Kan Liang committed
      This patch implements the uncore monitoring driver for Skylake server.
      The uncore subsystem in Skylake server is similar to that of previous
      servers. There are some differences in config register encoding and PCI
      device IDs. In addition, Skylake introduces many new boxes to reflect the
      MESH architecture changes.
      
      The control registers for IIO and UPI have been extended to 64 bits. This
      patch also introduces event_mask_ext to handle the high 32-bit mask.
      
      The number of CHA boxes can vary between machines. This patch determines
      the CHA box count by counting the CHA register space during
      initialization at runtime.
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1471378190-17276-3-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cd34cd97
    • perf/x86/rapl: Enable Apollo Lake RAPL support · 2668c619
      Harry Pan committed
      This patch enables RAPL counter (energy consumption counter) support
      for Intel Apollo Lake (Goldmont) processors (Model 92):
      
      Unlike the ESU increment of Silvermont/Airmont, RAPL on Goldmont
      behaves like the Haswell microarchitecture, counting in 1/2^ESU joules,
      and supports the PP0/PP1/PKG/RAM power domains.
      
      For the ESU and power domains, refer to the Intel Software Developer's
      Manual, Vol. 3C, Order No. 325384, Table 35-12.
      
      Usage example:
      
        $ perf list
        $ perf stat -a -e power/energy-cores/,power/energy-pkg/ sleep 10
      Signed-off-by: Harry Pan <harry.pan@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: bp@alien8.de
      Cc: gs0622@gmail.com
      Cc: hpa@zytor.com
      Cc: srinivas.pandruvada@linux.intel.com
      Link: http://lkml.kernel.org/r/1473325738-730-1-git-send-email-harry.pan@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2668c619
    • 50069218
    • perf/x86/intel: Fix PEBSv3 record drain · 8ef9b845
      Peter Zijlstra committed
      Alexander hit the WARN_ON_ONCE(!event) on his Skylake while running
      the perf fuzzer.
      
      This means the PEBSv3 record included a status bit for an inactive
      event, something that _should_ not happen.
      
      Move the code that filters the status bits against our known PEBS
      events up a spot to guarantee we only deal with events we know about.
      
      Further, add "continue" statements after the WARN_ON_ONCE()s so that
      we will neither die nor generate bogus events in case we ever do hit
      them again.
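
      A hedged, userspace-only sketch of the two changes: mask the status word
      against the known events first, and 'continue' instead of falling through
      when a bit unexpectedly has no event behind it:

        #include <stdio.h>

        #define MAX_PEBS_EVENTS 8

        int main(void)
        {
            void *events[MAX_PEBS_EVENTS] = { (void *)1, NULL, (void *)1 };
            unsigned long known_mask = 0x03;    /* bits 0 and 1 belong to us */
            unsigned long status = 0x0b;        /* record also claims bit 3 */

            status &= known_mask;    /* filter against known PEBS events first */

            for (int bit = 0; bit < MAX_PEBS_EVENTS; bit++) {
                if (!(status & (1ul << bit)))
                    continue;
                if (!events[bit]) {
                    fprintf(stderr, "WARN: status bit %d has no event\n", bit);
                    continue;        /* don't die, don't emit a bogus sample */
                }
                printf("drain sample for event on bit %d\n", bit);
            }
            return 0;
        }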
      Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: stable@vger.kernel.org
      Fixes: a3d86542 ("perf/x86/intel/pebs: Add PEBSv3 decoding")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8ef9b845
    • perf/x86/intel/bts: Kill a silly warning · ef9ef3be
      Alexander Shishkin committed
      At the moment, intel_bts will WARN() out if there is more than one
      event writing to the same ring buffer, via SET_OUTPUT, and will only
      send data from one event to a buffer.
      
      There is no reason to keep this warning, so kill it.
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-6-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ef9ef3be
    • perf/x86/intel/bts: Fix BTS PMI detection · 4d4c4741
      Alexander Shishkin committed
      Since BTS doesn't have a dedicated PMI status bit, the driver needs to
      take extra care to check for the condition that triggers it to avoid
      spurious NMI warnings.
      
      Regardless of the local BTS context state, the only way of knowing that
      the NMI is ours is to compare the write pointer against the interrupt
      threshold.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-5-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4d4c4741
    • perf/x86/intel/bts: Fix confused ordering of PMU callbacks · a9a94401
      Alexander Shishkin committed
      The intel_bts driver is using a CPU-local 'started' variable to order
      callbacks and PMIs and make sure that AUX transactions don't get messed
      up. However, the ordering rules regarding this variable are a complete
      mess, which recently resulted in perf_fuzzer-triggered warnings and
      panics.
      
      The general ordering rule that this patch enforces is that this
      CPU-local variable be set only when the CPU-local AUX transaction is
      active; consequently, this variable is to be checked before the AUX
      related bits can be touched.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-4-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a9a94401
    • perf/core: Fix aux_mmap_count vs aux_refcount order · b79ccadd
      Alexander Shishkin committed
      The order of accesses to the ring buffer's aux_mmap_count and aux_refcount
      has to be preserved across its users, namely perf_mmap_close() and
      perf_aux_output_begin(), otherwise the inversion can result in the latter
      holding the last reference to the aux buffer and subsequently freeing
      it in atomic context, triggering a warning.
      
      > ------------[ cut here ]------------
      > WARNING: CPU: 0 PID: 257 at kernel/events/ring_buffer.c:541 __rb_free_aux+0x11a/0x130
      > CPU: 0 PID: 257 Comm: stopbug Not tainted 4.8.0-rc1+ #2596
      > Call Trace:
      >  [<ffffffff810f3e0b>] __warn+0xcb/0xf0
      >  [<ffffffff810f3f3d>] warn_slowpath_null+0x1d/0x20
      >  [<ffffffff8121182a>] __rb_free_aux+0x11a/0x130
      >  [<ffffffff812127a8>] rb_free_aux+0x18/0x20
      >  [<ffffffff81212913>] perf_aux_output_begin+0x163/0x1e0
      >  [<ffffffff8100c33a>] bts_event_start+0x3a/0xd0
      >  [<ffffffff8100c42d>] bts_event_add+0x5d/0x80
      >  [<ffffffff81203646>] event_sched_in.isra.104+0xf6/0x2f0
      >  [<ffffffff8120652e>] group_sched_in+0x6e/0x190
      >  [<ffffffff8120694e>] ctx_sched_in+0x2fe/0x5f0
      >  [<ffffffff81206ca0>] perf_event_sched_in+0x60/0x80
      >  [<ffffffff81206d1b>] ctx_resched+0x5b/0x90
      >  [<ffffffff81207281>] __perf_event_enable+0x1e1/0x240
      >  [<ffffffff81200639>] event_function+0xa9/0x180
      >  [<ffffffff81202000>] ? perf_cgroup_attach+0x70/0x70
      >  [<ffffffff8120203f>] remote_function+0x3f/0x50
      >  [<ffffffff811971f3>] flush_smp_call_function_queue+0x83/0x150
      >  [<ffffffff81197bd3>] generic_smp_call_function_single_interrupt+0x13/0x60
      >  [<ffffffff810a6477>] smp_call_function_single_interrupt+0x27/0x40
      >  [<ffffffff81a26ea9>] call_function_single_interrupt+0x89/0x90
      >  [<ffffffff81120056>] finish_task_switch+0xa6/0x210
      >  [<ffffffff81120017>] ? finish_task_switch+0x67/0x210
      >  [<ffffffff81a1e83d>] __schedule+0x3dd/0xb50
      >  [<ffffffff81a1efe5>] schedule+0x35/0x80
      >  [<ffffffff81128031>] sys_sched_yield+0x61/0x70
      >  [<ffffffff81a25be5>] entry_SYSCALL_64_fastpath+0x18/0xa8
      > ---[ end trace 6235f556f5ea83a9 ]---
      
      This patch puts the checks in perf_aux_output_begin() in the same order
      as that of perf_mmap_close().
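
      A hedged sketch of the ordering rule with C11 atomics standing in for
      the kernel primitives: the begin path checks the mmap count before taking
      the aux reference, mirroring the order in which the close path drops
      them, so it can never end up holding the last reference:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        struct ring_buffer {
            atomic_int aux_mmap_count;    /* dropped first by the close path */
            atomic_int aux_refcount;      /* dropped last by the close path */
        };

        static bool aux_output_begin(struct ring_buffer *rb)
        {
            /* Check the mmap count *before* touching the refcount. */
            if (atomic_load(&rb->aux_mmap_count) == 0)
                return false;                         /* being unmapped, back off */

            /* crude stand-in for the kernel's atomic_inc_not_zero() */
            if (atomic_fetch_add(&rb->aux_refcount, 1) == 0) {
                atomic_fetch_sub(&rb->aux_refcount, 1);
                return false;
            }
            return true;
        }

        int main(void)
        {
            struct ring_buffer rb = { 1, 1 };

            printf("begin: %s\n", aux_output_begin(&rb) ? "ok" : "refused");
            return 0;
        }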
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-3-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b79ccadd
    • perf/core: Fix a race between mmap_close() and set_output() of AUX events · 767ae086
      Alexander Shishkin committed
      In the mmap_close() path we need to stop all the AUX events that are
      writing data to the AUX area that we are unmapping, before we can
      safely free the pages. To determine if an event needs to be stopped,
      we're comparing its ->rb against the one that's getting unmapped.
      However, a SET_OUTPUT ioctl may turn up inside an AUX transaction
      and swizzle event::rb to some other ring buffer, but the transaction
      will keep writing data to the old ring buffer until the event gets
      scheduled out. At this point, mmap_close() will skip over such an
      event and will proceed to free the AUX area while it is still being
      used by this event, which will set off a warning in the mmap_close()
      path and cause memory corruption.
      
      To avoid this, always stop an AUX event before its ->rb is updated;
      this will release the (potentially) last reference on the AUX area
      of the buffer. If the event gets restarted, its new ring buffer will
      be used. If another SET_OUTPUT comes and switches it back to the
      old ring buffer that's getting unmapped, it's also fine: this
      ring buffer's aux_mmap_count will be zero and AUX transactions won't
      start any more.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-2-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      767ae086
    • perf/x86/amd/uncore: Prevent use after free · 7d762e49
      Sebastian Andrzej Siewior committed
      The recent conversion of the CPU hotplug support in the uncore driver
      introduced a regression due to the way the callbacks are invoked at
      initialization time.
      
      The old code called the prepare/starting/online function on each online cpu
      as a block. The new code registers the hotplug callbacks in the core for
      each state. The core invokes the callbacks at each registration on all
      online cpus.
      
      The code implicitly relied on the prepare/starting/online callbacks being
      called as a combo on a particular cpu, which was not obvious and completely
      undocumented.
      
      The resulting subtle wreckage happens due to the way the uncore code
      manages shared data structures for cpus which share an uncore resource in
      hardware. The sharing is determined in the cpu starting callback, but the
      prepare callback allocates per-cpu data for the upcoming cpu because
      potential sharing is unknown at this point. If the starting callback finds
      an online cpu which shares the hardware resource, it takes a refcount on the
      per-cpu data of that cpu and puts its own data structure into a
      'free_at_online' pointer of that shared data structure. The online callback
      frees that.
      
      With the old model this worked because in a starting callback only one
      not-yet-used structure (the one of the starting cpu) was available. The new
      code allocates the data structures for all cpus when the prepare callback is
      registered.
      
      Now the starting function iterates through all online cpus and looks for a
      data structure (skipping its own) which has a matching hardware id. The id
      member of the data structure is initialized to 0, but the hardware id can
      be 0 as well. The resulting wreckage is:
      
        CPU0 finds a matching id on CPU1, takes a refcount on CPU1 data and puts
        its own data structure into CPU1s data structure to be freed.
      
        CPU1 skips CPU0 because that data structure is allegedly its own unused
        one. It finds a matching id on CPU2, takes a refcount on CPU2 data and
        puts its own data structure into CPU2s data structure to be freed.
      
        ....
      
      Now the online callbacks are invoked.
      
        CPU0 has a pointer to CPU1s data and frees the original CPU0 data. So
        far so good.
      
        CPU1 has a pointer to CPU2s data and frees the original CPU1 data, which
        is still referenced by CPU0 ---> Booom
      
      So there are two issues to be solved here:
      
      1) The id field must be initialized at allocation time to a value which
         cannot be a valid hardware id, i.e. -1
      
         This prevents the above scenario, but now CPU1 and CPU2 both stick their
         own data structure into the free_at_online pointer of CPU0. So we leak
         CPU1s data structure.
      
      2) Fix the memory leak described in #1
      
         Instead of having a single pointer, use an hlist to enqueue the
         superfluous data structures, which are then freed by the first cpu
         invoking the online callback.
      
      Ideally we should know the sharing _before_ invoking the prepare callback,
      but that's way beyond the scope of this bug fix.
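
      A hedged sketch of the two fixes in plain C (names simplified): the
      hardware id starts out as -1 so an unassigned structure can never match,
      and superfluous structures are queued on a list instead of overwriting a
      single pointer, so none of them leak:

        #include <stdlib.h>

        struct uncore_box {
            int id;                          /* -1 until the starting callback assigns it */
            int refcount;
            struct uncore_box *next_free;    /* list head replaces the single
                                              * 'free_at_online' pointer */
        };

        static struct uncore_box *alloc_box(void)
        {
            struct uncore_box *box = calloc(1, sizeof(*box));

            if (!box)
                abort();
            box->id = -1;        /* fix #1: cannot collide with a real hw id of 0 */
            return box;
        }

        /* fix #2: enqueue instead of overwriting, so nothing is leaked */
        static void defer_free(struct uncore_box *shared, struct uncore_box *victim)
        {
            victim->next_free = shared->next_free;
            shared->next_free = victim;
        }

        static void free_deferred(struct uncore_box *shared)
        {
            while (shared->next_free) {
                struct uncore_box *box = shared->next_free;

                shared->next_free = box->next_free;
                free(box);
            }
        }

        int main(void)
        {
            struct uncore_box *shared = alloc_box();
            struct uncore_box *a = alloc_box(), *b = alloc_box();

            shared->id = 0;          /* a real hardware id may well be 0 */
            defer_free(shared, a);   /* both extra structures are queued... */
            defer_free(shared, b);
            free_deferred(shared);   /* ...and both are freed, none leak */
            free(shared);
            return 0;
        }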
      
      [ tglx: Rewrote changelog ]
      
      Fixes: 96b2bd38 ("perf/x86/amd/uncore: Convert to hotplug state machine")
      Reported-and-tested-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/20160909160822.lowgmkdwms2dheyv@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      7d762e49
  8. 09 Sep 2016, 1 commit
    • Merge tag 'perf-core-for-mingo-20160908' of... · 14520d63
      Ingo Molnar committed
      Merge tag 'perf-core-for-mingo-20160908' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      User visible changes:
      
      - Add branch stack / basic block info to 'perf annotate --stdio', where for
        each branch, we add an asm comment after the instruction with information on
        how often it was taken and predicted. See example with color output at:
      
          http://vger.kernel.org/~acme/perf/annotate_basic_blocks.png
      
        (Peter Zijlstra)
      
      - Only open an evsel in CPUs in its cpu map, fixing some use cases in
        systems with multiple PMUs with different CPU maps (Mark Rutland)
      
      - Fix handling of huge TLB maps, recognizing them as anonymous (Wang Nan)
      
      Infrastructure changes:
      
      - Remove the symbol filtering code, i.e. the callbacks passed to all functions
        that could end up loading a DSO symtab, simplifying the code, eventually
        allowing what we should have had since day one: removing the 'map' parameter
        from dso__load() functions (Arnaldo Carvalho de Melo)
      
      Arch specific build fixes:
      
      - Fix detached tarball build on powerpc, where we were still accessing a
        file outside tools/ (Ravi Bangoria)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      14520d63