1. 27 April 2016 (14 commits)
  2. 26 April 2016 (10 commits)
  3. 25 April 2016 (6 commits)
    • perf trace: Make --pf honour --min-stack too · 1df54290
      Arnaldo Carvalho de Melo authored
      To check deeply nested page fault callchains.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-wuji34xx003kr88nmqt6jkgf@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf trace: Make --event honour --min-stack too · 7ad35615
      Arnaldo Carvalho de Melo authored
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-shj0fazntmskhjild5i6x73l@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf script: Fix segfault when printing callchains · e557b674
      Chris Phlipot authored
      This fixes a bug caused by an uninitialized callchain cursor. The crash
      first appeared in:
      
      6f736735 ("perf evsel: Require that callchains be resolved before
      calling fprintf_{sym,callchain}")
      
      The callchain cursor is a struct that contains pointers which, when
      uninitialized, will cause unpredictable behavior (usually a crash)
      when trying to append to the callchain.
      
      The existing implementation has the following issues:
      
      1. The callchain cursor used is not initialized, resulting in
      	unpredictable behavior when used.
      2. The cursor is declared on the stack. Even if it is properly initialized,
      	the implementation will leak memory when the function returns,
      	since all the references to the callchain_nodes allocated by
      	callchain_cursor_append will be lost when the cursor goes out of
      	scope.
      3. Storing the cursor on the stack is inefficient. Even if memory is
      	properly freed when it goes out of scope, a performance penalty
      	will be incurred due to reallocation of callchain nodes.
      	callchain_cursor_append is designed to avoid these reallocations
      	when an existing cursor is reused.
      
      This patch fixes the crash by replacing cursor_callchain with a reference
      to the global callchain_cursor which also resolves all 3 issues mentioned
      above.
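
      A minimal C sketch of that pattern (simplified, hypothetical stand-ins,
      not perf's actual callchain API) shows why a persistent, zero-initialized
      cursor is both safe to append to and cheaper than a fresh stack-allocated
      cursor on every call:

        #include <stdlib.h>

        /* Hypothetical, simplified cursor; real perf uses callchain_cursor. */
        struct cc_node   { unsigned long ip; struct cc_node *next; };
        struct cc_cursor { struct cc_node *first, *curr; unsigned long nr; };

        /* Static storage is zero-initialized; a stack-declared cursor without
         * an initializer starts with garbage pointers, and even a zeroed one
         * would lose (leak) its nodes when it goes out of scope. */
        static struct cc_cursor cursor;

        static int cursor_append(struct cc_cursor *c, unsigned long ip)
        {
            struct cc_node *n = c->curr ? c->curr->next : c->first;

            if (!n) {                   /* allocate only when no node is reusable */
                n = calloc(1, sizeof(*n));
                if (!n)
                    return -1;
                if (c->curr)
                    c->curr->next = n;
                else
                    c->first = n;
            }
            n->ip = ip;
            c->curr = n;
            c->nr++;
            return 0;
        }

        static void cursor_reset(struct cc_cursor *c)
        {
            c->curr = NULL;             /* rewind, keep nodes for the next sample */
            c->nr = 0;
        }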
      
      How to reproduce the crash:
      
        $ perf record --call-graph=dwarf stress -t 1 -c 1
        $ perf script > /dev/null
        Segfault
      Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: 6f736735 ("perf evsel: Require that callchains be resolved before calling fprintf_{sym,callchain}")
      Link: http://lkml.kernel.org/r/1461119531-2529-1-git-send-email-cphlipot0@gmail.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf trace: Make --pf maj/min/all use callchains too · 0c3a6ef4
      Arnaldo Carvalho de Melo authored
      Forgot about page faults, a software event, when adding support for callchains,
      fix it:
      
        # trace --no-syscalls --pf maj --call dwarf
           0.000 ( 0.000 ms): Xorg/2068 majfault [sfbSegment1+0x0] => /usr/lib64/xorg/modules/drivers/intel_drv.so@0x11b490 (x.)
                                             sfbSegment1+0x0 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             fbPolySegment32+0x361 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             sna_poly_segment+0x743 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             damagePolySegment+0x77 (/usr/libexec/Xorg)
                                             ProcPolySegment+0xe7 (/usr/libexec/Xorg)
                                             Dispatch+0x25f (/usr/libexec/Xorg)
                                             dix_main+0x3c3 (/usr/libexec/Xorg)
                                             __libc_start_main+0xf0 (/usr/lib64/libc-2.22.so)
                                             _start+0x29 (/usr/libexec/Xorg)
           0.257 ( 0.000 ms): Xorg/2068 majfault [miZeroClipLine+0x0] => /usr/libexec/Xorg@0x18e830 (x.)
                                             miZeroClipLine+0x0 (/usr/libexec/Xorg)
                                             _fbSegment+0x2c0 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             sfbSegment1+0x67 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             fbPolySegment32+0x361 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             sna_poly_segment+0x743 (/usr/lib64/xorg/modules/drivers/intel_drv.so)
                                             damagePolySegment+0x77 (/usr/libexec/Xorg)
                                             ProcPolySegment+0xe7 (/usr/libexec/Xorg)
                                             Dispatch+0x25f (/usr/libexec/Xorg)
                                             dix_main+0x3c3 (/usr/libexec/Xorg)
                                             __libc_start_main+0xf0 (/usr/lib64/libc-2.22.so)
                                             _start+0x29 (/usr/libexec/Xorg)
      ^C#
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-8h6ssirw5z15qyhy2lwd6f89@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf trace: Extract evsel constructor from perf_evlist__add_pgfault · 0ae537cb
      Arnaldo Carvalho de Melo authored
      Prep work for next patches, where we'll need access to the created
      evsels, to possibly configure callchains.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-2pcgsgnkgellhlcao4aub8tu@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf buildid: Fix off-by-one in write_buildid() · 70a2cba9
      Andrey Ryabinin authored
      write_buildid() increments 'name_len' with the intention of taking the
      trailing zero byte into account. However, 'name_len' was already
      incremented in machine__write_buildid_table() before.  This leads to an
      out-of-bounds read in do_write():
      
        $ ./perf record sleep 0
        [ perf record: Woken up 1 times to write data ]
        =================================================================
        ==15899==ERROR: AddressSanitizer: global-buffer-overflow on address 0x00000099fc92 at pc 0x7f1aa9c7eab5 bp 0x7fff940f84d0 sp 0x7fff940f7c78
        READ of size 19 at 0x00000099fc92 thread T0
            #0 0x7f1aa9c7eab4  (/usr/lib/gcc/x86_64-pc-linux-gnu/5.3.0/libasan.so.2+0x44ab4)
            #1 0x649c5b in do_write util/header.c:67
            #2 0x649c5b in write_padded util/header.c:82
            #3 0x57e8bc in write_buildid util/build-id.c:239
            #4 0x57e8bc in machine__write_buildid_table util/build-id.c:278
        ...
      
        0x00000099fc92 is located 0 bytes to the right of global variable '*.LC99' defined in 'util/symbol.c' (0x99fc80) of size 18
          '*.LC99' is ascii string '[kernel.kallsyms]'
        ...
      
        Shadow bytes around the buggy address:
          0x00008012bf80: f9 f9 f9 f9 00 00 00 00 00 00 03 f9 f9 f9 f9 f9
        =>0x00008012bf90: 00 00[02]f9 f9 f9 f9 f9 00 00 00 00 00 05 f9 f9
          0x00008012bfa0: f9 f9 f9 f9 00 03 f9 f9 f9 f9 f9 f9 00 00 00 00
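
      The shape of the bug can be shown with a short, self-contained C sketch
      (hypothetical helper names, not the actual util/build-id.c code): the
      caller already counts the trailing NUL, and the callee adds one more
      byte, so one byte past the end of the string is read:

        #include <stdio.h>
        #include <string.h>

        /* stand-in for do_write()/write_padded(): emit exactly 'len' bytes */
        static void emit(const void *buf, size_t len)
        {
            fwrite(buf, 1, len, stdout);
        }

        static void write_buildid_name(const char *name, size_t name_len)
        {
            emit(name, name_len + 1);   /* BUG: name_len already includes '\0' */
        }

        int main(void)
        {
            const char *name = "[kernel.kallsyms]";  /* 17 chars + '\0' = 18 bytes */
            size_t len = strlen(name) + 1;           /* caller counts the NUL once */

            write_buildid_name(name, len);           /* reads 19 bytes of an 18-byte object */
            return 0;
        }
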
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1461053847-5633-1-git-send-email-aryabinin@virtuozzo.com
      [ Remove the off-by-one at the origin, to keep the len(s) == strlen(s) assumption ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  4. 23 April 2016 (10 commits)
    • Merge tag 'perf-core-for-mingo-20160419' of... · 67d61296
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo-20160419' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      Build fixes:
      
      - Fix 'perf trace' build when DWARF unwind isn't available (Arnaldo Carvalho de Melo)
      
      - Remove x86 references from arch-neutral Build, fixing it in !x86 arches,
        reported as breaking the build for powerpc64le in linux-next (Arnaldo Carvalho de Melo)
      
      Infrastructure changes:
      
      - Do memset() variable 'st' using the correct size in the jit code (Colin Ian King)
      
      - Fix postgresql ubuntu 'perf script' install instructions (Chris Phlipot)
      
      - Use callchain_param more thoroughly when checking how callchains were
        configured, eventually will be the only way to look for callchain parameters
        (Arnaldo Carvalho de Melo)
      
      - Fix some issues in the 'perf test kallsyms' entry (Arnaldo Carvalho de Melo)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/perf/rapl: Add missing Broadwell model · 31b84310
      Peter Zijlstra authored
      With the array aligned as per events/intel/core.c it was fairly
      obvious we missed one, add it in.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/perf/rapl: Reorder model numbers · c416e5aa
      Peter Zijlstra authored
      Re-order the model array to match the order in events/intel/core.c,
      to make it easier to spot gaps and such.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/rapl: Support Skylake RAPL domains · dcee75b3
      Srinivas Pandruvada authored
      Add Skylake client support for RAPL domains. In addition to the RAPL
      domains present in Broadwell clients, it supports the platform domain
      (aka PSys). The PSys domain controls the entire SoC instead of just a
      CPU package. Unlike the package domain, PSys support requires more than
      just a processor-level implementation: the other parts of the system
      need additional HW-level signaling, which OEMs need to support. When not
      supported, the energy counter register in the PSys domain returns 0.

      Also corrected an error in the comment for the GPU counter, which
      previously described it as the DRAM counter.
      
      Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      [ Converted to model_match stuff. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: jacob.jun.pan@linux.intel.com
      Cc: rjw@rjwysocki.net
      Link: http://lkml.kernel.org/r/1460930581-29748-2-git-send-email-srinivas.pandruvada@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Add ::write_backward attribute to perf event · 9ecda41a
      Wang Nan authored
      This patch introduces a 'write_backward' bit to perf_event_attr, which
      controls the direction of a ring buffer. When set, the corresponding
      ring buffer is written from end to beginning. This feature is designed to
      support reading from an overwritable ring buffer.
      
      A ring buffer is created by mapping a perf event fd. The kernel puts
      event records into the ring buffer, and user tooling like perf fetches
      them from the address returned by mmap(). To prevent racing between
      kernel and tooling, they communicate with each other through the 'head'
      and 'tail' pointers. The kernel maintains the 'head' pointer, pointing it
      to the next free area (the tail of the last record). Tooling maintains
      the 'tail' pointer, pointing it to the tail of the last consumed record
      (a record that has already been fetched). The kernel determines the
      available space in the ring buffer from these two pointers, to avoid
      overwriting unfetched records.
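
      As a rough illustration (a simplified sketch, not the kernel's actual
      code), the available space can be computed from the two free-running
      counters like this:

          /* 'size' is the ring-buffer size, a power of two; 'head' and 'tail'
           * are free-running byte counters read from the control page. */
          static unsigned long ring_space(unsigned long head, unsigned long tail,
                                          unsigned long size)
          {
              unsigned long used = head - tail;

              return size - used;   /* writing more than this would overwrite
                                       records the tooling has not fetched yet */
          }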
      
      By mapping without 'PROT_WRITE', an overwritable ring buffer is created.
      Unlike with a normal ring buffer, tooling is unable to maintain the
      'tail' pointer because writing is forbidden. Therefore, for this type of
      ring buffer, the kernel overwrites old records unconditionally, working
      like a flight recorder. This feature would be useful if reading from an
      overwritable ring buffer were as easy as reading from a normal ring
      buffer. However, there's an obscure problem.
      
      The following figure demonstrates a full overwritable ring buffer. In
      this figure, the 'head' pointer points to the end of the last record, and
      a long record 'E' is pending. For a normal ring buffer, the 'tail'
      pointer would have pointed to position (X), so the kernel knows there's
      no more space in the ring buffer. However, for an overwritable ring
      buffer, the kernel ignores the 'tail' pointer.
      
         (X)                              head
          .                                |
          .                                V
          +------+-------+----------+------+---+
          |A....A|B.....B|C........C|D....D|   |
          +------+-------+----------+------+---+
      
      Record 'A' is overwritten by event 'E':
      
            head
             |
             V
          +--+---+-------+----------+------+---+
          |.E|..A|B.....B|C........C|D....D|E..|
          +--+---+-------+----------+------+---+
      
      Now tooling decides to read from this ring buffer. However, neither of
      the two natural positions, 'head' and the start of the ring buffer,
      points to the head of a record. Even though the full ring buffer can be
      accessed by tooling, it is unable to find a position to start decoding.
      
      The first attempt to solve this problem, AFAIK, can be found in [1]. It
      makes the kernel maintain the 'tail' pointer, updating it when the ring
      buffer is half full. However, this approach introduces overhead to the
      fast path. Test results show a 1% overhead [2]. In addition, this method
      utilizes no more than 50% of the records.
      
      Another attempt can be found in [3], which puts the size of an event at
      the end of each record. This approach allows tooling to find records in
      a backward manner from the 'head' pointer by reading the size of a
      record from its tail. However, because of the alignment requirement, it
      needs 8 bytes to record the size of a record, which is a huge waste. Its
      performance is also not good, because more data needs to be written.
      This approach also introduces some extra branch instructions to the fast
      path.
      
      'write_backward' is a better solution to this problem.
      
      The following figure demonstrates the state of the overwritable ring
      buffer when 'write_backward' is set, before overwriting:
      
             head
              |
              V
          +---+------+----------+-------+------+
          |   |D....D|C........C|B.....B|A....A|
          +---+------+----------+-------+------+
      
      and after overwriting:
                                           head
                                            |
                                            V
          +---+------+----------+-------+---+--+
          |..E|D....D|C........C|B.....B|A..|E.|
          +---+------+----------+-------+---+--+
      
      In each situation, 'head' points to the beginning of the newest record.
      From this record, tooling can iterate over the full ring buffer and fetch
      records one by one.
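
      A sketch of that iteration (simplified; it assumes a power-of-two buffer
      size and glosses over records whose header wraps at the buffer end):

          #include <linux/perf_event.h>

          /* Walk a write_backward ring buffer from 'head', newest record first.
           * 'base' is the mmap'ed data area, 'size' its length (power of two),
           * 'head' the free-running counter read from the control page. */
          static void walk_backward_rb(void *base, unsigned long size,
                                       unsigned long head)
          {
              unsigned long offset = 0;

              while (offset < size) {
                  struct perf_event_header *hdr = (void *)((char *)base +
                                          ((head + offset) & (size - 1)));

                  if (!hdr->size)         /* reached the unwritten gap */
                      break;
                  /* ... decode the record at 'hdr' ... */
                  offset += hdr->size;    /* advance to the next (older) record */
              }
          }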
      
      The only limitation that needs to be considered is back-to-back reading.
      Due to the non-determinism of user programs, it is impossible to ensure
      the ring buffer stays stable during reading. Consider an extreme
      situation: tooling is scheduled out after reading record 'D', then a
      burst of events comes and eats up the whole ring buffer (one or multiple
      rounds). When the tooling process comes back, reading after 'D' is now
      incorrect.
      
      To prevent this problem, we need to find a way to ensure the ring buffer
      is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is
      suggested because its overhead is lower than
      ioctl(PERF_EVENT_IOC_ENABLE).
      
      By carefully verifying the 'head' pointer, the reader can avoid pausing
      the ring buffer. For example:
      
          /* A union of all possible events */
          union perf_event event;
      
          p = head = perf_mmap__read_head();
          while (true) {
              /* copy header of next event */
              fetch(&event.header, p, sizeof(event.header));
      
              /* read 'head' pointer */
              head = perf_mmap__read_head();
      
              /* check overwritten: is the header good? */
              if (!verify(sizeof(event.header), p, head))
                  break;
      
              /* copy the whole event */
              fetch(&event, p, event.header.size);
      
              /* read 'head' pointer again */
              head = perf_mmap__read_head();
      
              /* is the whole event good? */
              if (!verify(event.header.size, p, head))
                  break;
              p += event.header.size;
          }
      
      However, the overhead is high because:
      
       a) In-place decoding is not safe.
          Copying-verifying-decoding is required.
       b) Fetching 'head' pointer requires additional synchronization.
      
      (From Alexei Starovoitov:
      
      Even when this trick works, pause is needed for more than stability of
      reading. When we collect the events into overwrite buffer we're waiting
      for some other trigger (like all cpu utilization spike or just one cpu
      running and all others are idle) and when it happens the buffer has
      valuable info from the past. At this point new events are no longer
      interesting and buffer should be paused, events read and unpaused until
      next trigger comes.)
      
      This patch utilizes the event's default overflow_handler introduced
      previously. perf_event_output_backward() is created as the default
      overflow handler for backward ring buffers. To avoid extra overhead on
      the fast path, the original perf_event_output() becomes
      __perf_event_output() and is marked '__always_inline'. In theory, no
      extra overhead is introduced to the fast path.
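
      As a user-space sketch (assuming a kernel and uapi headers new enough to
      expose the 'write_backward' bit and PERF_EVENT_IOC_PAUSE_OUTPUT; the
      helper name is made up), a backward, overwritable ring buffer would be
      set up roughly like this:

          #include <linux/perf_event.h>
          #include <sys/ioctl.h>
          #include <sys/mman.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          static int open_backward_rb(void **base, size_t data_pages /* power of two */)
          {
              struct perf_event_attr attr = {
                  .size           = sizeof(attr),
                  .type           = PERF_TYPE_SOFTWARE,
                  .config         = PERF_COUNT_SW_PAGE_FAULTS,
                  .sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_TID,
                  .sample_period  = 1,
                  .write_backward = 1,   /* ring buffer written from end to start */
              };
              size_t len = (data_pages + 1) * sysconf(_SC_PAGESIZE);
              int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

              if (fd < 0)
                  return -1;
              /* No PROT_WRITE: the mapping is overwritable, 'tail' is never moved. */
              *base = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
              return fd;
          }

      Before decoding, the reader would freeze the buffer with
      ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1) and unfreeze it with the same
      ioctl and argument 0 once the records have been consumed.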
      
      Performance testing:
      
      Call 'close(-1)' 3000000 times and use gettimeofday() to measure the
      duration.  Use 'perf record -o /dev/null -e raw_syscalls:*' to capture
      the system calls. Times are in ns.
      
      Testing environment:
      
        CPU    : Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
        Kernel : v4.5.0
                          MEAN         STDVAR
       BASE            800214.950    2853.083
       PRE1           2253846.700    9997.014
       PRE2           2257495.540    8516.293
       POST           2250896.100    8933.921
      
      Here 'BASE' is the pure performance without capturing, 'PRE1' is the
      test result of the unmodified 'v4.5.0' kernel, 'PRE2' is the test result
      before this patch, and 'POST' is the test result after this patch. See
      [4] for the detailed experimental setup.
      
      Considering the stdvar, this patch doesn't introduce performance
      overhead to the fast path.
      
       [1] http://lkml.iu.edu/hypermail/linux/kernel/1304.1/04584.html
       [2] http://lkml.iu.edu/hypermail/linux/kernel/1307.1/00535.html
       [3] http://lkml.iu.edu/hypermail/linux/kernel/1512.0/01265.html
       [4] http://lkml.kernel.org/g/56F89DCD.1040202@huawei.com
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: <acme@kernel.org>
      Cc: <pi3orama@163.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/r/1459865478-53413-1-git-send-email-wangnan0@huawei.com
      [ Fixed the changelog some more. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Add LBR filter support for Silvermont and Airmont CPUs · f21d5adc
      Kan Liang authored
      LBR filtering is also supported on the Silvermont and Airmont
      microarchitectures. The layout of MSR_LBR_SELECT is the same as Nehalem.
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1460706825-46163-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Add Goldmont CPU support · 8b92c3a7
      Kan Liang authored
      Add perf core PMU support for Intel Goldmont CPU cores:
      
       - The init code is based on Silvermont.
      
       - There is a new cache event list, based on the Silvermont cache event list.
      
       - Goldmont has 32 LBR entries. It also uses the new LBRv6 format, which
         reports the cycle information in the upper 16 bits of LBR_TO (see the
         sketch after this list).
      
       - It's recommended to use CPU_CLK_UNHALTED.CORE_P + NPEBS for precise cycles.
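
      As a rough sketch of that layout (an assumption based only on the wording
      above, not the driver's actual macros or decoding):

        #include <stdint.h>

        /* Extract the cycle count from a raw LBR_TO value, assuming the upper
         * 16 bits hold the cycles as described above. */
        static inline uint16_t lbr_to_cycles(uint64_t lbr_to)
        {
            return (uint16_t)(lbr_to >> 48);
        }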
      
      For details, please refer to the latest SDM058:
      
       http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3b-part-2-manual.pdf
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1460706167-45320-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • 65cbbd03
    • perf/core: Make sysctl_perf_cpu_time_max_percent conform to documentation · b303e7c1
      Peter Zijlstra authored
      Markus reported that 0 should also disable the throttling, as per
      Documentation/sysctl/kernel.txt.
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 91a612ee ("perf/core: Fix dynamic interrupt throttle")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/rapl: Add missing Haswell model · e1089602
      Srinivas Pandruvada authored
      Added one missing Haswell model.
      Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Link: http://lkml.kernel.org/r/1460907809-11897-1-git-send-email-srinivas.pandruvada@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>