1. 13 Jul 2016: 20 commits
  2. 12 Jul 2016: 12 commits
  3. 11 Jul 2016: 1 commit
    • Revert "perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86" · 44530d58
      Committed by Ingo Molnar
      This reverts commit 2c95afc1.
      
      Stephane reported the following regression:
      
       > Since Andi added:
       >
       > commit 2c95afc1
       > Author: Andi Kleen <ak@linux.intel.com>
       > Date:   Thu Jun 9 06:14:38 2016 -0700
       >
       >    perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86
       >
       > $ perf stat -e ref-cycles ls
       >   <not counted> ....
       >
       > fails systematically because ref-cycles is now used by the
       > watchdog and, given this is a system-wide pinned event, it monopolizes
       > fixed counter 2, which is the only counter able to measure this event.
      
      Since the next merge window is near, fix the regression for now
      by reverting the commit.
      Reported-by: Stephane Eranian <eranian@google.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      44530d58
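      For context on the conflict: the hard-lockup watchdog opens a pinned, per-CPU hardware event, and with the reverted patch that event counted PERF_COUNT_HW_REF_CPU_CYCLES, which only fixed counter 2 can service, so the watchdog starves every other ref-cycles user. A minimal sketch of such an attribute (the wd_attr_sketch name is illustrative and loosely modeled on the watchdog code, not copied from it):

        #include <linux/perf_event.h>

        /*
         * Sketch: what the watchdog's event attribute effectively looked like
         * with the reverted patch applied. Pinned + counting ref-cycles means
         * fixed counter 2 stays claimed on every CPU.
         */
        static struct perf_event_attr wd_attr_sketch = {
            .type     = PERF_TYPE_HARDWARE,
            .config   = PERF_COUNT_HW_REF_CPU_CYCLES, /* only fixed counter 2 can count this */
            .size     = sizeof(struct perf_event_attr),
            .pinned   = 1,  /* never rotated out, so the counter is never shared */
            .disabled = 1,  /* enabled per CPU when the watchdog starts */
        };

      Because the event is pinned, the perf scheduler will not time-share the fixed counter with a later 'perf stat -e ref-cycles', which is why the user-level event ends up <not counted>.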
  4. 07 Jul 2016: 4 commits
    • perf/x86/intel/uncore: Add support for the Intel Skylake client uncore PMU · 46866b59
      Committed by Kan Liang
      This patch adds full support for the Intel SKL client uncore PMU:
      
       - Add support for SKL client CPU uncore PMU, which is similar to the
         BDW client PMU driver. (There are some differences in CBOX numbering
         and uncore control MSR.)
       - Add new support for Skylake Mobile uncore PMUs, for both CPU and PCI
         uncore functionality.
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1467208912-8179-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      46866b59
    • perf/x86/intel: Fix rdlbr_to() MSR reading typo · aefbc4d0
      Committed by Peter Zijlstra
      It helps to actually read the right MSR.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Fixes: d4cf1949 ("perf/x86/intel: Add {rd,wr}lbr_{to,from} wrappers")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      aefbc4d0
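      The typo is the usual hazard with paired MSR accessors: the 'to' reader was built from the 'from' one and kept reading the 'from' MSR base. The wrappers in arch/x86/events/intel/lbr.c are roughly of this shape (a simplified sketch, not the verbatim diff; rdmsrl() comes from <asm/msr.h>):

        static inline u64 rdlbr_from(unsigned int idx)
        {
            u64 val;

            rdmsrl(x86_pmu.lbr_from + idx, val);  /* MSR_LASTBRANCH_<idx>_FROM_IP */
            return val;
        }

        static inline u64 rdlbr_to(unsigned int idx)
        {
            u64 val;

            /* The typo read x86_pmu.lbr_from here; the fix reads lbr_to. */
            rdmsrl(x86_pmu.lbr_to + idx, val);    /* MSR_LASTBRANCH_<idx>_TO_IP */
            return val;
        }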
    • perf/core: Fix pmu::filter_match for SW-led groups · 2c81a647
      Committed by Mark Rutland
      The following commit:
      
        66eb579e ("perf: allow for PMU-specific event filtering")
      
      added the pmu::filter_match() callback. This was intended to
      prevent HW constraints on events from resulting in extremely
      pessimistic scheduling.
      
      However, pmu::filter_match() is only called for the leader of each event
      group. When the leader is a SW event, the group is not filtered at all, so
      its HW siblings may fail at pmu::add() time; when that happens we give up on
      scheduling any event groups later in the list until they are rotated
      ahead of the failing group.
      
      This can result in extremely sub-optimal event scheduling behaviour,
      e.g. if running the following on a big.LITTLE platform:
      
      $ taskset -c 0 ./perf stat \
       -e 'a57{context-switches,armv8_cortex_a57/config=0x11/}' \
       -e 'a53{context-switches,armv8_cortex_a53/config=0x11/}' \
       ls
      
           <not counted>      context-switches                                              (0.00%)
           <not counted>      armv8_cortex_a57/config=0x11/                                 (0.00%)
                      24      context-switches                                              (37.36%)
                57589154      armv8_cortex_a53/config=0x11/                                 (37.36%)
      
      Here the 'a53' event group was always eligible to be scheduled, but
      the 'a57' group was never eligible to be scheduled, as the task was always
      affine to a Cortex-A53 CPU. The SW (group leader) event in the 'a57'
      group was eligible, but the HW event failed at pmu::add() time,
      resulting in ctx_flexible_sched_in giving up on scheduling further
      groups with HW events.
      
      One way of avoiding this is to check pmu::filter_match() on siblings
      as well as the group leader. If any of these fail their
      pmu::filter_match() call, we must skip the entire group before
      attempting to add any events.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: 66eb579e ("perf: allow for PMU-specific event filtering")
      Link: http://lkml.kernel.org/r/1465917041-15339-1-git-send-email-mark.rutland@arm.com
      [ Small readability edits. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c81a647
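      The sibling-aware check described above lands in kernel/events/core.c as a helper of roughly this shape (a sketch; sibling_list and group_entry are the field names of this kernel version, and the code is not the verbatim diff):

        static inline int __pmu_filter_match(struct perf_event *event)
        {
            struct pmu *pmu = event->pmu;

            return pmu->filter_match ? pmu->filter_match(event) : 1;
        }

        /* Check the leader and every sibling; one mismatch skips the whole group,
         * so we never get as far as a doomed pmu::add() call. */
        static inline int pmu_filter_match(struct perf_event *event)
        {
            struct perf_event *sibling;

            if (!__pmu_filter_match(event))
                return 0;

            list_for_each_entry(sibling, &event->sibling_list, group_entry) {
                if (!__pmu_filter_match(sibling))
                    return 0;
            }

            return 1;
        }

      The intent is that the generic filter path rejects the whole 'a57' group on a Cortex-A53 CPU up front, so ctx_flexible_sched_in no longer gives up on the groups that follow it.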
  5. 06 Jul 2016: 1 commit
  6. 05 Jul 2016: 2 commits
    • Merge tag 'perf-core-for-mingo-20160704' of... · c50f6245
      Committed by Ingo Molnar
      Merge tag 'perf-core-for-mingo-20160704' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      Documentation changes:
      
       - Update android build documentation (Chris Phlipot)
      
      Infrastructure changes:
      
       - Respect WERROR=0 in libapi and libsubcmd, to allow building on Android (Chris Phlipot)
      
       - Prep work to support SDT events in probe cache (Masami Hiramatsu)
      
       - ELF support for SDT (Hemant Kumar)
      
       - Add feature detection for libelf's elf_getshdrstrndx function (Arnaldo Carvalho de Melo)
      
       - Fix hist accumulation test (Jiri Olsa)
      
       - Unwind callchain fixes (Jiri Olsa)
      
       - Change internal representation of numa nodes obtained from
         perf.data header (Jiri Olsa)
      
       - Sync copy of syscall_64.tbl with the kernel (Arnaldo Carvalho de Melo)
      
       - Add LGPL 2.1 license header to libbpf source files (Wang Nan)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c50f6245
    • perf tools: Sync copy of syscall_64.tbl with the kernel · f3d082ce
      Committed by Arnaldo Carvalho de Melo
      Noticed by the build system, which emitted this warning:
      
        Warning: x86_64's syscall_64.tbl differs from kernel
      
      This was due to the wiring up of the recently added preadv2 & pwritev2
      syscalls to the compat code, which hadn't been done by the patch
      introducing those syscalls: 4babf2c5 ("x86: wire up preadv2 and
      pwritev2").
      
      The patch doing the compat wiring was:
      
        482dd2ef ("x86/syscalls: Wire up compat readv2/writev2 syscalls")
      
      This just silences the perf build warning, as compat syscalls still
      can't be supported in 'perf trace' due to limitations in the
      raw_syscalls:sys_{enter,exit} tracepoints it relies on.
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/n/tip-4dm8eoy0wslgtwqdhz64ods0@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f3d082ce