1. 16 Jul 2014, 1 commit
    • perf/x86/intel: Protect LBR and extra_regs against KVM lying · 338b522c
      Committed by Kan Liang
      With -cpu host, KVM reports LBR and extra_regs support, if the host has
      support.
      
      When the guest perf driver tries to access an LBR or extra_regs MSR,
      the access #GPs, since KVM doesn't handle LBR and extra_regs support.
      So check access to the related MSRs once at initialization time, to avoid
      erroneous accesses at runtime.
      
      To reproduce the issue, build the host kernel with CONFIG_KVM_INTEL=y,
      and the guest kernel with CONFIG_PARAVIRT=n and CONFIG_KVM_GUEST=n.
      Start the guest with -cpu host.
      Run perf record with --branch-any or --branch-filter in the guest to
      trigger the LBR #GP.
      Run perf stat on offcore events (e.g. LLC-loads/LLC-load-misses ...) in
      the guest to trigger the offcore_rsp #GP.
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Maria Dimakopoulou <maria.n.dimakopoulou@gmail.com>
      Cc: Mark Davies <junk@eslaf.co.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Yan, Zheng <zheng.z.yan@intel.com>
      Link: http://lkml.kernel.org/r/1405365957-20202-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
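      The probe-once pattern this commit introduces can be sketched in plain C. This is a hypothetical user-space simulation, not the kernel's code: msr_access() stands in for rdmsrl_safe()/wrmsrl_safe(), and the "hypervisor" is simulated as rejecting the LBR MSR.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for a safe MSR access: returns nonzero on a (simulated)
       * #GP. Here the hypervisor rejects the LBR top-of-stack MSR. */
      static int msr_access(unsigned int msr)
      {
          const unsigned int MSR_LBR_TOS = 0x1c9;
          return msr == MSR_LBR_TOS ? -1 : 0; /* simulated #GP */
      }

      /* Probe once at init, mirroring the patch's idea: if the access
       * faults, report the feature as unusable instead of faulting at
       * every runtime access. */
      static bool check_msr(unsigned int msr)
      {
          return msr_access(msr) == 0;
      }

      int main(void)
      {
          bool lbr_ok = check_msr(0x1c9); /* LBR MSR: rejected above */
          printf("LBR %s\n", lbr_ok ? "enabled" : "disabled");
          return 0;
      }
      ```

      The point is that the (possibly faulting) access happens exactly once, at driver init; after that the cached result gates every LBR/extra_regs code path.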
  2. 27 Feb 2014, 1 commit
  3. 09 Feb 2014, 1 commit
  4. 05 Dec 2013, 1 commit
  5. 04 Oct 2013, 1 commit
  6. 13 Sep 2013, 1 commit
  7. 02 Sep 2013, 1 commit
  8. 27 Jun 2013, 1 commit
    • perf/x86: Fix shared register mutual exclusion enforcement · 2f7f73a5
      Committed by Stephane Eranian
      This patch fixes a problem with the shared registers mutual
      exclusion code and incremental event scheduling by the
      generic perf_event code.
      
      There was a bug whereby mutual exclusion on the shared registers
      was not enforced when an incremental scheduling pass aborted due to
      event constraints. As an example, on Intel Nehalem consider the
      following events:
      
      group1= L1D_CACHE_LD:E_STATE,OFFCORE_RESPONSE_0:PF_RFO,L1D_CACHE_LD:I_STATE
      group2= L1D_CACHE_LD:I_STATE
      
      The L1D_CACHE_LD event can only be measured by 2 counters, yet there
      are 3 instances here. The first group can be scheduled and is committed.
      Then the generic code tries to schedule group2 and this fails (because
      there is no counter left to support the 3rd instance of L1D_CACHE_LD).
      But in the x86_schedule_events() error path, put_event_constraints() is
      invoked on ALL the events, not just the ones that failed. That causes the
      "lock" on the shared offcore_response MSR to be released. Yet the first
      group is actually scheduled, and is exposed to reprogramming of that
      shared MSR by the sibling HT thread. In other words, there is no
      guarantee on what is measured.
      
      This patch fixes the problem by tagging committed events with the
      PERF_X86_EVENT_COMMITTED tag. In the error path of x86_schedule_events(),
      only the events NOT tagged have their constraint released. The tag
      is eventually removed when the event is descheduled.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20130620164254.GA3556@quad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
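      The commit's tagging idea can be sketched as a miniature in C. This is a hypothetical simulation, not the kernel's structures: struct event, commit(), and release_uncommitted() stand in for the COMMITTED flag, the commit step, and the x86_schedule_events() error path.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Miniature of the fix: an event that was already committed keeps
       * its shared-register "lock" even when a later group fails. */
      struct event {
          bool committed;  /* stand-in for PERF_X86_EVENT_COMMITTED */
          bool holds_lock; /* stand-in for the shared-MSR constraint */
      };

      static void commit(struct event *e)
      {
          e->committed = true;
      }

      /* Error path: release constraints only for events NOT tagged as
       * committed, instead of releasing them for ALL events. */
      static void release_uncommitted(struct event *ev, int n)
      {
          for (int i = 0; i < n; i++)
              if (!ev[i].committed)
                  ev[i].holds_lock = false;
      }

      int main(void)
      {
          struct event ev[2] = { { false, true }, { false, true } };

          commit(&ev[0]);             /* group1 scheduled and committed */
          release_uncommitted(ev, 2); /* group2 failed: abort path runs */

          printf("group1 lock: %d, group2 lock: %d\n",
                 ev[0].holds_lock, ev[1].holds_lock);
          return 0;
      }
      ```

      With the pre-fix behavior (release everything), group1's lock would also be dropped, exposing the shared MSR to the sibling HT thread exactly as the message describes.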
  9. 26 Jun 2013, 1 commit
  10. 19 Jun 2013, 5 commits
  11. 01 Apr 2013, 3 commits
  12. 27 Mar 2013, 2 commits
  13. 07 Feb 2013, 2 commits
  14. 24 Oct 2012, 4 commits
  15. 04 Oct 2012, 1 commit
  16. 19 Sep 2012, 1 commit
  17. 31 Jul 2012, 1 commit
    • perf/x86: Fix USER/KERNEL tagging of samples properly · d07bdfd3
      Committed by Peter Zijlstra
      Some PMUs don't provide a full register set for their samples;
      specifically, 'advanced' PMUs like AMD IBS and Intel PEBS provide
      better accuracy than regular interrupt-based sampling.
      
      In this case we use the interrupt regs as basis and over-write some
      fields (typically IP) with different information.
      
      The perf core, however, uses user_mode() to distinguish user from
      kernel samples, and user_mode() relies on regs->cs. If the interrupt
      skid pushed us over a boundary, the new IP might not be in the same
      domain as the interrupt.
      
      Commit ce5c1fe9 ("perf/x86: Fix USER/KERNEL tagging of samples")
      tried to fix this by making the perf core use kernel_ip(). This
      however is wrong (TM), as pointed out by Linus, since it doesn't allow
      for VM86 and non-zero based segments in IA32 mode.
      
      Therefore, provide a new helper to set the regs->ip field,
      set_linear_ip(), which massages the regs into a suitable state
      assuming the provided IP is in fact a linear address.
      
      Also modify perf_instruction_pointer() and perf_callchain_user() to
      deal with segment base offsets.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1341910954.3462.102.camel@twins
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
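      The kernel_ip()-style check that the commit calls "wrong (TM)" can be sketched in C, assuming a 64-bit build where kernel addresses have the sign bit set. This is a simplified illustration of why an address-range test alone misclassifies VM86 and non-zero-based IA32 segments, which is why the fix keeps regs->cs authoritative and only rewrites regs->ip via set_linear_ip().

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* x86_64 address-range heuristic: kernel text/data live in the
       * upper half of the address space, so the sign bit is set. This
       * classifies a *linear* address only; it says nothing about the
       * CPU mode (VM86, non-zero segment bases), which is the point
       * the commit makes. */
      static bool kernel_ip(unsigned long ip)
      {
          return (long)ip < 0;
      }

      int main(void)
      {
          /* typical x86_64 kernel text address */
          printf("kernel text -> %d\n", kernel_ip(0xffffffff81000000UL));
          /* typical user-space address */
          printf("user addr   -> %d\n", kernel_ip(0x0000000000400000UL));
          return 0;
      }
      ```

      For a VM86 or non-zero-based segment, the raw IP is an offset, not a linear address, so this heuristic gives meaningless answers; regs->cs plus a linearized IP (the set_linear_ip() approach) is needed instead.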
  18. 26 Jul 2012, 1 commit
  19. 06 Jul 2012, 2 commits
  20. 18 Jun 2012, 1 commit
  21. 06 Jun 2012, 6 commits
  22. 17 Mar 2012, 1 commit
  23. 13 Mar 2012, 1 commit