1. 13 Sep, 2013 2 commits
  2. 02 Sep, 2013 2 commits
  3. 27 Jun, 2013 1 commit
  4. 19 Jun, 2013 3 commits
  5. 01 Apr, 2013 3 commits
  6. 21 Mar, 2013 1 commit
    • perf/x86: Fix uninitialized pt_regs in intel_pmu_drain_bts_buffer() · 0e48026a
      Stephane Eranian authored
      This patch fixes an uninitialized pt_regs struct in the BTS drain
      function. The pt_regs struct is propagated all the way to the
      code_get_segment() function from perf_instruction_pointer()
      and may get garbage.
      
      We cannot simply inherit the actual pt_regs from the interrupt
      because BTS must be flushed on context-switch or when the
      associated event is disabled. And there we do not have a pt_regs
      handy.
      
      Setting pt_regs to all zeroes may not be the best option but it
      is not clear what else to do given where drain_bts_buffer() is
      called from.
      
      In V2, we move the memset() later in the code to avoid doing it
      when we end up returning early without doing the actual BTS
      processing. Also dropped the reg.val initialization because it
      is redundant with the memset() as suggested by PeterZ.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: peterz@infradead.org
      Cc: sqazi@google.com
      Cc: ak@linux.intel.com
      Cc: jolsa@redhat.com
      Link: http://lkml.kernel.org/r/20130319151038.GA25439@quad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0e48026a
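      Below is a minimal C sketch of the V2 shape described above, using
      stand-in types (the struct layouts and names are illustrative
      assumptions, not kernel source): the memset() runs only after the
      early-return check, so it is skipped when there is nothing to drain.

        #include <string.h>

        /* Illustrative stand-ins for the kernel types involved. */
        struct pt_regs { unsigned long ip, cs, flags; };
        struct bts_buffer { void *at, *top; };

        /* Hypothetical drain function mirroring the V2 structure. */
        static int drain_bts_buffer(struct bts_buffer *buf)
        {
                struct pt_regs regs;

                if (!buf || buf->at == buf->top)
                        return 0;       /* early out: memset skipped */

                /*
                 * Zero regs so perf_instruction_pointer() and friends
                 * never see stack garbage; only the fields filled in
                 * below carry real data.
                 */
                memset(&regs, 0, sizeof(regs));

                /* ... walk BTS records, set regs.ip, emit samples ... */
                return 1;
        }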
  7. 18 Mar, 2013 1 commit
    • perf,x86: fix wrmsr_on_cpu() warning on suspend/resume · 2a6e06b2
      Linus Torvalds authored
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") fixed a crash when doing PEBS performance profiling
      after resuming, but in using init_debug_store_on_cpu() to restore the
      DS_AREA MSR it also resulted in a new WARN_ON() triggering.
      
      init_debug_store_on_cpu() uses "wrmsr_on_cpu()", which in turn uses CPU
      cross-calls to do the MSR update.  Which is not really valid at the
      early resume stage, and the warning is quite reasonable.  Now, it all
      happens to _work_, for the simple reason that smp_call_function_single()
      ends up just doing the call directly on the CPU when the CPU number
      matches, but we really should just do the wrmsr() directly instead.
      
      This duplicates the wrmsr() logic, but hopefully we can just remove the
      wrmsr_on_cpu() version eventually.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a6e06b2
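      A hedged sketch of the distinction (wrmsr() is stubbed here; in the
      kernel it comes from <asm/msr.h>, and 0x600 is the real index of the
      IA32 DS-area MSR): a direct write touches only the current CPU and
      needs no cross-call, which is what early resume requires.

        #include <stdint.h>

        #define MSR_IA32_DS_AREA 0x600  /* DS save-area MSR */

        /* Stub standing in for the kernel's privileged WRMSR helper. */
        static void wrmsr(uint32_t msr, uint32_t low, uint32_t high)
        {
                (void)msr; (void)low; (void)high;
        }

        /*
         * Early in resume IRQs are off and IPIs are not yet legal, so
         * restore DS_AREA with a direct wrmsr() on the CPU being brought
         * up instead of the cross-calling wrmsr_on_cpu().
         */
        static void restore_ds_area(uint64_t ds)
        {
                wrmsr(MSR_IA32_DS_AREA, (uint32_t)ds, (uint32_t)(ds >> 32));
        }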
  8. 16 Mar, 2013 1 commit
  9. 19 Sep, 2012 1 commit
  10. 31 Jul, 2012 1 commit
    • perf/x86: Fix USER/KERNEL tagging of samples properly · d07bdfd3
      Peter Zijlstra authored
      Some PMUs don't provide a full register set for their sample,
      specifically 'advanced' PMUs like AMD IBS and Intel PEBS which provide
      'better' than regular interrupt accuracy.
      
      In this case we use the interrupt regs as basis and over-write some
      fields (typically IP) with different information.
      
      The perf core however uses user_mode() to distinguish user/kernel
      samples; user_mode() relies on regs->cs. If the interrupt skid pushed
      us over a boundary the new IP might not be in the same domain as the
      interrupt.
      
      Commit ce5c1fe9 ("perf/x86: Fix USER/KERNEL tagging of samples")
      tried to fix this by making the perf core use kernel_ip(). This
      however is wrong (TM), as pointed out by Linus, since it doesn't allow
      for VM86 and non-zero based segments in IA32 mode.
      
      Therefore, provide a new helper to set the regs->ip field,
      set_linear_ip(), which massages the regs into a suitable state
      assuming the provided IP is in fact a linear address.
      
      Also modify perf_instruction_pointer() and perf_callchain_user() to
      deal with segments base offsets.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1341910954.3462.102.camel@twins
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d07bdfd3
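      An illustrative sketch of the helper's idea, with placeholder
      selector values (the kernel uses __KERNEL_CS/__USER_CS and its own
      struct pt_regs): fixing cs alongside ip keeps user_mode() agreeing
      with the rewritten, linear IP.

        #define KERNEL_CS_SEL 0x10      /* placeholder selectors */
        #define USER_CS_SEL   0x33

        struct pt_regs { unsigned long ip, cs; };

        /* x86-64: kernel linear addresses have the sign bit set. */
        static int kernel_ip(unsigned long ip)
        {
                return (long)ip < 0;
        }

        /*
         * After overwriting the IP with a linear address taken from
         * PEBS/IBS, make regs->cs match its domain so user_mode()
         * classifies the sample by the new IP, not by where the
         * interrupt happened to land.
         */
        static void set_linear_ip(struct pt_regs *regs, unsigned long ip)
        {
                regs->ip = ip;
                regs->cs = kernel_ip(ip) ? KERNEL_CS_SEL : USER_CS_SEL;
        }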
  11. 06 Jul, 2012 1 commit
  12. 06 Jun, 2012 3 commits
  13. 09 May, 2012 1 commit
  14. 05 Mar, 2012 2 commits
  15. 03 Feb, 2012 1 commit
    • perf: Remove deprecated WARN_ON_ONCE() · 84f2b9b2
      Stephane Eranian authored
      With the new throttling/unthrottling code introduced with
      commit:
      
        e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      we occasionally hit the WARN_ON_ONCE() checks in:
      
        - intel_pmu_pebs_enable()
        - intel_pmu_lbr_enable()
        - x86_pmu_start()
      
      The assertions are no longer valid: there is a legitimate path on
      which they trigger, and it is harmless.
      
      The assertions can be triggered with:
      
        $ perf record -e instructions:pp ....
      
      Leading to paths:
      
        intel_pmu_pebs_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context
        perf_event_task_tick
        scheduler_tick
      
      And:
      
        intel_pmu_lbr_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context
        perf_event_task_tick
        scheduler_tick
      
      cpuc->enabled is always on because when we get to
      perf_adjust_freq_unthr_context() the PMU is not totally
      disabled. Furthermore when we need to adjust a period,
      we only stop the event we need to change and not the
      entire PMU. Thus, when we re-enable, cpuc->enabled is
      already set. Note that when we stop the event, both
      pebs and lbr are stopped if necessary (and possible).
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/20120202110401.GA30911@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84f2b9b2
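      A sketch of the kind of check that was dropped (the function and
      struct below are illustrative stand-ins, not the kernel code):

        struct cpu_hw_events { int enabled; };

        static void intel_pmu_pebs_enable_sketch(struct cpu_hw_events *cpuc)
        {
                /*
                 * WARN_ON_ONCE(cpuc->enabled);  -- removed: x86_pmu_start()
                 * called from perf_adjust_freq_unthr_context() restarts a
                 * single event while the PMU as a whole stays enabled, so
                 * cpuc->enabled being set here is a valid, harmless state.
                 */
                (void)cpuc;
                /* ... set the event's bit in the PEBS enable mask ... */
        }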
  16. 14 Nov, 2011 1 commit
  17. 26 Sep, 2011 1 commit
  18. 01 Jul, 2011 2 commits
    • perf: Remove the perf_output_begin(.sample) argument · a7ac67ea
      Peter Zijlstra authored
      Since only samples call perf_output_sample(), it's much saner (and
      more correct) to put the sample logic in there than in the
      perf_output_begin()/perf_output_end() pair.
      
      Saves a useless argument, reduces conditionals and shrinks
      struct perf_output_handle, win!
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/n/tip-2crpvsx3cqu67q3zqjbnlpsc@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a7ac67ea
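      Simplified before/after prototypes, inferred from the description
      above (real parameter types elided behind forward declarations):

        struct perf_output_handle;
        struct perf_event;

        /* before: callers had to say whether this output was a sample */
        int perf_output_begin_old(struct perf_output_handle *h,
                                  struct perf_event *e,
                                  unsigned int size, int sample);

        /* after: the flag is gone; sample-only logic now lives in
         * perf_output_sample(), its sole user */
        int perf_output_begin_new(struct perf_output_handle *h,
                                  struct perf_event *e,
                                  unsigned int size);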
    • perf: Remove the nmi parameter from the swevent and overflow interface · a8b0ca17
      Peter Zijlstra authored
      The nmi parameter indicated if we could do wakeups from the current
      context; if not, we would set some state and self-IPI and let the
      resulting interrupt do the wakeup.
      
      For the various event classes:
      
        - hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
          the PMI-tail (ARM etc.)
        - tracepoint: nmi=0; since tracepoint could be from NMI context.
        - software: nmi=[0,1]; some, like the schedule thing cannot
          perform wakeups, and hence need 0.
      
      As one can see, there is very little nmi=1 usage, and the down-side of
      not using it is that on some platforms some software events can have a
      jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
      
      The up-side however is that we can remove the nmi parameter and save a
      bunch of conditionals in fast paths.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Michael Cree <mcree@orcon.net.nz>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a8b0ca17
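      The visible effect on the overflow entry point, shown as simplified
      prototypes (types elided; inferred from the description above):

        struct perf_event;
        struct perf_sample_data;
        struct pt_regs;

        /* before: every caller threaded an nmi flag through */
        int perf_event_overflow_old(struct perf_event *event, int nmi,
                                    struct perf_sample_data *data,
                                    struct pt_regs *regs);

        /* after: the flag is gone; deferred wakeups go through irq_work */
        int perf_event_overflow_new(struct perf_event *event,
                                    struct perf_sample_data *data,
                                    struct pt_regs *regs);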
  19. 16 Mar, 2011 2 commits
  20. 04 Mar, 2011 1 commit
    • perf_events: Update PEBS event constraints · 17e31629
      Stephane Eranian authored
      This patch updates PEBS event constraints for Intel Atom, Nehalem, Westmere.
      
      This patch also reorganizes the PEBS format/constraint detection code. It is
      now based on processor model and not PEBS format. Two processors may use the
      same PEBS format without having the same list of PEBS events.
      
      In this second version, we simplified the initialization of the PEBS
      constraints by leveraging the existing switch() statement in perf_event_intel.c.
      We also renamed the constraint tables to be more consistent with regular
      constraints.
      
      In this third version, we drop BR_INST_RETIRED.MISPRED from Intel Atom as it
      does not seem to work. Use MISPREDICTED_BRANCH_RETIRED instead. Also add
      FP_ASSIST.* on both Intel Nehalem and Westmere. I missed those in the
      earlier patches.
      Events were tested using libpfm4 perf_examples.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4d6e6b02.815bdf0a.637b.07a7@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      17e31629
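      A sketch of the model-keyed organization this describes; the event
      codes, table contents and model numbers below are illustrative
      placeholders, not the kernel's tables:

        struct event_constraint { unsigned int code, cmask; };

        static const struct event_constraint pebs_nehalem[] = {
                { 0x00c0, 0xf },        /* e.g. INST_RETIRED on counters 0-3 */
                { 0, 0 },               /* table terminator */
        };

        static const struct event_constraint *pebs_constraints_for(int model)
        {
                switch (model) {        /* mirrors the switch() in
                                           perf_event_intel.c */
                case 26:
                case 30:                /* Nehalem-class models */
                        return pebs_nehalem;
                default:
                        return 0;       /* unknown model: no constraints */
                }
        }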
  21. 02 Mar, 2011 1 commit
    • perf, x86: Add Intel SandyBridge CPU support · b06b3d49
      Lin Ming authored
      This patch adds basic SandyBridge support, including hardware
      cache events and PEBS events support.
      
      It has been tested on SandyBridge CPUs with perf stat and also
      with PEBS based profiling - both work fine.
      
      The patch does not affect other models.
      
      v2 -> v3:
       - fix PEBS event 0xd0 with right umask combinations
       - move snb pebs constraint assignment to intel_pmu_init
      
      v1 -> v2:
       - add more raw and PEBS events constraints
       - use offcore events for LLC-* cache events
       - remove the call to Nehalem workaround enable_all function
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      LKML-Reference: <1299072424.2175.24.camel@localhost>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b06b3d49
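      A sketch of where the v3 change puts the hookup, with stand-in types
      (only the SandyBridge case is shown; the field and table names are
      abbreviated assumptions):

        struct pmu_desc { const void *pebs_constraints; };

        static const int intel_snb_pebs_events[] = { 0 }; /* placeholder */

        static void intel_pmu_init_sketch(struct pmu_desc *pmu, int model)
        {
                switch (model) {
                case 42:        /* SandyBridge */
                        pmu->pebs_constraints = intel_snb_pebs_events;
                        /* ... also install SNB hw cache event ids ... */
                        break;
                }
        }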
  22. 22 Oct, 2010 6 commits
  23. 13 Sep, 2010 1 commit
    • perf_events: Fix BTS interrupt handling to avoid being dazed by NMI (v2) · b0b2072d
      Stephane Eranian authored
      Fix a bug introduced with commit de725dec and the change in the
      meaning of the return value of intel_pmu_handle_irq(). With the
      current code, when you are using the BTS, you get 'dazed by NMI'
      each time the BTS buffer fills up.
      
      BTS interrupts on the PMU vector, and thus as an NMI. You need to take
      this into account in the return value of the function.
      
      This version fixes the initial patch, which was missing changes to
      perf_event_intel_ds.c.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Don Zickus <dzickus@redhat.com>
      Cc: peterz@infradead.org
      Cc: paulus@samba.org
      Cc: davem@davemloft.net
      Cc: fweisbec@gmail.com
      Cc: perfmon2-devel@lists.sf.net
      Cc: eranian@gmail.com
      Cc: robert.richter@amd.com
      LKML-Reference: <4c8a1686.aae9d80a.5aa4.5e35@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0b2072d
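      A sketch of the return-value contract the fix restores (names are
      illustrative): a BTS drain performed inside the handler must count
      toward the 'handled' total, otherwise the NMI looks unclaimed and
      the kernel prints the "dazed and confused" message.

        /* Placeholder drain; returns 1 if it consumed BTS records. */
        static int drain_bts_records(void) { return 0; }

        static int intel_pmu_handle_irq_sketch(void)
        {
                int handled = 0;

                handled += drain_bts_records();
                /* ... handle counter overflows, adding to 'handled' ... */

                return handled; /* 0 means "not ours" => unknown NMI */
        }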
  24. 10 Sep, 2010 1 commit
    • perf: Rework the PMU methods · a4eaf7f1
      Peter Zijlstra authored
      Replace pmu::{enable,disable,start,stop,unthrottle} with
      pmu::{add,del,start,stop}, all of which take a flags argument.
      
      The new interface extends the capability to stop a counter while
      keeping it scheduled on the PMU. We replace the throttled state with
      the generic stopped state.
      
      This also allows us to efficiently stop/start counters over certain
      code paths (like IRQ handlers).
      
      It also allows scheduling a counter without it starting, allowing for
      a generic frozen state (useful for rotating stopped counters).
      
      The stopped state is implemented in two different ways, depending on
      how the architecture implemented the throttled state:
      
       1) We disable the counter:
          a) the pmu has per-counter enable bits, we flip that
          b) we program a NOP event, preserving the counter state
      
       2) We store the counter state and ignore all read/overflow events
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a4eaf7f1
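      A compact view of the reworked method table, as a sketch: the
      PERF_EF_* values below are the ones <linux/perf_event.h> defines,
      while the struct is abbreviated to just the methods named above.

        #define PERF_EF_START  0x01     /* start the counter on add  */
        #define PERF_EF_RELOAD 0x02     /* reload the count on start */
        #define PERF_EF_UPDATE 0x04     /* update the count on stop  */

        struct perf_event;

        struct pmu_methods_sketch {
                int  (*add)(struct perf_event *event, int flags);  /* was enable  */
                void (*del)(struct perf_event *event, int flags);  /* was disable */
                void (*start)(struct perf_event *event, int flags);
                void (*stop)(struct perf_event *event, int flags); /* subsumes
                                                                      unthrottle */
        };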