1. 26 Apr 2013, 4 commits
  2. 14 Mar 2013, 1 commit
  3. 01 Feb 2013, 3 commits
  4. 29 Jan 2013, 1 commit
  5. 10 Jan 2013, 3 commits
    • powerpc/perf: Fix for PMCs not making progress · e13e895f
      Committed by Michael Neuling
      On POWER7, when we have really small counts left before overflow, we can take a
      PMU IRQ, but the PMC gets wound back to just before the overflow.
      
      If the kernel then sets the PMC to a value just before the overflow, we can be
      interrupted again without the PMC making any progress (i.e. another buggy
      overflow). In that case we end up making no forward progress at all, with the
      PMC interrupt returning us to the same count over and over.
      
      The fix detects when we are making no forward progress (i.e. delta == 0) and
      then increases the amount left before the overflow. This stops us from locking
      up; a sketch of the check follows this entry.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      cc: Paul Mackerras <paulus@samba.org>
      cc: Anton Blanchard <anton@samba.org>
      cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
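      To make the "no forward progress" check concrete, here is a minimal, hedged
      sketch in plain C. It is not the actual kernel change; the helper name
      widen_if_stuck() and the MIN_LEFT amount are assumptions chosen for
      illustration.

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed widening amount; the real fix chooses its own value. */
        #define MIN_LEFT 256

        /* If the PMC reads back the same value as at the last IRQ (delta == 0),
         * widen the window left before overflow so the next IRQ cannot land on
         * exactly the same count again. */
        static int64_t widen_if_stuck(uint64_t prev, uint64_t val, int64_t left)
        {
                int64_t delta = (int64_t)(val - prev);

                if (delta == 0)
                        left += MIN_LEFT;
                return left;
        }

        int main(void)
        {
                /* PMC wound back: it reads the same count as at the last IRQ. */
                int64_t left = widen_if_stuck(0x7fffff00u, 0x7fffff00u, 16);

                printf("left before overflow is now %lld\n", (long long)left);
                return 0;
        }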
    • powerpc/perf: Fix finding overflowed PMC in interrupt · bc09c219
      Committed by Michael Neuling
      If a PMC is about to overflow on a counter that's on an active perf event
      (i.e. less than 256 from the end) and a _different_ PMC overflows at the same
      time (a PMC that's not on an active perf event), we currently mark the event as
      found, but in reality it isn't, since it's likely the other PMC that caused the
      IRQ. Because we mark it as found, the second catch-all for overflows doesn't
      run, and we never reset the overflowing PMC. Hence we keep hitting that same
      PMC IRQ over and over and never reset the counter that actually overflowed.
      
      This is a rewrite of the perf interrupt handler for book3s to get around this.
      We now check whether any of the PMCs have actually overflowed (i.e. value >=
      0x80000000). If so, we record it for active counters and just reset it for
      inactive counters. If a PMC hasn't overflowed, we check whether it's one of the
      buggy POWER7 counters and, if it is, record it and continue. If none of the
      PMCs match, we note that we couldn't find the PMC that caused the IRQ. A sketch
      of this classification follows this entry.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Reviewed-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      cc: Paul Mackerras <paulus@samba.org>
      cc: Anton Blanchard <anton@samba.org>
      cc: Linux PPC dev <linuxppc-dev@ozlabs.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
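      A minimal sketch of the classification described above, with the PMCs modelled
      as a plain array; "recording" is just a printf here, and the buggy-POWER7 case
      is only noted in a comment. This is an illustration, not the real book3s
      handler.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* A PMC has genuinely overflowed once its value reaches 0x80000000. */
        #define PMC_OVERFLOWED(v)  ((v) >= 0x80000000u)

        /* Walk every PMC: record overflows on active counters, just reset
         * overflows on inactive ones, and report when no PMC can be blamed for
         * the IRQ.  (The real handler also records "almost overflowed" buggy
         * POWER7 counters; that case is omitted here.) */
        static bool handle_pmc_irq(uint32_t *pmc, const bool *active, int n)
        {
                bool found = false;

                for (int i = 0; i < n; i++) {
                        if (!PMC_OVERFLOWED(pmc[i]))
                                continue;
                        if (active[i])
                                printf("record PMC%d = 0x%08x\n", i + 1, pmc[i]);
                        else
                                pmc[i] = 0;
                        found = true;
                }
                if (!found)
                        printf("could not find the PMC that caused the IRQ\n");
                return found;
        }

        int main(void)
        {
                uint32_t pmc[4]    = { 0x80000042u, 0x7ffffff0u, 0x90000000u, 0 };
                bool     active[4] = { true, false, false, false };

                handle_pmc_irq(pmc, active, 4);
                return 0;
        }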
    • powerpc/perf: Add stalled-cycles events · 15fab56e
      Committed by Chris Freehill
      Support for stalled-cycles-frontend and stalled-cycles-backend is added for
      e500-based processors.
      
      The following mappings are used (a small sketch of the mapping table follows
      this entry):
      
      stalled-cycles-frontend or idle-cycles-frontend:
      Com:18 - cycles decode stalled
      
      stalled-cycles-backend or idle-cycles-backend:
      Com:19 - cycles issue stalled
      Signed-off-by: Chris Freehill <chrisf@freescale.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
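      A rough, hedged illustration of such a mapping table; the array and enum
      names are assumptions for this sketch, not the actual fsl_emb perf code.

        #include <stdio.h>

        enum { STALLED_CYCLES_FRONTEND, STALLED_CYCLES_BACKEND, NUM_GENERIC };

        /* Generic stalled-cycles events mapped to e500 event numbers. */
        static const int e500_stalled_cycles_events[NUM_GENERIC] = {
                [STALLED_CYCLES_FRONTEND] = 18,  /* Com:18 - cycles decode stalled */
                [STALLED_CYCLES_BACKEND]  = 19,  /* Com:19 - cycles issue stalled  */
        };

        int main(void)
        {
                printf("frontend -> event %d, backend -> event %d\n",
                       e500_stalled_cycles_events[STALLED_CYCLES_FRONTEND],
                       e500_stalled_cycles_events[STALLED_CYCLES_BACKEND]);
                return 0;
        }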
  6. 15 Nov 2012, 1 commit
  7. 18 Oct 2012, 1 commit
  8. 27 Sep 2012, 1 commit
  9. 05 Sep 2012, 1 commit
  10. 24 Aug 2012, 1 commit
    • powerpc/perf: Use pmc_overflow() to detect rolled back events · 81331211
      Committed by Sukadev Bhattiprolu
      For certain speculative events on Power7, 'perf stat' reports a far higher
      event count than 'perf record' for the same event.
      
      As described in the following commit, a performance monitor exception is raised
      even when the performance events are rolled back.
      
              commit 0837e324
              Author: Anton Blanchard <anton@samba.org>
              Date:   Wed Mar 9 14:38:42 2011 +1100
      
      perf_event_interrupt() records an event only when an overflow occurs, but this
      check for overflow is a simple 'if (val < 0)'.
      
      Because the events are rolled back, this check for overflow fails and the event
      is not recorded. perf_event_interrupt() later uses pmc_overflow() to detect the
      overflow and resets the counters, so the events are lost completely.
      
      To properly detect the overflow of rolled-back events, use pmc_overflow() even
      when recording events; a sketch of the difference follows this entry.
      
      To reproduce:
        $ cat strcpy.c
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>     /* for alarm() */

        int main(void)
        {
                char buf[256];

                alarm(5);       /* SIGALRM ends the loop after 5 seconds */
                while (1)
                        strcpy(buf, "string1");
        }
      
              $ perf record -e r20014 ./strcpy
              $ perf report -n > report.1
        $ perf stat -e r20014 ./strcpy > report.2
              # Compare report.1 and report.2
      Reported-by: Maynard Johnson <mpjohn@us.ibm.com>
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
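      A hedged sketch of the difference between the two overflow checks. The
      256-count margin mirrors the "less than 256 from the end" window mentioned
      elsewhere in this log, and the helper names are illustrative rather than the
      kernel's actual pmc_overflow().

        #include <stdint.h>
        #include <stdio.h>

        /* The naive check: only true once the counter has crossed 0x80000000. */
        static int overflowed_simple(uint32_t val)
        {
                return (int32_t)val < 0;
        }

        /* Rolled-back aware check: also treat "just below the overflow point"
         * as overflowed, so a counter the hardware wound back still gets
         * recorded instead of being silently reset later. */
        static int overflowed_rolled_back(uint32_t val)
        {
                return (int32_t)val < 0 || (0x80000000u - val) <= 256;
        }

        int main(void)
        {
                uint32_t rolled_back = 0x7ffffff0u;  /* wound back just before overflow */

                printf("simple: %d, rolled-back aware: %d\n",
                       overflowed_simple(rolled_back),
                       overflowed_rolled_back(rolled_back));
                return 0;
        }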
  11. 10 Jul 2012, 4 commits
  12. 09 May 2012, 1 commit
  13. 28 Mar 2012, 1 commit
    • powerpc/perf: Fix instruction address sampling on 970 and Power4 · 1ce447b9
      Committed by Benjamin Herrenschmidt
      970 and Power4 don't support "continuous sampling", which means that when we
      aren't in marked-instruction sampling mode (marked events), SIAR isn't updated
      with the last instruction sampled before the perf interrupt. On those
      processors we must therefore use the exception SRR0 value as the sampled
      instruction pointer.
      
      Those processors also don't support the SIPR and SIHV bits in MMCRA, which
      means we need some kind of heuristic to decide whether SIAR values represent
      kernel or user addresses; a sketch of that fallback follows this entry.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
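      A minimal sketch of the fallback described above, assuming an address-range
      heuristic for the kernel/user split; KERNEL_START and the struct layout are
      assumptions for illustration, not the real powerpc perf code.

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed kernel linear-map base used for the address-range heuristic. */
        #define KERNEL_START 0xc000000000000000ull

        struct regs { uint64_t srr0; };

        /* Use SIAR when it is valid (continuous sampling or a marked event),
         * otherwise fall back to the interrupt's SRR0 as the sample IP. */
        static uint64_t sample_ip(int has_cont_sampling, int is_marked_event,
                                  uint64_t siar, const struct regs *regs)
        {
                if (has_cont_sampling || is_marked_event)
                        return siar;
                return regs->srr0;
        }

        /* No SIPR/SIHV bits: classify the sample by address range instead. */
        static int sample_is_kernel(uint64_t ip)
        {
                return ip >= KERNEL_START;
        }

        int main(void)
        {
                struct regs r = { .srr0 = 0xc000000000012345ull };
                uint64_t ip = sample_ip(0, 0, 0, &r);

                printf("ip = 0x%llx, kernel = %d\n",
                       (unsigned long long)ip, sample_is_kernel(ip));
                return 0;
        }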
  14. 23 Feb 2012, 1 commit