1. 11 Jun 2009 (6 commits)
  2. 10 Jun 2009 (3 commits)
  3. 09 Jun 2009 (5 commits)
  4. 08 Jun 2009 (3 commits)
  5. 07 Jun 2009 (1 commit)
  6. 06 Jun 2009 (3 commits)
    • perf_counter: Implement generalized cache event types · 8326f44d
      Committed by Ingo Molnar
      Extend generic event enumeration with the PERF_TYPE_HW_CACHE
      method.
      
      This is a 3-dimensional space:
      
             { L1-D, L1-I, L2, ITLB, DTLB, BPU } x
             { load, store, prefetch } x
             { accesses, misses }
      
      User-space passes in the 3 coordinates and the kernel provides
      a counter (if the hardware supports that type and if the
      combination makes sense).
      
      Combinations that make no sense produce a -EINVAL.
      Combinations that are not supported by the hardware produce -ENOTSUP.
      
      Extend the tools to deal with this, and rewrite the event symbol
      parsing code with various popular aliases for the units and
      access methods above. So 'l1-cache-miss' and 'l1d-read-ops' are
      both valid aliases.
      
      ( x86 is supported for now, with the Nehalem event table filled in,
        and with Core2 and Atom having placeholder tables. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
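      For context, a minimal user-space sketch of how the three coordinates end
      up in the counter attributes. It uses today's perf_event_open() naming
      (the 2009 interface used the perf_counter equivalents) and assumes the
      config packing unit | (op << 8) | (result << 16) described above:

        /*
         * Hedged sketch, modern perf_event.h naming; counts L1-D read misses
         * for the current thread.
         */
        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <string.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                struct perf_event_attr attr;
                long long count;
                int fd;

                memset(&attr, 0, sizeof(attr));
                attr.size = sizeof(attr);
                attr.type = PERF_TYPE_HW_CACHE;
                /* the 3 coordinates, packed as unit | (op << 8) | (result << 16) */
                attr.config = PERF_COUNT_HW_CACHE_L1D |
                              (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                              (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);

                fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
                if (fd < 0) {
                        perror("perf_event_open"); /* e.g. -EINVAL for nonsense combinations */
                        return 1;
                }
                /* ... run the workload of interest here ... */
                read(fd, &count, sizeof(count));
                printf("L1-D read misses: %lld\n", count);
                return 0;
        }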
    • perf_counter: Separate out attr->type from attr->config · a21ca2ca
      Committed by Ingo Molnar
      Counter type is a frequently used value and we do a lot of
      bit juggling by encoding and decoding it from attr->config.
      
      Clean this up by creating a separate attr->type field.
      
      Also clean up the various similarly complex user-space bits
      all around counter attribute management.
      
      The net improvement is significant, and it will be easier
      to add a new major type (which is what triggered this cleanup).
      
      (This changes the ABI, all tools are adapted.)
      (PowerPC build-tested.)
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
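      A hedged illustration of what the split means for ABI users, in modern
      field names (the exact pre-split bit layout is not reproduced here):

        /* After this change, the major counter type lives in its own field
         * instead of being shifted into attr->config: */
        struct perf_event_attr attr = { 0 };

        attr.size   = sizeof(attr);
        attr.type   = PERF_TYPE_HARDWARE;        /* new, dedicated type field    */
        attr.config = PERF_COUNT_HW_CPU_CYCLES;  /* config holds only the event  */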
    • [CPUFREQ] powernow-k8: check space_id of _PCT registers to be FFH · 2c701b10
      Committed by Dave Jones
      The powernow-k8 driver checks to see that the Performance Control/Status
      Registers are declared as FFH (functional fixed hardware) by the BIOS.
      However, this check got broken in the commit:
       0e64a0c9
       [CPUFREQ] checkpatch cleanups for powernow-k8
      
      Fix based on an original patch from Naga Chumbalkar.
      Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
      Cc: Mark Langsdorf <mark.langsdorf@amd.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
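      A hedged, kernel-style sketch of the intended check (struct paths and the
      diagnostic message are approximations, not the exact driver code): both
      _PCT registers must report the fixed-hardware address space, otherwise
      the driver bails out.

        if ((data->acpi_data.control_register.space_id != ACPI_ADR_SPACE_FIXED_HARDWARE) ||
            (data->acpi_data.status_register.space_id  != ACPI_ADR_SPACE_FIXED_HARDWARE)) {
                /* hypothetical diagnostic */
                printk(KERN_INFO "powernow-k8: _PCT registers are not FFH\n");
                return -ENODEV;
        }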
  7. 04 Jun 2009 (1 commit)
    • perf_counter: Fix throttling lock-up · 128f048f
      Committed by Ingo Molnar
      Throttling logic is broken and we can lock up with too small
      hw sampling intervals.
      
      Make the throttling code more robust: disable counters even
      if we already disabled them.
      
      ( Also clean up whitespace damage I noticed while reading
        various pieces of code related to throttling. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 03 Jun 2009 (5 commits)
    • perf_counter/x86: Remove the IRQ (non-NMI) handling bits · a3288106
      Committed by Yong Wang
      Remove the IRQ (non-NMI) handling bits as NMI will always be used.
      Signed-off-by: Yong Wang <yong.y.wang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090603051255.GA2791@ywang-moblin2.bj.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Rename perf_counter_hw_event => perf_counter_attr · 0d48696f
      Committed by Peter Zijlstra
      The structure isn't hw only and when I read event, I think about those
      things that fall out the other end. Rename the thing.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      Cc: Stephane Eranian <eranian@googlemail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Emulate longer sample periods · e4abb5d4
      Committed by Peter Zijlstra
      Do as Power already does: emulate sample periods up to 2^63-1 by
      composing them of smaller values limited by hardware capabilities.
      Only once we wrap the software period do we generate an overflow
      event.
      
      Just 10 lines of new code.
      Reported-by: Stephane Eranian <eranian@googlemail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
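      A hedged sketch of the period-splitting idea in plain C (the struct and
      the write_pmc() helper are hypothetical, not the kernel's actual code):
      the hardware counter is armed in chunks bounded by what it can count,
      and a real sample is emitted only once the full software period wraps.

        #include <stdint.h>

        struct hw_counter {
                int64_t  period_left;    /* remainder of the software period      */
                uint64_t max_hw_period;  /* largest period the hardware supports  */
        };

        /* Hypothetical helper: arm counter 'idx' to overflow after 'n' events.
         * (The update path, not shown, subtracts the events actually counted
         * from period_left each time the hardware counter overflows.) */
        static void write_pmc(int idx, uint64_t n) { (void)idx; (void)n; }

        /* Returns 1 when the software period wrapped, i.e. a sample is due. */
        static int set_hw_period(struct hw_counter *hwc, int idx, uint64_t sample_period)
        {
                int64_t left = hwc->period_left;
                int wrapped = 0;

                if (left <= 0) {                          /* software period elapsed */
                        left += (int64_t)sample_period;
                        hwc->period_left = left;
                        wrapped = 1;
                }
                if ((uint64_t)left > hwc->max_hw_period)  /* clamp to hw capability  */
                        left = (int64_t)hwc->max_hw_period;

                write_pmc(idx, (uint64_t)-left);          /* overflow after 'left' events */
                return wrapped;
        }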
    • perf_counter: Remove the last nmi/irq bits · 8a016db3
      Committed by Peter Zijlstra
      IRQ (non-NMI) sampling is not used anymore - remove the last few bits.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Rename various fields · b23f3325
      Committed by Peter Zijlstra
      A few renames:
      
        s/irq_period/sample_period/
        s/irq_freq/sample_freq/
        s/PERF_RECORD_/PERF_SAMPLE_/
        s/record_type/sample_type/
      
      And change both the new sample_type and read_format to u64.
      Reported-by: Stephane Eranian <eranian@googlemail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. 30 May 2009 (1 commit)
  10. 29 May 2009 (1 commit)
  11. 27 May 2009 (4 commits)
  12. 26 May 2009 (6 commits)
    • perf_counter, x86: Make NMI lockups more robust · aaba9801
      Committed by Ingo Molnar
      We have a debug check that detects stuck NMIs and returns with
      the PMU disabled in the global ctrl MSR - but I managed to trigger
      a situation where this was not enough to deassert the NMI.
      
      So clear/reset the full PMU and keep the disable count balanced when
      exiting from here. This way the box produces a debug warning but
      stays up and is more debuggable.
      
      [ Impact: in case of PMU related bugs, recover more gracefully ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
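      A hedged, kernel-context fragment of the recovery idea described above
      (the threshold, the local variables and the WARN text are hypothetical;
      the MSR names are the architectural perfmon ones): if the NMI stays
      asserted even with the global enable cleared, wipe the per-counter state
      too and rebalance the disable nesting so the PMU can be enabled later.

        if (++stuck_nmi_count > 100) {                    /* hypothetical threshold */
                WARN_ONCE(1, "perfcounters: NMI stuck, resetting PMU\n");
                wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);     /* stop all counters      */
                for (idx = 0; idx < nr_counters; idx++) { /* nr_counters: assumed   */
                        wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + idx, 0);
                        wrmsrl(MSR_ARCH_PERFMON_PERFCTR0 + idx, 0);
                }
                perf_enable();                            /* rebalance the disable count */
        }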
    • perf_counter, x86: Fix APIC NMI programming · 79202ba9
      Committed by Ingo Molnar
      My Nehalem box locks up in certain situations (with an
      always-asserted NMI causing a lockup) if the PMU LVT
      entry is switched between NMI and IRQ mode at a
      high frequency.
      
      Standardize exclusively on NMIs instead.
      
      [ Impact: fix lockup ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
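      In kernel context, programming the PMU's local-APIC LVT entry exclusively
      to NMI delivery comes down to a single write (a hedged fragment; the
      surrounding setup code is omitted):

        #include <asm/apic.h>

        /* Deliver performance-counter interrupts as NMIs, always; never
         * reprogram the LVT entry back to fixed-IRQ mode at runtime. */
        apic_write(APIC_LVTPC, APIC_DM_NMI);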
    • Revert "perf_counter, x86: speed up the scheduling fast-path" · 53b441a5
      Committed by Ingo Molnar
      This reverts commit b68f1d2e.
      
      It is causing problems (stuck/stuttering profiling) when mixed
      NMI and non-NMI counters are used.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Generic per counter interrupt throttle · a78ac325
      Committed by Peter Zijlstra
      Introduce a generic per counter interrupt throttle.
      
      This uses the perf_counter_overflow() quick disable to throttle a specific
      counter when it's going too fast, provided a pmu->unthrottle() method
      is available to undo the quick disable.
      
      Power needs to implement both the quick disable and the unthrottle method.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
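      A hedged sketch of the throttle mechanism in plain C (the structs and the
      limit are hypothetical, not the kernel's exact fields): the overflow
      handler counts interrupts per timer tick and, past a limit, quick-disables
      the counter; a later tick calls the pmu's unthrottle hook to undo it.

        struct counter;                            /* forward declaration           */
        struct pmu {                               /* hypothetical minimal pmu ops  */
                void (*disable)(struct counter *);
                void (*unthrottle)(struct counter *);
        };

        struct counter {
                unsigned int      interrupts;      /* interrupts seen this tick     */
                int               throttled;
                const struct pmu *pmu;
        };

        #define MAX_INTERRUPTS_PER_TICK 1000       /* hypothetical limit */

        /* Called from the overflow (NMI) path; returns 1 when throttling kicks in. */
        static int counter_overflow_throttle(struct counter *c)
        {
                if (++c->interrupts >= MAX_INTERRUPTS_PER_TICK && !c->throttled) {
                        c->throttled = 1;
                        c->pmu->disable(c);        /* the cheap "quick disable"     */
                        return 1;
                }
                return 0;
        }

        /* Called from the timer tick: reset the rate and undo the quick disable. */
        static void counter_tick_unthrottle(struct counter *c)
        {
                c->interrupts = 0;
                if (c->throttled) {
                        c->throttled = 0;
                        c->pmu->unthrottle(c);
                }
        }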
    • perf_counter: x86: Remove interrupt throttle · 48e22d56
      Committed by Peter Zijlstra
      Remove the x86-specific interrupt throttle.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.616671838@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Expose INV and EDGE bits · ff99be57
      Committed by Peter Zijlstra
      Expose the INV and EDGE bits of the PMU to raw configs.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.494709027@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
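      For reference, the two bits of the architectural x86 event-select layout
      that this exposes to raw configs (bit positions per the Intel SDM; the
      attr fragment below is a hedged illustration, not code from the commit):

        #define ARCH_PERFMON_EVENTSEL_EDGE   (1ULL << 18)   /* edge detect                     */
        #define ARCH_PERFMON_EVENTSEL_INV    (1ULL << 23)   /* invert counter-mask comparison  */

        /* e.g. a raw event with the invert bit set (event 0x3c: unhalted core cycles): */
        attr.type   = PERF_TYPE_RAW;
        attr.config = 0x003c | ARCH_PERFMON_EVENTSEL_INV;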
  13. 23 May 2009 (1 commit)