1. 19 June 2009, 1 commit
    • perf_counter: Make callchain samples extensible · f9188e02
      Committed by Peter Zijlstra
      Before exposing upstream tools to a callchain-samples ABI, tidy it
      up to make it more extensible in the future:
      
      Use markers in the IP chain to denote context. Use the (u64)-1..-4095
      range for these context markers because that range is already used for
      ERR_PTR() values, so these addresses are unlikely to be mapped.
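      
      As a sketch, the markers are u64 values taken from the very top of
      that reserved range (the values below match the enum as it later
      appeared in the perf ABI headers; treat them as illustrative):
      
          enum perf_callchain_context {
                  PERF_CONTEXT_HV      = (__u64)-32,
                  PERF_CONTEXT_KERNEL  = (__u64)-128,
                  PERF_CONTEXT_USER    = (__u64)-512,
                  PERF_CONTEXT_MAX     = (__u64)-4095,
          };
      
      A tool walking the chain switches its notion of the current context
      whenever it sees an entry >= PERF_CONTEXT_MAX, i.e. an address in
      the (u64)-1..-4095 range.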
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 18 June 2009, 1 commit
  3. 15 June 2009, 3 commits
    • perf_counter: x86: Fix call-chain support to use NMI-safe methods · 74193ef0
      Committed by Peter Zijlstra
      __copy_from_user_inatomic() isn't NMI-safe: it can trigger the
      page-fault handler, which is another trap, and that trap's return
      path invokes IRET, which also closes the NMI context.
      
      Therefore use a GUP based approach to copy the stack frames over.
      
      We tried an alternative solution as well: we used a forward-ported
      version of Mathieu Desnoyers's "NMI safe INT3 and Page Fault" patch
      that modifies the exception return path to use an open-coded IRET with
      explicit stack unrolling and TF checking.
      
      This didn't work as it interacted with faulting user-space instructions,
      causing them not to restart properly, which corrupts user-space
      registers.
      
      Solving that would probably involve disassembling those instructions
      and backtracing the RIP. But even without that, the approach was deemed
      to add too much complexity to the already non-trivial x86 entry assembly
      code, so instead we went for this GUP-based method that does a
      software walk of the pagetables.
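      
      A sketch of the helper this approach boils down to (close in spirit
      to the copy_from_user_nmi() this commit introduces; the kmap flavor
      and minor details are simplified here):
      
          /* Copy n bytes from user memory without ever faulting: pin the
           * page via the software pagetable walk, map it, memcpy. Safe
           * from NMI context because no page fault can be triggered. */
          static unsigned long
          copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
          {
                  unsigned long offset, addr = (unsigned long)from;
                  unsigned long size, len = 0;
                  struct page *page;
                  void *map;
          
                  do {
                          /* fast GUP: walks pagetables in software, never faults */
                          if (!__get_user_pages_fast(addr, 1, 0, &page))
                                  break;
          
                          offset = addr & (PAGE_SIZE - 1);
                          size   = min(PAGE_SIZE - offset, n - len);
          
                          map = kmap_atomic(page);
                          memcpy(to, map + offset, size);
                          kunmap_atomic(map);
                          put_page(page);
          
                          len  += size;
                          to   += size;
                          addr += size;
                  } while (len < n);
          
                  return len;     /* bytes actually copied */
          }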
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter, x86: Fix kernel-space call-chains · 038e836e
      Committed by Ingo Molnar
      Kernel-space call-chains were trimmed at the first entry because
      we never processed anything beyond the first stack context.
      
      Allow the backtrace to jump from the NMI stack to the IRQ stack, then
      to the task stack, and finally to the user-space stack.
      
      Also calculate the stack and bp variables correctly so that the
      stack walker does not exit early.
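      
      A sketch of the resulting kernel-side walk (built on the x86
      stacktrace_ops callbacks of that era; the exact signatures here are
      an assumption, not a quote of the patch):
      
          /* Record every reliable return address the stack walker finds;
           * dump_trace() itself knows how to hop from the NMI stack to
           * the IRQ stack to the task stack. */
          static void backtrace_address(void *data, unsigned long addr, int reliable)
          {
                  if (reliable)
                          callchain_store(data, addr);
          }
      
          static const struct stacktrace_ops backtrace_ops = {
                  .address = backtrace_address,
                  /* .stack / .warning callbacks elided */
          };
      
          static void perf_callchain_kernel(struct pt_regs *regs,
                                            struct perf_callchain_entry *entry)
          {
                  callchain_store(entry, PERF_CONTEXT_KERNEL);
                  callchain_store(entry, regs->ip);
                  /* pass the real stack/bp so the walker does not exit early */
                  dump_trace(NULL, regs, (unsigned long *)regs->sp, regs->bp,
                             &backtrace_ops, entry);
          }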
      
      We can get deep traces as a result, visible in perf report -D output:
      
      0x32af0 [0xe0]: PERF_EVENT (IP, 5): 15134: 0xffffffff815225fd period: 1
      ... chain: u:2, k:22, nr:24
      .....  0: 0xffffffff815225fd
      .....  1: 0xffffffff810ac51c
      .....  2: 0xffffffff81018e29
      .....  3: 0xffffffff81523939
      .....  4: 0xffffffff81524b8f
      .....  5: 0xffffffff81524bd9
      .....  6: 0xffffffff8105e498
      .....  7: 0xffffffff8152315a
      .....  8: 0xffffffff81522c3a
      .....  9: 0xffffffff810d9b74
      ..... 10: 0xffffffff810dbeec
      ..... 11: 0xffffffff810dc3fb
      
      This is a 22-entry kernel-space chain.
      
      (We still only record reliable stack entries.)
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter, x86: Fix call-chain walking · 5a6cec3a
      Committed by Ingo Molnar
      Fix the ptregs variant when we hit user-mode tasks.
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 13 June 2009, 2 commits
  5. 12 June 2009, 1 commit
  6. 11 June 2009, 4 commits
  7. 10 June 2009, 2 commits
  8. 09 June 2009, 3 commits
  9. 08 June 2009, 3 commits
  10. 06 June 2009, 2 commits
    • perf_counter: Implement generalized cache event types · 8326f44d
      Committed by Ingo Molnar
      Extend generic event enumeration with the PERF_TYPE_HW_CACHE
      method.
      
      This is a 3-dimensional space:
      
             { L1-D, L1-I, L2, ITLB, DTLB, BPU } x
             { load, store, prefetch } x
             { accesses, misses }
      
      User-space passes in the 3 coordinates and the kernel provides
      a counter (if the hardware supports that type and the
      combination makes sense).
      
      Combinations that make no sense produce -EINVAL.
      Combinations that are not supported by the hardware produce -ENOTSUP.
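      
      The three coordinates end up packed into the config word one byte
      each (this is the encoding the perf ABI settled on; the enum names
      below are the later upstream ones):
      
          /* type = PERF_TYPE_HW_CACHE; config packs the 3 coordinates */
          #define HW_CACHE_EVENT(cache, op, result) \
                  ((cache) | ((op) << 8) | ((result) << 16))
      
          /* example: L1 data-cache read misses */
          __u64 config = HW_CACHE_EVENT(PERF_COUNT_HW_CACHE_L1D,
                                        PERF_COUNT_HW_CACHE_OP_READ,
                                        PERF_COUNT_HW_CACHE_RESULT_MISS);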
      
      Extend the tools to deal with this, and rewrite the event symbol
      parsing code with various popular aliases for the units and
      access methods above. So 'l1-cache-miss' and 'l1d-read-ops' are
      both valid aliases.
      
      ( x86 is supported for now, with the Nehalem event table filled in,
        and with Core2 and Atom having placeholder tables. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Separate out attr->type from attr->config · a21ca2ca
      Committed by Ingo Molnar
      Counter type is a frequently used value, and we do a lot of bit
      juggling to encode it into and decode it out of attr->config.
      
      Clean this up by creating a separate attr->type field.
      
      Also clean up the various similarly complex user-space bits
      all around counter attribute management.
      
      The net improvement is significant, and it will be easier
      to add a new major type (which is what triggered this cleanup).
      
      (This changes the ABI, all tools are adapted.)
      (PowerPC build-tested.)
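      
      The resulting layout starts with an explicit type selector followed
      by the type-specific config word (a sketch; only the leading fields
      are shown):
      
          struct perf_counter_attr {
                  __u32   type;           /* PERF_TYPE_HARDWARE, _SOFTWARE,
                                             _TRACEPOINT, _HW_CACHE, _RAW */
                  __u32   __reserved_1;
                  __u64   config;         /* event selector; no longer has
                                             the type encoded in its bits */
                  /* ... sampling, format and flag fields follow ... */
          };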
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 04 June 2009, 1 commit
    • perf_counter: Fix throttling lock-up · 128f048f
      Committed by Ingo Molnar
      The throttling logic is broken and we can lock up with too-small
      hw sampling intervals.
      
      Make the throttling code more robust: disable counters even
      if we have already disabled them.
      
      ( Also clean up whitespace damage I noticed while reading
        various pieces of code related to throttling. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 03 June 2009, 5 commits
    • perf_counter/x86: Remove the IRQ (non-NMI) handling bits · a3288106
      Committed by Yong Wang
      Remove the IRQ (non-NMI) handling bits, as NMI will always be used.
      Signed-off-by: Yong Wang <yong.y.wang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090603051255.GA2791@ywang-moblin2.bj.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Rename perf_counter_hw_event => perf_counter_attr · 0d48696f
      Committed by Peter Zijlstra
      The structure isn't hw-only, and when I read 'event' I think about
      the things that fall out the other end. Rename the thing.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      Cc: Stephane Eranian <eranian@googlemail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Emulate longer sample periods · e4abb5d4
      Committed by Peter Zijlstra
      Do as Power already does: emulate sample periods up to 2^63-1 by
      composing them of smaller values limited by hardware capabilities.
      Only once the software period wraps do we generate an overflow
      event.
      
      Just 10 lines of new code.
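      
      The mechanism, as a sketch (helper and field names here are
      illustrative, not the exact kernel ones): keep the full 64-bit
      period in software, program the hardware with at most its supported
      maximum, and only raise an overflow event once the software period
      has fully elapsed:
      
          /* Called on counter overflow; returns 1 if the software period
           * elapsed and an overflow event should be generated, 0 if we
           * are still partway through emulating a wider period. */
          static int set_period(struct hw_perf_counter *hwc)
          {
                  s64 left = hwc->period_left;    /* decremented as events count */
                  int overflow = 0;
          
                  if (left <= 0) {                /* software period elapsed */
                          left += hwc->sample_period;
                          overflow = 1;
                  }
                  if (left > hwc->max_period)     /* clip to hardware width */
                          left = hwc->max_period;
          
                  /* the counter counts up and interrupts on overflow,
                   * so program it with -left */
                  write_hw_counter(hwc, (u64)-left);
                  return overflow;
          }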
      Reported-by: Stephane Eranian <eranian@googlemail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Remove the last nmi/irq bits · 8a016db3
      Committed by Peter Zijlstra
      IRQ (non-NMI) sampling is not used anymore; remove the last few bits.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Rename various fields · b23f3325
      Committed by Peter Zijlstra
      A few renames:
      
        s/irq_period/sample_period/
        s/irq_freq/sample_freq/
        s/PERF_RECORD_/PERF_SAMPLE_/
        s/record_type/sample_type/
      
      And change both the new sample_type and read_format to u64.
      Reported-by: Stephane Eranian <eranian@googlemail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. 29 May 2009, 1 commit
  14. 26 May 2009, 6 commits
    • perf_counter, x86: Make NMI lockups more robust · aaba9801
      Committed by Ingo Molnar
      We have a debug check that detects stuck NMIs and returns with
      the PMU disabled in the global ctrl MSR - but I managed to trigger
      a situation where this was not enough to deassert the NMI.
      
      So clear/reset the full PMU and keep the disable count balanced when
      exiting from here. This way the box produces a debug warning but
      stays up and is more debuggable.
      
      [ Impact: in case of PMU related bugs, recover more gracefully ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter, x86: Fix APIC NMI programming · 79202ba9
      Committed by Ingo Molnar
      My Nehalem box locks up in certain situations (with an
      always-asserted NMI causing a lockup) if the PMU LVT
      entry is reprogrammed between NMI and IRQ mode at a
      high frequency.
      
      Standardize exclusively on NMIs instead.
      
      [ Impact: fix lockup ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Revert "perf_counter, x86: speed up the scheduling fast-path" · 53b441a5
      Committed by Ingo Molnar
      This reverts commit b68f1d2e.
      
      It is causing problems (stuck/stuttering profiling) when mixed
      NMI and non-NMI counters are used.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Generic per counter interrupt throttle · a78ac325
      Committed by Peter Zijlstra
      Introduce a generic per-counter interrupt throttle.
      
      This uses the perf_counter_overflow() quick-disable to throttle a
      specific counter when it is firing too fast, provided a
      pmu->unthrottle() method is supplied that can undo the quick-disable.
      
      Power needs to implement both the quick disable and the unthrottle method.
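      
      The heart of the throttle decision in the overflow path, as a
      fragment (the max_interrupts_per_tick limit is an illustrative
      stand-in for the actual tunable):
      
          /* in perf_counter_overflow(): count interrupts per tick and
           * quick-disable the counter once it fires too often; the tick
           * later calls pmu->unthrottle() to undo the quick-disable */
          if (counter->pmu->unthrottle &&
              counter->hw.interrupts != MAX_INTERRUPTS) {
                  counter->hw.interrupts++;
                  if (counter->hw.interrupts > max_interrupts_per_tick) {
                          counter->hw.interrupts = MAX_INTERRUPTS;
                          ret = 1;        /* caller performs the quick-disable */
                  }
          }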
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Remove interrupt throttle · 48e22d56
      Committed by Peter Zijlstra
      Remove the x86-specific interrupt throttle.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.616671838@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Expose INV and EDGE bits · ff99be57
      Committed by Peter Zijlstra
      Expose the INV and EDGE bits of the PMU to raw configs.
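      
      Concretely, this widens the mask applied to raw configs with the
      edge-detect and invert bits of the event-select MSR (bit positions
      per the architectural perfmon layout; mask names are illustrative):
      
          #define EVNTSEL_EDGE_MASK   0x00040000ULL   /* bit 18: edge detect  */
          #define EVNTSEL_INV_MASK    0x00800000ULL   /* bit 23: invert cmask */
      
          /* the filter applied to raw configs now lets these through: */
          #define EVNTSEL_RAW_MASK    (EVNTSEL_EVENT_MASK | EVNTSEL_UNIT_MASK | \
                                       EVNTSEL_EDGE_MASK  | EVNTSEL_INV_MASK  | \
                                       EVNTSEL_COUNTER_MASK)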
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.494709027@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 21 May 2009, 1 commit
    • perf_counter: Fix context removal deadlock · 34adc806
      Committed by Ingo Molnar
      Disable the PMU globally before removing a counter from a
      context. This fixes the following lockup:
      
      [22081.741922] ------------[ cut here ]------------
      [22081.746668] WARNING: at arch/x86/kernel/cpu/perf_counter.c:803 intel_pmu_handle_irq+0x9b/0x24e()
      [22081.755624] Hardware name: X8DTN
      [22081.758903] perfcounters: irq loop stuck!
      [22081.762985] Modules linked in:
      [22081.766136] Pid: 11082, comm: perf Not tainted 2.6.30-rc6-tip #226
      [22081.772432] Call Trace:
      [22081.774940]  <NMI>  [<ffffffff81019aed>] ? intel_pmu_handle_irq+0x9b/0x24e
      [22081.781993]  [<ffffffff81019aed>] ? intel_pmu_handle_irq+0x9b/0x24e
      [22081.788368]  [<ffffffff8104505c>] ? warn_slowpath_common+0x77/0xa3
      [22081.794649]  [<ffffffff810450d3>] ? warn_slowpath_fmt+0x40/0x45
      [22081.800696]  [<ffffffff81019aed>] ? intel_pmu_handle_irq+0x9b/0x24e
      [22081.807080]  [<ffffffff814d1a72>] ? perf_counter_nmi_handler+0x3f/0x4a
      [22081.813751]  [<ffffffff814d2d09>] ? notifier_call_chain+0x58/0x86
      [22081.819951]  [<ffffffff8105b250>] ? notify_die+0x2d/0x32
      [22081.825392]  [<ffffffff814d1414>] ? do_nmi+0x8e/0x242
      [22081.830538]  [<ffffffff814d0f0a>] ? nmi+0x1a/0x20
      [22081.835342]  [<ffffffff8117e102>] ? selinux_file_free_security+0x0/0x1a
      [22081.842105]  [<ffffffff81018793>] ? x86_pmu_disable_counter+0x15/0x41
      [22081.848673]  <<EOE>>  [<ffffffff81018f3d>] ? x86_pmu_disable+0x86/0x103
      [22081.855512]  [<ffffffff8108fedd>] ? __perf_counter_remove_from_context+0x0/0xfe
      [22081.862926]  [<ffffffff8108fcbc>] ? counter_sched_out+0x30/0xce
      [22081.868909]  [<ffffffff8108ff36>] ? __perf_counter_remove_from_context+0x59/0xfe
      [22081.876382]  [<ffffffff8106808a>] ? smp_call_function_single+0x6c/0xe6
      [22081.882955]  [<ffffffff81091b96>] ? perf_release+0x86/0x14c
      [22081.888600]  [<ffffffff810c4c84>] ? __fput+0xe7/0x195
      [22081.893718]  [<ffffffff810c213e>] ? filp_close+0x5b/0x62
      [22081.899107]  [<ffffffff81046a70>] ? put_files_struct+0x64/0xc2
      [22081.905031]  [<ffffffff8104841a>] ? do_exit+0x1e2/0x6ef
      [22081.910360]  [<ffffffff814d0a60>] ? _spin_lock_irqsave+0x9/0xe
      [22081.916292]  [<ffffffff8104898e>] ? do_group_exit+0x67/0x93
      [22081.921953]  [<ffffffff810489cc>] ? sys_exit_group+0x12/0x16
      [22081.927759]  [<ffffffff8100baab>] ? system_call_fastpath+0x16/0x1b
      [22081.934076] ---[ end trace 3a3936ce3e1b4505 ]---
      
      And could potentially also fix the lockup reported by Marcelo Tosatti.
      
      Also, print more debug info in case of a detected lockup.
      
      [ Impact: fix lockup ]
      Reported-by: Marcelo Tosatti <mtosatti@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 18 May 2009, 1 commit
    • perf_counter, x86: speed up the scheduling fast-path · b68f1d2e
      Committed by Ingo Molnar
      We have to set up the LVT entry only at counter init time, not at
      every switch-in time.
      
      There's friction between NMI and non-NMI use here - we'll probably
      remove the per-counter configurability of it - but until then, don't
      slow things down ...
      
      [ Impact: micro-optimization ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 17 May 2009, 1 commit
    • perf_counter, x86: fix zero irq_period counters · d2517a49
      Committed by Ingo Molnar
      The quirk to irq_period unearthed a lack of robustness in the
      hw_counter initialization sequence: we left irq_period at 0, which
      was then quirked up to 2 ... which then generated a _lot_ of
      interrupts during 'perf stat' runs, slowed them down and skewed
      the counter results in general.
      
      Initialize irq_period to the maximum instead.
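      
      The fix amounts to one defaulting step at init time (a sketch;
      field names approximate):
      
          /* if user-space did not request a specific period, start from
           * the widest period the hardware supports rather than leaving
           * 0 to be quirked up to a tiny value later */
          if (!hwc->irq_period)
                  hwc->irq_period = x86_pmu.max_period;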
      
      [ Impact: fix perf stat results ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 15 May 2009, 2 commits