1. 09 May 2009 (1 commit)
  2. 06 May 2009 (7 commits)
  3. 05 May 2009 (5 commits)
    • perf_counter: fix fixed-purpose counter support on v2 Intel-PERFMON · 066d7dea
      Authored by Ingo Molnar
      Fixed-purpose counters stopped working in a simple 'perf stat ls' run:
      
         <not counted>  cache references
         <not counted>  cache misses
      
      Due to:
      
        ef7b3e09: perf_counter, x86: remove vendor check in fixed_mode_idx()
      
      Which made x86_pmu.num_counters_fixed matter: if it's nonzero, the
      fixed-purpose counters are utilized.
      
      But on v2 perfmon this field is not set (despite there being
      fixed-purpose PMCs). So add a quirk to set the number of fixed-purpose
      counters to at least three.
      
      [ Impact: add quirk for three fixed-purpose counters on certain Intel CPUs ]
      
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-28-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
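      The quirk amounts to refusing to believe a zero fixed-counter count on v2 perfmon. A minimal, self-contained sketch of that logic (the function and variable names here are illustrative, not the upstream diff, which operates on x86_pmu.num_counters_fixed):

         #include <stdio.h>

         /* Illustrative stand-in for the CPUID-enumerated fixed-counter count. */
         static int fixed_counter_quirk(int perfmon_version, int reported_fixed)
         {
                 /*
                  * v2 perfmon may report zero fixed-purpose counters even though
                  * the PMCs exist, so assume at least three in that case.
                  */
                 if (perfmon_version > 1 && reported_fixed < 3)
                         return 3;
                 return reported_fixed;
         }

         int main(void)
         {
                 printf("v2 reporting 0 -> use %d fixed counters\n",
                        fixed_counter_quirk(2, 0));
                 return 0;
         }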
    • perf_counter: convert perf_resource_mutex to a spinlock · 1dce8d99
      Authored by Ingo Molnar
      Now percpu counters can be initialized very early. But the init
      sequence uses mutex_lock(). Fortunately, perf_resource_mutex should
      be a spinlock anyway, so convert it.
      
      [ Impact: fix crash due to early init mutex use ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
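      The conversion is mechanical: the same short, non-sleeping bookkeeping is simply guarded by a spinlock, which unlike a mutex can be taken this early in boot. A kernel-style sketch of the idea (not the verbatim diff; perf_reserved_percpu stands in for the state the lock protects):

         #include <linux/spinlock.h>

         /* Was: static DEFINE_MUTEX(perf_resource_mutex); -- mutex_lock() may sleep. */
         static DEFINE_SPINLOCK(perf_resource_lock);

         static int perf_reserved_percpu;        /* illustrative shared state */

         static void reserve_percpu_sketch(int nr)
         {
                 spin_lock(&perf_resource_lock);
                 perf_reserved_percpu += nr;     /* non-sleeping work only */
                 spin_unlock(&perf_resource_lock);
         }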
    • perf_counter: initialize the per-cpu context earlier · 0d905bca
      Authored by Ingo Molnar
      percpu scheduling for perfcounters wants to take the context lock,
      but that lock first needs to be initialized. Currently it is an
      early_initcall() - but that is too late, the task tick runs much
      sooner than that.
      
      Call it explicitly from the scheduler init sequence instead.
      
      [ Impact: fix access-before-init crash ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
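      The fix is purely about ordering: the per-cpu context (and its lock) has to exist before the first scheduler tick, so the initialization is called directly from sched_init() instead of waiting for the initcall machinery. A sketch of that call ordering, assuming an init function along the lines of perf_counter_init() (not the verbatim diff):

         #include <linux/init.h>

         extern void __init perf_counter_init(void);   /* sets up per-cpu contexts */

         void __init sched_init(void)
         {
                 /* ... runqueue and scheduler data structure setup ... */

                 /*
                  * Initialize the per-cpu perf context before the first task
                  * tick can try to take its lock; an early_initcall() would
                  * run far too late for that.
                  */
                 perf_counter_init();
         }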
    • perf_counter: x86: fixup nmi_watchdog vs perf_counter boo-boo · ba77813a
      Authored by Peter Zijlstra
      Invert the atomic_inc_not_zero() test so that we will indeed detect the
      first activation.
      
      Also rename the global num_counters, since it's easy to confuse with
      x86_pmu.num_counters.
      
      [ Impact: fix non-working perfcounters on AMD CPUs, cleanup ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241455664.7620.4938.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
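      atomic_inc_not_zero() only increments when the counter is already non-zero and returns false when it was zero, so it is the failing case that marks the first activation. A toy, single-threaded model of why the test has to be negated (illustrative only, not the kernel code):

         #include <stdio.h>

         /*
          * Single-threaded model of atomic_inc_not_zero(): increment *v unless
          * it is zero; return 1 if the increment happened, 0 if it did not.
          */
         static int inc_not_zero(int *v)
         {
                 if (*v == 0)
                         return 0;
                 (*v)++;
                 return 1;
         }

         int main(void)
         {
                 int active = 0;

                 /* The *failure* of inc_not_zero() signals the first activation. */
                 if (!inc_not_zero(&active)) {
                         printf("first activation: reserve PMC hardware\n");
                         active = 1;             /* take the first reference */
                 }
                 return 0;
         }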
    • perf_counter: round-robin per-CPU counters too · b82914ce
      Authored by Ingo Molnar
      This used to be unstable when we had the rq->lock dependencies, but
      now that those are a thing of the past we can turn on per-CPU
      counter RR too.
      
      [ Impact: handle counter over-commit for per-CPU counters too ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
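      Round-robin here means periodically rotating which of the over-committed counters own the real PMCs, so every counter gets some hardware time. A toy illustration of that rotation (purely illustrative; not the kernel's data structures):

         #include <stdio.h>

         #define NR_COUNTERS 5   /* counters wanting hardware */
         #define NR_HW_PMCS  2   /* hardware slots actually available */

         int main(void)
         {
                 const char *ctr[NR_COUNTERS] = { "c0", "c1", "c2", "c3", "c4" };
                 int head = 0;

                 /* Each tick, rotate so a different subset owns the real PMCs. */
                 for (int tick = 0; tick < 3; tick++) {
                         printf("tick %d:", tick);
                         for (int i = 0; i < NR_HW_PMCS; i++)
                                 printf(" %s", ctr[(head + i) % NR_COUNTERS]);
                         printf("\n");
                         head = (head + 1) % NR_COUNTERS;
                 }
                 return 0;
         }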
  4. 03 May 2009 (1 commit)
  5. 02 May 2009 (4 commits)
  6. 01 May 2009 (10 commits)
  7. 30 Apr 2009 (12 commits)