1. 15 May 2009, 13 commits
    • perf_counter: remove perf_disable/enable exports · 548e1ddf
      Committed by Peter Zijlstra
      Now that ACPI idle doesn't use it anymore, remove the exports.
      
      [ Impact: remove dead code/data ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <20090515132018.429826617@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      548e1ddf
    • perf stat: handle Ctrl-C · 58d7e993
      Committed by Ingo Molnar
      Before this change, if a long-running perf stat workload was Ctrl-C-ed,
      the utility exited without displaying statistics.
      
      After the change, the Ctrl-C gets propagated into the workload (and
      causes its early exit there), but perf stat itself will still continue
      to run and will display counter results.
      
      This is useful for open-ended workloads: let them run for
      a while, then Ctrl-C them to get the stats.
      
      [ Impact: extend perf stat with new functionality ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      58d7e993
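The behaviour described above relies on the parent installing its own SIGINT handler while the forked workload keeps the default disposition and dies on Ctrl-C. A minimal userspace sketch of that handler pattern (illustrative names, not perf's actual code):

```c
#include <signal.h>

/* Set when Ctrl-C arrives; the tool would then stop measuring and
 * print the accumulated counter values instead of dying. */
static volatile sig_atomic_t interrupted;

static void sig_handler(int sig)
{
    (void)sig;
    interrupted = 1;
}

/* Install the handler in the parent.  The forked workload gets the
 * default SIGINT disposition back across exec, so only the parent
 * survives a Ctrl-C. */
static void install_sigint_handler(void)
{
    signal(SIGINT, sig_handler);
}
```

After the workload exits from the signal, the parent notices `interrupted` and prints its results as usual.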
    • perf_counter: Remove ACPI quirk · 251e8e3c
      Committed by Ingo Molnar
      We had a disable/enable around acpi_idle_do_entry() due to an erratum
      in an early prototype CPU I had access to. That erratum has been fixed
      in the BIOS, so remove the quirk.
      
      The quirk also kept us from profiling interrupts that hit the ACPI idle
      instruction - so this is an improvement as well, beyond a cleanup and
      a micro-optimization.
      
      [ Impact: improve profiling scope, cleanup, micro-optimization ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      251e8e3c
    • perf_counter: x86: Protect against infinite loops in intel_pmu_handle_irq() · 9029a5e3
      Committed by Ingo Molnar
      intel_pmu_handle_irq() can lock up in an infinite loop if the hardware
      does not allow the acking of irqs. Alas, this happened in testing, so
      make this path robust and emit a warning if it happens again in the future.
      
      Also, clean up the IRQ handlers a bit.
      
      [ Impact: improve perfcounter irq/nmi handling robustness ]
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9029a5e3
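The robustness fix amounts to bounding the ack loop. A hedged sketch of the idea, with a stub standing in for the status MSR read (the names and the loop cap are illustrative, not the kernel's exact code):

```c
#include <stdio.h>

#define MAX_LOOPS 100

/* Stub emulating hardware that refuses to ack its interrupts:
 * MSR_CORE_PERF_GLOBAL_STATUS reads back nonzero forever. */
static unsigned long long stuck_status(void)
{
    return 1ULL;
}

/* Stub emulating well-behaved hardware with nothing pending. */
static unsigned long long clear_status(void)
{
    return 0ULL;
}

/* Bounded variant of the handler loop: process overflows until the
 * status word reads zero, but bail out with a warning after
 * MAX_LOOPS iterations instead of spinning forever.  Returns -1 on
 * bail-out, otherwise the number of iterations taken. */
static int handle_irq(unsigned long long (*get_status)(void))
{
    int loops = 0;

    while (get_status()) {
        if (++loops > MAX_LOOPS) {
            fprintf(stderr, "WARNING: perfcounters: irq loop stuck!\n");
            return -1;
        }
        /* ... ack and reprogram the overflowed counters here ... */
    }
    return loops;
}
```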
    • perf_counter: x86: Disallow interval of 1 · 1c80f4b5
      Committed by Ingo Molnar
      On certain CPUs I have observed a stuck PMU when the interval was set
      to 1 and NMIs were used: the PMU had PMC0 set in MSR_CORE_PERF_GLOBAL_STATUS,
      but it was not possible to ack it via MSR_CORE_PERF_GLOBAL_OVF_CTRL,
      and the NMI loop got stuck indefinitely.
      
      [ Impact: fix rare hangs during high perfcounter load ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1c80f4b5
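The fix reduces to clamping the requested period when the counter is programmed. A sketch, assuming a simple helper (the function name is hypothetical):

```c
typedef unsigned long long u64;

/* A sample period of 1 could wedge the PMU under NMIs (see the
 * commit above), so refuse it: enforce a minimum period of 2. */
static u64 clamp_sample_period(u64 period)
{
    return period < 2 ? 2 : period;
}
```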
    • perf_counter: x86: Robustify interrupt handling · a4016a79
      Committed by Peter Zijlstra
      Two consecutive NMIs could daze and confuse the machine when the
      first would handle the overflow of both counters.
      
      [ Impact: fix false-positive syslog messages under multi-session profiling ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a4016a79
    • perf_counter: Rework the perf counter disable/enable · 9e35ad38
      Committed by Peter Zijlstra
      The current disable/enable mechanism is:
      
      	token = hw_perf_save_disable();
      	...
      	/* do bits */
      	...
      	hw_perf_restore(token);
      
      This works well, provided that the calls nest properly. Except ours don't.
      
      x86 NMI/INT throttling has non-nested uses of this, breaking things. Therefore
      provide a reference-counted disable/enable interface, where the first disable
      disables the hardware and the last enable enables it again.
      
      [ Impact: refactor, simplify the PMU disable/enable logic ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9e35ad38
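The reference-counted scheme can be sketched as follows; pmu_stop()/pmu_start() are hypothetical stand-ins for the arch-specific hooks, and the real kernel code additionally deals with per-CPU state and IRQ context:

```c
/* Model of the PMU state: enabled until the first disable. */
static int disable_count;
static int hw_enabled = 1;

static void pmu_stop(void)  { hw_enabled = 0; }  /* arch hook stand-in */
static void pmu_start(void) { hw_enabled = 1; }  /* arch hook stand-in */

/* First disable actually stops the hardware; nested disables only
 * bump the count, so non-nested callers can no longer break things. */
void perf_disable(void)
{
    if (disable_count++ == 0)
        pmu_stop();
}

/* Last enable restarts the hardware. */
void perf_enable(void)
{
    if (--disable_count == 0)
        pmu_start();
}
```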
    • perf_counter: x86: Fix up the amd NMI/INT throttle · 962bf7a6
      Committed by Peter Zijlstra
      perf_counter_unthrottle() restores throttle_ctrl, but it is never set.
      Also, we fail to disable all counters when throttling.
      
      [ Impact: fix rare stuck perf-counters when they are throttled ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      962bf7a6
    • perf_counter: Fix perf_output_copy() WARN to account for overflow · 53020fe8
      Committed by Peter Zijlstra
      The simple reservation test in perf_output_copy() failed to take
      unsigned int overflow into account, fix this.
      
      [ Impact: fix false positive warning with more than 4GB of profiling data ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      53020fe8
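The overflow in question is ordinary unsigned wraparound in the reservation test. A simplified sketch contrasting the naive check with an overflow-safe one (this abstracts away perf's actual ring-buffer bookkeeping; names are illustrative):

```c
#include <limits.h>

/* Naive reservation test: wrong once offset + len wraps past
 * UINT_MAX, silently passing a check it should fail. */
static int fits_naive(unsigned int offset, unsigned int len,
                      unsigned int head)
{
    return offset + len <= head;
}

/* Overflow-aware version: compare the remaining room instead, which
 * never wraps.  (The kernel fix computed a signed difference; this
 * is the same idea.) */
static int fits(unsigned int offset, unsigned int len,
                unsigned int head)
{
    return offset <= head && head - offset >= len;
}
```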
    • perf_counter: x86: Allow unprivileged use of NMIs · a026dfec
      Committed by Peter Zijlstra
      Apply sysctl_perf_counter_priv to NMIs. Also, fail the counter
      creation instead of silently down-grading to regular interrupts.
      
      [ Impact: allow wider perf-counter usage ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a026dfec
    • perf_counter: x86: Fix throttling · f5a5a2f6
      Committed by Ingo Molnar
      If counters are disabled globally when a perfcounter IRQ/NMI hits,
      and if we throttle in that case, we'll promote the '0' value to
      the next lapic IRQ and disable all perfcounters at that point,
      permanently ...
      
      Fix it.
      
      [ Impact: fix hung perfcounters under load ]
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f5a5a2f6
    • perf_counter: x86: More accurate counter update · ec3232bd
      Committed by Peter Zijlstra
      Take the counter width into account instead of assuming 32 bits.
      
      In particular, Nehalem has 44-bit-wide counters, and all
      arithmetic should happen on a 44-bit signed-integer basis.
      
      [ Impact: fix rare event imprecision, warning message on Nehalem ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ec3232bd
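The width-aware update boils down to sign-extending raw deltas at the counter's width. A standalone sketch of the shift trick (modeled on, but not identical to, the kernel's counter-update helper; it assumes arithmetic right shift of signed values, as kernel code does):

```c
#include <stdint.h>

/* Compute the signed delta between two raw PMC reads when the
 * counter is only `width` bits wide (44 on Nehalem): shift both
 * values up so the counter's top bit lands in bit 63, subtract,
 * then shift back down so wraparound sign-extends correctly. */
static int64_t counter_delta(uint64_t prev, uint64_t now, int width)
{
    int shift = 64 - width;

    return ((int64_t)(now << shift) - (int64_t)(prev << shift)) >> shift;
}
```

With a 32-bit assumption, a 44-bit counter wrapping near its top would produce a huge bogus delta; shifting at the true width keeps the result small and correct.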
    • perf record: Allow specifying a pid to record · 1a853e36
      Committed by Arnaldo Carvalho de Melo
      Allow specifying a pid instead of always fork+exec'ing a command.
      
      Because the PERF_EVENT_COMM and PERF_EVENT_MMAP events happened before
      we connected, we must synthesize them so that 'perf report' can get what
      it needs.
      
      [ Impact: add new command line option ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <20090515015046.GA13664@ghostprotocols.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1a853e36
  2. 13 May 2009, 1 commit
  3. 12 May 2009, 1 commit
    • perf_counter: call hw_perf_save_disable/restore around group_sched_in · e758a33d
      Committed by Paul Mackerras
      I noticed that when enabling a group via the PERF_COUNTER_IOC_ENABLE
      ioctl on the group leader, the counters weren't enabled and counting
      immediately on return from the ioctl, but did start counting a little
      while later (presumably after a context switch).
      
      The reason was that __perf_counter_enable calls group_sched_in which
      calls hw_perf_group_sched_in, which on powerpc assumes that the caller
      has called hw_perf_save_disable already.  Until commit 46d686c6
      ("perf_counter: put whole group on when enabling group leader") it was
      true that all callers of group_sched_in had called
      hw_perf_save_disable first, and the powerpc hw_perf_group_sched_in
      relies on that (there isn't an x86 version).
      
      This fixes the problem by putting calls to hw_perf_save_disable /
      hw_perf_restore around the calls to group_sched_in and
      counter_sched_in in __perf_counter_enable.  Having the calls to
      hw_perf_save_disable/restore around the counter_sched_in call is
      harmless and makes this call consistent with the other call sites
      of counter_sched_in, which have all called hw_perf_save_disable first.
      
      [ Impact: more precise counter group disable/enable functionality ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <18953.25733.53359.147452@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e758a33d
  4. 11 May 2009, 4 commits
    • perf_counter: call atomic64_set for counter->count · 615a3f1e
      Committed by Paul Mackerras
      A compile warning triggered because we were calling
      atomic_set(&counter->count). But since counter->count
      is an atomic64_t, we have to use atomic64_set.
      
      Otherwise the count can be set short, resulting in the reset ioctl
      only resetting the low word.
      
      [ Impact: clear counter properly during the reset ioctl ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <18951.48285.270311.981806@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      615a3f1e
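A userspace analogue of the fix, using C11 atomics to model why the reset must be a full 64-bit store rather than a 32-bit one:

```c
#include <stdint.h>
#include <stdatomic.h>

static _Atomic uint64_t count;

/* The reset path must clear the whole 64-bit count; this mirrors
 * the kernel's atomic64_set(&counter->count, 0), not the 32-bit
 * atomic_set() that would only touch the low word. */
static void counter_reset(void)
{
    atomic_store(&count, 0);
}
```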
    • perf_counter: don't count scheduler ticks as context switches · a08b159f
      Committed by Paul Mackerras
      The context-switch software counter gives inflated values at present
      because each scheduler tick and each process-wide counter
      enable/disable prctl gets counted as a context switch.
      
      This happens because perf_counter_task_tick, perf_counter_task_disable
      and perf_counter_task_enable all call perf_counter_task_sched_out,
      which calls perf_swcounter_event to record a context switch event.
      
      This fixes it by introducing a variant of perf_counter_task_sched_out
      with two underscores in front for internal use within the perf_counter
      code, and makes perf_counter_task_{tick,disable,enable} call it.  This
      variant doesn't record a context switch event, and takes a struct
      perf_counter_context *.  This adds the new variant rather than
      changing the behaviour or interface of perf_counter_task_sched_out
      because that is called from other code.
      
      [ Impact: fix inflated context-switch event counts ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <18951.48034.485580.498953@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a08b159f
    • perf_counter: Put whole group on when enabling group leader · 6751b71e
      Committed by Paul Mackerras
      Currently, if you have a group where the leader is disabled and there
      are siblings that are enabled, and then you enable the leader, we only
      put the leader on the PMU, and not its enabled siblings.  This is
      incorrect, since the enabled group members should be all on or all off
      at any given point.
      
      This fixes it by adding a call to group_sched_in in
      __perf_counter_enable in the case where we're enabling a group leader.
      
      To avoid the need for a forward declaration this also moves
      group_sched_in up before __perf_counter_enable.  The actual content of
      group_sched_in is unchanged by this patch.
      
      [ Impact: fix bug in counter enable code ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <18951.34946.451546.691693@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6751b71e
    • perf_counter, x86: clean up throttling printk · 88233923
      Committed by Mike Galbraith
      s/PERFMON/perfcounters/ in the perfcounter interrupt throttling warning.
      
      'perfmon' is an Intel-only CPU feature name, while we do the
      throttling in a generic way.
      
      [ Impact: cleanup ]
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      88233923
  5. 10 May 2009, 1 commit
  6. 09 May 2009, 5 commits
  7. 06 May 2009, 7 commits
  8. 05 May 2009, 5 commits
    • perf_counter: fix fixed-purpose counter support on v2 Intel-PERFMON · 066d7dea
      Committed by Ingo Molnar
      Fixed-purpose counters stopped working in a simple 'perf stat ls' run:
      
         <not counted>  cache references
         <not counted>  cache misses
      
      Due to:
      
        ef7b3e09: perf_counter, x86: remove vendor check in fixed_mode_idx()
      
      Which made x86_pmu.num_counters_fixed matter: if it's nonzero, the
      fixed-purpose counters are utilized.
      
      But on v2 perfmon this field is not set (despite there being
      fixed-purpose PMCs). So add a quirk to set the number of fixed-purpose
      counters to at least three.
      
      [ Impact: add quirk for three fixed-purpose counters on certain Intel CPUs ]
      
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-28-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      066d7dea
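The quirk is a small fixup applied after reading the counter geometry. A hypothetical standalone model (the real code reads the values from CPUID leaf 0xA; the function name is illustrative):

```c
/* v2 perfmon can report zero fixed-purpose counters even though
 * three exist, so on version 2 assume at least three.  Reported
 * values above three are trusted as-is. */
static int fixup_num_counters_fixed(int version, int reported)
{
    if (version == 2 && reported < 3)
        return 3;
    return reported;
}
```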
    • perf_counter: convert perf_resource_mutex to a spinlock · 1dce8d99
      Committed by Ingo Molnar
      Now percpu counters can be initialized very early. But the init
      sequence uses mutex_lock(). Fortunately, perf_resource_mutex should
      be a spinlock anyway, so convert it.
      
      [ Impact: fix crash due to early init mutex use ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1dce8d99
    • perf_counter: initialize the per-cpu context earlier · 0d905bca
      Committed by Ingo Molnar
      percpu scheduling for perfcounters wants to take the context lock,
      but that lock first needs to be initialized. Currently it is an
      early_initcall() - but that is too late, the task tick runs much
      sooner than that.
      
      Call it explicitly from the scheduler init sequence instead.
      
      [ Impact: fix access-before-init crash ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d905bca
    • perf_counter: x86: fixup nmi_watchdog vs perf_counter boo-boo · ba77813a
      Committed by Peter Zijlstra
      Invert the atomic_inc_not_zero() test so that we will indeed detect the
      first activation.
      
      Also rename the global num_counters, since it's easy to confuse
      with x86_pmu.num_counters.
      
      [ Impact: fix non-working perfcounters on AMD CPUs, cleanup ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241455664.7620.4938.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ba77813a
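The inverted test matters because atomic_inc_not_zero() succeeds only when the count was already nonzero, so the *failure* case is what marks the first activation. A simplified userspace model using C11 atomics (the real kernel code's race handling differs; this just shows the sense of the test):

```c
#include <stdatomic.h>

static _Atomic int active;

/* Userspace model of atomic_inc_not_zero(): increment unless the
 * current value is zero; returns 1 if it incremented. */
static int inc_not_zero(_Atomic int *v)
{
    int old = atomic_load(v);

    while (old != 0) {
        if (atomic_compare_exchange_weak(v, &old, old + 1))
            return 1;
    }
    return 0;
}

/* First activation is when inc_not_zero() FAILS: nobody was active
 * yet, so do the one-time setup and then publish a count of 1.  The
 * original bug had this test the wrong way around.  Returns 1 when
 * this call performed the first activation. */
static int activate(void)
{
    if (!inc_not_zero(&active)) {
        /* one-time hardware setup would go here */
        atomic_store(&active, 1);
        return 1;
    }
    return 0;
}
```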
    • perf_counter: round-robin per-CPU counters too · b82914ce
      Committed by Ingo Molnar
      This used to be unstable when we had the rq->lock dependencies,
      but now that those are a thing of the past we can turn on per-CPU
      counter RR too.
      
      [ Impact: handle counter over-commit for per-CPU counters too ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b82914ce
  9. 03 May 2009, 1 commit
  10. 02 May 2009, 2 commits