1. 09 May 2009 (3 commits)
    • perf_counter: add PERF_RECORD_CPU · f370e1e2
      Peter Zijlstra authored
      Allow recording the CPU number the event was generated on.
      
      RFC: this leaves a u32 as reserved; should we fill in the
           node_id() there, or leave this open for future extension,
           as userspace can already easily do the cpu->node mapping
           if needed?
      
      [ Impact: extend perfcounter output record format ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090508170029.008627711@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
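
      A hedged sketch of what the extended record layout might carry; the
      exact struct is whatever the kernel ABI of this commit defines, so
      the names below are illustrative only:

          /* Illustrative: with PERF_RECORD_CPU set in record_type, each
           * overflow record also carries the originating CPU, plus the
           * reserved u32 the RFC note discusses (e.g. for node_id()). */
          struct perf_sample_cpu {
                  u32 cpu;        /* cpu the event was generated on */
                  u32 reserved;   /* left open for future extension */
          };
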
    • perf_counter: add PERF_RECORD_CONFIG · a85f61ab
      Peter Zijlstra authored
      Much like PERF_RECORD_GROUP records the hw_event.config to
      identify the values, allow recording it for all counters.
      
      [ Impact: extend perfcounter output record format ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090508170028.923228280@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
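
      Under the same caveat, a minimal sketch of the addition: with
      PERF_RECORD_CONFIG set, each record would also carry the raw 64-bit
      config value so post-processing can tell the counters apart:

          /* Illustrative: the hw_event.config that generated the sample. */
          struct perf_sample_config {
                  u64 config;
          };
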
    • perf_counter: rework ioctl()s · 3df5edad
      Peter Zijlstra authored
      Corey noticed that ioctl()s on grouped counters didn't work on
      the whole group. This extends the ioctl() interface to take a
      second argument that is interpreted as a flags field. We then
      provide PERF_IOC_FLAG_GROUP to toggle the behaviour.
      
      Having this flag gives the greatest flexibility, allowing you
      to individually enable/disable/reset counters in a group, or
      all together.
      
      [ Impact: fix group counter enable/disable semantics ]
      Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20090508170028.837558214@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
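
      A minimal userspace sketch, assuming the PERF_COUNTER_IOC_* request
      names of this era (they were later renamed PERF_EVENT_IOC_*); the
      new second argument is the ioctl's third parameter in C:

          #include <sys/ioctl.h>

          void reset_and_enable_group(int fd)
          {
                  /* act on the counter's whole group ... */
                  ioctl(fd, PERF_COUNTER_IOC_RESET,  PERF_IOC_FLAG_GROUP);
                  ioctl(fd, PERF_COUNTER_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
                  /* ... or pass 0 to act on this counter alone */
          }
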
  2. 06 May 2009 (3 commits)
  3. 05 May 2009 (1 commit)
    • perf_counter: initialize the per-cpu context earlier · 0d905bca
      Ingo Molnar authored
      percpu scheduling for perfcounters wants to take the context lock,
      but that lock first needs to be initialized. Currently it is
      initialized from an early_initcall() - but that is too late: the
      task tick runs much sooner than that.
      
      Call it explicitly from the scheduler init sequence instead.
      
      [ Impact: fix access-before-init crash ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
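
      A hedged sketch of the ordering fix described above; the call site
      follows the commit's description rather than the exact diff:

          /* kernel/sched.c (illustrative) */
          void __init sched_init(void)
          {
                  /* ... runqueue and scheduler setup ... */

                  /* init the per-cpu perf contexts (and their locks)
                   * before the first task tick can touch them, rather
                   * than waiting for early_initcall() */
                  perf_counter_init();
          }
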
  4. 01 May 2009 (1 commit)
    • perf_counter: fix race in perf_output_* · c33a0bc4
      Peter Zijlstra authored
      When two (or more) contexts output to the same buffer, it is possible
      to observe half written output.
      
      Suppose we have CPU0 doing perf_counter_mmap(), CPU1 doing
      perf_counter_overflow(). If CPU1 does a wakeup and exposes head to
      user-space, then user-space (or a third CPU) can observe the data
      CPU0 is still writing.
      
      [ Impact: fix occasionally corrupted profiling records ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090501102533.007821627@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
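
      A hedged sketch of the discipline the fix implies: writers first
      reserve space, and the head user-space sees only advances after all
      writes into the reserved region have completed. Names are
      illustrative, not the kernel's:

          struct ring {
                  atomic_long_t reserve_head; /* writers claim space here */
                  unsigned long user_head;    /* what user-space may read */
          };

          /* several contexts can claim disjoint regions concurrently */
          static unsigned long ring_reserve(struct ring *r, unsigned long len)
          {
                  return atomic_long_add_return(len, &r->reserve_head) - len;
          }

          /* publish only once every prior write is complete */
          static void ring_publish(struct ring *r, unsigned long head)
          {
                  smp_wmb();            /* order data before head */
                  r->user_head = head;
          }
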
  5. 29 April 2009 (3 commits)
  6. 09 April 2009 (7 commits)
  7. 07 April 2009 (10 commits)
  8. 06 April 2009 (12 commits)
    • perf_counter: update mmap() counter read · 92f22a38
      Peter Zijlstra authored
      Paul noted that we don't need SMP barriers for the mmap() counter read
      because it's always on the same cpu (otherwise you can't access the hw
      counter anyway).
      
      So remove the SMP barriers and replace them with regular compiler
      barriers.
      
      Further, update the comment to include a race-free method of reading
      said hardware counter. The primary change is putting the pmc_read
      inside the seq-loop; otherwise we can still race and read rubbish.
      Noticed-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Orig-LKML-Reference: <20090402091319.577951445@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
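
      A sketch of the race-free read sequence the updated comment
      documents, with pmc_read inside the seq-loop and compiler barriers
      in place of the removed SMP barriers (pc is the mmap()ed counter
      page; pmc_read/regular_read as in the kernel comment):

          u32 seq;
          s64 count;

          again:
                  seq = pc->lock;          /* odd means mid-update */
                  if (unlikely(seq & 1)) {
                          cpu_relax();
                          goto again;
                  }

                  if (pc->index) {         /* hw counter is live here */
                          count  = pmc_read(pc->index - 1);
                          count += pc->offset;
                  } else {
                          goto regular_read;   /* fall back to read() */
                  }

                  barrier();               /* compiler barrier suffices: */
                  if (pc->lock != seq)     /* reader shares the cpu      */
                          goto again;
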
    • perf_counter: add more context information · 5872bdb8
      Peter Zijlstra authored
      Put in counts that tell which ips belong to which context.
      
        -----
         | |  hv
         | --
      nr | |  kernel
         | --
         | |  user
        -----
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Orig-LKML-Reference: <20090402091319.493101305@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
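
      A hedged sketch of a callchain entry carrying those counts; field
      widths and the array bound are assumptions:

          struct perf_callchain_entry {
                  u64 nr;       /* total ips recorded below */
                  u64 hv;       /* of which: hypervisor ips */
                  u64 kernel;   /* of which: kernel ips     */
                  u64 user;     /* of which: user ips       */
                  u64 ip[255];  /* the chain itself         */
          };
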
    • perf_counter: per event wakeups · c457810a
      Peter Zijlstra authored
      By request, provide a way to request a wakeup every 'n' events instead
      of every page of output.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Orig-LKML-Reference: <20090402091319.323309784@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
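
      A minimal usage sketch, assuming the knob is a wakeup_events field
      in the hw_event attributes, as the description suggests:

          struct perf_counter_hw_event hw_event = { 0 };

          hw_event.wakeup_events = 64;  /* wake the poll()er every 64
                                         * events, not once per page */
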
    • perf_counter: move the event overflow output bits to record_type · 8a057d84
      Peter Zijlstra authored
      Per suggestion from Paul, move the event overflow bits to record_type
      and sanitize the enums a bit.
      
      Breaks the ABI -- again ;-)
      Suggested-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Orig-LKML-Reference: <20090402091319.151921176@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
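
      A hedged sketch of what the sanitized record_type bits might look
      like after the move; the names follow the PERF_RECORD_* convention
      seen elsewhere in this log:

          enum perf_counter_record_format {
                  PERF_RECORD_IP        = 1U << 0, /* instruction pointer */
                  PERF_RECORD_TID       = 1U << 1, /* pid/tid             */
                  PERF_RECORD_GROUP     = 1U << 2, /* sibling counters    */
                  PERF_RECORD_CALLCHAIN = 1U << 3, /* stack backtrace     */
          };
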
    • perf_counter: provide generic callchain bits · 394ee076
      Peter Zijlstra authored
      Provide the generic callchain support bits. If hw_event->callchain is
      set, the arch-specific perf_callchain() function is called upon to
      provide a perf_callchain_entry structure filled with the current
      callchain.
      
      If it does so, it is added to the overflow output event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.254266860@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
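
      A sketch of the arch hook contract as described; the weak-default
      pattern is an assumption on my part:

          /* arches that can, override this; returning NULL simply means
           * the overflow record is emitted without a callchain */
          __weak struct perf_callchain_entry *perf_callchain(struct pt_regs *regs)
          {
                  return NULL;
          }
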
    • perf_counter: re-arrange the perf_event_type · 5ed00415
      Peter Zijlstra authored
      Breaks ABI yet again :-)
      
      Change the event type so that [0, 2^31-1] are regular event types, but
      [2^31, 2^32-1] forms a bitmask for overflow events.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.047961770@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
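
      A hedged decoding sketch of that split: the top bit distinguishes
      plain event types from overflow records, whose remaining bits form
      the PERF_RECORD_* bitmask (the macro name is illustrative):

          #define PERF_EVENT_IS_OVERFLOW (1U << 31) /* illustrative name */

          void decode_type(u32 type)
          {
                  if (type & PERF_EVENT_IS_OVERFLOW) {
                          u32 fields = type & ~PERF_EVENT_IS_OVERFLOW;
                          /* fields: PERF_RECORD_* bitmask describing
                           * what the record body contains */
                  } else {
                          /* plain enum value: mmap, munmap, ... */
                  }
          }
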
    • perf_counter: executable mmap() information · 0a4a9391
      Peter Zijlstra authored
      Currently the profiling information returns userspace IPs but provides
      no way to correlate them to userspace code. Userspace could look into
      /proc/$pid/maps, but that might not be current, or even present anymore,
      at the time the IPs are analyzed.
      
      Therefore track the mmap information and provide it in the output
      stream.
      
      XXX: only covers mmap()/munmap(); mremap() and mprotect() are missing.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Orig-LKML-Reference: <20090330171023.417259499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
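
      A hedged sketch of the mmap record such a stream could carry; the
      shape follows the description above, while the exact field names
      are assumptions:

          struct perf_mmap_event {
                  struct perf_event_header header; /* {type,size} framing */
                  u32  pid, tid;
                  u64  addr;        /* start of the mapping    */
                  u64  len;         /* length of the mapping   */
                  u64  pgoff;       /* file offset             */
                  char filename[];  /* NUL-terminated pathname */
          };
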
    • perf_counter: fix update_userpage() · 38ff667b
      Peter Zijlstra authored
      It just occurred to me that it is possible to have multiple contending
      updates of the userpage (mmap information vs overflow vs counter).
      This would break the seqlock logic.
      
      It appears the arch code uses this from NMI context, so we cannot
      possibly serialize its use; therefore separate the data_head update
      from it and let it return to its original use.
      
      The arch code needs to make sure there are no contending callers by
      disabling the counter before using it -- powerpc appears to do this
      nicely.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.241410660@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
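
      For contrast with the read loop shown under the "update mmap()
      counter read" entry above, a sketch of the seqlock-style update
      side (field names are illustrative):

          static void update_userpage(struct perf_counter *counter)
          {
                  struct perf_counter_mmap_page *pc = counter->user_page;

                  pc->lock++;               /* odd: readers retry */
                  barrier();

                  pc->index  = counter->hw_index;
                  pc->offset = counter->base_count;

                  barrier();
                  pc->lock++;               /* even: readers proceed */
          }
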
    • perf_counter: unify and fix delayed counter wakeup · 925d519a
      Peter Zijlstra authored
      While going over the wakeup code I noticed that delayed wakeups only
      work for hardware counters, yet basically all software counters rely
      on them.
      
      This patch unifies and generalizes the delayed wakeup to fix this
      issue.
      
      Since we're dealing with NMI context bits here, use a cmpxchg() based
      singly-linked list implementation to track counters that have pending
      wakeups.
      
      [ This should really be generic code for delayed wakeups, but since we
        cannot use cmpxchg()/xchg() in generic code, I've let it live in the
        perf_counter code. -- Eric Dumazet could use it to aggregate the
        network wakeups. ]
      
      Furthermore, the x86 method of using TIF flags was flawed in that it's
      quite possible to end up setting the bit on the idle task, losing the
      wakeup.
      
      The powerpc method uses per-cpu storage and does appear to be
      sufficient.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
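
      A minimal sketch of the cmpxchg()-based singly-linked list push
      that makes this safe from NMI context; this shows the generic
      lock-free pattern, not the exact kernel code:

          struct pending_entry {
                  struct pending_entry *next;
          };

          static struct pending_entry *pending_head; /* illustrative */

          /* NMI-safe: no locks; retry the cmpxchg() on contention */
          static void pending_push(struct pending_entry *e)
          {
                  struct pending_entry *old;

                  do {
                          old = pending_head;
                          e->next = old;
                  } while (cmpxchg(&pending_head, old, e) != old);
          }
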
    • perf_counter: record time running and time enabled for each counter · 53cfbf59
      Paul Mackerras authored
      Impact: new functionality
      
      Currently, if there are more counters enabled than can fit on the CPU,
      the kernel will multiplex the counters on to the hardware using
      round-robin scheduling.  That isn't too bad for sampling counters, but
      for counting counters it means that the value read from a counter
      represents some unknown fraction of the true count of events that
      occurred while the counter was enabled.
      
      This remedies the situation by keeping track of how long each counter
      is enabled for, and how long it is actually on the cpu and counting
      events.  These times are recorded in nanoseconds using the task clock
      for per-task counters and the cpu clock for per-cpu counters.
      
      These values can be supplied to userspace on a read from the counter.
      Userspace requests that they be supplied after the counter value by
      setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or
      PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field
      when creating the counter.  (There is no way to change the read format
      after the counter is created, though it would be possible to add some
      way to do that.)
      
      Using this information it is possible for userspace to scale the count
      it reads from the counter to get an estimate of the true count:
      
      true_count_estimate = count * total_time_enabled / total_time_running
      
      This also lets userspace detect the situation where the counter never
      got to go on the cpu: total_time_running == 0.
      
      This functionality has been requested by the PAPI developers, and will
      be generally needed for interpreting the count values from counting
      counters correctly.
      
      In the implementation, this keeps 5 time values (in nanoseconds) for
      each counter: total_time_enabled and total_time_running are used when
      the counter is in state OFF or ERROR and for reporting back to
      userspace.  When the counter is in state INACTIVE or ACTIVE, it is the
      tstamp_enabled, tstamp_running and tstamp_stopped values that are
      relevant, and total_time_enabled and total_time_running are determined
      from them.  (tstamp_stopped is only used in INACTIVE state.)  The
      reason for doing it like this is that it means that only counters
      being enabled or disabled at sched-in and sched-out time need to be
      updated.  There are no new loops that iterate over all counters to
      update total_time_enabled or total_time_running.
      
      This also keeps separate child_total_time_running and
      child_total_time_enabled fields that get added in when reporting the
      totals to userspace.  They are separate fields so that they can be
      atomic.  We don't want to use atomics for total_time_running,
      total_time_enabled etc., because then we would have to use atomic
      sequences to update them, which are slower than regular arithmetic and
      memory accesses.
      
      It is possible to measure total_time_running by adding a task_clock
      counter to each group of counters, and total_time_enabled can be
      measured approximately with a top-level task_clock counter (though
      inaccuracies will creep in if you need to disable and enable groups
      since it is not possible in general to disable/enable the top-level
      task_clock counter simultaneously with another group).  However, that
      adds extra overhead - I measured around 15% increase in the context
      switch latency reported by lat_ctx (from lmbench) when a task_clock
      counter was added to each of 2 groups, and around 25% increase when a
      task_clock counter was added to each of 4 groups.  (In both cases a
      top-level task-clock counter was also added.)
      
      In contrast, the code added in this commit gives better information
      with no overhead that I could measure (in fact in some cases I
      measured lower times with this code, but the differences were all less
      than one standard deviation).
      
      [ v2: address review comments by Andrew Morton. ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
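
      A hedged userspace sketch of the read-and-scale step, assuming both
      PERF_FORMAT_TOTAL_TIME_* bits were set at creation so that read()
      returns three u64 values {count, time_enabled, time_running}:

          #include <stdint.h>
          #include <unistd.h>

          uint64_t scaled_count(int fd)
          {
                  uint64_t v[3];

                  if (read(fd, v, sizeof(v)) != (ssize_t)sizeof(v))
                          return 0;
                  if (v[2] == 0)          /* never got on the cpu */
                          return 0;
                  /* true_count_estimate = count * enabled / running */
                  return v[0] * v[1] / v[2];
          }
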
    • perf_counter: optionally provide the pid/tid of the sampled task · ea5d20cf
      Peter Zijlstra authored
      Allow cpu-wide counters to profile userspace by providing which
      process a sample belongs to.
      
      This raises the first issue with the output type: lots of these
      options (group, tid, callchain, etc.) are non-exclusive and could be
      combined, suggesting a bitfield.
      
      However, things like the mmap() data stream don't fit in that.
      
      How to split the type field...
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.013775235@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: output objects · 5c148194
      Peter Zijlstra authored
      Provide a {type,size} header for each output entry.
      
      This should provide extensible output, and the ability to mix multiple streams.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.831607932@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
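
      A sketch of such a header: {type,size} is classic type-length-value
      framing, letting a reader skip record types it does not understand
      (field widths are assumptions):

          struct perf_event_header {
                  u32 type;  /* what kind of record follows        */
                  u32 size;  /* total size, header included, so an
                              * unknown type can simply be skipped */
          };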