06 April 2009, 40 commits
    • perf_counter: move the event overflow output bits to record_type · 8a057d84
      Committed by Peter Zijlstra
      Per suggestion from Paul, move the event overflow bits to record_type
      and sanitize the enums a bit.
      
      Breaks the ABI -- again ;-)
      Suggested-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Orig-LKML-Reference: <20090402091319.151921176@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8a057d84
    • perf_counter tools: kerneltop: add real-time data acquisition thread · 9dd49988
      Committed by Mike Galbraith
      Decouple kerneltop display from event acquisition by introducing
      a separate data acquisition thread. This fixes annoying kerneltop
      display refresh jitter and missed events.
      
      Also add a -r <prio> option, to switch the data acquisition thread
      to real-time priority.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9dd49988
    • perf_counter: pmc arbitration · 4e935e47
      Committed by Peter Zijlstra
      Follow the example set by powerpc and try to play nice with oprofile
      and the nmi watchdog.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.459968444@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4e935e47
    • perf_counter: x86: callchain support · d7d59fb3
      Committed by Peter Zijlstra
      Provide the x86 perf_callchain() implementation.
      
      Code based on the ftrace/sysprof code from Soeren Sandmann Pedersen.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Orig-LKML-Reference: <20090330171024.341993293@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d7d59fb3
    • perf_counter: provide generic callchain bits · 394ee076
      Committed by Peter Zijlstra
      Provide the generic callchain support bits. If hw_event->callchain is
      set, the arch-specific perf_callchain() function is called upon to
      provide a perf_callchain_entry structure filled with the current
      callchain.
      
      If it does so, it is added to the overflow output event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.254266860@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      394ee076
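
      A sketch of the entry structure the arch code is asked to fill in; the
      commit's actual struct also carries separate per-context counts, and
      MAX_STACK_DEPTH here is illustrative:

        #include <linux/types.h>

        #define MAX_STACK_DEPTH 255

        struct perf_callchain_entry {
                __u64 nr;                  /* number of entries filled in ip[] */
                __u64 ip[MAX_STACK_DEPTH]; /* the recorded instruction pointers */
        };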
    • perf_counter tools: kerneltop: update event_types · 023c54c4
      Committed by Peter Zijlstra
      Go along with the new perf_event_type ABI.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.133985461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      023c54c4
    • perf_counter: re-arrange the perf_event_type · 5ed00415
      Committed by Peter Zijlstra
      Breaks ABI yet again :-)
      
      Change the event type so that [0, 2^31-1] are regular event types, but
      [2^31, 2^32-1] forms a bitmask for overflow events.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171024.047961770@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5ed00415
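
      A minimal sketch of the split described above; the macro names are
      illustrative, not the kernel's:

        #include <linux/types.h>

        #define PERF_EVENT_OVERFLOW   (1U << 31)             /* high bit: overflow record */
        #define PERF_EVENT_TYPE_MASK  (~PERF_EVENT_OVERFLOW)

        /* [0, 2^31-1] are regular event types; [2^31, 2^32-1] carry an
         * overflow-content bitmask in the low bits. */
        static inline int is_overflow_event(__u32 type)
        {
                return (type & PERF_EVENT_OVERFLOW) != 0;
        }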
    • perf_counter: small cleanup of the output routines · 78d613eb
      Committed by Peter Zijlstra
      Move the nmi argument to the _begin() function, so that _end() only needs the
      handle. This allows the _begin() function to generate a wakeup on event loss.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.959404268@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78d613eb
    • perf_counter tools: optionally scale counter values in perfstat mode · 31f004df
      Committed by Paul Mackerras
      Impact: new functionality
      
      This adds an option to the perfstat mode of kerneltop to scale the
      reported counter values according to the fraction of time that each
      counter gets to count.  This is invoked with the -l option (I used 'l'
      because s, c, a and e were all taken already.)  This uses the new
      PERF_RECORD_TOTAL_TIME_{ENABLED,RUNNING} read format options.
      
      With this, we get output like this:
      
      $ ./perfstat -l -e 0:0,0:1,0:2,0:3,0:4,0:5 ./spin
      
       Performance counter stats for './spin':
      
           4016072055  CPU cycles           (events)  (scaled from 66.53%)
           2005887318  instructions         (events)  (scaled from 66.53%)
              1762849  cache references     (events)  (scaled from 66.69%)
               165229  cache misses         (events)  (scaled from 66.85%)
           1001298009  branches             (events)  (scaled from 66.78%)
                41566  branch misses        (events)  (scaled from 66.61%)
      
       Wall-clock time elapsed:  2438.227446 msecs
      
      This also lets us detect when a counter is zero because the counter
      never got to go on the CPU at all.  In that case we print <not counted>
      rather than 0.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090330171023.871484899@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      31f004df
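
      A userspace sketch of the scaling shown above, assuming the counter was
      opened with both total-time bits set in read_format (struct layout
      inferred from the description; error handling trimmed):

        #include <stdint.h>
        #include <unistd.h>

        struct read_values {
                uint64_t count;        /* raw event count               */
                uint64_t time_enabled; /* ns the counter was enabled    */
                uint64_t time_running; /* ns it was actually on the CPU */
        };

        static uint64_t read_scaled(int fd)
        {
                struct read_values v;

                if (read(fd, &v, sizeof(v)) != sizeof(v) || v.time_running == 0)
                        return 0;      /* never ran: report "<not counted>" */

                /* count * time_enabled / time_running, as printed above */
                return (uint64_t)((double)v.count * v.time_enabled / v.time_running);
        }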
    • perf_counter: x86: proper error propagation for the x86 hw_perf_counter_init() · 9ea98e19
      Committed by Peter Zijlstra
      Now that Paul cleaned up the error propagation paths, pass down the
      x86 error as well.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.792822360@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ea98e19
    • perf_counter: make it possible for hw_perf_counter_init to return error codes · d5d2bc0d
      Committed by Paul Mackerras
      Impact: better error reporting
      
      At present, if hw_perf_counter_init encounters an error, all it can do
      is return NULL, which causes sys_perf_counter_open to return an EINVAL
      error to userspace.  This isn't very informative for userspace; it means
      that userspace can't tell the difference between "sorry, oprofile is
      already using the PMU" and "we don't support this CPU" and "this CPU
      doesn't support the requested generic hardware event".
      
      This commit uses the PTR_ERR/ERR_PTR/IS_ERR set of macros to let
      hw_perf_counter_init return an error code on error rather than just NULL
      if it wishes.  If it does so, that error code will be returned from
      sys_perf_counter_open to userspace.  If it returns NULL, an EINVAL
      error will be returned to userspace, as before.
      
      This also adapts the powerpc hw_perf_counter_init to make use of this
      to return ENXIO, EINVAL, EBUSY, or EOPNOTSUPP as appropriate.  It would
      be good to add extra error numbers in future to allow userspace to
      distinguish the various errors that are currently reported as EINVAL,
      i.e. irq_period < 0, too many events in a group, conflict between
      exclude_* settings in a group, and PMU resource conflict in a group.
      
      [ v2: fix a bug pointed out by Corey Ashford where error returns from
            hw_perf_counter_init were not handled correctly in the case of
            raw hardware events.]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090330171023.682428180@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d5d2bc0d
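
      The PTR_ERR/ERR_PTR/IS_ERR convention in miniature; the two checks are
      hypothetical stand-ins for the conditions named above:

        #include <linux/err.h>
        #include <linux/errno.h>
        #include <linux/types.h>

        static void *hw_counter_init_sketch(bool pmu_busy, bool cpu_supported)
        {
                if (pmu_busy)
                        return ERR_PTR(-EBUSY); /* oprofile already owns the PMU */
                if (!cpu_supported)
                        return ERR_PTR(-ENXIO); /* we don't support this CPU */
                return NULL;                    /* NULL still maps to -EINVAL */
        }

        /* Caller side, as sys_perf_counter_open would use it:
         *
         *      ptr = hw_counter_init_sketch(...);
         *      if (IS_ERR(ptr))
         *              return PTR_ERR(ptr);    // error code reaches userspace as-is
         */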
    • perf_counter: powerpc: only reserve PMU hardware when we need it · 7595d63b
      Committed by Paul Mackerras
      Impact: cooperate with oprofile
      
      At present, on PowerPC, if you have perf_counters compiled in, oprofile
      doesn't work.  There is code to allow the PMU to be shared between
      competing subsystems, such as perf_counters and oprofile, but currently
      the perf_counter subsystem reserves the PMU for itself at boot time,
      and never releases it.
      
      This makes perf_counter play nicely with oprofile.  Now we keep a count
      of how many perf_counter instances are counting hardware events, and
      reserve the PMU when that count becomes non-zero, and release the PMU
      when that count becomes zero.  This means that it is possible to have
      perf_counters compiled in and still use oprofile, as long as there are
      no hardware perf_counters active.  This also means that if oprofile is
      active, sys_perf_counter_open will fail if the hw_event specifies a
      hardware event.
      
      To avoid races with other tasks creating and destroying perf_counters,
      we use a mutex.  We use atomic_inc_not_zero and atomic_add_unless to
      avoid having to take the mutex unless there is a possibility of the
      count going between 0 and 1.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090330171023.627912475@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7595d63b
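
      A sketch of the counting scheme described above; reserve_pmc_hardware()
      and release_pmc_hardware() stand for the arch hooks that arbitrate with
      oprofile (stubs here, not definitions):

        #include <linux/atomic.h>
        #include <linux/errno.h>
        #include <linux/mutex.h>
        #include <linux/types.h>

        static atomic_t num_counters = ATOMIC_INIT(0);
        static DEFINE_MUTEX(pmc_reserve_mutex);

        static bool reserve_pmc_hardware(void); /* arch hook; fails if oprofile owns the PMU */
        static void release_pmc_hardware(void); /* arch hook */

        static int hw_counter_get(void)
        {
                int err = 0;

                /* fast path: count already non-zero, PMU already reserved */
                if (atomic_inc_not_zero(&num_counters))
                        return 0;

                mutex_lock(&pmc_reserve_mutex);
                if (atomic_read(&num_counters) == 0 && !reserve_pmc_hardware())
                        err = -EBUSY;
                else
                        atomic_inc(&num_counters);
                mutex_unlock(&pmc_reserve_mutex);
                return err;
        }

        static void hw_counter_put(void)
        {
                /* fast path: only take the mutex when the count may hit zero */
                if (atomic_add_unless(&num_counters, -1, 1))
                        return;

                mutex_lock(&pmc_reserve_mutex);
                if (atomic_dec_return(&num_counters) == 0)
                        release_pmc_hardware();
                mutex_unlock(&pmc_reserve_mutex);
        }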
    • perf_counter: kerneltop: parse the mmap data stream · 3c1ba6fa
      Committed by Peter Zijlstra
      Frob the kerneltop code to print the mmap data in the stream.
      
      A better use would be collecting the IPs per PID and mapping them onto
      the provided userspace code; that remains a TODO.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.501902515@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c1ba6fa
    • perf_counter: executable mmap() information · 0a4a9391
      Committed by Peter Zijlstra
      Currently the profiling information returns userspace IPs but provides
      no way to correlate them with userspace code. Userspace could look at
      /proc/$pid/maps, but that might no longer be current, or even present,
      by the time the IPs are analyzed.
      
      Therefore, provide the means to track the mmap information and include
      it in the output stream.
      
      XXX: only covers mmap()/munmap(), mremap() and mprotect() are missing.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Orig-LKML-Reference: <20090330171023.417259499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0a4a9391
    • perf_counter: kerneltop: simplify data_head read · 19556439
      Committed by Peter Zijlstra
      Now that the kernel side changed, match up again.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.327144324@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      19556439
    • perf_counter: fix update_userpage() · 38ff667b
      Committed by Peter Zijlstra
      It just occurred to me that it is possible to have multiple contending
      updates of the userpage (mmap information vs overflow vs counter).
      This would break the seqlock logic.
      
      It appears the arch code uses this from NMI context, so we cannot
      possibly serialize its use; therefore separate the data_head update
      from it and let it return to its original use.
      
      The arch code needs to make sure there are no contending callers by
      disabling the counter before using it -- powerpc appears to do this
      nicely.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.241410660@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38ff667b
    • perf_counter: unify and fix delayed counter wakeup · 925d519a
      Committed by Peter Zijlstra
      While going over the wakeup code I noticed delayed wakeups only work
      for hardware counters but basically all software counters rely on
      them.
      
      This patch unifies and generalizes the delayed wakeup to fix this
      issue.
      
      Since we're dealing with NMI context bits here, use a cmpxchg() based
      single link list implementation to track counters that have pending
      wakeups.
      
      [ This should really be generic code for delayed wakeups, but since we
        cannot use cmpxchg()/xchg() in generic code, I've let it live in the
        perf_counter code. -- Eric Dumazet could use it to aggregate the
        network wakeups. ]
      
      Furthermore, the x86 method of using TIF flags was flawed in that it's
      quite possible to end up setting the bit on the idle task, losing the
      wakeup.
      
      The powerpc method uses per-cpu storage and does appear to be
      sufficient.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      925d519a
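
      A sketch of the cmpxchg()-based single-linked list mentioned above: an
      NMI-safe, lock-free push of a counter onto the pending-wakeup list
      (names are illustrative):

        #include <linux/atomic.h>
        #include <linux/compiler.h>

        struct pending_node {
                struct pending_node *next;
        };

        static struct pending_node *pending_head;

        static void pending_push(struct pending_node *node)
        {
                struct pending_node *old;

                /* lock-free: safe from NMI context since no lock is ever held */
                do {
                        old = READ_ONCE(pending_head);
                        node->next = old;
                } while (cmpxchg(&pending_head, old, node) != old);
        }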
    • perf_counter: record time running and time enabled for each counter · 53cfbf59
      Committed by Paul Mackerras
      Impact: new functionality
      
      Currently, if there are more counters enabled than can fit on the CPU,
      the kernel will multiplex the counters on to the hardware using
      round-robin scheduling.  That isn't too bad for sampling counters, but
      for counting counters it means that the value read from a counter
      represents some unknown fraction of the true count of events that
      occurred while the counter was enabled.
      
      This remedies the situation by keeping track of how long each counter
      is enabled for, and how long it is actually on the cpu and counting
      events.  These times are recorded in nanoseconds using the task clock
      for per-task counters and the cpu clock for per-cpu counters.
      
      These values can be supplied to userspace on a read from the counter.
      Userspace requests that they be supplied after the counter value by
      setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or
      PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field
      when creating the counter.  (There is no way to change the read format
      after the counter is created, though it would be possible to add some
      way to do that.)
      
      Using this information it is possible for userspace to scale the count
      it reads from the counter to get an estimate of the true count:
      
      true_count_estimate = count * total_time_enabled / total_time_running
      
      This also lets userspace detect the situation where the counter never
      got to go on the cpu: total_time_running == 0.
      
      This functionality has been requested by the PAPI developers, and will
      be generally needed for interpreting the count values from counting
      counters correctly.
      
      In the implementation, this keeps 5 time values (in nanoseconds) for
      each counter: total_time_enabled and total_time_running are used when
      the counter is in state OFF or ERROR and for reporting back to
      userspace.  When the counter is in state INACTIVE or ACTIVE, it is the
      tstamp_enabled, tstamp_running and tstamp_stopped values that are
      relevant, and total_time_enabled and total_time_running are determined
      from them.  (tstamp_stopped is only used in INACTIVE state.)  The
      reason for doing it like this is that it means that only counters
      being enabled or disabled at sched-in and sched-out time need to be
      updated.  There are no new loops that iterate over all counters to
      update total_time_enabled or total_time_running.
      
      This also keeps separate child_total_time_running and
      child_total_time_enabled fields that get added in when reporting the
      totals to userspace.  They are separate fields so that they can be
      atomic.  We don't want to use atomics for total_time_running,
      total_time_enabled etc., because then we would have to use atomic
      sequences to update them, which are slower than regular arithmetic and
      memory accesses.
      
      It is possible to measure total_time_running by adding a task_clock
      counter to each group of counters, and total_time_enabled can be
      measured approximately with a top-level task_clock counter (though
      inaccuracies will creep in if you need to disable and enable groups
      since it is not possible in general to disable/enable the top-level
      task_clock counter simultaneously with another group).  However, that
      adds extra overhead - I measured around 15% increase in the context
      switch latency reported by lat_ctx (from lmbench) when a task_clock
      counter was added to each of 2 groups, and around 25% increase when a
      task_clock counter was added to each of 4 groups.  (In both cases a
      top-level task-clock counter was also added.)
      
      In contrast, the code added in this commit gives better information
      with no overhead that I could measure (in fact in some cases I
      measured lower times with this code, but the differences were all less
      than one standard deviation).
      
      [ v2: address review comments by Andrew Morton. ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      53cfbf59
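
      A sketch of the timestamp bookkeeping described above: totals are only
      folded in when a counter changes state, using the saved timestamps
      (field names follow the commit; the state handling is simplified):

        #include <linux/types.h>

        struct counter_times {
                int   active;              /* 1 while counting on the CPU */
                __u64 total_time_enabled, total_time_running;
                __u64 tstamp_enabled, tstamp_running, tstamp_stopped;
        };

        static void update_counter_times_sketch(struct counter_times *c, __u64 now)
        {
                /* a counter stopped earlier accumulates running time only
                 * up to tstamp_stopped */
                __u64 run_end = c->active ? now : c->tstamp_stopped;

                c->total_time_enabled = now - c->tstamp_enabled;
                c->total_time_running = run_end - c->tstamp_running;
        }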
    • perf_counter: allow and require one-page mmap on counting counters · 7730d865
      Committed by Peter Zijlstra
      A brainfart stopped single-page mmap()s from working. The rest of the
      code should be perfectly fine with not having any data pages.
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <1237981712.7972.812.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7730d865
    • perf_counter: kerneltop: output event support · 00f0ad73
      Committed by Peter Zijlstra
      Teach kerneltop about the new output ABI.
      
      XXX: anybody fancy integrating the PID/TID data into the output?
      
      Bump the mmap_data pages a little because we bloated the output and
      have to be more careful about overruns with structured data.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.192910290@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      00f0ad73
    • perf_counter: kerneltop: mmap_pages argument · 4c4ba21d
      Committed by Peter Zijlstra
      Provide a knob to set the number of mmap data pages.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.104545398@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4c4ba21d
    • perf_counter: optionally provide the pid/tid of the sampled task · ea5d20cf
      Committed by Peter Zijlstra
      Allow cpu-wide counters to profile userspace by providing which process
      the sample belongs to.
      
      This raises the first issue with the output type: many of these
      options (group, tid, callchain, etc.) are non-exclusive and could be
      combined, suggesting a bitfield.
      
      However, things like the mmap() data stream don't fit in that.
      
      How to split the type field...
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.013775235@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea5d20cf
    • perf_counter: sanity check on the output API · 63e35b25
      Committed by Peter Zijlstra
      Ensure we never write more than we said we would.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.921433024@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      63e35b25
    • perf_counter: output objects · 5c148194
      Committed by Peter Zijlstra
      Provide a {type,size} header for each output entry.
      
      This should provide extensible output, and the ability to mix multiple streams.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.831607932@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5c148194
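
      The framing described above, roughly as it might appear, plus a reader
      loop; treat the exact layout as illustrative:

        #include <linux/types.h>

        struct perf_event_header {
                __u32 type;     /* what kind of record follows        */
                __u32 size;     /* total record size, header included */
        };

        /* reader side: 'size' lets a consumer skip record types it
         * doesn't understand, which is what makes the format extensible */
        static void walk_stream(const unsigned char *buf, size_t len)
        {
                size_t off = 0;

                while (off + sizeof(struct perf_event_header) <= len) {
                        const struct perf_event_header *hdr =
                                (const void *)(buf + off);

                        if (hdr->size < sizeof(*hdr))
                                break;          /* guard against corrupt records */
                        /* ... dispatch on hdr->type ... */
                        off += hdr->size;
                }
        }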
    • perf_counter: more elaborate write API · b9cacc7b
      Committed by Peter Zijlstra
      Provide a begin, copy, end interface to the output buffer.
      
      begin() reserves the space,
       copy() copies the data over, considering page boundaries,
        end() finalizes the event and does the wakeup.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.740550870@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b9cacc7b
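
      The shape of the interface, paraphrased from the description (the
      signatures are approximate, not the exact kernel prototypes; a later
      cleanup in this series moves the nmi argument into _begin()):

        struct perf_output_handle;      /* opaque cursor into the output buffer */
        struct perf_counter;

        int  perf_output_begin(struct perf_output_handle *h,
                               struct perf_counter *counter, unsigned int size);
        void perf_output_copy(struct perf_output_handle *h,
                              const void *buf, unsigned int len);
        void perf_output_end(struct perf_output_handle *h, int nmi);

        /* typical use (a zero return from _begin() is assumed to mean the
         * space was reserved):
         *
         *      if (!perf_output_begin(&handle, counter, sizeof(rec))) {
         *              perf_output_copy(&handle, &rec, sizeof(rec));
         *              perf_output_end(&handle, nmi);
         *      }
         */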
    • perf_counter: fix perf_poll() · c7138f37
      Committed by Peter Zijlstra
      Impact: fix kerneltop 100% CPU usage
      
      Only return a poll event when there has actually been one; poll_wait()
      doesn't actually wait on the waitq you pass it, it only enqueues you
      on it.
      
      Only once all FDs have been iterated and none of them returned a
      poll event will it schedule().
      
      Also make it return POLLHUP when there's no mmap() area to read from.
      
      Further, fix a silly bug in the write code.
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Orig-LKML-Reference: <1237897096.24918.181.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c7138f37
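
      A sketch of the ->poll() pattern the fix moves to: poll_wait() only
      enqueues the caller, so the method itself must compute and return the
      ready mask (helper and field names here are hypothetical):

        #include <linux/fs.h>
        #include <linux/poll.h>

        static unsigned int perf_poll_sketch(struct file *file, poll_table *wait)
        {
                struct perf_counter *counter = file->private_data;
                unsigned int events = 0;

                /* enqueue only; this never blocks and reports nothing */
                poll_wait(file, &counter->waitq, wait);

                if (output_data_ready(counter))      /* hypothetical helper */
                        events |= POLLIN;
                if (!counter_has_mmap_area(counter)) /* hypothetical helper */
                        events |= POLLHUP;

                return events;  /* 0 lets the poll() core schedule() */
        }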
    • perf_counter: update documentation · f66c6b20
      Committed by Paul Mackerras
      Impact: documentation fix
      
      This updates the perfcounter documentation to reflect recent changes.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      f66c6b20
    • perf_counter tools: remove glib dependency and fix bugs in kerneltop.c, fix poll() · 0fd112e4
      Committed by Peter Zijlstra
      Paul Mackerras wrote:
      
      > I noticed the poll stuff is bogus - we have a 2D array of struct
      > pollfds (MAX_NR_CPUS x MAX_COUNTERS), we fill in a sub-array (with the
      > rest being uninitialized, since the array is on the stack) and then
      > pass the first nr_cpus elements to poll.  Not what we really meant, I
      > suspect. :)  Not even if we only have one counter, since it's the
      > counter dimension that varies fastest.
      
      This should fix the most obvious poll fubar, but it is not enough to
      fix the full problem.
      Reported-by: Paul Mackerras <paulus@samba.org>
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0fd112e4
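
      The shape of the fix: collect the live descriptors into a flat, fully
      initialized array before calling poll(), instead of handing poll()
      rows of a partly uninitialized 2D array (a sketch; the constants and
      fd array are as in kerneltop):

        #include <poll.h>

        struct pollfd event_array[MAX_NR_CPUS * MAX_COUNTERS];
        int nr_poll = 0, i, counter;

        for (i = 0; i < nr_cpus; i++) {
                for (counter = 0; counter < nr_counters; counter++) {
                        event_array[nr_poll].fd     = fd[i][counter];
                        event_array[nr_poll].events = POLLIN;
                        nr_poll++;
                }
        }
        poll(event_array, nr_poll, -1); /* only initialized entries are passed */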
    • perf_counter tools: remove glib dependency and fix bugs in kerneltop.c · cbe46555
      Committed by Paul Mackerras
      The glib dependency in kerneltop.c is only for a little bit of list
      manipulation, and I find it inconvenient.  This adds a 'next' field to
      struct source_line, which lets us link them together into a list.  The
      code to do the linking ourselves turns out to be no longer or more
      difficult than using glib.
      
      This also fixes a few other problems:
      
      - We need to #include <limits.h> to get PATH_MAX on powerpc.
      
      - We need to #include <linux/types.h> rather than have our own
        definitions of __u64 and __s64; on powerpc the installed headers
        define them to be unsigned long and long respectively, and if we
        have our own, different definition here that causes a compile error.
      
      - This takes out the x86 setting of errno from -ret in
        sys_perf_counter_open.  My experiments on x86 indicate that the
        glibc syscall() does this for us already.
      
      - We had two CPU migration counters in the default set, which seems
        unnecessary; I changed one of them to a context switch counter.
      
      - In perfstat mode we were printing CPU cycles and instructions as
        milliseconds, and the cpu clock and task clock counters as events.
        This fixes that.
      
      - In perfstat mode we were still printing a blank line after the first
        counter, which was a holdover from when a task clock counter was
        automatically included as the first counter.  This removes the blank
        line.
      
      - On a test machine here, parse_symbols() and parse_vmlinux() were
        taking long enough (almost 0.5 seconds) for the mmap buffer to
        overflow before we got to the first mmap_read() call, so this moves
        them before we open all the counters.
      
      - The error message if sys_perf_counter_open fails needs to use errno,
        not -fd[i][counter].
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cbe46555
    • perf_counter tools: increase cpu-cycles again · 81cdbe05
      Committed by Ingo Molnar
      Commit b7368fdd7d decreased the CPU cycles interval 100-fold, but
      this is causing kerneltop failures on my Nehalem box:
      
       aldebaran:/home/mingo/linux/linux/Documentation/perf_counter>
       ./kerneltop
       KernelTop refresh period: 2 seconds
       ERROR: failed to keep up with mmap data
      
      10,000 cycles is way too short.
      
      What we should do instead on mostly-idle systems is some sort of
      read/poll timeout, so that we display something every 2 seconds
      for sure.
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      81cdbe05
    • perf_counter tools: fix build warning in kerneltop.c · 193e8df1
      Committed by Ingo Molnar
      Fix:
      
       kerneltop.c: In function ‘record_ip’:
       kerneltop.c:1005: warning: format ‘%016llx’ expects type ‘long long unsigned int’, but argument 2 has type ‘uint64_t’
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      193e8df1
    • perf_counter tools: tidy up in-kernel dependencies · 383c5f8c
      Committed by Ingo Molnar
      Remove the now-unified perfstat.c and perf_counter.h, and link to the
      in-kernel perf_counter.h.
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      383c5f8c
    • perf_counter tools: use mmap() output · bcbcb37c
      Committed by Peter Zijlstra
      Update kerneltop to use the mmap() output to gather overflow information.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bcbcb37c
    • perf_counter tools: update to new syscall ABI · 803d4f39
      Committed by Peter Zijlstra
      Update the kerneltop userspace to work with the latest syscall ABI.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.559643732@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      803d4f39
    • perf_counter: new output ABI - part 1 · 7b732a75
      Committed by Peter Zijlstra
      Impact: Rework the perfcounter output ABI
      
      Use sys_read() only for instant data and provide mmap() output for all
      async overflow data.
      
      The first mmap() determines the size of the output buffer. The mmap()
      size must be (1 + pages) * PAGE_SIZE, where pages must be a power of 2
      or 0. Further mmap()s of the same fd must have the same size. Once all
      maps are gone, you can again mmap() with a new size.
      
      In the case of 0 extra pages there is no data output and the first
      page only contains metadata.
      
      When there are data pages, a poll() event will be generated for each
      full page of data. Furthermore, the output is circular. This means
      that although 1 page is a valid configuration, it's useless, since
      we'll start overwriting it the instant we report a full page.
      
      Future work will focus on the output format (currently maintained),
      where we'll likely want each entry denoted by a header which includes
      a type and length.
      
      Further future work will allow splice()ing the fd, with the spliced
      stream also carrying the async overflow data -- splice() would be
      mutually exclusive with mmap() of the data.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.470536358@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7b732a75
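
      A userspace sketch of mapping a counter fd under this ABI: one metadata
      page plus 2^n data pages (error handling trimmed):

        #include <sys/mman.h>
        #include <unistd.h>

        static void *map_counter(int fd, unsigned int data_pages /* 2^n or 0 */)
        {
                size_t page = (size_t)sysconf(_SC_PAGESIZE);
                size_t len  = (1 + data_pages) * page; /* meta page + data pages */
                void *base  = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

                return base == MAP_FAILED ? NULL : base;
        }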
    • mutex: drop "inline" from mutex_lock() inside kernel/mutex.c · b09d2501
      Committed by H. Peter Anvin
      Impact: build fix
      
      mutex_lock() was defined inline in kernel/mutex.c, but wasn't declared
      so in <linux/mutex.h>.  This didn't cause a problem until checkin
      3a2d367d9aabac486ac4444c6c7ec7a1dab16267 added the
      atomic_dec_and_mutex_lock() inline in between declaration and
      definition.
      
      This broke building with CONFIG_ALLOW_WARNINGS=n, e.g. make
      allnoconfig.
      
      Neither in the source code nor in the allnoconfig binary output can I
      find any internal references to mutex_lock() in kernel/mutex.c, so
      presumably this "inline" is now-useless legacy.
      
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <tip-3a2d367d9aabac486ac4444c6c7ec7a1dab16267@git.kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      b09d2501
    • mutex: add atomic_dec_and_mutex_lock() · 9ab772cd
      Committed by Eric Paris
      Much like the atomic_dec_and_lock() function, in which we take and
      hold a spin_lock if we drop the atomic to 0, this function takes and
      holds the mutex if we decrement the atomic to 0.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.410913479@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ab772cd
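
      The described semantics, roughly as the helper might be implemented
      (close to, but not verbatim, the kernel version):

        #include <linux/atomic.h>
        #include <linux/mutex.h>

        int atomic_dec_and_mutex_lock_sketch(atomic_t *cnt, struct mutex *lock)
        {
                /* fast path: the decrement cannot reach zero, no lock needed */
                if (atomic_add_unless(cnt, -1, 1))
                        return 0;

                /* we might be the last reference: decide under the mutex */
                mutex_lock(lock);
                if (!atomic_dec_and_test(cnt)) {
                        mutex_unlock(lock);     /* lost the race, not last */
                        return 0;
                }
                return 1;       /* count is zero; caller holds the mutex */
        }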
    • perf_counter: add an mmap method to allow userspace to read hardware counters · 37d81828
      Committed by Paul Mackerras
      Impact: new feature giving performance improvement
      
      This adds the ability for userspace to do an mmap on a hardware counter
      fd and get access to a read-only page that contains the information
      needed to translate a hardware counter value to the full 64-bit
      counter value that would be returned by a read on the fd.  This is
      useful on architectures that allow user programs to read the hardware
      counters, such as PowerPC.
      
      The mmap will only succeed if the counter is a hardware counter
      monitoring the current process.
      
      On my quad 2.5GHz PowerPC 970MP machine, userspace can read a counter
      and translate it to the full 64-bit value in about 30ns using the
      mmapped page, compared to about 830ns for the read syscall on the
      counter, so this does give a significant performance improvement.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.297057964@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      37d81828
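
      A sketch of the userspace read loop this page enables. The page layout
      is paraphrased from the commit, and hw_counter_read() stands in for
      the arch-specific direct counter read (e.g. rdpmc on x86, mfspr on
      PowerPC):

        #include <stdint.h>

        extern uint64_t hw_counter_read(unsigned int idx); /* hypothetical arch hook */

        struct counter_page {           /* layout paraphrased from the commit */
                uint32_t lock;          /* seqcount-style generation counter  */
                uint32_t index;         /* hw counter index + 1; 0 = no direct read */
                int64_t  offset;        /* add to hw value for the 64-bit count */
        };

        static int64_t read_counter(volatile struct counter_page *pc)
        {
                uint32_t seq;
                int64_t count;

                do {
                        seq = pc->lock;
                        __sync_synchronize();           /* read barrier */
                        if (!pc->index)
                                return -1;              /* fall back to read() */
                        count = (int64_t)hw_counter_read(pc->index - 1) + pc->offset;
                        __sync_synchronize();
                } while (pc->lock != seq);              /* retry if updated under us */

                return count;
        }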
    • perf_counter: avoid recursion · 96f6d444
      Committed by Peter Zijlstra
      Tracepoint events like lock_acquire and software counters like
      pagefaults can recurse into the perf counter code again; avoid that.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.152096433@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      96f6d444
    • perf_counter: remove the event config bitfields · f4a2deb4
      Committed by Peter Zijlstra
      Since the bitfields turned into a bit of a mess, remove them and rely on
      good old masks.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.059499915@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f4a2deb4
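
      A sketch of the mask-based encoding that replaces the bitfields; the
      shifts and widths here are illustrative, not the commit's exact values:

        #define PERF_EVENT_TYPE_SHIFT   56
        #define PERF_EVENT_TYPE_MASK    (0xffULL << PERF_EVENT_TYPE_SHIFT)
        #define PERF_EVENT_ID_MASK      ((1ULL << PERF_EVENT_TYPE_SHIFT) - 1)

        /* extract the fields from a 64-bit config word with plain masks */
        #define perf_event_type(config) \
                (((config) & PERF_EVENT_TYPE_MASK) >> PERF_EVENT_TYPE_SHIFT)
        #define perf_event_id(config) \
                ((config) & PERF_EVENT_ID_MASK)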