1. 06 April 2009, 40 commits
    • P
      perf_counter: unify and fix delayed counter wakeup · 925d519a
      Committed by Peter Zijlstra
      While going over the wakeup code I noticed delayed wakeups only work
      for hardware counters but basically all software counters rely on
      them.
      
      This patch unifies and generalizes the delayed wakeup to fix this
      issue.
      
      Since we're dealing with NMI context bits here, use a cmpxchg() based
      single link list implementation to track counters that have pending
      wakeups.
      
      [ This should really be generic code for delayed wakeups, but since we
        cannot use cmpxchg()/xchg() in generic code, I've let it live in the
        perf_counter code. -- Eric Dumazet could use it to aggregate the
        network wakeups. ]
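      The cmpxchg()-based single-linked list described above can be sketched in
      userspace with C11 atomics standing in for the kernel's cmpxchg(); the
      struct and function names here are illustrative, not the actual kernel
      symbols.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Each counter carries a 'next' pointer; any context (even NMI) can
 * push itself onto the pending list with a compare-and-swap loop, so
 * no locks are needed. */
struct counter {
    struct counter *next;
    int pending;
};

static _Atomic(struct counter *) pending_head = NULL;

/* Retry the CAS until the node is linked in front of the current head. */
static void push_pending(struct counter *c)
{
    struct counter *old = atomic_load(&pending_head);
    do {
        c->next = old;
    } while (!atomic_compare_exchange_weak(&pending_head, &old, c));
    c->pending = 1;
}

/* Consumer (e.g. the wakeup handler) atomically takes the whole list. */
static struct counter *take_pending(void)
{
    return atomic_exchange(&pending_head, NULL);
}
```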
      
      Furthermore, the x86 method of using TIF flags was flawed in that it's
      quite possible to end up setting the bit on the idle task, losing the
      wakeup.
      
      The powerpc method uses per-cpu storage and does appear to be
      sufficient.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      925d519a
    • P
      perf_counter: record time running and time enabled for each counter · 53cfbf59
      Committed by Paul Mackerras
      Impact: new functionality
      
      Currently, if there are more counters enabled than can fit on the CPU,
      the kernel will multiplex the counters on to the hardware using
      round-robin scheduling.  That isn't too bad for sampling counters, but
      for counting counters it means that the value read from a counter
      represents some unknown fraction of the true count of events that
      occurred while the counter was enabled.
      
      This remedies the situation by keeping track of how long each counter
      is enabled for, and how long it is actually on the cpu and counting
      events.  These times are recorded in nanoseconds using the task clock
      for per-task counters and the cpu clock for per-cpu counters.
      
      These values can be supplied to userspace on a read from the counter.
      Userspace requests that they be supplied after the counter value by
      setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or
      PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field
      when creating the counter.  (There is no way to change the read format
      after the counter is created, though it would be possible to add some
      way to do that.)
      
      Using this information it is possible for userspace to scale the count
      it reads from the counter to get an estimate of the true count:
      
      true_count_estimate = count * total_time_enabled / total_time_running
      
      This also lets userspace detect the situation where the counter never
      got to go on the cpu: total_time_running == 0.
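      The scaling step above can be written out as a small helper; a sketch
      under the assumption that the three values come back from a read() as
      described (the widening to 128 bits to avoid overflow is a GCC/Clang
      extension):

```c
#include <stdint.h>

/* Sketch of the userspace scaling step; field names mirror the commit
 * text, not a specific header. */
static uint64_t scale_count(uint64_t count,
                            uint64_t total_time_enabled,
                            uint64_t total_time_running)
{
    if (total_time_running == 0)
        return 0;   /* the counter never got onto the cpu */
    /* true_count_estimate = count * total_time_enabled / total_time_running */
    return (uint64_t)((__uint128_t)count * total_time_enabled
                      / total_time_running);
}
```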
      
      This functionality has been requested by the PAPI developers, and will
      be generally needed for interpreting the count values from counting
      counters correctly.
      
      In the implementation, this keeps 5 time values (in nanoseconds) for
      each counter: total_time_enabled and total_time_running are used when
      the counter is in state OFF or ERROR and for reporting back to
      userspace.  When the counter is in state INACTIVE or ACTIVE, it is the
      tstamp_enabled, tstamp_running and tstamp_stopped values that are
      relevant, and total_time_enabled and total_time_running are determined
      from them.  (tstamp_stopped is only used in INACTIVE state.)  The
      reason for doing it like this is that it means that only counters
      being enabled or disabled at sched-in and sched-out time need to be
      updated.  There are no new loops that iterate over all counters to
      update total_time_enabled or total_time_running.
      
      This also keeps separate child_total_time_running and
      child_total_time_enabled fields that get added in when reporting the
      totals to userspace.  They are separate fields so that they can be
      atomic.  We don't want to use atomics for total_time_running,
      total_time_enabled etc., because then we would have to use atomic
      sequences to update them, which are slower than regular arithmetic and
      memory accesses.
      
      It is possible to measure total_time_running by adding a task_clock
      counter to each group of counters, and total_time_enabled can be
      measured approximately with a top-level task_clock counter (though
      inaccuracies will creep in if you need to disable and enable groups
      since it is not possible in general to disable/enable the top-level
      task_clock counter simultaneously with another group).  However, that
      adds extra overhead - I measured around 15% increase in the context
      switch latency reported by lat_ctx (from lmbench) when a task_clock
      counter was added to each of 2 groups, and around 25% increase when a
      task_clock counter was added to each of 4 groups.  (In both cases a
      top-level task-clock counter was also added.)
      
      In contrast, the code added in this commit gives better information
      with no overhead that I could measure (in fact in some cases I
      measured lower times with this code, but the differences were all less
      than one standard deviation).
      
      [ v2: address review comments by Andrew Morton. ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      53cfbf59
    • P
      perf_counter: allow and require one-page mmap on counting counters · 7730d865
      Committed by Peter Zijlstra
      A brainfart stopped single page mmap()s working. The rest of the code
      should be perfectly fine with not having any data pages.
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <1237981712.7972.812.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7730d865
    • P
      perf_counter: kerneltop: output event support · 00f0ad73
      Committed by Peter Zijlstra
      Teach kerneltop about the new output ABI.
      
      XXX: anybody fancy integrating the PID/TID data into the output?
      
      Bump the mmap_data pages a little because we bloated the output and
      have to be more careful about overruns with structured data.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.192910290@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      00f0ad73
    • P
      perf_counter: kerneltop: mmap_pages argument · 4c4ba21d
      Committed by Peter Zijlstra
      provide a knob to set the number of mmap data pages.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.104545398@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4c4ba21d
    • P
      perf_counter: optionally provide the pid/tid of the sampled task · ea5d20cf
      Committed by Peter Zijlstra
      Allow cpu-wide counters to profile userspace by recording which process
      each sample belongs to.
      
      This raises the first issue with the output type: lots of these
      options (group, tid, callchain, etc.) are non-exclusive and could be
      combined, suggesting a bitfield.
      
      However, things like the mmap() data stream don't fit in that.
      
      How to split the type field...
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113317.013775235@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea5d20cf
    • P
      perf_counter: sanity check on the output API · 63e35b25
      Committed by Peter Zijlstra
      Ensure we never write more than we said we would.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.921433024@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      63e35b25
    • P
      perf_counter: output objects · 5c148194
      Committed by Peter Zijlstra
      Provide a {type,size} header for each output entry.
      
      This should provide extensible output, and the ability to mix multiple streams.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.831607932@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5c148194
    • P
      perf_counter: more elaborate write API · b9cacc7b
      Committed by Peter Zijlstra
      Provide a begin, copy, end interface to the output buffer.
      
      begin() reserves the space,
       copy() copies the data over, considering page boundaries,
        end() finalizes the event and does the wakeup.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Orig-LKML-Reference: <20090325113316.740550870@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b9cacc7b
    • P
      perf_counter: fix perf_poll() · c7138f37
      Committed by Peter Zijlstra
      Impact: fix kerneltop 100% CPU usage
      
      Only return a poll event when there's actually been one, poll_wait()
      doesn't actually wait for the waitq you pass it, it only enqueues
      you on it.
      
      Only once all FDs have been iterated and none of them returned a
      poll event will it schedule().
      
      Also make it return POLLHUP when there's no mmap() area to read from.
      
      Further, fix a silly bug in the write code.
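      A minimal userspace sketch of the rule this fix enforces: compute the
      ready mask from the ring state instead of relying on poll_wait(). The
      constants and field names are stand-ins, not the kernel's.

```c
#include <stdint.h>

#define XPOLLIN  0x001   /* stand-in for POLLIN */
#define XPOLLHUP 0x010   /* stand-in for POLLHUP */

struct ring {
    uint64_t data_head;  /* written by the producer */
    uint64_t data_tail;  /* consumed by the reader */
    int      mapped;     /* is there an mmap() area at all? */
};

/* Return a poll event only when there has actually been one;
 * poll_wait() merely enqueues the caller on the waitqueue. */
static unsigned poll_events(const struct ring *r)
{
    if (!r->mapped)
        return XPOLLHUP;  /* no mmap() area to read from */
    return r->data_head != r->data_tail ? XPOLLIN : 0;
}
```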
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Orig-LKML-Reference: <1237897096.24918.181.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c7138f37
    • P
      perf_counter: update documentation · f66c6b20
      Committed by Paul Mackerras
      Impact: documentation fix
      
      This updates the perfcounter documentation to reflect recent changes.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      f66c6b20
    • P
      perf_counter tools: remove glib dependency and fix bugs in kerneltop.c, fix poll() · 0fd112e4
      Committed by Peter Zijlstra
      Paul Mackerras wrote:
      
      > I noticed the poll stuff is bogus - we have a 2D array of struct
      > pollfds (MAX_NR_CPUS x MAX_COUNTERS), we fill in a sub-array (with the
      > rest being uninitialized, since the array is on the stack) and then
      > pass the first nr_cpus elements to poll.  Not what we really meant, I
      > suspect. :)  Not even if we only have one counter, since it's the
      > counter dimension that varies fastest.
      
      This should fix the most obvious poll fubar, though not enough to fix
      the full problem.
      Reported-by: Paul Mackerras <paulus@samba.org>
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0fd112e4
    • P
      perf_counter tools: remove glib dependency and fix bugs in kerneltop.c · cbe46555
      Committed by Paul Mackerras
      The glib dependency in kerneltop.c is only for a little bit of list
      manipulation, and I find it inconvenient.  This adds a 'next' field to
      struct source_line, which lets us link them together into a list.  The
      code to do the linking ourselves turns out to be no longer or more
      difficult than using glib.
      
      This also fixes a few other problems:
      
      - We need to #include <limits.h> to get PATH_MAX on powerpc.
      
      - We need to #include <linux/types.h> rather than have our own
        definitions of __u64 and __s64; on powerpc the installed headers
        define them to be unsigned long and long respectively, and if we
        have our own, different definition here that causes a compile error.
      
      - This takes out the x86 setting of errno from -ret in
        sys_perf_counter_open.  My experiments on x86 indicate that the
        glibc syscall() does this for us already.
      
      - We had two CPU migration counters in the default set, which seems
        unnecessary; I changed one of them to a context switch counter.
      
      - In perfstat mode we were printing CPU cycles and instructions as
        milliseconds, and the cpu clock and task clock counters as events.
        This fixes that.
      
      - In perfstat mode we were still printing a blank line after the first
        counter, which was a holdover from when a task clock counter was
        automatically included as the first counter.  This removes the blank
        line.
      
      - On a test machine here, parse_symbols() and parse_vmlinux() were
        taking long enough (almost 0.5 seconds) for the mmap buffer to
        overflow before we got to the first mmap_read() call, so this moves
        them before we open all the counters.
      
      - The error message if sys_perf_counter_open fails needs to use errno,
        not -fd[i][counter].
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Mike Galbraith <efault@gmx.de>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Orig-LKML-Reference: <18888.29986.340328.540512@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cbe46555
    • I
      perf_counter tools: increase cpu-cycles again · 81cdbe05
      Committed by Ingo Molnar
      Commit b7368fdd7d decreased the CPU cycles interval 100-fold, but
      this is causing kerneltop failures on my Nehalem box:
      
       aldebaran:/home/mingo/linux/linux/Documentation/perf_counter>
       ./kerneltop
       KernelTop refresh period: 2 seconds
       ERROR: failed to keep up with mmap data
      
      10,000 cycles is way too short.
      
      What we should do instead on mostly-idle systems is some sort of
      read/poll timeout, so that we display something every 2 seconds
      for sure.
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      81cdbe05
    • I
      perf_counter tools: fix build warning in kerneltop.c · 193e8df1
      Committed by Ingo Molnar
      Fix:
      
       kerneltop.c: In function ‘record_ip’:
       kerneltop.c:1005: warning: format ‘%016llx’ expects type ‘long long unsigned int’, but argument 2 has type ‘uint64_t’
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      193e8df1
    • I
      perf_counter tools: tidy up in-kernel dependencies · 383c5f8c
      Committed by Ingo Molnar
      Remove now unified perfstat.c and perf_counter.h, and link to the
      in-kernel perf_counter.h.
      
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      383c5f8c
    • P
      perf_counter tools: use mmap() output · bcbcb37c
      Committed by Peter Zijlstra
      update kerneltop to use the mmap() output to gather overflow information
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.677932499@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bcbcb37c
    • P
      perf_counter tools: update to new syscall ABI · 803d4f39
      Committed by Peter Zijlstra
      update the kerneltop userspace to work with the latest syscall ABI
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.559643732@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      803d4f39
    • P
      perf_counter: new output ABI - part 1 · 7b732a75
      Committed by Peter Zijlstra
      Impact: Rework the perfcounter output ABI
      
      use sys_read() only for instant data and provide mmap() output for all
      async overflow data.
      
      The first mmap() determines the size of the output buffer. The mmap()
      size must be a PAGE_SIZE multiple of 1+pages, where pages must be a
      power of 2 or 0. Further mmap()s of the same fd must have the same
      size. Once all maps are gone, you can again mmap() with a new size.
      
      In case of 0 extra pages there is no data output and the first page
      only contains meta data.
      
      When there are data pages, a poll() event will be generated for each
      full page of data. Furthermore, the output is circular. This means
      that although 1 page is a valid configuration, it's useless, since
      we'll start overwriting it the instant we report a full page.
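      The sizing rule above can be sketched as a small validity check; a
      userspace illustration, with PAGE_SIZE fixed at 4 KiB for the example:

```c
#include <stddef.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* An mmap() length is valid when it is PAGE_SIZE * (1 + pages),
 * where 'pages' (the data pages, excluding the meta-data page)
 * is zero or a power of two. */
static bool valid_mmap_len(size_t len)
{
    size_t pages;

    if (len == 0 || len % PAGE_SIZE)
        return false;
    pages = len / PAGE_SIZE - 1;
    return pages == 0 || (pages & (pages - 1)) == 0;
}
```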
      
      Future work will focus on the output format (currently maintained)
      where we'll likely want each entry denoted by a header which includes
      a type and length.
      
      Further future work will allow splice()ing the fd, also carrying the
      async overflow data -- splice() would be mutually exclusive with
      mmap() of the data.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.470536358@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7b732a75
    • H
      mutex: drop "inline" from mutex_lock() inside kernel/mutex.c · b09d2501
      Committed by H. Peter Anvin
      Impact: build fix
      
      mutex_lock() was defined inline in kernel/mutex.c, but wasn't
      declared so in <linux/mutex.h>.  This didn't cause a problem until
      checkin 3a2d367d9aabac486ac4444c6c7ec7a1dab16267 added the
      atomic_dec_and_mutex_lock() inline in between declaration and
      definition.
      
      This broke building with CONFIG_ALLOW_WARNINGS=n, e.g. make
      allnoconfig.
      
      Neither in the source code nor in the allnoconfig binary output can I
      find any internal references to mutex_lock() in kernel/mutex.c, so
      presumably this "inline" is now-useless legacy.
      
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <tip-3a2d367d9aabac486ac4444c6c7ec7a1dab16267@git.kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      b09d2501
    • E
      mutex: add atomic_dec_and_mutex_lock() · 9ab772cd
      Committed by Eric Paris
      Much like atomic_dec_and_lock(), which takes and holds a spin_lock
      if it drops the atomic to 0, this function takes and holds the mutex
      if it decrements the atomic to 0.
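      A userspace sketch of those semantics, using a pthread mutex and C11
      atomics in place of the kernel primitives; the fast/slow path split
      mirrors the usual atomic_dec_and_lock() shape and is illustrative, not
      the kernel implementation.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Decrement the counter; if it hits zero, return true with the mutex
 * held. Otherwise return false with the mutex not held. */
static bool dec_and_mutex_lock(atomic_int *cnt, pthread_mutex_t *lock)
{
    /* Fast path: if the count is not dropping to zero, just decrement. */
    int old = atomic_load(cnt);
    while (old > 1) {
        if (atomic_compare_exchange_weak(cnt, &old, old - 1))
            return false;
    }
    /* Might hit zero: take the mutex, then do the final decrement. */
    pthread_mutex_lock(lock);
    if (atomic_fetch_sub(cnt, 1) != 1) {
        pthread_mutex_unlock(lock);  /* someone else raced us past 1 */
        return false;
    }
    return true;                     /* count is now 0, mutex held */
}
```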
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.410913479@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ab772cd
    • P
      perf_counter: add an mmap method to allow userspace to read hardware counters · 37d81828
      Committed by Paul Mackerras
      Impact: new feature giving performance improvement
      
      This adds the ability for userspace to do an mmap on a hardware counter
      fd and get access to a read-only page that contains the information
      needed to translate a hardware counter value to the full 64-bit
      counter value that would be returned by a read on the fd.  This is
      useful on architectures that allow user programs to read the hardware
      counters, such as PowerPC.
      
      The mmap will only succeed if the counter is a hardware counter
      monitoring the current process.
      
      On my quad 2.5GHz PowerPC 970MP machine, userspace can read a counter
      and translate it to the full 64-bit value in about 30ns using the
      mmapped page, compared to about 830ns for the read syscall on the
      counter, so this does give a significant performance improvement.
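      The read-side protocol such a shared page needs can be sketched as a
      sequence-count retry loop: the kernel bumps a sequence count around
      updates and userspace retries if it raced with one. The struct and
      field names here are illustrative stand-ins, not the actual ABI.

```c
#include <stdatomic.h>
#include <stdint.h>

struct counter_page {
    atomic_uint seq;    /* even = stable, odd = update in progress */
    uint32_t    index;  /* which hardware counter to read */
    uint64_t    offset; /* add to the raw hw value for the full count */
};

/* read_hw is a callback standing in for the architecture's
 * "read hardware counter N from userspace" instruction. */
static uint64_t read_counter(struct counter_page *pg,
                             uint64_t (*read_hw)(uint32_t index))
{
    uint32_t seq;
    uint64_t count;

    do {
        seq = atomic_load(&pg->seq);
        count = pg->offset + read_hw(pg->index);
        /* retry if an update was in flight or completed meanwhile */
    } while ((seq & 1) || atomic_load(&pg->seq) != seq);
    return count;
}
```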
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Orig-LKML-Reference: <20090323172417.297057964@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      37d81828
    • P
      perf_counter: avoid recursion · 96f6d444
      Committed by Peter Zijlstra
      Tracepoint events like lock_acquire and software counters like
      pagefaults can recurse into the perf counter code again, avoid that.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.152096433@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      96f6d444
    • P
      perf_counter: remove the event config bitfields · f4a2deb4
      Committed by Peter Zijlstra
      Since the bitfields turned into a bit of a mess, remove them and rely on
      good old masks.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Orig-LKML-Reference: <20090323172417.059499915@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f4a2deb4
    • W
      perf_counter tools: when no command is fed to perfstat, display help and exit · af9522cf
      Committed by Wu Fengguang
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      af9522cf
    • W
      perf_counter tools: cut down default count for cpu-cycles · dda7c02f
      Committed by Wu Fengguang
      On my system, it takes kerneltop dozens of minutes to
      show usable numbers. Making the default count 100 times
      smaller fixes this long startup latency.
      
      I'm not sure if it's the right solution though.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dda7c02f
    • W
      perf_counter tools: fix event_id type · 3ab8d792
      Committed by Wu Fengguang
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3ab8d792
    • W
      perf_counter tools: fix comment for sym_weight() · ef45fa9e
      Committed by Wu Fengguang
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ef45fa9e
    • W
      perf_counter tools: move remaining code into kerneltop.c · f7524bda
      Committed by Wu Fengguang
      - perfstat.c can be safely removed now
      - perfstat: -s => -a for system wide accounting
      - kerneltop: add -S/--stat for perfstat mode
      - minor adjustments to kerneltop --help, perfstat --help
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f7524bda
    • W
      perf_counter tools: Reuse event_name() in kerneltop · e3908612
      Committed by Wu Fengguang
      - can handle sw counters now
      - the outputs will look slightly different
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e3908612
    • W
      perf_counter tools: support symbolic event names in kerneltop · 95bb3be1
      Committed by Wu Fengguang
      - kerneltop: --event_id => --event
      - kerneltop: can accept SW event types now
      - perfstat: it used to implicitly add event -2 (task-clock);
        the new code no longer does this. Shall we?
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      95bb3be1
    • W
      perf_counter tools: Move perfstat supporting code into perfcounters.h · f49012fa
      Committed by Wu Fengguang
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f49012fa
    • W
      perf_counter tools: Merge common code into perfcounters.h · cea92ce5
      Committed by Wu Fengguang
      kerneltop's MAX_COUNTERS is increased from 8 to 64 (the value used by perfstat).
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cea92ce5
    • I
      perf_counter: add sample user-space to Documentation/perf_counter/ · e0143bad
      Committed by Ingo Molnar
      Initial version of kerneltop.c and perfstat.c.
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e0143bad
    • I
      perf_counter: create Documentation/perf_counter/ and move perfcounters.txt there · 6f9f791e
      Committed by Ingo Molnar
      We'll have more files in that directory, prepare for that.
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6f9f791e
    • P
      perf_counter: fix type/event_id layout on big-endian systems · 9aaa131a
      Committed by Paul Mackerras
      Impact: build fix for powerpc
      
      Commit db3a944aca35ae61 ("perf_counter: revamp syscall input ABI")
      expanded the hw_event.type field into a union of structs containing
      bitfields.  In particular it introduced a type field and a raw_type
      field, with the intention that the 1-bit raw_type field should
      overlay the most-significant bit of the 8-bit type field, and in fact
      perf_counter_alloc() now assumes that (or at least, assumes that
      raw_type doesn't overlay any of the bits that are 1 in the values of
      PERF_TYPE_{HARDWARE,SOFTWARE,TRACEPOINT}).
      
      Unfortunately this is not true on big-endian systems such as PowerPC,
      where bitfields are laid out from left to right, i.e. from most
      significant bit to least significant.  This means that setting
      hw_event.type = PERF_TYPE_SOFTWARE will set hw_event.raw_type to 1.
      
      This fixes it by making the layout depend on whether or not
      __BIG_ENDIAN_BITFIELD is defined.  It's a bit ugly, but that's what
      we get for using bitfields in a user/kernel ABI.
      
      Also, that commit didn't fix up some places in arch/powerpc/kernel/
      perf_counter.c where hw_event.raw and hw_event.event_id were used.
      This fixes them too.
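      The endian-dependent layout trick can be illustrated in userspace with
      GCC's predefined byte-order macros; the field names follow the commit
      text, and the widths here are illustrative, not the actual ABI.

```c
#include <stdint.h>

/* Pick the bitfield order at compile time so 'raw_type' always overlays
 * the most-significant bit of the word. Big-endian ABIs lay bitfields
 * out from most-significant bit to least. */
struct hw_event_type {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    uint64_t raw_type : 1,   /* MSB comes first on big-endian */
             type     : 7,
             event_id : 56;
#else
    uint64_t event_id : 56,  /* LSBs come first on little-endian */
             type     : 7,
             raw_type : 1;
#endif
};
```

      With this arrangement, setting type no longer bleeds into raw_type on
      either byte order.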
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      9aaa131a
    • P
      perf_counter: powerpc: clean up perf_counter_interrupt · db4fb5ac
      Committed by Paul Mackerras
      Impact: cleanup
      
      This updates the powerpc perf_counter_interrupt following on from the
      "perf_counter: unify irq output code" patch.  Since we now use the
      generic perf_counter_output code, which sets the perf_counter_pending
      flag directly, we no longer need the need_wakeup variable.
      
      This removes need_wakeup and makes perf_counter_interrupt use
      get_perf_counter_pending() instead.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Orig-LKML-Reference: <20090319194234.024464535@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      db4fb5ac
    • P
      perf_counter: unify irq output code · 0322cd6e
      Committed by Peter Zijlstra
      Impact: cleanup
      
      Having 3 slightly different copies of the same code around does nobody
      any good. First step in revamping the output format.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Orig-LKML-Reference: <20090319194233.929962222@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0322cd6e
    • P
      perf_counter: revamp syscall input ABI · b8e83514
      Committed by Peter Zijlstra
      Impact: modify ABI
      
      The hardware/software classification in hw_event->type became a little
      strained due to the addition of tracepoint tracing.
      
      Instead split up the field and provide a type field to explicitly specify
      the counter type, while using the event_id field to specify which event to
      use.
      
      Raw counters still work as before, only the raw config now goes into
      raw_event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Orig-LKML-Reference: <20090319194233.836807573@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b8e83514
    • P
      perf_counter: hook up the tracepoint events · e077df4f
      Committed by Peter Zijlstra
      Impact: new perfcounters feature
      
      Enable usage of tracepoints as perf counter events.
      
      tracepoint event ids can be found in /debug/tracing/event/*/*/id
      and (for now) are represented as -65536+id in the type field.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Orig-LKML-Reference: <20090319194233.744044174@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e077df4f