1. 20 June 2009: 3 commits
    • perfcounter: Handle some IO return values · eadc84cc
      Authored by Frederic Weisbecker
      Building perfcounter tools raises the following warnings:
      
       builtin-record.c: In function ‘atexit_header’:
       builtin-record.c:464: error: ignoring return value of ‘pwrite’, declared with attribute warn_unused_result
       builtin-record.c: In function ‘__cmd_record’:
       builtin-record.c:503: error: ignoring return value of ‘read’, declared with attribute warn_unused_result

       builtin-report.c: In function ‘__cmd_report’:
       builtin-report.c:1403: error: ignoring return value of ‘read’, declared with attribute warn_unused_result
      
      This patch handles these IO return values.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1245456100-5477-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
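      The fix boils down to checking what read()/pwrite() actually returned
      instead of discarding it. A minimal sketch of that kind of check is
      below; it is illustrative only, and the helper name do_read() is made
      up for this example, not taken from the patch.

        /* Illustrative only: check IO return values instead of ignoring them. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* do_read() is a made-up helper name for this sketch. */
        static void do_read(int fd, void *buf, size_t size)
        {
                ssize_t ret = read(fd, buf, size);

                if (ret < 0) {
                        perror("read");
                        exit(EXIT_FAILURE);
                }
                if ((size_t)ret != size) {
                        fprintf(stderr, "short read: %zd of %zu bytes\n", ret, size);
                        exit(EXIT_FAILURE);
                }
        }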
    • perf_counter: Push perf_sample_data through the swcounter code · 92bf309a
      Authored by Peter Zijlstra
      Push the perf_sample_data further outwards to the swcounter interface,
      to abstract it away some more.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter tools: Define and use our own u64, s64 etc. definitions · 9cffa8d5
      Authored by Paul Mackerras
      On 64-bit powerpc, __u64 is defined to be unsigned long rather than
      unsigned long long.  This causes compiler warnings every time we
      print a __u64 value with %Lx.
      
      Rather than changing __u64, we define our own u64 to be unsigned long
      long on all architectures, and similarly s64 as signed long long.
      For consistency we also define u32, s32, u16, s16, u8 and s8.  These
      definitions go in a new header, types.h, because they are needed in
      util/string.h and util/symbol.h.
      
      The main change here is the mechanical change of __[us]{64,32,16,8}
      to remove the "__".  The other changes are:
      
      * Create types.h
      * Include types.h in perf.h, util/string.h and util/symbol.h
      * Add types.h to the LIB_H definition in Makefile
      * Add (u64) casts in process_overflow_event() and print_sym_table()
        to kill two remaining warnings.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19003.33494.495844.956580@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
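      A minimal sketch of what such a types.h can look like; this is
      illustrative, not a copy of the header the patch actually added:

        /* Illustrative sketch of a fixed-width types.h for the tools. */
        #ifndef PERF_TYPES_H
        #define PERF_TYPES_H

        /*
         * unsigned long long is 64 bits on every architecture we care
         * about, so u64 can always be printed with %llu / %Lx without
         * format warnings.
         */
        typedef unsigned long long u64;
        typedef signed long long   s64;
        typedef unsigned int       u32;
        typedef signed int         s32;
        typedef unsigned short     u16;
        typedef signed short       s16;
        typedef unsigned char      u8;
        typedef signed char        s8;

        #endif /* PERF_TYPES_H */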
  2. 19 June 2009: 6 commits
    • perf_counter: Close race in perf_lock_task_context() · b49a9e7e
      Authored by Peter Zijlstra
      perf_lock_task_context() is buggy because it can return a dead
      context.
      
      The RCU read lock in perf_lock_task_context() only guarantees
      that the memory won't get freed; it doesn't guarantee that the
      object is valid (in our case, refcount > 0).
      
      Therefore we can return a locked object that can get freed the
      moment we release the rcu read lock.
      
      perf_pin_task_context() then increases the refcount and does an
      unlock on freed memory.
      
      That increased refcount will cause a double free if the refcount
      started out at 0.
      
      Amend this by including the get_ctx() functionality in
      perf_lock_task_context() (all users already did this later
      anyway), and return a NULL context when the found one is
      already dead.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
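      The underlying pattern is the usual one for taking a reference on an
      RCU-protected object that may already be dying: only accept it if the
      refcount can be raised from a non-zero value. A minimal sketch of that
      pattern follows; the struct, field and function names are placeholders
      for illustration, not the perf_counter code itself.

        /* Sketch of the "take a reference only if still alive" pattern.  */
        /* Struct, field and function names are placeholders, not perf's. */
        #include <linux/rcupdate.h>
        #include <linux/atomic.h>

        struct ctx {
                atomic_t refcount;
                /* ... payload ... */
        };

        static struct ctx *ctx_get_live(struct ctx __rcu **slot)
        {
                struct ctx *c;

                rcu_read_lock();
                c = rcu_dereference(*slot);
                /*
                 * RCU only keeps the memory around while we hold the read
                 * lock; the object may already have dropped to refcount 0.
                 * atomic_inc_not_zero() refuses to resurrect such an object.
                 */
                if (c && !atomic_inc_not_zero(&c->refcount))
                        c = NULL;
                rcu_read_unlock();

                return c;       /* caller now owns a reference, or got NULL */
        }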
    • perf_counter, x86: Improve interactions with fast-gup · 0c871971
      Authored by Ingo Molnar
      Improve a few details in perfcounter call-chain recording that
      makes use of fast-GUP:
      
      - Use ACCESS_ONCE() to observe the pte value. ptes are fundamentally
        racy and can be changed on another CPU, so we have to be careful
        about how we access them. The PAE branch is already careful with
        read-barriers - but the non-PAE and 64-bit side needs an
        ACCESS_ONCE() to make sure the pte value is observed only once.
      
      - make the checks a bit stricter so that we can feed it any kind of
        cra^H^H^H user-space input ;-)
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
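      ACCESS_ONCE() forces a single, non-refetched load of a value another
      CPU may be changing under us. A minimal sketch of the kind of pte read
      described above; gup_read_pte() is a made-up name and the real code in
      arch/x86/mm/gup.c differs in detail.

        /* Illustrative: load a racy pte exactly once before inspecting it. */
        /* gup_read_pte() is a placeholder; see arch/x86/mm/gup.c for the   */
        /* real thing.                                                      */
        #include <linux/compiler.h>
        #include <asm/pgtable.h>

        static inline pte_t gup_read_pte(pte_t *ptep)
        {
                /*
                 * The pte can change under us on another CPU.  ACCESS_ONCE()
                 * (the era-appropriate primitive) makes sure we read it a
                 * single time and then only ever look at our local copy.
                 */
                return ACCESS_ONCE(*ptep);
        }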
    • perf_counter: Simplify and fix task migration counting · e5289d4a
      Authored by Peter Zijlstra
      The task migrations counter was causing rare and hard-to-decipher
      memory corruptions under load. After a day of debugging and bisection
      we found that the problem was introduced with:
      
        3f731ca6: perf_counter: Fix cpu migration counter
      
      Turning them off fixes the crashes. Incidentally, the whole
      perf_counter_task_migration() logic can be done more simply as
      well, by injecting a proper sw-counter event.
      
      This cleanup also fixed the crashes. The precise failure mode is
      not completely clear yet, but we are clearly not unhappy about
      having a fix ;-)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter tools: Add a data file header · f5970550
      Authored by Peter Zijlstra
      Add a data file header so we can transfer data between record and report.
      
      LKML-Reference: <new-submission>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Update userspace callchain sampling uses · 2a0a50fe
      Authored by Peter Zijlstra
      Update the tools to reflect the new callchain sampling format.
      
      LKML-Reference: <new-submission>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Make callchain samples extensible · f9188e02
      Authored by Peter Zijlstra
      Before exposing upstream tools to a callchain-samples ABI, tidy it
      up to make it more extensible in the future:
      
      Use markers in the IP chain to denote context. Use the (u64)-1..-4095
      range for these context markers because that range is already used for
      ERR_PTR(), so these addresses are unlikely to be mapped.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
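      On the consumer side a context marker can therefore be told apart from
      a real instruction pointer purely by its range. The sketch below is
      illustrative; the CTX_MARK_* names and values are placeholders, not the
      ABI's actual constants.

        /* Illustrative: tell context markers and real IPs apart by range.  */
        /* The CTX_MARK_* names/values are placeholders, not ABI constants. */
        #include <stdio.h>

        typedef unsigned long long u64;

        #define CTX_MARK_KERNEL ((u64)-128)     /* example marker */
        #define CTX_MARK_USER   ((u64)-512)     /* example marker */

        static int is_context_marker(u64 ip)
        {
                return ip >= (u64)-4095;        /* reserved marker window */
        }

        static void walk_chain(const u64 *ips, u64 nr)
        {
                for (u64 i = 0; i < nr; i++) {
                        if (is_context_marker(ips[i]))
                                printf("-- context marker %#llx --\n", ips[i]);
                        else
                                printf("ip: %#llx\n", ips[i]);
                }
        }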
  3. 18 June 2009: 16 commits
    • perf report: Filter to parent set by default · b8e6d829
      Authored by Ingo Molnar
      Make it easier to use parent filtering - default to a filtered
      output. Also add the parent column so that we get collapsing but
      don't display it by default.

      Add --no-exclude-other to override this.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter tools: Handle lost events · 9d91a6f7
      Authored by Peter Zijlstra
      Make use of the new ->data_tail mechanism to tell kernel-space
      about user-space draining the data stream. Emit lost events
      (and display them) if they happen.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Add event overflow handling · 43a21ea8
      Authored by Peter Zijlstra
      Alternative method of mmap() data output handling that provides
      better overflow management and a more reliable data stream.
      
      Unlike the previous method, which didn't have any user->kernel
      feedback and relied on userspace keeping up, this method relies on
      userspace writing its last read position into the control page.

      It ensures new output doesn't overwrite not-yet-read events; new
      events for which there is no space left are lost and the
      overflow counter is incremented, providing exact event loss
      numbers.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
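      Together with the "Handle lost events" tooling change above, the
      contract is: the kernel publishes data_head, userspace consumes records
      and reports how far it got by writing data_tail, and the kernel then
      refuses to overwrite unread data. A consumer-side sketch follows; it
      uses the field names of today's perf_event_mmap_page (the
      perf_counter-era struct name may differ) and consume_records() is a
      placeholder.

        /* Illustrative ring-buffer consumer honouring the data_tail contract. */
        #include <linux/perf_event.h>
        #include <stdint.h>

        /* Placeholder: process all records between tail and head. */
        extern void consume_records(void *data, uint64_t tail, uint64_t head);

        static void drain(struct perf_event_mmap_page *pc, void *data)
        {
                uint64_t head = pc->data_head;   /* published by the kernel */

                __sync_synchronize();            /* read head before reading data */
                consume_records(data, pc->data_tail, head);
                __sync_synchronize();            /* finish reading before publishing */

                pc->data_tail = head;            /* kernel may now reuse that space */
        }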
    • fs: Provide empty .set_page_dirty() aop for anon inodes · d3a9262e
      Authored by Peter Zijlstra
      .set_page_dirty() is one of those a_ops that defaults to the
      buffer implementation when not set. Therefore provide a dummy
      function to make it do nothing.
      
      (Uncovered by perfcounters fd's which can now be writable-mmap-ed.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
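      A no-op .set_page_dirty() is tiny. The sketch below shows what such a
      dummy aop looks like against the 2.6.31-era address_space_operations;
      the function name is a placeholder, not necessarily what the patch used.

        /* Illustrative no-op .set_page_dirty() for an anon-inode mapping. */
        /* Function name is a placeholder; signature is the 2.6.31-era one. */
        #include <linux/fs.h>
        #include <linux/mm.h>

        static int anon_noop_set_page_dirty(struct page *page)
        {
                return 0;       /* nothing to write back, so nothing to do */
        }

        static const struct address_space_operations anon_aops = {
                .set_page_dirty = anon_noop_set_page_dirty,
        };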
    • perf_counter: tools: Makefile tweaks for 64-bit powerpc · e24a72c4
      Authored by Paul Mackerras
      On 64-bit powerpc, perf needs to be built as a 64-bit executable.
      This arranges to add the -m64 flag to CFLAGS if we are running on
      a 64-bit machine, indicated by the result of uname -m ending in "64".
      This means that we'll use -m64 on x86_64 machines as well.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55666.866148.559620@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: powerpc: Add processor back-end for MPC7450 family · 7325927e
      Authored by Paul Mackerras
      This adds support for the performance monitor hardware on the
      MPC7450 family of processors (7450, 7451, 7455, 7447/7457, 7447A,
      7448), used in the later Apple G4 powermacs/powerbooks and other
      machines.  These machines have 6 hardware counters with a unique
      set of events which can be counted on each counter, with some
      events being available on multiple counters.
      
      Raw event codes for these processors are (PMC << 8) + PMCSEL.
      If PMC is non-zero then the event is that selected by the given
      PMCSEL value for that PMC (hardware counter).  If PMC is zero
      then the event selected is one of the low-numbered ones that are
      common to several PMCs.  In this case PMCSEL must be <= 22 and
      the event is what that PMCSEL value would select on PMC1 (but
      it may be placed on any other PMC that has the same event for that
      PMCSEL value).
      
      For events that count cycles or occurrences that exceed a threshold,
      the threshold requested can be specified in the 0x3f000 bits of the
      raw event codes.  If the event uses the threshold multiplier bit
      and that bit should be set, that is indicated with the 0x40000 bit
      of the raw event code.
      
      This fills in some of the generic cache events.  Unfortunately there
      are quite a few blank spaces in the table, partly because these
      processors tend to count cache hits rather than cache accesses.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55631.802122.696927@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
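      The raw event layout described above (PMCSEL in the low byte, PMC
      starting at bit 8, the threshold in the 0x3f000 bits, the multiplier
      flag at 0x40000) is easy to compose by hand. A small illustrative
      encoder, derived only from the description above:

        /* Illustrative encoder for MPC7450-family raw event codes. */
        typedef unsigned long long u64;

        #define MPC7450_PMC_SHIFT      8        /* (PMC << 8) + PMCSEL        */
        #define MPC7450_THRESH_SHIFT   12       /* threshold lives in 0x3f000 */
        #define MPC7450_THRESH_MASK    0x3fULL
        #define MPC7450_THRESH_MULT    0x40000ULL

        static u64 mpc7450_raw_event(unsigned int pmc, unsigned int pmcsel,
                                     unsigned int thresh, int use_multiplier)
        {
                u64 code = ((u64)pmc << MPC7450_PMC_SHIFT) + pmcsel;

                code |= ((u64)thresh & MPC7450_THRESH_MASK) << MPC7450_THRESH_SHIFT;
                if (use_multiplier)
                        code |= MPC7450_THRESH_MULT;

                return code;
        }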
    • perf_counter: powerpc: Make powerpc perf_counter code safe for 32-bit kernels · 98fb1807
      Authored by Paul Mackerras
      This abstracts a few things in arch/powerpc/kernel/perf_counter.c
      that are specific to 64-bit kernels, and provides definitions for
      32-bit kernels.  In particular,
      
      * Only 64-bit has MMCRA and the bits in it that give information
        about a PMU interrupt (sampled PR, HV, slot number etc.)
      * Only 64-bit has the lppaca and the lppaca->pmcregs_in_use field
      * Use of SDAR is confined to 64-bit for now
      * Only 64-bit has soft/lazy interrupt disable and therefore
        pseudo-NMIs (interrupts that occur while interrupts are soft-disabled)
      * Only 64-bit has PMC7 and PMC8
      * Only 64-bit has the MSR_HV bit.
      
      This also fixes the types used in a couple of places, where we were
      using long types for things that need to be 64-bit.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55590.634126.876084@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: powerpc: Change how processor-specific back-ends get selected · 079b3c56
      Authored by Paul Mackerras
      At present, the powerpc generic (processor-independent) perf_counter
      code has a list of processor back-end modules, and at initialization,
      it looks at the PVR (processor version register) and has a switch
      statement to select a suitable processor-specific back-end.
      
      This is going to become inconvenient as we add more processor-specific
      back-ends, so this inverts the order: now each back-end checks whether
      it applies to the current processor, and registers itself if so.
      Furthermore, instead of looking at the PVR, back-ends now check the
      cur_cpu_spec->oprofile_cpu_type string and match on that.
      
      Lastly, each back-end now specifies a name for itself so the core can
      print a nice message when a back-end registers itself.
      
      This doesn't provide any support for unregistering back-ends, but that
      wouldn't be hard to do and would allow back-ends to be modules.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55529.762227.518531@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
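      Under the inverted scheme, each back-end's init code does roughly this:
      match cur_cpu_spec->oprofile_cpu_type and register itself with the
      core. The sketch below is an illustration only; register_power_pmu(),
      the descriptor layout and the "ppc64/power5" match string are stand-ins,
      not the exact upstream API.

        /* Illustrative self-registering powerpc PMU back-end. */
        #include <linux/init.h>
        #include <linux/string.h>
        #include <linux/errno.h>
        #include <asm/cputable.h>

        /* Assumed core-side hook and descriptor; the real names may differ. */
        struct power_pmu_backend {
                const char *name;
                /* event tables, constraint callbacks, ... */
        };
        extern int register_power_pmu(struct power_pmu_backend *pmu);

        static struct power_pmu_backend power5_pmu = {
                .name = "POWER5",
        };

        static int __init init_power5_pmu(void)
        {
                /* Only claim the hardware this back-end actually knows about. */
                if (!cur_cpu_spec->oprofile_cpu_type ||
                    strcmp(cur_cpu_spec->oprofile_cpu_type, "ppc64/power5"))
                        return -ENODEV;

                return register_power_pmu(&power5_pmu);
        }
        arch_initcall(init_power5_pmu);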
    • perf_counter: powerpc: Use unsigned long for register and constraint values · 448d64f8
      Authored by Paul Mackerras
      This changes the powerpc perf_counter back-end to use unsigned long
      types for hardware register values and for the value/mask pairs used
      in checking whether a given set of events fit within the hardware
      constraints.  This is in preparation for adding support for the PMU
      on some 32-bit powerpc processors.  On 32-bit processors the hardware
      registers are only 32 bits wide, and the PMU structure is generally
      simpler, so 32 bits should be ample for expressing the hardware
      constraints.  On 64-bit processors, unsigned long is 64 bits wide,
      so using unsigned long vs. u64 (unsigned long long) makes no actual
      difference.
      
      This makes some other very minor changes: adjusting whitespace to line
      things up in initialized structures, and simplifying some code in
      hw_perf_disable().
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55473.26174.331511@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: powerpc: Enable use of software counters on 32-bit powerpc · 105988c0
      Authored by Paul Mackerras
      This enables the perf_counter subsystem on 32-bit powerpc.  Since we
      don't have any support for hardware counters on 32-bit powerpc yet,
      only software counters can be used.
      
      Besides selecting HAVE_PERF_COUNTERS for 32-bit powerpc as well as
      64-bit, the main thing this does is add an implementation of
      set_perf_counter_pending().  This needs to arrange for
      perf_counter_do_pending() to be called when interrupts are enabled.
      Rather than add code to local_irq_restore as 64-bit does, the 32-bit
      set_perf_counter_pending() generates an interrupt by setting the
      decrementer to 1 so that a decrementer interrupt will become pending
      in 1 or 2 timebase ticks (if a decrementer interrupt isn't already
      pending).  When interrupts are enabled, timer_interrupt() will be
      called, and some new code in there calls perf_counter_do_pending().
      We use a per-cpu array of flags to indicate whether we need to call
      perf_counter_do_pending() or not.
      
      This introduces a couple of new Kconfig symbols: PPC_HAVE_PMU_SUPPORT,
      which is selected by processor families for which we have hardware PMU
      support (currently only PPC64), and PPC_PERF_CTRS, which enables the
      powerpc-specific perf_counter back-end.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linuxppc-dev@ozlabs.org
      Cc: benh@kernel.crashing.org
      LKML-Reference: <19000.55404.103840.393470@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
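      The trick described above, arming the decrementer so the pending perf
      work rides on the next timer interrupt, reduces to a per-cpu flag plus
      a one-tick decrementer write. The sketch below is illustrative and
      era-specific (__get_cpu_var(), set_dec()); the variable name and the
      check_*() helper are assumptions, and only perf_counter_do_pending()
      comes from the text above.

        /* Illustrative 32-bit powerpc pending-work scheme; the per-cpu     */
        /* variable and the check_*() helper are made-up names.             */
        #include <linux/percpu.h>
        #include <asm/time.h>

        extern void perf_counter_do_pending(void);      /* from the text above */

        static DEFINE_PER_CPU(int, perf_pending_flag);

        static inline void set_perf_counter_pending(void)
        {
                __get_cpu_var(perf_pending_flag) = 1;
                /*
                 * Arm the decrementer so an interrupt becomes pending within
                 * a tick or two; once interrupts are enabled again,
                 * timer_interrupt() runs and can process the pending work.
                 */
                set_dec(1);
        }

        /* Called from timer_interrupt() in this scheme. */
        static inline void check_perf_counter_pending(void)
        {
                if (__get_cpu_var(perf_pending_flag)) {
                        __get_cpu_var(perf_pending_flag) = 0;
                        perf_counter_do_pending();
                }
        }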
    • perf_counter tools: Add and use isprint() · a73c7d84
      Authored by Peter Zijlstra
      Introduce isprint(), used to print out raw event dumps as ASCII, etc.
      
      (This is an extension to upstream Git's ctype.c.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      [ removed openssl.h inclusion from util.h - it leaked ctype.h ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
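      The typical use is a hex/ASCII dump: print a byte as itself if it is
      printable, otherwise as '.'. The sketch below uses the standard
      ctype.h isprint() purely for illustration; the tools carry their own
      ctype replacement, so the real code differs.

        /* Illustrative raw-dump helper: printable bytes as-is, rest as '.'. */
        #include <ctype.h>
        #include <stdio.h>

        static void dump_ascii(const unsigned char *buf, size_t len)
        {
                for (size_t i = 0; i < len; i++)
                        putchar(isprint(buf[i]) ? buf[i] : '.');
                putchar('\n');
        }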
    • perf report: Add validation of call-chain entries · 7522060c
      Authored by Ingo Molnar
      Add boundary checks for call-chain events. In case of corrupted
      entries we could crash otherwise.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
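      The validation itself is simple: refuse chains whose advertised length
      is implausible or would run past the end of the event record. A sketch
      follows; the ip_callchain layout, the MAX_CHAIN_DEPTH cap and the
      callchain_ok() helper are all assumptions for illustration, not the
      exact perf report code.

        /* Illustrative callchain sanity check before walking the entries.  */
        /* The layout, cap and helper name are assumptions for this sketch. */
        #include <stddef.h>

        typedef unsigned long long u64;

        struct ip_callchain {
                u64 nr;                 /* number of entries that follow */
                u64 ips[];
        };

        #define MAX_CHAIN_DEPTH 1024    /* assumed cap on plausible depth */

        static int callchain_ok(const struct ip_callchain *chain, size_t bytes_left)
        {
                if (chain->nr > MAX_CHAIN_DEPTH)
                        return 0;       /* implausibly deep: treat as corrupt */
                if (sizeof(*chain) + chain->nr * sizeof(u64) > bytes_left)
                        return 0;       /* would run past the end of the record */
                return 1;
        }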
    • perf report: Tidy up the "--parent <regex>" and "--sort parent" call-chain features · b25bcf2f
      Authored by Ingo Molnar
      Instead of the ambiguous 'call' naming, use the much more
      specific 'parent' naming:
      
       - rename --call <regex> to --parent <regex>
      
       - rename --sort call to --sort parent
      
       - rename [unmatched] to [other] - to signal that this is not
         an error but the inverse set
      
      Also add pagefaults to the default parent-symbol pattern,
      as it's a 'syscall overhead category' in a sense.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter tools: Replace isprint() with issane() · 5aa75a0f
      Authored by Peter Zijlstra
      The Git utils came with a ctype replacement that doesn't provide
      isprint(). Add a replacement.
      
      Solves a build bug on certain distros.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: x86: Set the period in the intel overflow handler · 60f916de
      Authored by Peter Zijlstra
      Commit 9e350de3 ("perf_counter: Accurate period data")
      missed a spot, which caused all Intel-PMU samples to have a
      period of 0.
      
      This broke auto-freq sampling.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf report: Add --sort <call> --call <$regex> · 6e7d6fdc
      Authored by Peter Zijlstra
      Implement sorting by callchain symbols, --sort <call>.
      
      It creates a new column showing either the match to
      --call $regex or "[unmatched]".
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 17 June 2009: 15 commits