1. 20 Jun 2009, 1 commit
  2. 19 Jun 2009, 2 commits
    • perf_counter: Close race in perf_lock_task_context() · b49a9e7e
      Committed by Peter Zijlstra
      perf_lock_task_context() is buggy because it can return a dead
      context.
      
      The RCU read lock in perf_lock_task_context() only guarantees
      that the memory won't get freed; it doesn't guarantee that the
      object is valid (in our case, refcount > 0).
      
      Therefore we can return a locked object that can get freed the
      moment we release the RCU read lock.
      
      perf_pin_task_context() then increases the refcount and does an
      unlock on freed memory.
      
      If the refcount started out at 0, that increment will cause a
      double free.
      
      Amend this by including the get_ctx() functionality in
      perf_lock_task_context() (all users already did this later
      anyway), and return a NULL context when the found one is
      already dead.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
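      A minimal sketch of the resulting pattern, assuming a simplified,
      hypothetical context type (the real code also takes ctx->lock and
      uses get_ctx()): validate the refcount under the RCU read lock and
      report a dead object as "no context".
      
        /* Sketch only: simplified, hypothetical types and field names. */
        struct ctx {
                atomic_t refcount;
        };
        
        static struct ctx *lookup_ctx(struct task_struct *task)
        {
                struct ctx *c;
        
                rcu_read_lock();
                c = rcu_dereference(task->perf_ctx);    /* hypothetical field */
                /* RCU pins the memory, not the object's validity: */
                if (c && !atomic_inc_not_zero(&c->refcount))
                        c = NULL;                       /* already dead */
                rcu_read_unlock();
        
                return c;               /* caller now owns a reference */
        }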
    • perf_counter: Simplify and fix task migration counting · e5289d4a
      Committed by Peter Zijlstra
      The task migrations counter was causing rare and hard-to-decipher
      memory corruptions under load. After a day of debugging and bisection
      we found that the problem was introduced with:
      
        3f731ca6: perf_counter: Fix cpu migration counter
      
      Turning them off fixes the crashes. Incidentally, the whole
      perf_counter_task_migration() logic can be done more simply as well,
      by injecting a proper sw-counter event.
      
      This cleanup also fixed the crashes. The precise failure mode is
      not completely clear yet, but we are clearly not unhappy about
      having a fix ;-)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
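      The shape of the simplification, as a hedged sketch (helper name
      and argument order follow this era of the code and may differ):
      migration accounting reduces to one software-counter event
      injected from the scheduler's migration path.
      
        /* Sketch: in the scheduler, when a task changes CPU. */
        if (task_cpu(p) != new_cpu)
                perf_swcounter_event(PERF_COUNT_SW_CPU_MIGRATIONS,
                                     1,    /* one migration        */
                                     1,    /* nmi-safe call site   */
                                     NULL, /* no pt_regs available */
                                     0);   /* no address           */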
  3. 18 Jun 2009, 1 commit
    • perf_counter: Add event overflow handling · 43a21ea8
      Committed by Peter Zijlstra
      An alternative method of mmap() data output handling that provides
      better overflow management and a more reliable data stream.
      
      Unlike the previous method, which had no user->kernel feedback and
      relied on userspace keeping up, this method relies on userspace
      writing its last read position into the control page.
      
      It ensures new output doesn't overwrite not-yet-read events; new
      events for which there is no space left are lost, and the overflow
      counter is incremented, providing exact event-loss numbers.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
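      A sketch of the producer-side check, with simplified field names
      (the real logic lives in the perf output path and reads the
      user-written tail from the mmap control page):
      
        /* head: kernel write position; data_tail: last position
         * userspace reported having read via the control page. */
        unsigned long head = data->head;
        unsigned long tail = userpg->data_tail;   /* written by userspace */
        
        if (head - tail + size > data->size) {
                data->lost++;           /* no room: drop the event,     */
                return -ENOSPC;         /* but account the loss exactly */
        }
        /* Otherwise: write the event at head, then publish
         * data->head = head + size. */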
  4. 15 Jun 2009, 1 commit
  5. 13 Jun 2009, 2 commits
  6. 12 Jun 2009, 2 commits
    • perf_counter: Add forward/backward attribute ABI compatibility · 974802ea
      Committed by Peter Zijlstra
      Provide a means of extending perf_counter_attr in a 'natural' way.
      
      We allow growing the structure by appending fields at the end, by
      specifying the full structure size inside it.
      
      When a new kernel sees a smaller (old) structure, it will zero-pad
      the tail. When an old kernel sees a larger (new) structure, it will
      verify that the tail consists of zeroes, and fail otherwise.
      
      If we fail due to a size mismatch, we return -E2BIG and write the
      kernel's native attribute size back into the provided structure.
      
      Furthermore, add some attribute verification, so that we'll fail
      counter creation when unknown bits are set (in the PERF_SAMPLE_*
      or PERF_FORMAT_* masks, or in the __reserved fields).
      
      (This ABI detail is introduced while keeping the existing syscall ABI.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
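      A sketch of the size handshake, assuming a hypothetical helper
      user_tail_is_zero() for the "tail must be zeroes" step (the real
      routine in the kernel is perf_copy_attr()):
      
        int copy_attr(struct perf_counter_attr __user *uattr,
                      struct perf_counter_attr *attr)
        {
                u32 size;
        
                if (get_user(size, &uattr->size))
                        return -EFAULT;
        
                if (size > sizeof(*attr)) {
                        /* Newer userspace: the tail we don't know
                         * about must be all zeroes. */
                        if (!user_tail_is_zero(uattr, sizeof(*attr), size)) {
                                /* Report our native size, then fail. */
                                put_user(sizeof(*attr), &uattr->size);
                                return -E2BIG;
                        }
                        size = sizeof(*attr);
                } else if (size < sizeof(*attr)) {
                        /* Older userspace: zero-pad our tail. */
                        memset(attr, 0, sizeof(*attr));
                }
        
                if (copy_from_user(attr, uattr, size))
                        return -EFAULT;
        
                return 0;
        }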
    • perf_counter: Remove PERF_TYPE_RAW special casing · 081fad86
      Committed by Peter Zijlstra
      The PERF_TYPE_RAW special case seems superfluous these days. Remove
      it and handle it in the switch() statement like the others.
      
      [ Impact: cleanup ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
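      Roughly, the initialization switch then looks like this sketch
      (the init-helper names follow this era of the code):
      
        switch (counter->attr.type) {
        case PERF_TYPE_RAW:             /* no longer special-cased */
        case PERF_TYPE_HARDWARE:
                pmu = hw_perf_counter_init(counter);
                break;
        case PERF_TYPE_SOFTWARE:
                pmu = sw_perf_counter_init(counter);
                break;
        case PERF_TYPE_TRACEPOINT:
                pmu = tp_perf_counter_init(counter);
                break;
        }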
  7. 11 Jun 2009, 8 commits
  8. 10 Jun 2009, 1 commit
    • perf_counter: More aggressive frequency adjustment · bd2b5b12
      Committed by Peter Zijlstra
      Also employ the overflow handler to adjust the frequency; this results
      in a stable frequency in about 40-50 samples, instead of that many ticks.
      
      This also means we can start sampling at a sample period of 1 without
      running head-first into the throttle.
      
      It relies on sched_clock() to accurately measure the time difference
      between the overflow NMIs.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
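      The adjustment idea, as a sketch (not the kernel's exact
      arithmetic; last_overflow is a hypothetical field): scale the
      sample period by the ratio of observed to requested overflow
      rate, with the interval measured via sched_clock().
      
        u64 now   = sched_clock();
        u64 delta = now - hwc->last_overflow;   /* ns between overflows */
        hwc->last_overflow = now;
        
        if (delta) {
                /* Observed overflow rate, in Hz: */
                u64 have = div64_u64(NSEC_PER_SEC, delta);
                /* new period = old period * observed / requested */
                hwc->sample_period =
                        div64_u64(hwc->sample_period * have,
                                  counter->attr.sample_freq);
        }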
  9. 06 Jun 2009, 5 commits
    • perf_counter: Implement generalized cache event types · 8326f44d
      Committed by Ingo Molnar
      Extend generic event enumeration with the PERF_TYPE_HW_CACHE
      method.
      
      This is a 3-dimensional space:
      
             { L1-D, L1-I, L2, ITLB, DTLB, BPU } x
             { load, store, prefetch } x
             { accesses, misses }
      
      User-space passes in the 3 coordinates and the kernel provides
      a counter (if the hardware supports that type and if the
      combination makes sense).
      
      Combinations that make no sense produce a -EINVAL.
      Combinations that are not supported by the hardware produce -ENOTSUP.
      
      Extend the tools to deal with this, and rewrite the event symbol
      parsing code with various popular aliases for the units and
      access methods above. So 'l1-cache-miss' and 'l1d-read-ops' are
      both valid aliases.
      
      ( x86 is supported for now, with the Nehalem event table filled in,
        and with Core2 and Atom having placeholder tables. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
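      For example, selecting "L1-D read misses", as a sketch (the enum
      spellings shown follow the later, renamed ABI; the encoding packs
      the three coordinates into attr.config):
      
        struct perf_counter_attr attr = { };
        
        attr.type   = PERF_TYPE_HW_CACHE;
        attr.config =  PERF_COUNT_HW_CACHE_L1D |                /* which cache */
                      (PERF_COUNT_HW_CACHE_OP_READ     <<  8) | /* operation   */
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);  /* outcome     */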
    • perf_counter: Separate out attr->type from attr->config · a21ca2ca
      Committed by Ingo Molnar
      Counter type is a frequently used value and we do a lot of
      bit juggling by encoding it into and decoding it out of attr->config.
      
      Clean this up by creating a separate attr->type field.
      
      Also clean up the various similarly complex user-space bits
      all around counter attribute management.
      
      The net improvement is significant, and it will be easier
      to add a new major type (which is what triggered this cleanup).
      
      (This changes the ABI, all tools are adapted.)
      (PowerPC build-tested.)
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
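      After the change, user-space selects an event with two plain
      fields instead of packed config bits, as in this sketch
      (PERF_COUNT_CPU_CYCLES was this era's name for the cycles event):
      
        struct perf_counter_attr attr = { };
        
        attr.type   = PERF_TYPE_HARDWARE;    /* major class            */
        attr.config = PERF_COUNT_CPU_CYCLES; /* event id in that class */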
    • perf_counter: Fix frequency adjustment for < HZ · 6a24ed6c
      Committed by Peter Zijlstra
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Add PERF_SAMPLE_PERIOD · 689802b2
      Committed by Peter Zijlstra
      In order to allow easy tracking of the period, also provide a means
      of adding it to the sample data.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Change PERF_SAMPLE_CONFIG into PERF_SAMPLE_ID · ac4bcf88
      Committed by Peter Zijlstra
      The purpose of PERF_SAMPLE_CONFIG was to identify the counters;
      since then we've added counter ids, so use those instead.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 05 Jun 2009, 2 commits
    • perf_counter: Generate mmap events for install_special_mapping() · 089dd79d
      Committed by Peter Zijlstra
      In order to track the vdso, also generate mmap events for
      install_special_mapping().
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix lockup with interrupting counters · 6dc5f2a4
      Committed by Paul Mackerras
      Commit 8e3747c1 ("perf_counter: Change data head from u32 to u64")
      changed the type of 'head' in struct perf_mmap_data from atomic_t
      to atomic_long_t, but missed converting one use of atomic_read on
      it to atomic_long_read.  The effect of using atomic_read rather than
      atomic_long_read on powerpc (and other big-endian architectures) is
      that we get the high half of the 64-bit quantity, resulting in the
      cmpxchg retry loop in perf_output_begin spinning forever as soon as
      data->head becomes non-zero.  On little-endian architectures such as
      x86 we would get the low half, resulting in a lockup once data->head
      becomes greater than 4G.
      
      This fixes it by using atomic_long_read rather than atomic_read.
      
      [ Impact: fix perfcounter lockup on PowerPC / big-endian systems ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <18984.33964.21541.743096@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
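      The class of bug, as a sketch: the accessor must match the atomic
      type, or a 64-bit value gets truncated to whichever 32-bit half
      the architecture's layout exposes first.
      
        atomic_long_t head;
        
        /* Bug: reads only 32 bits -- the high half on 64-bit
         * big-endian, the low half on little-endian: */
        long h_bad  = atomic_read((atomic_t *)&head);
        
        /* Fix: reads the full native long: */
        long h_good = atomic_long_read(&head);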
  11. 04 Jun 2009, 3 commits
    • perf_counter: Remove munmap stuff · d99e9446
      Committed by Peter Zijlstra
      In the name of keeping it simple, only track mmap events. Userspace
      will have to remove old overlapping maps when it encounters them.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Add fork event · 60313ebe
      Committed by Peter Zijlstra
      Create a fork event so that we can easily clone the comm and
      dso maps without having to generate all those events.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix throttling lock-up · 128f048f
      Committed by Ingo Molnar
      The throttling logic is broken and we can lock up with too-small
      hw sampling intervals.
      
      Make the throttling code more robust: disable counters even
      if we already disabled them.
      
      ( Also clean up whitespace damage I noticed while reading
        various pieces of code related to throttling. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 03 Jun 2009, 6 commits
  13. 02 Jun 2009, 5 commits
    • perf_counter: Use PID namespaces properly · 709e50cf
      Committed by Peter Zijlstra
      Stop using task_struct::pid and start using PID namespaces.
      
      PIDs will be reported in the PID namespace of the monitoring
      task at the moment of counter creation.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
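      A sketch of the mechanics (treat field and helper choices as
      illustrative): capture the monitor's namespace when the counter
      is created, then translate pids into it at report time.
      
        /* At counter creation: */
        counter->ns = get_pid_ns(task_active_pid_ns(current));
        
        /* When emitting an event about some task: */
        event.pid = task_tgid_nr_ns(task, counter->ns);  /* process */
        event.tid = task_pid_nr_ns(task, counter->ns);   /* thread  */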
    • perf_counter: Remove unused prev_state field · bf4e0ed3
      Committed by Paul Mackerras
      This removes the prev_state field of struct perf_counter since
      it is now unused.  It was only used by the cpu migration
      counter, which doesn't use it any more.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.35052.915728.626374@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix cpu migration counter · 3f731ca6
      Committed by Paul Mackerras
      This fixes the cpu migration software counter to count
      correctly even when contexts get swapped from one task to
      another.  Previously the cpu migration counts reported by perf
      stat were bogus, ranging from negative to several thousand for
      a single "lat_ctx 2 8 32" run.  With this patch the cpu
      migration count reported for "lat_ctx 2 8 32" is almost always
      between 35 and 44.
      
      This fixes the problem by adding a call into the perf_counter
      code from set_task_cpu when tasks are migrated.  This enables
      us to use the generic swcounter code (with some modifications)
      for the cpu migration counter.
      
      This modifies the swcounter code to allow a NULL regs pointer
      to be passed in to perf_swcounter_ctx_event() etc.  The cpu
      migration counter does this because there isn't necessarily a
      pt_regs struct for the task available.  In this case, the
      counter will not have interrupt capability - but the migration
      counter didn't have interrupt capability before, so this is no
      loss.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.35006.819769.416327@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
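      The NULL-regs tolerance, as a sketch (swcounter_add() and
      overflow_due() are hypothetical stand-ins for the real add and
      period-bookkeeping paths):
      
        static void swcounter_add(struct perf_counter *counter, u64 nr,
                                  int nmi, struct pt_regs *regs, u64 addr)
        {
                atomic64_add(nr, &counter->count);
                /* Sampling needs registers; plain counting does not: */
                if (regs && overflow_due(counter))
                        perf_counter_overflow(counter, nmi, regs, addr);
        }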
    • perf_counter: Initialize per-cpu context earlier on cpu up · f38b0820
      Committed by Paul Mackerras
      This arranges for perf_counter's notifier for cpu hotplug
      operations to be called earlier than the migration notifier in
      sched.c by increasing its priority to 20, compared to the 10
      for the migration notifier.  The reason for doing this is that
      a subsequent commit to convert the cpu migration counter to use
      the generic swcounter infrastructure will add a call into the
      perf_counter subsystem when tasks get migrated.  Therefore the
      perf_counter subsystem needs a chance to initialize its per-cpu
      data for the new cpu before it can get called from the
      migration code.
      
      This also adds a comment to the migration notifier noting that
      its priority needs to be lower than that of the perf_counter
      notifier.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <18981.1900.792795.836858@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
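      The ordering is expressed through notifier priorities, roughly as
      in this sketch (higher priority runs first on cpu-up; the callback
      name is illustrative):
      
        static struct notifier_block perf_cpu_nb = {
                .notifier_call = perf_cpu_notify,
                .priority      = 20, /* above the migration notifier's 10 */
        };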
    • perf_counter: Tidy up style details · 22a4f650
      Committed by Ingo Molnar
       - whitespace fixlets
       - make local variable definitions more consistent
      
      [ Impact: cleanup ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 01 Jun 2009, 1 commit
    • perf_counter: Allow software counters to count while task is not running · 880ca15a
      Committed by Paul Mackerras
      This changes perf_swcounter_match() so that per-task software
      counters can count events that occur while their associated
      task is not running.  This will allow us to use the generic
      software counter code for counting task migrations, which can
      occur while the task is not scheduled in.
      
      To do this, we have to distinguish between the situations where
      the counter is inactive because its task has been scheduled
      out, and those where the counter is inactive because it is part
      of a group that was not able to go on the PMU.  In the former
      case we want the counter to count, but not in the latter case.
      If the context is active, we have the latter case.  If the
      context is inactive then we need to know whether the counter
      was counting when the context was last active, which we can
      determine by comparing its ->tstamp_stopped timestamp with the
      context's timestamp.
      
      This also folds three checks in perf_swcounter_match, checking
      perf_event_raw(), perf_event_type() and perf_event_id()
      individually, into a single 64-bit comparison on
      counter->hw_event.config, as an optimization.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.34810.259718.955621@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
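      A sketch of both ideas, with hypothetical field names for the
      timestamp comparison:
      
        /* (1) One 64-bit compare replaces the raw/type/id triple: */
        if (counter->hw_event.config != event_config)
                return 0;
        
        /* (2) Should an inactive counter still count this event? */
        if (counter->state != PERF_COUNTER_STATE_ACTIVE) {
                if (counter->ctx->is_active)
                        return 0; /* inactive in an active context:
                                     the group is off the PMU */
                if (counter->tstamp_stopped < counter->ctx->last_active)
                        return 0; /* had already stopped counting when
                                     the task was scheduled out */
        }
        return 1;                 /* count the event */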