1. 06 Jun 2009, 4 commits
  2. 05 Jun 2009, 2 commits
    • perf_counter: Generate mmap events for install_special_mapping() · 089dd79d
      Committed by Peter Zijlstra
      In order to track the vdso, also generate mmap events for
      install_special_mapping().
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix lockup with interrupting counters · 6dc5f2a4
      Committed by Paul Mackerras
      Commit 8e3747c1 ("perf_counter: Change data head from u32 to u64")
      changed the type of 'head' in struct perf_mmap_data from atomic_t
      to atomic_long_t, but missed converting one use of atomic_read on
      it to atomic_long_read.  The effect of using atomic_read rather than
      atomic_long_read on powerpc (and other big-endian architectures) is
      that we get the high half of the 64-bit quantity, resulting in the
      cmpxchg retry loop in perf_output_begin spinning forever as soon as
      data->head becomes non-zero.  On little-endian architectures such as
      x86 we would get the low half, resulting in a lockup once data->head
      becomes greater than 4G.
      
      This fixes it by using atomic_long_read rather than atomic_read.
      
      [ Impact: fix perfcounter lockup on PowerPC / big-endian systems ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <18984.33964.21541.743096@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
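      The failure mode is mechanical: atomic_read() on an atomic_long_t
      returns only one 32-bit half of the 64-bit value, so a cmpxchg
      loop keyed on the full value can never succeed. A minimal sketch
      of the fixed reservation loop, shaped after the commit text rather
      than the exact kernel source:

          /* data->head is an atomic_long_t and must be read with
           * atomic_long_read(); a plain atomic_read() returns only a
           * 32-bit half, so the cmpxchg below would never match once
           * head has bits outside that half.
           */
          do {
                  offset = head = atomic_long_read(&data->head);  /* was atomic_read() */
                  head += size;
          } while (atomic_long_cmpxchg(&data->head, offset, head) != offset);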
  3. 04 Jun 2009, 3 commits
    • perf_counter: Remove munmap stuff · d99e9446
      Committed by Peter Zijlstra
      In the name of keeping it simple, only track mmap events. Userspace
      will have to remove old overlapping maps when it encounters them.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Add fork event · 60313ebe
      Committed by Peter Zijlstra
      Create a fork event so that we can easily clone the comm and
      dso maps without having to generate all those events.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix throttling lock-up · 128f048f
      Committed by Ingo Molnar
      The throttling logic is broken and we can lock up with too small a
      hw sampling interval.
      
      Make the throttling code more robust: disable counters even
      if we already disabled them.
      
      ( Also clean up whitespace damage I noticed while reading
        various pieces of code related to throttling. )
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
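      The core idea of the fix is an idempotent disable: rather than
      trusting software state to tell whether the hardware counter is
      already off, issue the disable unconditionally. A sketch of the
      pattern with hypothetical names (not the kernel's own helpers):

          struct counter { int throttled; /* ... */ };

          static void throttle(struct counter *c)
          {
                  c->throttled = 1;  /* update the state unconditionally ... */
                  hw_disable(c);     /* ... and always force the hw disable,
                                      * even if it appears disabled already */
          }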
  4. 03 Jun 2009, 6 commits
  5. 02 Jun 2009, 5 commits
    • perf_counter: Use PID namespaces properly · 709e50cf
      Committed by Peter Zijlstra
      Stop using task_struct::pid and start using PID namespaces.
      
      PIDs will be reported in the PID namespace of the monitoring
      task at the moment of counter creation.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
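      Concretely, "the PID namespace of the monitoring task at the
      moment of counter creation" means taking a namespace reference at
      create time and resolving pids against it at report time. A sketch
      using the standard namespace helpers (the counter->ns field is
      assumed from context):

          /* at counter creation: pin the monitoring task's pid namespace */
          counter->ns = get_pid_ns(current->nsproxy->pid_ns);

          /* at report time: translate a task's tgid into that namespace */
          static u32 perf_counter_pid(struct perf_counter *counter,
                                      struct task_struct *p)
          {
                  return task_tgid_nr_ns(p, counter->ns);
          }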
    • perf_counter: Remove unused prev_state field · bf4e0ed3
      Committed by Paul Mackerras
      This removes the prev_state field of struct perf_counter since
      it is now unused.  It was only used by the cpu migration
      counter, which doesn't use it any more.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.35052.915728.626374@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix cpu migration counter · 3f731ca6
      Committed by Paul Mackerras
      This fixes the cpu migration software counter to count
      correctly even when contexts get swapped from one task to
      another.  Previously the cpu migration counts reported by perf
      stat were bogus, ranging from negative to several thousand for
      a single "lat_ctx 2 8 32" run.  With this patch the cpu
      migration count reported for "lat_ctx 2 8 32" is almost always
      between 35 and 44.
      
      This fixes the problem by adding a call into the perf_counter
      code from set_task_cpu when tasks are migrated.  This enables
      us to use the generic swcounter code (with some modifications)
      for the cpu migration counter.
      
      This modifies the swcounter code to allow a NULL regs pointer
      to be passed in to perf_swcounter_ctx_event() etc.  The cpu
      migration counter does this because there isn't necessarily a
      pt_regs struct for the task available.  In this case, the
      counter will not have interrupt capability - but the migration
      counter didn't have interrupt capability before, so this is no
      loss.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.35006.819769.416327@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
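      The hook shape this describes, sketched with an illustrative
      signature (only the NULL-regs convention is taken from the commit
      text):

          /* called from set_task_cpu() when a task changes cpu; no
           * pt_regs exists for the migrating task here, so pass NULL
           * and let the generic swcounter code skip sampling */
          static inline void
          perf_counter_task_migration(struct task_struct *task, int cpu)
          {
                  perf_swcounter_event(PERF_COUNT_CPU_MIGRATIONS, 1, 1, NULL, 0);
          }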
    • perf_counter: Initialize per-cpu context earlier on cpu up · f38b0820
      Committed by Paul Mackerras
      This arranges for perf_counter's notifier for cpu hotplug
      operations to be called earlier than the migration notifier in
      sched.c by increasing its priority to 20, compared to the 10
      for the migration notifier.  The reason for doing this is that
      a subsequent commit to convert the cpu migration counter to use
      the generic swcounter infrastructure will add a call into the
      perf_counter subsystem when tasks get migrated.  Therefore the
      perf_counter subsystem needs a chance to initialize its per-cpu
      data for the new cpu before it can get called from the
      migration code.
      
      This also adds a comment to the migration notifier noting that
      its priority needs to be lower than that of the perf_counter
      notifier.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <18981.1900.792795.836858@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
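      Hotplug notifiers with a higher .priority run earlier, which is
      the whole ordering mechanism relied on here. A sketch (the
      callback name is assumed):

          static struct notifier_block perf_cpu_nb = {
                  .notifier_call  = perf_cpu_notify,
                  .priority       = 20,  /* runs before the sched migration
                                          * notifier at priority 10 */
          };

          register_cpu_notifier(&perf_cpu_nb);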
    • perf_counter: Tidy up style details · 22a4f650
      Committed by Ingo Molnar
       - whitespace fixlets
       - make local variable definitions more consistent
      
      [ Impact: cleanup ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 01 Jun 2009, 2 commits
    • perf_counter: Allow software counters to count while task is not running · 880ca15a
      Committed by Paul Mackerras
      This changes perf_swcounter_match() so that per-task software
      counters can count events that occur while their associated
      task is not running.  This will allow us to use the generic
      software counter code for counting task migrations, which can
      occur while the task is not scheduled in.
      
      To do this, we have to distinguish between the situations where
      the counter is inactive because its task has been scheduled
      out, and those where the counter is inactive because it is part
      of a group that was not able to go on the PMU.  In the former
      case we want the counter to count, but not in the latter case.
      If the context is active, we have the latter case.  If the
      context is inactive then we need to know whether the counter
      was counting when the context was last active, which we can
      determine by comparing its ->tstamp_stopped timestamp with the
      context's timestamp.
      
      This also folds three checks in perf_swcounter_match, checking
      perf_event_raw(), perf_event_type() and perf_event_id()
      individually, into a single 64-bit comparison on
      counter->hw_event.config, as an optimization.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.34810.259718.955621@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
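      The folding optimization in the last paragraph is a single
      word-sized compare. A sketch, where event_config() stands in for
      however the raw/type/id bits get packed into hw_event.config:

          /* one 64-bit comparison replaces the separate perf_event_raw(),
           * perf_event_type() and perf_event_id() checks */
          if (counter->hw_event.config != event_config(type, event))
                  return 0;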
    • perf_counter: Provide functions for locking and pinning the context for a task · 25346b93
      Committed by Paul Mackerras
      This abstracts out the code for locking the context associated
      with a task.  Because the context might get transferred from
      one task to another concurrently, we have to check after
      locking the context that it is still the right context for the
      task and retry if not.  This was open-coded in
      find_get_context() and perf_counter_init_task().
      
      This adds a further function for pinning the context for a
      task, i.e. marking it so it can't be transferred to another
      task.  This adds a 'pin_count' field to struct
      perf_counter_context to indicate that a context is pinned,
      instead of the previous method of setting the parent_gen count
      to all 1s.  Pinning the context with a pin_count is easier to
      undo and doesn't require saving the parent_gen value.  This
      also adds a perf_unpin_context() to undo the effect of
      perf_pin_task_context() and changes perf_counter_init_task to
      use it.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18979.34748.755674.596386@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
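      A sketch of the pin/unpin pair described above (bodies simplified;
      perf_lock_task_context() is the locking helper this commit factors
      out, returning the context with ctx->lock held):

          static struct perf_counter_context *
          perf_pin_task_context(struct task_struct *task)
          {
                  struct perf_counter_context *ctx;
                  unsigned long flags;

                  ctx = perf_lock_task_context(task, &flags);
                  if (ctx) {
                          ++ctx->pin_count;  /* forbid swapping to another task */
                          spin_unlock_irqrestore(&ctx->lock, flags);
                  }
                  return ctx;
          }

          static void perf_unpin_context(struct perf_counter_context *ctx)
          {
                  unsigned long flags;

                  spin_lock_irqsave(&ctx->lock, flags);
                  --ctx->pin_count;
                  spin_unlock_irqrestore(&ctx->lock, flags);
          }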
  7. 29 May 2009, 6 commits
    • perf_counter: Amend cleanup in fork() fail · bbbee908
      Committed by Peter Zijlstra
      When fork() fails we cannot use perf_counter_exit_task(), since that
      assumes it is operating on current. Write a new helper that cleans up
      unused/clean contexts.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Clean up task_ctx vs interrupts · 665c2142
      Committed by Peter Zijlstra
      Remove the local_irq_save() etc. in routines that are SMP function
      calls, or have IRQs disabled by other means.
      
      Then change the COMM, MMAP and swcounter context iteration to
      current->perf_counter_ctxp and RCU, since it really doesn't matter
      which context they iterate over; they're all folded.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
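      The RCU iteration this implies is short enough to sketch (the
      per-context iterator name is illustrative):

          rcu_read_lock();
          ctx = rcu_dereference(current->perf_counter_ctxp);
          if (ctx)
                  perf_counter_comm_ctx(ctx, comm_event);  /* walk its counters */
          rcu_read_unlock();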
    • perf_counter: Fix COMM and MMAP events for cpu wide counters · efb3d172
      Committed by Peter Zijlstra
      Commit a63eaf34 ("perf_counter: Dynamically allocate tasks'
      perf_counter_context struct") broke COMM and MMAP notification for
      cpu wide counters by dropping out early if there was no task context,
      thereby also not iterating the cpu context.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Robustify counter-free logic · 012b84da
      Committed by Ingo Molnar
      This fixes a nasty crash and highlights a bug: we were freeing
      failed-fork() counters incorrectly.
      
      (the fix for that will come separately)
      
      [ Impact: fix crashes/lockups with inherited counters ]
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix cpuctx->task_ctx races · 3f4dee22
      Committed by Ingo Molnar
      Peter noticed that we are sometimes reading cpuctx->task_ctx with
      interrupts enabled.
      Noticed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Don't swap contexts containing locked mutex · ad3a37de
      Committed by Paul Mackerras
      Peter Zijlstra pointed out that under some circumstances, we can take
      the mutex in a context or a counter and then swap that context or
      counter to another task, potentially leading to lock order inversions
      or the mutexes not protecting what they are supposed to protect.
      
      This fixes the problem by making sure that we never take a mutex in a
      context or counter which could get swapped to another task.  Most of
      the cases where we take a mutex is on a top-level counter or context,
      i.e. a counter which has an fd associated with it or a context that
      contains such a counter.  This adds WARN_ON_ONCE statements to verify
      that.
      
      The two cases where we need to take the mutex on a context that is a
      clone of another are in perf_counter_exit_task and
      perf_counter_init_task.  The perf_counter_exit_task case is solved by
      uncloning the context before starting to remove the counters from it.
      The perf_counter_init_task is a little trickier; we temporarily
      disable context swapping for the parent (forking) task by setting its
      ctx->parent_gen to the all-1s value after locking the context, if it
      is a cloned context, and restore the ctx->parent_gen value at the end
      if the context didn't get uncloned in the meantime.
      
      This also moves the increment of the context generation count to be
      within the same critical section, protected by the context mutex, that
      adds the new counter to the context.  That way, taking the mutex is
      sufficient to ensure that both the counter list and the generation
      count are stable.
      
      [ Impact: fix hangs, races with inherited and PID counters ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <18975.31580.520676.619896@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
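      The WARN_ON_ONCE checks mentioned above reduce to one invariant:
      never take the mutex of a cloned context, since clones can be
      swapped to another task underneath us. A sketch:

          /* only top-level (non-cloned) contexts may have their mutex
           * taken; a cloned context has a non-NULL parent_ctx */
          WARN_ON_ONCE(ctx->parent_ctx);
          mutex_lock(&ctx->mutex);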
  8. 28 May 2009, 1 commit
    • perf_counter: Fix race in attaching counters to tasks and exiting · c93f7669
      Committed by Paul Mackerras
      Commit 564c2b21 ("perf_counter: Optimize context switch between
      identical inherited contexts") introduced a race where it is possible
      that a counter being attached to a task could get attached to the
      wrong task, if the task is one that has inherited its context from
      another task via fork.  This happens because the optimized context
      switch could switch the context to another task after find_get_context
      has read task->perf_counter_ctxp.  In fact, it's possible that the
      context could then get freed, if the other task then exits.
      
      This fixes the problem by protecting both the context switch and the
      critical code in find_get_context with spinlocks.  The context switch
      locks the ctx->lock of both the outgoing and incoming contexts before
      swapping them.  That means that once code such as find_get_context
      has obtained the spinlock for the context associated with a task,
      the context can't get swapped to another task.  However, the context
      may have been swapped in the interval between reading
      task->perf_counter_ctxp and getting the lock, so it is necessary to
      check and retry.
      
      To make sure that none of the contexts being looked at in
      find_get_context can get freed, this changes the context freeing code
      to use RCU.  Thus an rcu_read_lock() is sufficient to ensure that no
      contexts can get freed.  This part of the patch is lifted from a patch
      posted by Peter Zijlstra.
      
      This also adds a check to make sure that we can't add a counter to a
      task that is exiting.
      
      There is also a race between perf_counter_exit_task and
      find_get_context; this solves the race by moving the get_ctx that
      was in perf_counter_alloc into the locked region in find_get_context,
      so that once find_get_context has got the context for a task, it
      won't get freed even if the task calls perf_counter_exit_task.  It
      doesn't matter if new top-level (non-inherited) counters get attached
      to the context after perf_counter_exit_task has detached the context
      from the task.  They will just stay there and never get scheduled in
      until the counters' fds get closed, and then perf_release will remove
      them from the context and eventually free the context.
      
      With this, we are now doing the unclone in find_get_context rather
      than when a counter was added to or removed from a context (actually,
      we were missing the unclone_ctx() call when adding a counter to a
      context).  We don't need to unclone when removing a counter from a
      context because we have no way to remove a counter from a cloned
      context.
      
      This also takes out the smp_wmb() in find_get_context, which Peter
      Zijlstra pointed out was unnecessary because the cmpxchg implies a
      full barrier anyway.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <18974.33033.667187.273886@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
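      The lock-and-recheck pattern from the second paragraph looks
      roughly like this (shaped after the commit text; variable names
      are illustrative):

          rcu_read_lock();
      retry:
          ctx = rcu_dereference(task->perf_counter_ctxp);
          if (ctx) {
                  spin_lock_irqsave(&ctx->lock, flags);
                  /* the optimized context switch may have swapped the
                   * context to another task between the read above and
                   * taking the lock; if so, drop the lock and retry */
                  if (ctx != rcu_dereference(task->perf_counter_ctxp)) {
                          spin_unlock_irqrestore(&ctx->lock, flags);
                          goto retry;
                  }
          }
          rcu_read_unlock();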
  9. 26 May 2009, 3 commits
    • perf_counter: Initialize ->oncpu properly · 329d876d
      Committed by Ingo Molnar
      This shouldn't matter normally (and I have not seen any
      misbehavior), because active counters always have a
      proper ->oncpu value; but nevertheless initialize the
      field properly to -1.
      
      [ Impact: cleanup ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: fix warning & lockup · 0127c3ea
      Committed by Ingo Molnar
       - remove bogus warning
       - fix wakeup from NMI path lockup
       - also fix up whitespace noise in perf_counter.h
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Generic per counter interrupt throttle · a78ac325
      Committed by Peter Zijlstra
      Introduce a generic per counter interrupt throttle.
      
      This uses the perf_counter_overflow() quick disable to throttle a specific
      counter when it's going too fast, when a pmu->unthrottle() method is provided
      that can undo the quick disable.
      
      Power needs to implement both the quick disable and the unthrottle method.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090525153931.703093461@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 25 May 2009, 3 commits
  11. 24 May 2009, 5 commits
    • perf_counter: Increase mmap limit · a3862d3f
      Committed by Ingo Molnar
      In a default 'perf top' run the tool will create a counter for
      each online CPU. With enough CPUs this will eventually exhaust
      the default limit.
      
      So scale it up with the number of online CPUs.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Remove perf_counter_context::nr_enabled · 475c5579
      Committed by Peter Zijlstra
      Now that pctrl() no longer disables other people's counters,
      remove the PMU cache code that deals with that.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163013.032998331@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Change pctrl() behaviour · 082ff5a2
      Committed by Peter Zijlstra
      Instead of en/dis-abling all counters acting on a particular
      task, en/dis-able all counters we created.
      
      [ v2: fix crash on first counter enable ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.916937244@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Simplify context cleanup · aa9c67f5
      Committed by Peter Zijlstra
      Use perf_counter_remove_from_context() to remove counters from
      the context.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.796275849@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Sanitize context locking · 682076ae
      Committed by Peter Zijlstra
      Ensure we're consistent with the context locks.
      
       context->mutex
         context->lock
           list_{add,del}_counter();
      
      so that either lock is sufficient to stabilize the context.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.618790733@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
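      Made concrete, the nesting above means either lock alone is enough
      to keep the counter list stable. A minimal sketch of a list
      mutation under the full hierarchy:

          mutex_lock(&ctx->mutex);       /* outermost: serializes heavy ops */
          spin_lock_irq(&ctx->lock);     /* inner: protects the lists */
          list_add_counter(counter, ctx);
          spin_unlock_irq(&ctx->lock);
          mutex_unlock(&ctx->mutex);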