1. 11 Oct 2010 (1 commit)
  2. 09 Jun 2010 (12 commits)
  3. 03 Jun 2010 (1 commit)
    • perf: Fix crash in swevents · c6df8d5a
      Peter Zijlstra committed
      Frederic reported that because swevents handling doesn't disable IRQs
      anymore, we can get a recursion of perf_adjust_period(), once from
      overflow handling and once from the tick.
      
      If both call ->disable, we get a double hlist_del_rcu() and trigger
      a LIST_POISON2 dereference.
      
      Since we don't actually need to stop/start a swevent to re-program
      the hardware (lack of hardware to program), simply nop out these
      callbacks for the swevent pmu.
      Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1275557609.27810.35218.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
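      A minimal user-space sketch of the resulting shape, with illustrative
      names (swevent_nop and this struct pmu are stand-ins, not the kernel's
      identifiers): the swevent pmu's ->enable/->disable become no-ops, so a
      nested ->disable from overflow handling plus the tick can no longer
      double-delete the event:

          #include <stdio.h>

          struct pmu {
              void (*enable)(void);
              void (*disable)(void);
          };

          /* nothing to (re)program: there is no hardware behind swevents */
          static void swevent_nop(void) { }

          static struct pmu swevent_pmu = {
              .enable  = swevent_nop,
              .disable = swevent_nop,
          };

          int main(void)
          {
              /* both the overflow path and the tick may call ->disable();
                 with nops, the second call is harmless */
              swevent_pmu.disable();
              swevent_pmu.disable();
              puts("double disable is now a no-op");
              return 0;
          }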
  4. 31 5月, 2010 4 次提交
    • perf_events: Fix unincremented buffer base on partial copy · 74048f89
      Frederic Weisbecker committed
      If a sample size crosses to the next page boundary, the copy
      will be made in more than one step. However, we forget to advance
      the source offset for the next copy, leading to unexpected double
      copies that completely mess up the traces.
      
      This fixes various kinds of bad traces that contain irrelevant
      data, for example:
      
      	geany-4979  [001]  5758.077775: sched_switch: prev_comm=! prev_pid=121
      		prev_prio=0 prev_state=S|D|Z|X|x ==> next_comm= next_pid=7497072
      		next_prio=0
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1274988898-5639-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
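      A self-contained sketch of the copy loop in question, with a made-up
      page size and invented names; the fix is the single line that advances
      the source pointer alongside the destination offset:

          #include <string.h>
          #include <stdio.h>

          #define PAGE_SZ 8

          static char dest[2][PAGE_SZ];        /* two "pages" of buffer */

          static void output_copy(unsigned offset, const void *buf,
                                  unsigned len)
          {
              const char *src = buf;

              while (len) {
                  unsigned page  = offset / PAGE_SZ;
                  unsigned off   = offset % PAGE_SZ;
                  unsigned chunk = PAGE_SZ - off;

                  if (chunk > len)
                      chunk = len;
                  memcpy(&dest[page][off], src, chunk);
                  offset += chunk;
                  src    += chunk;   /* the fix: advance the source too */
                  len    -= chunk;
              }
          }

          int main(void)
          {
              output_copy(5, "ABCDEF", 6);      /* crosses the boundary */
              printf("%.3s|%.3s\n", &dest[0][5], dest[1]);  /* ABC|DEF */
              return 0;
          }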
    • perf_events: Fix event scheduling issues introduced by transactional API · 90151c35
      Stephane Eranian committed
      The transactional API patch between the generic and model-specific
      code introduced several important bugs with event scheduling, at
      least on X86. If you had pinned events, e.g., the watchdog, and were
      over-committing the PMU, you would get bogus counts. The bug was
      showing up on Intel CPUs because events would move around more
      often than on AMD. But the problem also existed on AMD, though it
      was harder to expose.
      
      The issues were:
      
       - group_sched_in() was missing a cancel_txn() in the error path
      
       - cpuc->n_added was not properly maintained, leading to missing
         actions in hw_perf_enable(), i.e., n_running being 0. You cannot
         update n_added until you know the transaction has succeeded. In
         the case of a failed transaction, n_added was not adjusted back.
      
       - in case of failed transactions, event_sched_out() was called
         and eventually invoked x86_disable_event() to touch the HW reg.
         But with transactions, on X86, event_sched_in() does not touch
         HW registers, it simply collects events into a list. Thus, you
         could end up calling x86_disable_event() on a counter which
         did not correspond to the current event when idx != -1.
      
      The patch modifies the generic and X86 code to avoid all those problems.
      
      First, we keep track of the number of events added last. In case the
      transaction fails, we subtract them from n_added. This approach is
      necessary (as opposed to delaying updates to n_added) because not all
      event updates use the transaction API, e.g., single events.
      
      Second, we encapsulate the event_sched_in() and event_sched_out() in
      group_sched_in() inside the transaction. That makes the operations
      symmetrical and you can also detect that you are inside a transaction
      and skip the HW reg access by checking cpuc->group_flag.
      
      With this patch, you can now overcommit the PMU even with pinned
      system-wide events present and still get valid counts.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1274796225.5882.1389.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
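      The error-path fix compresses to this hedged user-space model
      (start_txn/commit_txn/cancel_txn mock the transactional API; nothing
      here is the kernel's actual code): every started transaction is either
      committed or cancelled, and n_added is only updated once the commit is
      known to have succeeded:

          #include <stdio.h>

          static int n_added;

          static void start_txn(void)    { printf("start_txn\n"); }
          static void cancel_txn(void)   { printf("cancel_txn\n"); }
          static int  commit_txn(int ok) { return ok ? 0 : -1; }

          static int group_sched_in(int nr_events, int schedulable)
          {
              start_txn();

              /* ... event_sched_in() for each group member here ... */

              if (commit_txn(schedulable) == 0) {
                  n_added += nr_events;   /* only after known success */
                  return 0;
              }

              cancel_txn();               /* the missing error-path call */
              return -1;
          }

          int main(void)
          {
              group_sched_in(2, 1);
              group_sched_in(3, 0);       /* fails: n_added stays at 2 */
              printf("n_added = %d\n", n_added);
              return 0;
          }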
    • perf_events: Fix races in group composition · 8a49542c
      Peter Zijlstra committed
      Group siblings don't pin each other or the parent, so when we destroy
      events we must make sure to clean up all cross referencing pointers.
      
      In particular, for destruction of a group leader we must be able to
      find all its siblings and remove their reference to it.
      
      This means that detaching an event from its context must not detach it
      from the group, otherwise we can end up failing to clear all pointers.
      
      Solve this by clearly separating the attachment to a context and
      attachment to a group, and keep the group composed until we destroy
      the events.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
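      A hedged model of the separation (the flag names only mirror the idea,
      they are not necessarily the kernel's constants): context attachment
      and group attachment become independent bits, so detaching from the
      context leaves the group composed until destruction:

          #include <stdio.h>

          #define ATTACH_CONTEXT 0x1
          #define ATTACH_GROUP   0x2

          struct event { int attach_state; };

          static void detach_from_context(struct event *e)
          {
              e->attach_state &= ~ATTACH_CONTEXT;  /* group stays intact */
          }

          static void destroy_event(struct event *e)
          {
              /* only here are siblings unlinked and pointers cleared */
              e->attach_state &= ~ATTACH_GROUP;
          }

          int main(void)
          {
              struct event e = { ATTACH_CONTEXT | ATTACH_GROUP };

              detach_from_context(&e);
              printf("still in group: %s\n",
                     (e.attach_state & ATTACH_GROUP) ? "yes" : "no");
              destroy_event(&e);
              return 0;
          }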
    • perf_events: Fix races and clean up perf_event and perf_mmap_data interaction · ac9721f3
      Peter Zijlstra committed
      In order to move toward separate buffer objects, rework the whole
      perf_mmap_data construct to be a more self-sufficient entity, one
      with its own lifetime rules.
      
      This greatly sanitizes the whole output redirection code, which
      was riddled with bugs and races.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@kernel.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
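      The "own lifetime rules" idea reduces to plain reference counting; a
      minimal user-space sketch with illustrative names, assuming the buffer
      outlives whichever event drops the last reference:

          #include <stdatomic.h>
          #include <stdlib.h>
          #include <stdio.h>

          struct mmap_data {
              atomic_int refcount;
              /* ... ring-buffer pages would live here ... */
          };

          static struct mmap_data *data_get(struct mmap_data *d)
          {
              atomic_fetch_add(&d->refcount, 1);
              return d;
          }

          static void data_put(struct mmap_data *d)
          {
              if (atomic_fetch_sub(&d->refcount, 1) == 1) {
                  printf("last reference dropped, freeing buffer\n");
                  free(d);
              }
          }

          int main(void)
          {
              struct mmap_data *d = calloc(1, sizeof(*d));
              atomic_init(&d->refcount, 1);   /* owner's reference */

              data_get(d);                    /* redirected event attaches */
              data_put(d);                    /* redirected event detaches */
              data_put(d);                    /* owner releases: freed */
              return 0;
          }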
  5. 28 May 2010 (1 commit)
  6. 21 May 2010 (8 commits)
    • perf: Optimize perf_tp_event_match() · 580d607c
      Peter Zijlstra committed
      Since we know tracepoints come from kernel context,
      avoid conditionals that try to establish that very
      fact.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.904944001@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
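      A sketch of the kind of conditional being dropped, under assumed names
      (generic_match/tp_match and the struct are illustrative): the generic
      matcher must test where the event fired, while the tracepoint matcher
      can assume kernel context and skip that branch on the fast path:

          #include <stdbool.h>
          #include <stdio.h>

          struct ev { bool exclude_kernel; };

          static bool generic_match(const struct ev *e, bool in_kernel)
          {
              if (e->exclude_kernel && in_kernel)  /* context conditional */
                  return false;
              return true;
          }

          static bool tp_match(const struct ev *e)
          {
              /* tracepoints always fire in kernel context, so the test
                 collapses to one predicate: one branch fewer */
              return !e->exclude_kernel;
          }

          int main(void)
          {
              struct ev e = { .exclude_kernel = false };
              printf("%d %d\n", generic_match(&e, true), tp_match(&e));
              return 0;
          }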
    • perf: Remove more code from the fastpath · a94ffaaf
      Peter Zijlstra committed
      Sanity checks cost instructions.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.852926930@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize the !vmalloc backed buffer · 3cafa9fb
      Peter Zijlstra committed
      Reduce code and data by using the knowledge that for
      !PERF_USE_VMALLOC data_order is always 0.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.795019386@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
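      A toy illustration of the technique (PERF_USE_VMALLOC and data_order
      echo the names in the message; everything around them is invented):
      making the order a compile-time 0 lets the compiler fold away the code
      and fields that depended on it:

          #include <stdio.h>

          /* #define PERF_USE_VMALLOC */      /* the vmalloc-backed build */

          #ifdef PERF_USE_VMALLOC
          static unsigned data_order = 3;     /* order varies at run time */
          #else
          enum { data_order = 0 };            /* constant: dependent code
                                                 can be optimized out */
          #endif

          int main(void)
          {
              /* in the !vmalloc build, 1UL << 0 folds to 1 and the shift
                 disappears from the generated code */
              unsigned long pages_per_slot = 1UL << data_order;
              printf("pages per slot: %lu\n", pages_per_slot);
              return 0;
          }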
    • perf: Optimize perf_output_copy() · 5d967a8b
      Peter Zijlstra committed
      Reduce the clutter in perf_output_copy() by keeping
      an iterator in perf_output_handle.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.742809176@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
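      A sketch of the iterator-in-the-handle design, assuming invented field
      names: the handle carries the copy cursor (page index plus offset), so
      each call just advances the iterator instead of recomputing positions:

          #include <string.h>
          #include <stdio.h>

          #define PAGE_SZ 8

          struct output_handle {
              char     (*pages)[PAGE_SZ];
              unsigned page;     /* iterator: current page   */
              unsigned offset;   /* iterator: offset in page */
          };

          static void output_copy(struct output_handle *h,
                                  const void *buf, unsigned len)
          {
              const char *src = buf;

              while (len) {
                  unsigned chunk = PAGE_SZ - h->offset;

                  if (chunk > len)
                      chunk = len;
                  memcpy(&h->pages[h->page][h->offset], src, chunk);
                  h->offset += chunk;
                  if (h->offset == PAGE_SZ) {  /* step the iterator */
                      h->page++;
                      h->offset = 0;
                  }
                  src += chunk;
                  len -= chunk;
              }
          }

          int main(void)
          {
              char pages[2][PAGE_SZ] = { { 0 } };
              struct output_handle h = { pages, 0, 5 };

              output_copy(&h, "ABC", 3);   /* fills page 0 */
              output_copy(&h, "DEF", 3);   /* continues on page 1 */
              printf("%.3s|%.3s\n", &pages[0][5], pages[1]);
              return 0;
          }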
    • perf: Fix wakeup storm for RO mmap()s · adb8e118
      Peter Zijlstra committed
      RO mmap()s don't update the tail pointer, so
      comparing against it for determining the written data
      size doesn't really do any good.
      
      Keep track of when we last did a wakeup, and compare
      against that.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.684479310@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
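      A minimal model of the new wakeup condition (field names are
      illustrative): instead of comparing against a tail that read-only
      mappings never move, remember where the last wakeup happened and wake
      again only after another watermark's worth of data:

          #include <stdio.h>

          struct buf {
              unsigned long head;
              unsigned long wakeup;     /* position of the last wakeup */
              unsigned long watermark;
          };

          static void maybe_wakeup(struct buf *b)
          {
              if (b->head - b->wakeup >= b->watermark) {
                  b->wakeup = b->head;
                  printf("wakeup at %lu\n", b->head);
              }
          }

          int main(void)
          {
              struct buf b = { 0, 0, 16 };

              for (int i = 0; i < 8; i++) {
                  b.head += 8;          /* write 8 bytes */
                  maybe_wakeup(&b);     /* fires every 16 bytes: no storm */
              }
              return 0;
          }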
    • perf: Ensure that IOC_OUTPUT isn't used to create multi-writer buffers · 0f139300
      Peter Zijlstra committed
      Since we want to ensure buffers only have a single
      writer, we must avoid creating one with multiple writers.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.528215873@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
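      A hedged sketch of such a guard; the kernel's exact compatibility
      conditions differ, this only models the single-writer intent: a
      redirect is refused when the two events could write concurrently:

          #include <stdio.h>
          #include <errno.h>

          struct event { int cpu; struct event *output; };

          static int ioc_set_output(struct event *ev, struct event *target)
          {
              /* events on different CPUs would write concurrently into
                 the same buffer: refuse the redirect */
              if (ev->cpu != target->cpu)
                  return -EINVAL;
              ev->output = target;
              return 0;
          }

          int main(void)
          {
              struct event a = { .cpu = 0 }, b = { .cpu = 0 },
                           c = { .cpu = 1 };

              printf("same cpu:  %d\n", ioc_set_output(&a, &b));  /* 0 */
              printf("cross cpu: %d\n", ioc_set_output(&c, &b));  /* <0 */
              return 0;
          }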
    • perf, trace: Optimize tracepoints by using per-tracepoint-per-cpu hlist to track events · 1c024eca
      Peter Zijlstra committed
      Avoid the swevent hash-table by using per-tracepoint
      hlists.
      
      Also, avoid conditionals on the fast path by ordering
      with probe unregister so that we should never get on
      the callback path without the data being there.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100521090710.473188012@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
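      A simplified picture of the data-structure change, with invented
      types: each tracepoint owns a per-CPU list of the events attached to
      it, so the callback walks exactly the relevant events instead of
      probing a global hash table:

          #include <stdio.h>

          #define NR_CPUS 2

          struct perf_event { const char *name; struct perf_event *next; };

          struct tracepoint {
              /* one event list per CPU, owned by the tracepoint itself */
              struct perf_event *events[NR_CPUS];
          };

          static void tp_fire(struct tracepoint *tp, int cpu)
          {
              for (struct perf_event *e = tp->events[cpu]; e; e = e->next)
                  printf("deliver to %s on cpu%d\n", e->name, cpu);
          }

          int main(void)
          {
              struct perf_event ev = { "sched_switch counter", NULL };
              struct tracepoint tp = { { &ev, NULL } };

              tp_fire(&tp, 0);   /* walks only this tracepoint's cpu0 list */
              tp_fire(&tp, 1);   /* nothing attached on cpu1 */
              return 0;
          }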
    • perf: Fix forgotten preempt_enable by nested writers · acd35a46
      Frederic Weisbecker committed
      A writer that gets a reference to the buffer handle disables
      preemption. When we put that reference, we check if we are
      the outermost writer and, if not, we simply return and defer
      the head update to the outermost writer. The problem here
      is that preemption is only re-enabled by the outermost writer,
      which produces a preemption count imbalance for every nested
      writer that exits.
      
      So just don't forget to always re-enable preemption when we
      put the buffer reference, whoever we are.
      
      Fixes lots of sleeping-in-atomic warnings, visible when
      recording lock events.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Robert Richter <robert.richter@amd.com>
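      The imbalance and the fix, modeled in user space with a plain counter
      standing in for the kernel preempt count (all names illustrative):

          #include <stdio.h>

          static int preempt_count;   /* stand-in for the kernel's */
          static int nest;            /* writer nesting depth */

          static void get_handle(void)
          {
              preempt_count++;        /* every writer disables preemption */
              nest++;
          }

          static void put_handle(void)
          {
              if (--nest == 0)
                  printf("outermost writer: update the head\n");
              /* the fix: re-enable unconditionally, nested or not */
              preempt_count--;
          }

          int main(void)
          {
              get_handle();           /* outer writer */
              get_handle();           /* nested writer, e.g. from an NMI */
              put_handle();           /* nested exit */
              put_handle();           /* outer exit */
              printf("preempt_count = %d (must be 0)\n", preempt_count);
              return 0;
          }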
  7. 20 May 2010 (1 commit)
    • perf: Comply with new rcu checks API · 49f135ed
      Frederic Weisbecker committed
      The software events hlist doesn't fully comply with the new
      rcu checks API.
      
      We need to consider three different sides that access the hlist:
      
      - the hlist allocation/release side. This side happens when an
        event is created or released; accesses to the hlist are
        serialized under the cpuctx mutex.
      
      - the events insertion/removal in the hlist. This side is always
        serialized against the above one. The hlist is always present
        during such operations. This side happens when a software event
        is scheduled in/out. The serialization that ensures the software
        event is really attached to the context is made under the
        ctx->lock.
      
      - events triggering. This is the read side, it can happen
        concurrently with any update side.
      
      This patch deals with them one by one and anticipates the
      separate rcu mem space patches in preparation.
      
      This patch fixes various annoying rcu warnings.
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
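      A user-space model of the check that all three sides must satisfy; the
      assert plays the role of the lockdep expression the kernel would hand
      to rcu_dereference_check(), and the booleans are simplified stand-ins
      for rcu_read_lock() and the cpuctx mutex/ctx lock:

          #include <assert.h>
          #include <stdio.h>

          static int rcu_read_held;   /* read side: events triggering */
          static int mutex_held;      /* update sides: serialized paths */

          static int the_hlist;       /* stand-in for the swevent hlist */

          static int *deref_hlist(void)
          {
              /* legal from the read side OR a serialized update side */
              assert(rcu_read_held || mutex_held);
              return &the_hlist;
          }

          int main(void)
          {
              mutex_held = 1;         /* allocation/release side */
              (void)deref_hlist();
              mutex_held = 0;

              rcu_read_held = 1;      /* triggering (read) side */
              (void)deref_hlist();
              rcu_read_held = 0;

              puts("both sides satisfied the check");
              return 0;
          }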
  8. 19 May 2010 (7 commits)
  9. 11 May 2010 (3 commits)
  10. 08 May 2010 (1 commit)
    • perf_event: Make software events work again · 6e85158c
      Paul Mackerras committed
      Commit 6bde9b6c ("perf: Add
      group scheduling transactional APIs") added code to allow a
      group to be scheduled in a single transaction.  However, it
      introduced a bug in handling events whose pmu does not implement
      transactions -- at the end of scheduling in the events in the
      group, in the non-transactional case the code now falls through
      to the group_error label, and proceeds to unschedule all the
      events in the group and return failure.
      
      This fixes it by returning 0 (success) in the non-transactional
      case.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: eranian@gmail.com
      LKML-Reference: <20100508105800.GB10650@brick.ozlabs.ibm.com>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
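      The control-flow bug compresses to this sketch (has_txn and the
      function shape are illustrative, not the kernel's code): without
      transactional support, the function must return success after
      scheduling the group instead of falling through to the error path:

          #include <stdio.h>

          struct pmu { int has_txn; };

          static int group_sched_in(struct pmu *pmu, int txn_ok)
          {
              /* ... each group member scheduled in here ... */

              if (!pmu->has_txn)
                  return 0;          /* the fix: don't fall through */

              if (txn_ok)
                  return 0;
              /* group_error: unschedule every member */
              printf("rolling back group\n");
              return -1;
          }

          int main(void)
          {
              struct pmu soft = { .has_txn = 0 }, hard = { .has_txn = 1 };

              printf("software pmu: %d\n", group_sched_in(&soft, 0));
              printf("hardware pmu: %d\n", group_sched_in(&hard, 0));
              return 0;
          }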
  11. 07 May 2010 (1 commit)
    • perf: Add group scheduling transactional APIs · 6bde9b6c
      Lin Ming committed
      Add group scheduling transactional APIs to struct pmu.
      These APIs will be implemented in arch code, based on Peter's idea as
      below.
      
      > the idea behind hw_perf_group_sched_in() is to not perform
      > schedulability tests on each event in the group, but to add the group
      > as a whole and then perform one test.
      >
      > Of course, when that test fails, you'll have to roll-back the whole
      > group again.
      >
      > So start_txn (or a better name) would simply toggle a flag in the pmu
      > implementation that will make pmu::enable() not perform the
      > schedulablilty test.
      >
      > Then commit_txn() will perform the schedulability test (so note the
      > method has to have a !void return value).
      >
      > This will allow us to use the regular
      > kernel/perf_event.c::group_sched_in() and all the rollback code.
      > Currently each hw_perf_group_sched_in() implementation duplicates all
      > the rollback code (with various bugs).
      
      ->start_txn:
      Start a group-event scheduling transaction; set a flag so that
      pmu::enable() does not perform the schedulability test, which will
      instead be performed at commit time.
      
      ->commit_txn:
      Commit the group-event scheduling transaction and perform the
      schedulability test for the group as a whole.
      
      ->cancel_txn:
      Cancel the group-event scheduling transaction; clear the flag so
      that pmu::enable() performs the schedulability test again.
      Reviewed-by: Stephane Eranian <eranian@google.com>
      Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1272002160.5707.60.camel@minggr.sh.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
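      A minimal mock of the three callbacks described above (this struct pmu
      is a toy, not the kernel's): start_txn sets a flag so per-event enable
      skips the schedulability test, and commit_txn performs one test for
      the whole group, hence its non-void return:

          #include <stdio.h>

          struct pmu {
              int in_txn;     /* set by start_txn, cleared by commit/cancel */
              int n_events;
              int capacity;   /* how many counters the "hardware" has */
          };

          static void start_txn(struct pmu *p)  { p->in_txn = 1; }
          static void cancel_txn(struct pmu *p) { p->in_txn = 0; }

          static int pmu_enable(struct pmu *p)
          {
              /* inside a transaction the schedulability test is skipped;
                 single events still get tested here */
              if (!p->in_txn && p->n_events >= p->capacity)
                  return -1;
              p->n_events++;
              return 0;
          }

          static int commit_txn(struct pmu *p)
          {
              p->in_txn = 0;
              /* one schedulability test for the group as a whole */
              return p->n_events <= p->capacity ? 0 : -1;
          }

          int main(void)
          {
              struct pmu p = { 0, 0, 2 };

              start_txn(&p);
              pmu_enable(&p);
              pmu_enable(&p);
              pmu_enable(&p);            /* no per-event test: deferred */
              if (commit_txn(&p)) {
                  printf("group does not fit, roll it back\n");
                  p.n_events = 0;
              }

              start_txn(&p);             /* a group abandoned mid-way */
              pmu_enable(&p);
              cancel_txn(&p);            /* per-event tests apply again */
              p.n_events = 0;
              return 0;
          }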