1. 11 Dec 2009, 1 commit
  2. 10 Dec 2009, 1 commit
    • perf_event: Fix perf_swevent_hrtimer() variable initialization · 21140f4d
      Committed by Xiao Guangrong
      fix:
      
       [<c0477471>] ? printk+0x1d/0x24
       [<c01c98f9>] ? perf_prepare_sample+0x269/0x280
       [<c0149231>] warn_slowpath_common+0x71/0xd0
       [<c01c98f9>] ? perf_prepare_sample+0x269/0x280
       [<c01492aa>] warn_slowpath_null+0x1a/0x20
       [<c01c98f9>] perf_prepare_sample+0x269/0x280
       [<c016e9f3>] ? cpu_clock+0x53/0x90
       [<c01cc368>] __perf_event_overflow+0x2a8/0x300
       [<c01ccc3b>] perf_event_overflow+0x1b/0x30
       [<c01ccccf>] perf_swevent_hrtimer+0x7f/0x120
      
      This happens because the 'data.raw' field is not initialized (a minimal sketch follows this entry).
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <4B208E93.1010801@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
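      A minimal user-space sketch of the bug class fixed above, assuming a
      simplified sample struct whose field names only loosely mirror the
      kernel's perf_sample_data: a stack-allocated struct's optional 'raw'
      pointer contains garbage unless explicitly initialized, so a consumer
      that treats non-NULL as "payload present" can chase a wild pointer.

        #include <stdio.h>
        #include <stddef.h>

        struct raw_record {
            size_t size;
            const void *data;
        };

        struct sample_data {
            unsigned long addr;
            struct raw_record *raw;   /* optional payload; NULL means absent */
        };

        static void prepare_sample(const struct sample_data *d)
        {
            /* the consumer trusts a non-NULL .raw */
            if (d->raw)
                printf("raw payload: %zu bytes\n", d->raw->size);
            else
                printf("no raw payload\n");
        }

        int main(void)
        {
            struct sample_data d;

            d.addr = 0;
            d.raw = NULL;   /* the fix: explicitly mark the payload absent */
            prepare_sample(&d);
            return 0;
        }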
  3. 09 Dec 2009, 4 commits
  4. 06 Dec 2009, 1 commit
  5. 02 Dec 2009, 1 commit
    • perf: Don't free perf_mmap_data until work has been done · ec70ccd8
      Committed by Kristian Høgsberg
      In the CONFIG_PERF_USE_VMALLOC case, perf_mmap_data_free() only
      schedules the cleanup of the perf_mmap_data struct.  In that
      case we have to wait until that work has been done before we
      free the data (a user-space analogy follows this entry).
      Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <1259697901-1747-1-git-send-email-krh@bitplanet.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
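      A user-space analogy for the race fixed above, with a pthread standing
      in for the kernel's deferred work (names are illustrative): cleanup is
      handed off to another thread, so the releasing side must wait for that
      work to complete before freeing the shared object.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct mmap_data {
            int nr_pages;
        };

        static void *cleanup_work(void *arg)
        {
            struct mmap_data *data = arg;   /* the "scheduled" cleanup */
            printf("cleaning up %d pages\n", data->nr_pages);
            return NULL;
        }

        int main(void)
        {
            struct mmap_data *data = malloc(sizeof(*data));
            pthread_t worker;

            if (!data)
                return 1;
            data->nr_pages = 8;
            pthread_create(&worker, NULL, cleanup_work, data);

            /* without this join, free() could race with cleanup_work() */
            pthread_join(worker, NULL);
            free(data);
            return 0;
        }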
  6. 01 Dec 2009, 1 commit
  7. 27 Nov 2009, 1 commit
  8. 26 Nov 2009, 2 commits
  9. 25 Nov 2009, 1 commit
    • perf_events: Fix bad software/trace event recursion counting · fe612672
      Committed by Frederic Weisbecker
      Commit 4ed7c92d (perf_events: Undo some recursion damage)
      introduced bad reference counting of the recursion context:
      putting the context behaves like getting it, so every
      software/trace event after the first one in a context gets
      dropped (see the sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1259091502-5171-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
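      A sketch of the counting bug, assuming a single flat recursion counter
      (the real code keeps one per cpu and per context): "put" must undo
      "get"; if it increments instead, the counter never returns to zero and
      every event after the first is mistaken for recursion and dropped.

        #include <stdio.h>

        static int recursion;   /* stand-in for the per-cpu counter */

        static int get_recursion_context(void)
        {
            if (recursion)
                return -1;      /* already inside: drop this event */
            recursion++;
            return 0;
        }

        static void put_recursion_context(void)
        {
            recursion--;        /* correct: undo the get */
            /* recursion++;      broken: behaves like another get */
        }

        int main(void)
        {
            for (int i = 0; i < 3; i++) {
                if (get_recursion_context() < 0) {
                    printf("event %d dropped\n", i);
                    continue;
                }
                printf("event %d handled\n", i);
                put_recursion_context();
            }
            return 0;
        }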
  10. 24 Nov 2009, 2 commits
  11. 23 Nov 2009, 8 commits
    • perf_events: Restore sanity to scaling land · acd1d7c1
      Committed by Peter Zijlstra
      It is quite possible to call update_event_times() on a context
      that isn't actually running and thereby confuse the thing.
      
      perf stat was reporting !100% scale values for software counters
      (commit 2e2af50b, "perf_events: Disable events when we detach
      them", solved the worst of that, but some was still left).
      
      What happens is that, because we are not self-reaping (we have a
      caring parent), there is a window between the last schedule-out
      and the do_exit() call that detaches the events.
      
      This period would be accounted as enabled, !running, because
      event->state == INACTIVE even though !event->ctx->is_active.
      
      Similar issues could be observed by calling read() on an event
      while the attached task was not scheduled in (a sketch of the
      corrected accounting follows this entry).
      
      Solve this by teaching update_event_times() about
      ctx->is_active.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1258984836.4531.480.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
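      A sketch of the accounting rule this patch teaches
      update_event_times(), with simplified structs and plain numbers as
      timestamps: an INACTIVE event accrues enabled time only while its
      context is active, so the window between the last schedule-out and
      do_exit() is no longer counted.

        #include <stdio.h>

        struct ctx { int is_active; };

        struct event {
            struct ctx *ctx;
            unsigned long long tstamp_enabled;
            unsigned long long total_time_enabled;
        };

        static void update_event_times(struct event *e, unsigned long long now)
        {
            /* the fix: ignore updates from an inactive context */
            if (!e->ctx->is_active)
                return;
            e->total_time_enabled = now - e->tstamp_enabled;
        }

        int main(void)
        {
            struct ctx c = { .is_active = 1 };
            struct event e = { .ctx = &c, .tstamp_enabled = 100 };

            update_event_times(&e, 150);   /* counted: ctx is active */
            c.is_active = 0;
            update_event_times(&e, 400);   /* ignored: task scheduled out */
            printf("enabled: %llu\n", e.total_time_enabled);   /* 50, not 300 */
            return 0;
        }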
    • perf_events: Undo some recursion damage · 4ed7c92d
      Committed by Peter Zijlstra
      Make perf_swevent_get_recursion_context() return a context
      number and disable preemption.
      
      This can then be used to remove the IRQ disabling from the trace
      path and to index the per-cpu buffer.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091123103819.993226816@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix __perf_event_exit_task() vs. update_event_times() locking · f67218c3
      Committed by Peter Zijlstra
      Move the update_event_times() call in __perf_event_exit_task()
      into list_del_event() because that holds the proper lock
      (ctx->lock) and seems a more natural place to do the last time
      update.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091123103819.842455480@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Update the context time on exit · 5e942bb3
      Committed by Peter Zijlstra
      It appeared we did call update_event_times() on exit, but we
      failed to update the context time, which renders the former
      moot.
      
      Locking is a bit iffy: we call update_event_times() under
      ctx->mutex instead of ctx->lock; the next patch fixes this.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091123103819.764207355@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Disable events when we detach them · 2e2af50b
      Committed by Peter Zijlstra
      If we leave the event in STATE_INACTIVE, any read of the event
      after the detach will increase the running count but not the
      enabled count and cause funny scaling artefacts.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091123103819.689055515@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix style nits · 6c2bfcbe
      Committed by Peter Zijlstra
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091123103819.613427378@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Undo copy/paste damage · a66a3052
      Committed by Peter Zijlstra
      We had two almost identical functions; avoid the duplication.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <20091123103819.537537928@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Optimize the swcounter hotpath · a4234bfc
      Committed by Ingo Molnar
      The structure initialization creates a big memcpy, which shows
      up big time in perf annotate output:
      
                :      ffffffff810a859d <__perf_sw_event>:
           1.68 :      ffffffff810a859d:       55                      push   %rbp
           1.69 :      ffffffff810a859e:       41 89 fa                mov    %edi,%r10d
           0.01 :      ffffffff810a85a1:       49 89 c9                mov    %rcx,%r9
           0.00 :      ffffffff810a85a4:       31 c0                   xor    %eax,%eax
           1.71 :      ffffffff810a85a6:       b9 16 00 00 00          mov    $0x16,%ecx
           0.00 :      ffffffff810a85ab:       48 89 e5                mov    %rsp,%rbp
           0.00 :      ffffffff810a85ae:       48 83 ec 60             sub    $0x60,%rsp
           1.52 :      ffffffff810a85b2:       48 8d 7d a0             lea    -0x60(%rbp),%rdi
          85.20 :      ffffffff810a85b6:       f3 ab                   rep stos %eax,%es:(%rdi)
      
      None of the callees depends on the structure being
      pre-initialized, so only initialize ->addr.  This gets rid of
      the memcpy overhead (see the sketch after this entry).
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
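      A sketch of the optimization, with an invented struct layout (the
      kernel struct differs, but the effect is the same): brace
      initialization zero-fills the whole structure, which is the rep stos
      in the annotation above, while assigning only the field the callees
      actually read avoids that cost.

        #include <stdio.h>

        struct sample {
            unsigned long addr;
            unsigned long pad[20];   /* bulk that makes zero-fill expensive */
        };

        static void consume(const struct sample *s)
        {
            /* only ->addr is read; nothing needs the rest zeroed */
            printf("addr = %lu\n", s->addr);
        }

        int main(void)
        {
            /* before: zero-initializes the entire struct */
            struct sample slow = { .addr = 42 };
            consume(&slow);

            /* after: touch only the field that matters */
            struct sample fast;
            fast.addr = 42;
            consume(&fast);
            return 0;
        }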
  12. 22 Nov 2009, 3 commits
    • perf_events: Fix modular build · 645e8cc0
      Committed by Ingo Molnar
      Fix:
      
        ERROR: "perf_swevent_put_recursion_context" [fs/ext4/ext4.ko] undefined!
        ERROR: "perf_swevent_get_recursion_context" [fs/ext4/ext4.ko] undefined!
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <1258864015-10579-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_event: Remove redundant zero fill · 96b02d78
      Committed by Márton Németh
      The buffer is first zeroed out by memset().  Then strncpy() is
      used to fill in the content.  The strncpy() function pads the
      string to the end of the specified length, which is redundant,
      and it does not ensure that the string is NUL-terminated.  Use
      strlcpy() instead (see the sketch after this entry).
      
      The semantic match that finds this kind of pattern is as
      follows: (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @@
      expression buffer;
      expression size;
      expression str;
      @@
      	memset(buffer, 0, size);
      	...
      -	strncpy(
      +	strlcpy(
      	buffer, str, sizeof(buffer)
      	);
      @@
      expression buffer;
      expression size;
      expression str;
      @@
      	memset(&buffer, 0, size);
      	...
      -	strncpy(
      +	strlcpy(
      	&buffer, str, sizeof(buffer));
      @@
      expression buffer;
      identifier field;
      expression size;
      expression str;
      @@
      	memset(buffer, 0, size);
      	...
      -	strncpy(
      +	strlcpy(
      	buffer->field, str, sizeof(buffer->field)
      	);
      @@
      expression buffer;
      identifier field;
      expression size;
      expression str;
      @@
      	memset(&buffer, 0, size);
      	...
      -	strncpy(
      +	strlcpy(
      	buffer.field, str, sizeof(buffer.field));
      // </smpl>
      
      On strncpy() vs. strlcpy(), see
      http://www.gratisoft.us/todd/papers/strlcpy.html
      Signed-off-by: Márton Németh <nm127@freemail.hu>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: cocci@diku.dk
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <4B086547.5040100@freemail.hu>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
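      A sketch contrasting the two calls.  strlcpy() is a BSD/kernel routine
      that older glibc does not provide, so a minimal local version is
      included here: it copies at most size-1 bytes and always terminates,
      without strncpy()'s padding of the remainder.

        #include <stdio.h>
        #include <string.h>

        static size_t my_strlcpy(char *dst, const char *src, size_t size)
        {
            size_t len = strlen(src);

            if (size) {
                size_t n = len < size - 1 ? len : size - 1;
                memcpy(dst, src, n);
                dst[n] = '\0';   /* always NUL-terminate */
            }
            return len;          /* length it tried to create */
        }

        int main(void)
        {
            char a[8], b[8];

            /* strncpy: fills all 8 bytes, no terminating NUL on truncation */
            strncpy(a, "comm-name-too-long", sizeof(a));

            /* strlcpy: truncates safely and terminates */
            my_strlcpy(b, "comm-name-too-long", sizeof(b));
            printf("strlcpy result: \"%s\"\n", b);   /* "comm-na" */
            return 0;
        }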
    • tracing: Use the perf recursion protection from trace event · ce71b9df
      Committed by Frederic Weisbecker
      When we commit a trace to perf, we first check whether we are
      recursing in the same buffer so that we don't mess up the buffer
      with a recursing trace.  But later on, we do the same check from
      perf to avoid commit recursion.  The recursion check is desired
      early, before we touch the buffer, but we want to do it only
      once (a sketch of the exported helper follows this entry).
      
      Then export the recursion protection from perf and use it from
      the trace events before submitting a trace.
      
      v2: Put appropriate Reported-by tag
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <1258864015-10579-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
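      A sketch of the exported protection, assuming four execution contexts
      selected by an explicit level argument (the kernel derives task,
      softirq, hardirq or NMI from its own state): the check happens once,
      early, before any buffer is touched, and the returned context number
      is what the caller later hands back to the put side.

        #include <stdio.h>

        #define NR_CONTEXTS 4

        static int recursion[NR_CONTEXTS];

        static int get_recursion_context(int level)   /* level: 0..3 */
        {
            if (recursion[level])
                return -1;        /* same-context recursion: bail early */
            recursion[level] = 1;
            return level;         /* context number for put() */
        }

        static void put_recursion_context(int rctx)
        {
            recursion[rctx] = 0;
        }

        int main(void)
        {
            int rctx = get_recursion_context(0);

            if (rctx < 0)
                return 0;
            printf("tracing in context %d\n", rctx);
            if (get_recursion_context(0) < 0)   /* nested, same context */
                printf("nested event dropped\n");
            put_recursion_context(rctx);
            return 0;
        }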
  13. 21 Nov 2009, 14 commits
    • perf_events: Fix default watermark calculation · 8904b180
      Committed by Stephane Eranian
      This patch fixes the default watermark value for the sampling
      buffer.  With the existing calculation (watermark =
      max(PAGE_SIZE, max_size / 2)), no notification was ever received
      when the buffer was exactly 1 page, because the threshold could
      never be crossed (there are no partial samples).
      
      In certain configurations, there was no way of detecting the
      problem because there was not enough space left to store the
      LOST record.  In fact, there may be a more generic problem here:
      the kernel should ensure that there is always enough space to
      store one LOST record.
      
      This patch sets the default watermark to half the buffer size.
      With such a limit, we are guaranteed to get a notification even
      with a single-page buffer, assuming no sample is bigger than a
      page (see the arithmetic sketch after this entry).
      Signed-off-by: Stephane Eranian <eranian@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212509.344964101@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <1256302576-6169-1-git-send-email-eranian@gmail.com>
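      The arithmetic behind the fix, as a small self-contained program: with
      the old default, a one-page buffer gets a watermark equal to its full
      size, which no whole sample can ever cross; half the buffer size fires
      even for one page.

        #include <stdio.h>

        #define PAGE_SIZE 4096UL

        static unsigned long old_watermark(unsigned long max_size)
        {
            unsigned long half = max_size / 2;
            return half > PAGE_SIZE ? half : PAGE_SIZE;
        }

        static unsigned long new_watermark(unsigned long max_size)
        {
            return max_size / 2;
        }

        int main(void)
        {
            unsigned long one_page = PAGE_SIZE;   /* 1-page sampling buffer */

            printf("old: %lu (equals buffer size, never crossed)\n",
                   old_watermark(one_page));
            printf("new: %lu (crossed by the first half page)\n",
                   new_watermark(one_page));
            return 0;
        }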
    • perf: Fix locking for PERF_FORMAT_GROUP · 6f10581a
      Committed by Peter Zijlstra
      We should hold event->child_mutex when iterating the inherited
      counters, and ctx->mutex when iterating siblings.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212509.251030114@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix event scaling for inherited counters · 59ed446f
      Committed by Peter Zijlstra
      Properly account the full hierarchy of counters for both the
      count (we already did so) and the scale times (new); a small
      sketch follows this entry.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212509.153379276@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
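      A sketch of the accumulation, with a simplified one-child-per-level
      hierarchy: the parent's reported totals now sum the enabled and
      running times across all inherited children, not just the raw count.

        #include <stdio.h>

        struct event {
            unsigned long long count, enabled, running;
            struct event *child;   /* simplified inheritance chain */
        };

        static void accumulate(const struct event *e,
                               unsigned long long *count,
                               unsigned long long *enabled,
                               unsigned long long *running)
        {
            for (; e; e = e->child) {
                *count   += e->count;
                *enabled += e->enabled;   /* previously only count was summed */
                *running += e->running;
            }
        }

        int main(void)
        {
            struct event child  = { 100, 50, 25, NULL };
            struct event parent = { 400, 200, 100, &child };
            unsigned long long c = 0, en = 0, r = 0;

            accumulate(&parent, &c, &en, &r);
            printf("count=%llu enabled=%llu running=%llu\n", c, en, r);
            return 0;
        }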
    • perf: Fix time locking · 2b8988c9
      Committed by Peter Zijlstra
      Most sites updating ctx->time and the event times do so under
      ctx->lock; make sure they all do.
      
      This was made possible by removing the __perf_event_read() call
      from __perf_event_sync_stat(), which already had this lock
      taken.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212509.102316434@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Simplify __perf_event_read · 58e5ad1d
      Committed by Peter Zijlstra
      The cpuctx is always active, and a task context is always active
      for current.
      
      The previous condition verified that if it is a task context it
      is for current, hence we can assume ctx->is_active.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212509.000272254@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Simplify __perf_event_sync_stat · 3dbebf15
      Committed by Peter Zijlstra
      Removes constraints from __perf_event_read() by leaving it with
      a single callsite; this callsite has ctx->lock held, whereas the
      removed one did not.
      
      Removes some superfluous code from __perf_event_sync_stat().
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.918544317@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize __perf_event_read() · f6f83785
      Committed by Peter Zijlstra
      Both callers actually have IRQs disabled; there is no need to do
      so again.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.863685796@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize perf_event_task_sched_out · 02ffdbc8
      Committed by Peter Zijlstra
      Move the update_context_time() call out of the common
      perf_event_task_sched_out() path and into the branch where it is
      needed.
      
      The call was both superfluous, because __perf_event_sched_out()
      already does it, and wrong, because it was done without holding
      ctx->lock.
      
      Place it in perf_event_sync_stat(), which is the only place it
      is needed and which already holds ctx->lock.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.779516394@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix PERF_FORMAT_GROUP scale info · abf4868b
      Committed by Peter Zijlstra
      As Corey reported, the total_enabled and total_running times
      could occasionally be 0, even though there were events counted.
      
      It turns out this is because we recorded the times before
      reading the counter, while it is the read that updates the
      times.
      
      This patch corrects the ordering (see the sketch after this
      entry).
      
      While looking at this code I found that there is a lot of
      locking iffiness around; the following patches correct most of
      that.
      Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.685559857@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
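      A sketch of the ordering bug, with the time update folded into the
      read as in the description above: snapshotting the times before the
      read can pair a stale (zero) time with a fresh count; reading first
      and snapshotting afterwards yields a consistent pair.

        #include <stdio.h>

        struct event {
            unsigned long long count, total_enabled;
        };

        /* reading the count also refreshes the time, as described above */
        static unsigned long long read_count(struct event *e,
                                             unsigned long long now)
        {
            e->total_enabled = now;
            return e->count;
        }

        int main(void)
        {
            struct event e = { .count = 1234 };   /* time still 0 */
            unsigned long long enabled, count;

            /* broken order: time first, count second */
            enabled = e.total_enabled;            /* reads 0 */
            count   = read_count(&e, 500);
            printf("broken: count=%llu enabled=%llu\n", count, enabled);

            /* fixed order: read the count, then snapshot the time */
            count   = read_count(&e, 600);
            enabled = e.total_enabled;
            printf("fixed:  count=%llu enabled=%llu\n", count, enabled);
            return 0;
        }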
    • perf: Optimize perf_event_mmap_ctx() · f6d9dd23
      Committed by Peter Zijlstra
      Remove a rcu_read_{,un}lock() pair and a few conditionals.
      
      We can remove the rcu_read_lock() by increasing the scope of one
      in the calling function.
      
      We can do away with the system_state check if the machine still
      boots after this patch (seems to be the case).
      
      We can do away with the list_empty() check because the bare
      list_for_each_entry_rcu() reduces to that now that we've removed
      everything else (see the sketch after this entry, which applies
      to the three sibling commits below as well).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.606459548@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
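      A plain-list sketch of the last simplification, which applies equally
      to the three sibling commits below: a for-each over an empty list runs
      its body zero times, so a separate emptiness check adds nothing once
      the other conditions are gone (the kernel variant is
      list_for_each_entry_rcu() under rcu_read_lock()).

        #include <stdio.h>

        struct node { int val; struct node *next; };

        static void visit_all(struct node *head)
        {
            /* no `if (!head) return;` needed: the loop handles it */
            for (struct node *n = head; n; n = n->next)
                printf("visit %d\n", n->val);
        }

        int main(void)
        {
            struct node b = { 2, NULL }, a = { 1, &b };

            visit_all(&a);     /* visits 1 and 2 */
            visit_all(NULL);   /* empty list: body never runs */
            return 0;
        }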
    • perf: Optimize perf_event_comm_ctx() · f6595f3a
      Committed by Peter Zijlstra
      Remove a rcu_read_{,un}lock() pair and a few conditionals.
      
      We can remove the rcu_read_lock() by increasing the scope of one
      in the calling function.
      
      We can do away with the system_state check if the machine still
      boots after this patch (seems to be the case).
      
      We can do away with the list_empty() check because the bare
      list_for_each_entry_rcu() reduces to that now that we've removed
      everything else.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.527608793@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize perf_event_task_ctx() · d6ff86cf
      Committed by Peter Zijlstra
      Remove a rcu_read_{,un}lock() pair and a few conditionals.
      
      We can remove the rcu_read_lock() by increasing the scope of one
      in the calling function.
      
      We can do away with the system_state check if the machine still
      boots after this patch (seems to be the case).
      
      We can do away with the list_empty() check because the bare
      list_for_each_entry_rcu() reduces to that now that we've removed
      everything else.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.452227115@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize perf_swevent_ctx_event() · 81520183
      Committed by Peter Zijlstra
      Remove a rcu_read_{,un}lock() pair and a few conditionals.
      
      We can remove the rcu_read_lock() by increasing the scope of one
      in the calling function.
      
      We can do away with the system_state check if the machine still
      boots after this patch (seems to be the case).
      
      We can do away with the list_empty() check because the bare
      list_for_each_entry_rcu() reduces to that now that we've removed
      everything else.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.378188589@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize some swcounter attr.sample_period==1 paths · 0cff784a
      Committed by Peter Zijlstra
      Avoid the rather expensive perf_swevent_set_period() if we know
      we have to sample every single event anyway (sketched after this
      entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091120212508.299508332@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
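      A sketch of the fast path, with invented bookkeeping standing in for
      perf_swevent_set_period(): a period of 1 means every event is sampled,
      so the period arithmetic can be bypassed entirely.

        #include <stdio.h>
        #include <stdbool.h>

        struct event {
            unsigned long long sample_period;
            unsigned long long period_left;
        };

        /* stand-in for the costly set-period bookkeeping */
        static bool period_elapsed(struct event *e)
        {
            if (e->period_left <= 1) {
                e->period_left = e->sample_period;
                return true;     /* period elapsed: emit a sample */
            }
            e->period_left--;
            return false;
        }

        static void count_event(struct event *e)
        {
            if (e->sample_period == 1) {   /* fast path: sample everything */
                printf("sample (fast path)\n");
                return;
            }
            if (period_elapsed(e))
                printf("sample (slow path)\n");
        }

        int main(void)
        {
            struct event every = { .sample_period = 1 };
            struct event third = { .sample_period = 3, .period_left = 3 };

            for (int i = 0; i < 3; i++)
                count_event(&every);   /* three samples */
            for (int i = 0; i < 3; i++)
                count_event(&third);   /* one sample */
            return 0;
        }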