1. 28 Feb 2013, 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Committed by Sasha Levin
      I'm not sure why, but the hlist for-each-entry iterators were conceived
      differently from the list ones:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
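
      For illustration, a call-site conversion looks roughly like this
      (struct item, head, node and use() are made-up names):

      	/* Before: a separate struct hlist_node cursor was required. */
      	struct hlist_node *pos;
      	struct item *obj;

      	hlist_for_each_entry(obj, pos, head, node)
      		use(obj);

      	/* After: same shape as list_for_each_entry(). */
      	hlist_for_each_entry(obj, head, node)
      		use(obj);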
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
       were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
        hlist_for_each_entry_from, hlist_for_each_entry_rcu,
        hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
        for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
        inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
        sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
        hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
        nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
        nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
        for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 19 Feb 2013, 1 commit
    • ftrace: Call ftrace cleanup module notifier after all other notifiers · 8c189ea6
      Committed by Steven Rostedt (Red Hat)
      Commit: c1bf08ac "ftrace: Be first to run code modification on modules"
      
      changed ftrace module notifier's priority to INT_MAX in order to
      process the ftrace nops before anything else could touch them
      (namely kprobes). This was the correct thing to do.
      
      Unfortunately, the ftrace module notifier also contains the ftrace
      clean up code. As opposed to the set up code, this code should be
      run *after* all the module notifiers have run in case a module is doing
      correct clean-up and unregisters its ftrace hooks. Basically, ftrace
      needs to do clean up on module removal, as it needs to know about code
      being removed so that it doesn't try to modify that code. But after it
      removes the module from its records, if an ftrace user tries to remove
      a probe, that removal will fail because the record of that code segment
      no longer exists.
      
      Nothing really bad happens if the probe removal is called after ftrace
      did the clean up, but the ftrace removal function will return an error.
      Correct code (such as kprobes) will produce a WARN_ON() if it fails
      to remove the probe. As people get annoyed by frivolous warnings, it's
      best to do the ftrace clean up after everything else.
      
      By splitting the ftrace_module_notifier into two notifiers, one that
      does the module load setup that is run at high priority, and the other
      that is called for module clean up that is run at low priority, the
      problem is solved.
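
      The shape of the fix is presumably two notifier blocks with opposite
      priorities, along these lines (a sketch; the real code lives in
      kernel/trace/ftrace.c):

      	static struct notifier_block ftrace_module_enter_nb = {
      		.notifier_call = ftrace_module_notify_enter,
      		.priority = INT_MAX,	/* set up nops before kprobes etc. */
      	};

      	static struct notifier_block ftrace_module_exit_nb = {
      		.notifier_call = ftrace_module_notify_exit,
      		.priority = INT_MIN,	/* clean up after everything else */
      	};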
      
      Cc: stable@vger.kernel.org
      Reported-by: Frank Ch. Eigler <fche@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 13 Feb 2013, 1 commit
    • tracing/syscalls: Allow archs to ignore tracing compat syscalls · f431b634
      Committed by Steven Rostedt
      The tracing of ia32 compat system calls has been a bit of a pain as they
      use different system call numbers than the 64-bit equivalents.
      
      I wrote a simple 'lls' program that lists files. I compiled it as an i686
      ELF binary and ran it on an x86_64 box. This is the result:
      
      echo 0 > /debug/tracing/tracing_on
      echo 1 > /debug/tracing/events/syscalls/enable
      echo 1 > /debug/tracing/tracing_on ; ./lls ; echo 0 > /debug/tracing/tracing_on
      
      grep lls /debug/tracing/trace
      
      [.. skipping calls before TS_COMPAT is set ...]
      
                   lls-1127  [005] d...   936.409188: sys_recvfrom(fd: 0, ubuf: 4d560fc4, size: 0, flags: 8048034, addr: 8, addr_len: f7700420)
                   lls-1127  [005] d...   936.409190: sys_recvfrom -> 0x8a77000
                   lls-1127  [005] d...   936.409211: sys_lgetxattr(pathname: 0, name: 1000, value: 3, size: 22)
                   lls-1127  [005] d...   936.409215: sys_lgetxattr -> 0xf76ff000
                   lls-1127  [005] d...   936.409223: sys_dup2(oldfd: 4d55ae9b, newfd: 4)
                   lls-1127  [005] d...   936.409228: sys_dup2 -> 0xfffffffffffffffe
                   lls-1127  [005] d...   936.409236: sys_newfstat(fd: 4d55b085, statbuf: 80000)
                   lls-1127  [005] d...   936.409242: sys_newfstat -> 0x3
                   lls-1127  [005] d...   936.409243: sys_removexattr(pathname: 3, name: ffcd0060)
                   lls-1127  [005] d...   936.409244: sys_removexattr -> 0x0
                   lls-1127  [005] d...   936.409245: sys_lgetxattr(pathname: 0, name: 19614, value: 1, size: 2)
                   lls-1127  [005] d...   936.409248: sys_lgetxattr -> 0xf76e5000
                   lls-1127  [005] d...   936.409248: sys_newlstat(filename: 3, statbuf: 19614)
                   lls-1127  [005] d...   936.409249: sys_newlstat -> 0x0
                   lls-1127  [005] d...   936.409262: sys_newfstat(fd: f76fb588, statbuf: 80000)
                   lls-1127  [005] d...   936.409279: sys_newfstat -> 0x3
                   lls-1127  [005] d...   936.409279: sys_close(fd: 3)
                   lls-1127  [005] d...   936.421550: sys_close -> 0x200
                   lls-1127  [005] d...   936.421558: sys_removexattr(pathname: 3, name: ffcd00d0)
                   lls-1127  [005] d...   936.421560: sys_removexattr -> 0x0
                   lls-1127  [005] d...   936.421569: sys_lgetxattr(pathname: 4d564000, name: 1b1abc, value: 5, size: 802)
                   lls-1127  [005] d...   936.421574: sys_lgetxattr -> 0x4d564000
                   lls-1127  [005] d...   936.421575: sys_capget(header: 4d70f000, dataptr: 1000)
                   lls-1127  [005] d...   936.421580: sys_capget -> 0x0
                   lls-1127  [005] d...   936.421580: sys_lgetxattr(pathname: 4d710000, name: 3000, value: 3, size: 812)
                   lls-1127  [005] d...   936.421589: sys_lgetxattr -> 0x4d710000
                   lls-1127  [005] d...   936.426130: sys_lgetxattr(pathname: 4d713000, name: 2abc, value: 3, size: 32)
                   lls-1127  [005] d...   936.426141: sys_lgetxattr -> 0x4d713000
                   lls-1127  [005] d...   936.426145: sys_newlstat(filename: 3, statbuf: f76ff3f0)
                   lls-1127  [005] d...   936.426146: sys_newlstat -> 0x0
                   lls-1127  [005] d...   936.431748: sys_lgetxattr(pathname: 0, name: 1000, value: 3, size: 22)
      
      Obviously I'm not calling newfstat with an fd of 4d55b085. The traced
      calls are clearly incorrect and confusing.
      
      Other efforts have been made to fix this:
      
      https://lkml.org/lkml/2012/3/26/367
      
      But the real solution is to rewrite the syscall internals and come up
      with a fixed solution, one that doesn't require all the kludge that the
      current solution has.
      
      Thus for now, instead of outputting incorrect data, simply ignore these
      syscalls. With this patch, the same test now shows:
      
       #> grep lls /debug/tracing/trace
       #>
      
      Compat system calls simply are not traced. If users need compat
      syscalls, then they should just use the raw syscall tracepoints.
      
      For an architecture to have its compat syscalls ignored, it must
      define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS (done in asm/ftrace.h) and also
      define an arch_trace_is_compat_syscall() function that will return true
      if syscall tracing should be ignored for the current task.
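
      On x86 the arch hook presumably reduces to the existing compat-task
      test; a sketch of the asm/ftrace.h side, assuming is_compat_task()
      is the right predicate:

      	#if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_IA32_EMULATION)
      	#include <asm/compat.h>
      	#define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS 1

      	static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
      	{
      		/* Ignore syscall tracing for ia32 compat tasks. */
      		return is_compat_task();
      	}
      	#endif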
      
      I want to stress that this change does not affect actual syscalls in any
      way, shape or form. It is only used within the tracing system and
      doesn't interfere with the syscall logic at all. The changes are
      consolidated nicely into trace_syscalls.c and asm/ftrace.h.
      
      I had to make one small modification to asm/thread_info.h, and that was
      to remove the include of asm/ftrace.h. As asm/ftrace.h required
      current_thread_info(), it was causing include hell. That include was
      added back in 2008 when the function graph tracer was added:
      
       commit caf4b323 "tracing, x86: add low level support for ftrace return tracing"
      
      It does not need to be included there.
      
      Link: http://lkml.kernel.org/r/1360703939.21867.99.camel@gandalf.local.home
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 09 Feb 2013, 12 commits
    • uprobes/perf: Avoid uprobe_apply() whenever possible · b2fe8ba6
      Committed by Oleg Nesterov
      uprobe_perf_open/close call the costly uprobe_apply() every time;
      we can avoid it if:
      
      	- "nr_systemwide != 0" is not changed.
      
      	- There is another process/thread with the same ->mm.
      
      	- copy_process() does inherit_event(). dup_mmap() preserves the
      	  inserted breakpoints.
      
      	- event->attr.enable_on_exec == T, we can rely on uprobe_mmap()
      	  called by exec/mmap paths.
      
      	- tp_target is exiting. Only _close() checks PF_EXITING; I don't
      	  think TRACE_REG_PERF_OPEN can hit a dying task too often.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/perf: Teach trace_uprobe/perf code to use UPROBE_HANDLER_REMOVE · f42d24a1
      Committed by Oleg Nesterov
      Change uprobe_trace_func() and uprobe_perf_func() to return "int". Change
      uprobe_dispatcher() to return "trace_ret | perf_ret", although this is not
      needed; currently TP_FLAG_TRACE/TP_FLAG_PROFILE are mutually exclusive.
      
      The only functional change is that uprobe_perf_func() checks the filtering
      too and returns UPROBE_HANDLER_REMOVE if nobody wants to trace current.
      
      Testing:
      
      	# perf probe -x /lib/libc.so.6 syscall
      
      	# perf record -e probe_libc:syscall -i perl -e 'fork; syscall -1 for 1..10; wait'
      
      	# perf report --show-total-period
      		100.00%            10     perl  libc-2.8.so    [.] syscall
      
      Before this patch:
      
      	# cat /sys/kernel/debug/tracing/uprobe_profile
      		/lib/libc.so.6 syscall				20
      
      A child process doesn't have a counter, but it still hits this
      breakpoint "copied" by dup_mmap().
      
      After the patch:
      
      	# cat /sys/kernel/debug/tracing/uprobe_profile
      		/lib/libc.so.6 syscall				11
      
      The child process hits this int3 only once and does unapply_uprobe().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/perf: Teach trace_uprobe/perf code to pre-filter · 31ba3348
      Committed by Oleg Nesterov
      Finally implement uprobe_perf_filter() which checks ->nr_systemwide or
      ->perf_events to figure out whether we need to insert the breakpoint.
      
      uprobe_perf_open/close are changed to do uprobe_apply(true/false) when
      the new perf event comes or goes away.
      
      Note that currently this is very suboptimal:
      
      	- uprobe_register() called by TRACE_REG_PERF_REGISTER becomes a
      	  heavy nop; consumer->filter() always returns false at this stage.
      
      	  As it was already discussed we need uprobe_register_only() to
      	  avoid the costly register_for_each_vma() when possible.
      
      	- uprobe_apply() is often overkill. Unless "nr_systemwide != 0"
      	  changes we need uprobe_apply_mm(); unapply_uprobe() is almost
      	  what we need.
      
      	- uprobe_apply() can be simply avoided sometimes, see the next
      	  changes.
      
      Testing:
      
      	# perf probe -x /lib/libc.so.6 syscall
      
      	# perl -e 'syscall -1 while 1' &
      	[1] 530
      
      	# perf record -e probe_libc:syscall perl -e 'syscall -1 for 1..10; sleep 1'
      
      	# perf report --show-total-period
      		100.00%            10     perl  libc-2.8.so    [.] syscall
      
      Before this patch:
      
      	# cat /sys/kernel/debug/tracing/uprobe_profile
      		/lib/libc.so.6 syscall				79291
      
      A huge ->nrhit == 79291 reflects the fact that the background process
      530 constantly hits this breakpoint too, even though it doesn't
      contribute to the output.
      
      After the patch:
      
      	# cat /sys/kernel/debug/tracing/uprobe_profile
      		/lib/libc.so.6 syscall				10
      
      This shows that only the target process was punished by int3.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/perf: Teach trace_uprobe/perf code to track the active perf_event's · 736288ba
      Committed by Oleg Nesterov
      Introduce "struct trace_uprobe_filter" which records the "active"
      perf_events attached to ftrace_event_call. For a start we simply
      use a list_head; we can optimize this later if needed. For example, we
      do not really need to record an event with ->parent != NULL, as we can
      rely on parent->child_list. And we can certainly do some optimizations
      for the case when two events have the same ->tp_target or tp_target->mm.
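
      The structure plausibly looks like this (a sketch; nr_systemwide
      anticipates the pre-filtering added by the related patches):

      	struct trace_uprobe_filter {
      		rwlock_t		rwlock;		/* protects everything below */
      		int			nr_systemwide;	/* events with no tp_target */
      		struct list_head	perf_events;	/* active perf_events */
      	};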
      
      Change trace_uprobe_register() to process TRACE_REG_PERF_OPEN/CLOSE
      and add/del this perf_event to the list.
      
      We can probably avoid any locking, but let's start with the "obviously
      correct" trace_uprobe_filter->rwlock which protects everything.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/perf: Always increment trace_uprobe->nhit · 1b47aefd
      Committed by Oleg Nesterov
      Move tu->nhit++ from uprobe_trace_func() to uprobe_dispatcher().
      
      ->nhit counts how many times we hit the breakpoint inserted by this
      uprobe; we do not want to lose this info if the uprobe was enabled by
      sys_perf_event_open().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    • uprobes/tracing: Kill uprobe_trace_consumer, embed uprobe_consumer into trace_uprobe · a932b738
      Committed by Oleg Nesterov
      trace_uprobe->consumer and "struct uprobe_trace_consumer" add an
      unnecessary indirection and complicate the code for no reason.
      
      This patch simply embeds uprobe_consumer into "struct trace_uprobe",
      all other changes only fix the compilation errors.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/tracing: Introduce is_trace_uprobe_enabled() · b64b0077
      Committed by Oleg Nesterov
      probe_event_enable/disable() check tu->consumer != NULL to avoid the
      wrong uprobe_register/unregister().
      
      We are going to kill this pointer and "struct uprobe_trace_consumer",
      so we add the new helper, is_trace_uprobe_enabled(), which can rely
      on TP_FLAG_TRACE/TP_FLAG_PROFILE instead.
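
      Given that description, the helper is presumably a one-liner over the
      existing flag bits, something like:

      	static inline bool is_trace_uprobe_enabled(struct trace_uprobe *tu)
      	{
      		return tu->flags & (TP_FLAG_TRACE | TP_FLAG_PROFILE);
      	}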
      
      Note: the current logic doesn't look optimal; it is not clear why
      TP_FLAG_TRACE/TP_FLAG_PROFILE are mutually exclusive, and we will
      probably change this later.
      
      Also kill the unused TP_FLAG_UPROBE.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    • uprobes/tracing: Ensure inode != NULL in create_trace_uprobe() · 7e4e28c5
      Committed by Oleg Nesterov
      probe_event_enable/disable() check tu->inode != NULL at the start.
      This is ugly; if igrab() can fail, create_trace_uprobe() should not
      succeed and "postpone" the failure.
      
      And the S_ISREG(inode->i_mode) check added by d24d7dbf is not safe.
      
      Note: alloc_uprobe() should probably check igrab() != NULL as well.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    • uprobes/tracing: Fully initialize uprobe_trace_consumer before uprobe_register() · 4161824f
      Committed by Oleg Nesterov
      probe_event_enable() does uprobe_register() and only after that sets
      utc->tu and tu->consumer/flags. This can race with uprobe_dispatcher(),
      which can miss these assignments or see them out of order. Nothing
      really bad can happen, but this doesn't look clean/safe.
      
      And this does not allow us to use the uprobe_consumer->filter() we are
      going to add; it is called by uprobe_register() and it needs utc->tu.
      
      Change this code to initialize everything before uprobe_register(), and
      reset tu->consumer/flags if it fails. We can't race with event_disable();
      the caller holds event_mutex, and if we could, the code would be wrong
      anyway.
      
      In fact I think uprobe_trace_consumer should die; it buys nothing but
      complicates the code. We can simply add uprobe_consumer into trace_uprobe.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    • uprobes/tracing: Fix dentry/mount leak in create_trace_uprobe() · 84d7ed79
      Committed by Oleg Nesterov
      create_trace_uprobe() does kern_path() to find ->d_inode, but forgets
      to do path_put(). We can do this right after igrab().
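
      The fix presumably drops the path reference right after taking the
      inode reference; a sketch:

      	ret = kern_path(filename, LOOKUP_FOLLOW, &path);
      	if (ret)
      		return ret;

      	inode = igrab(path.dentry->d_inode);
      	path_put(&path);	/* we hold our own inode reference now */
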
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    • uprobes: Change handle_swbp() to expose bp_vaddr to handler_chain() · 74e59dfc
      Committed by Oleg Nesterov
      Change handle_swbp() to set regs->ip = bp_vaddr in advance; this is
      what consumer->handler() needs, but uprobe_get_swbp_addr() is not
      exported.
      
      This also simplifies the code and makes it more consistent across
      the supported architectures. handle_swbp() becomes the only caller
      of uprobe_get_swbp_addr().
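
      In other words, handle_swbp() presumably ends up doing something along
      these lines before invoking the consumers (instruction_pointer_set()
      being the generic way to write regs->ip):

      	/* Expose the probed address to every consumer->handler(). */
      	instruction_pointer_set(regs, bp_vaddr);
      	handler_chain(uprobe, regs);
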
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
    • uprobes: Kill uprobe_consumer->filter() · fe20d71f
      Committed by Oleg Nesterov
      uprobe_consumer->filter() is pointless in its current form, kill it.
      
      We will add it back, but with a different signature/semantics. Perhaps
      we will even re-introduce the callsite in handler_chain(), but not to
      just skip uc->handler().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  5. 08 Feb 2013, 1 commit
  6. 02 Feb 2013, 1 commit
    • tracing: Init current_trace to nop_trace and remove NULL checks · d840f718
      Committed by Steven Rostedt (Red Hat)
      On early boot up, when the ftrace ring buffer is initialized, the
      static variable current_trace is initialized to &nop_trace.
      Before this initialization, current_trace is NULL; after it, the
      variable never becomes NULL again, as it is only ever reassigned to
      another ftrace tracer.
      
      Several places check if current_trace is NULL before using it, and
      this check is frivolous, because at the point in time when the checks
      are made the only way current_trace could be NULL is if ftrace failed
      its allocations at boot up, and the paths to these locations would
      probably not be possible.
      
      By initializing current_trace to &nop_trace where it is declared,
      current_trace will never be NULL, and we can remove all these
      NULL checks, which never needed to be made in the first place.
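
      That is, the declaration presumably becomes something like:

      	/* current_trace can now never be NULL: it starts as nop_trace. */
      	static struct tracer *current_trace = &nop_trace;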
      
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 31 Jan 2013, 4 commits
  8. 30 Jan 2013, 1 commit
    • tracing/fgraph: Adjust fgraph depth before calling trace return callback · 03274a3f
      Committed by Steven Rostedt (Red Hat)
      While debugging the virtual cputime with the function graph tracer
      with a max_depth of 1 (most common use of the max_depth so far),
      I found that I was missing kernel execution because of a race condition.
      
      The code for the return side of the function has a slight race:
      
      	ftrace_pop_return_trace(&trace, &ret, frame_pointer);
      	trace.rettime = trace_clock_local();
      	ftrace_graph_return(&trace);
      	barrier();
      	current->curr_ret_stack--;
      
      The ftrace_pop_return_trace() initializes the trace structure for
      the callback. The ftrace_graph_return() uses the trace structure
      for its own use, as that structure is on the stack and is local
      to this function. Then curr_ret_stack is decremented, which is
      the value that trace.depth is set to.
      
      If an interrupt comes in after the ftrace_graph_return() but
      before the curr_ret_stack update, then the called function will get
      a depth of 2. If max_depth is set to 1 this function will be
      ignored.
      
      The problem is that the trace has already been called, and the
      timestamp for that trace will not reflect the time the function
      was about to re-enter userspace. Calls within the interrupt will not
      be traced because max_depth has prevented this.
      
      To solve this issue, the ftrace_graph_return() can safely be
      moved after the current->curr_ret_stack has been updated.
      This way the timestamp for the return callback will reflect
      the actual time.
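
      That is, the return path becomes (a sketch of the reordering):

      	ftrace_pop_return_trace(&trace, &ret, frame_pointer);
      	trace.rettime = trace_clock_local();
      	barrier();
      	current->curr_ret_stack--;
      	/* The callback runs after the depth update, so an interrupt
      	 * arriving here is traced at the correct depth.
      	 */
      	ftrace_graph_return(&trace);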
      
      If an interrupt comes in after the curr_ret_stack update but before
      ftrace_graph_return(), it will be traced. It may look a little
      confusing to see it within the other function, but at least
      it will not be lost.
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 29 Jan 2013, 1 commit
  10. 26 Jan 2013, 2 commits
  11. 25 Jan 2013, 1 commit
  12. 24 Jan 2013, 1 commit
  13. 23 Jan 2013, 12 commits
    • ring-buffer: Remove trace.h from ring_buffer.c · 0b07436d
      Committed by Steven Rostedt
      ring_buffer.c used to require declarations from trace.h, but
      these have moved to the generic header files. There's nothing
      in trace.h that ring_buffer.c requires.
      
      There are some headers that trace.h included which ring_buffer.c
      needs, but it's best that it includes them directly, and does not
      include trace.h.
      
      Also, some things may use ring_buffer.c without having tracing
      configured. This removes a dependency that might otherwise appear
      in the future.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: User context bit recursion checking · 567cd4da
      Committed by Steven Rostedt
      Using context bit recursion checking, we can help increase the
      performance of the ring buffer.
      
      Before this patch:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 9.712
      Time: 9.824
      Time: 9.861
      Time: 9.827
      Time: 9.962
      Time: 9.905
      Time: 9.886
      Time: 10.088
      Time: 9.861
      Time: 9.834
      
      (average: 9.876)
      
       a 4% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Use only the preempt version of function tracing · 897f68a4
      Committed by Steven Rostedt
      The function tracer had two different versions of function tracing.
      
      The disabling of irqs version and the preempt disable version.
      
      As function tracing is very intrusive and can cause nasty recursion
      issues, it has its own recursion protection. But the old method of
      doing this was a flat layer. If it detected that a recursion was
      happening then it would just return without recording.
      
      This made the preempt version (much faster than the irq disabling one)
      not very useful, because if an interrupt were to occur after the
      recursion flag was set, the interrupt would not be traced at all,
      because every function that was traced would think it recursed on
      itself (due to the context it preempted setting the recursive flag).
      
      Now that we have a recursion flag for every context level, we
      no longer need to worry about that. We can disable preemption,
      set the current context recursion check bit, and go on. If an
      interrupt were to come along, it would check its own context bit
      and happily continue to trace.
      
      As the preempt version is faster than the irq disable version,
      there's no more reason to keep the preempt version around.
      And the irq disable version still had an issue with missing
      out on tracing NMI code.
      
      Remove the irq disable function tracer version and have the
      preempt disable version be the default (and only version).
      
      Before this patch we had from running:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 12.028
      Time: 11.945
      Time: 11.925
      Time: 11.964
      Time: 12.002
      Time: 11.910
      Time: 11.944
      Time: 11.929
      Time: 11.941
      Time: 11.924
      
      (average: 11.9512)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
       a 13.8% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Avoid unnecessary multiple recursion checks · edc15caf
      Committed by Steven Rostedt
      When function tracing occurs, the following steps are made:
        If the arch does not support an ftrace feature:
         call internal function (uses INTERNAL bits) which calls...
        If callback is registered to the "global" list, the list
         function is called and recursion checks the GLOBAL bits.
         then this function calls...
        The function callback, which can use the FTRACE bits to
         check for recursion.
      
      Now if the arch does not support a feature, and it calls
      the global list function which calls the ftrace callback,
      all three of these steps will do recursion protection.
      There's no reason to do one if the previous caller already
      did. The recursion that we are protecting against will
      go through the same steps again.
      
      To prevent the multiple recursion checks, if a recursion
      bit is set that is higher than the MAX bit of the current
      check, then we know that the check was made by the previous
      caller, and we can skip the current check.
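
      A sketch of the skip check (bit layout and names assumed; the real
      definitions live in kernel/trace/trace.h):

      	static __always_inline int trace_test_and_set_recursion(int start, int max)
      	{
      		unsigned int val = current->trace_recursion;
      		int bit;

      		/* A bit above 'max' means an outer layer already did the
      		 * recursion check for this context, so skip ours.
      		 */
      		if ((val & TRACE_CONTEXT_MASK) > max)
      			return 0;

      		bit = trace_get_context_bit() + start;
      		if (unlikely(val & (1 << bit)))
      			return -1;	/* genuine recursion */

      		current->trace_recursion = val | (1 << bit);
      		return bit;
      	}
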
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Make the trace recursion bits into enums · e46cbf75
      Committed by Steven Rostedt
      Convert the bits into enums, which makes the code a little easier
      to maintain.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Add context level recursion bit checking · c29f122c
      Committed by Steven Rostedt
      Currently for recursion checking in the function tracer, ftrace
      tests a task_struct bit to determine if the function tracer had
      recursed or not. If it has, then it will return without going
      further.
      
      But this leads to races. If an interrupt came in after the bit
      was set, the functions being traced would see that bit set and
      think that the function tracer recursed on itself, and would return.
      
      Instead add a bit for each context (normal, softirq, irq and nmi).
      
      A check of which context the task is in is made before testing the
      associated bit. Now if an interrupt preempts the function tracer
      after the previous context has been set, the interrupt functions
      can still be traced.
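
      A sketch of how the per-context bit can be picked (assuming the usual
      in_nmi()/in_irq()/in_softirq() tests):

      	static __always_inline int trace_get_context_bit(void)
      	{
      		if (in_nmi())		/* most constrained context first */
      			return 0;
      		if (in_irq())
      			return 1;
      		if (in_softirq())
      			return 2;
      		return 3;		/* normal task context */
      	}
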
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Optimize the function tracer list loop · 0a016409
      Committed by Steven Rostedt
      There are lots of places that perform:
      
             op = rcu_dereference_raw(ftrace_control_list);
             while (op != &ftrace_list_end) {
      
      Add a helper macro to do this, and also optimize for a single
      entity. That is, gcc will optimize a loop for either no iterations
      or more than one iteration. But usually only a single callback
      is registered to the function tracer, thus the optimized case
      should be a single pass. To do this we now do:
      
      	op = rcu_dereference_raw(list);
      	do {
      		[...]
      	} while (likely(op = rcu_dereference_raw((op)->next)) &&
      	       unlikely((op) != &ftrace_list_end));
      
      An op is always registered (ftrace_list_end when no callback is
      registered), thus when a single callback is registered, the linked
      list looks like:
      
       top => callback => ftrace_list_end => NULL.
      
      The likely(op = op->next) still must be performed due to the race
      of removing the callback, where the first op assignment could
      equal ftrace_list_end. In that case, the op->next would be NULL.
      But this is unlikely (only happens in a race condition when
      removing the callback).
      
      But it is very likely that the next op would be ftrace_list_end,
      unless more than one callback has been registered. This tells
      gcc what the most common case is, and makes the fast path have
      the fewest branches.
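
      The helper pair plausibly looks like this, wrapping the loop shown
      above:

      	#define do_for_each_ftrace_op(op, list)			\
      		op = rcu_dereference_raw(list);			\
      		do

      	/* The body always runs at least once; the exit test handles both
      	 * the removal race (op->next == NULL) and the end marker.
      	 */
      	#define while_for_each_ftrace_op(op)				\
      		while (likely(op = rcu_dereference_raw((op)->next)) &&	\
      		       unlikely((op) != &ftrace_list_end))
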
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Fix function tracing recursion self test · 9640388b
      Committed by Steven Rostedt
      The function tracing recursion self test should not crash
      the machine if the recursion test fails. If it detects that
      function tracing is recursing when it should not be, then bail;
      don't go into an infinite recursive loop.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Fix global function tracers that are not recursion safe · 63503794
      Committed by Steven Rostedt
      If one of the function tracers set by the global ops is not recursion
      safe, it can still be called directly without the added recursion
      protection supplied by the ftrace infrastructure.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix selftest function recursion accounting · 05cbbf64
      Committed by Steven Rostedt
      The test that checks function recursion does things differently
      if the arch does not support all ftrace features. But that really
      doesn't make a difference with how the test runs, and either way
      the count variable should be 2 at the end.
      
      Currently the test wrongly fails for archs that don't support all
      the ftrace features.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix race with max_tr and changing tracers · 34600f0e
      Committed by Steven Rostedt
      There's a race condition between the setting of a new tracer and
      the update of the max trace buffers (the swap). When a new tracer
      is added, it sets current_trace to nop_trace before disabling
      the old tracer. At this moment, if the old tracer uses update_max_tr(),
      the update may trigger the warning against !current_trace->use_max_tr,
      as nop_trace doesn't have that set.
      
      As update_max_tr() requires that interrupts be disabled, we can
      add a check to see if current_trace == nop_trace and bail if so.
      Then when disabling the current_trace, set it to nop_trace
      and run synchronize_sched(). This will make sure all calls to
      update_max_tr() have completed (it was called with interrupts disabled).
      
      As a clean up, this commit also removes shrinking and recreating
      the max_tr buffer if the old and new tracers both have use_max_tr set.
      The old way used to always shrink the buffer, and then expand it
      for the next tracer. This is a waste of time.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove trace.h header from trace_clock.c · 0a71e4c6
      Committed by Steven Rostedt
      As trace_clock is used by other things besides tracing, and it
      does not require anything from trace.h, it is best not to include
      the header file in trace_clock.c.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 22 Jan 2013, 1 commit
    • tracing: Remove the extra 4 bytes of padding in events · b000c806
      Committed by Steven Rostedt
      Due to a userspace issue with PowerTop v2beta, which hardcoded
      the offsets of the event fields it was using, PowerTop broke when
      we removed the Big Kernel Lock counter from the event header.
      
       (commit e6e1e259 "tracing: Remove lock_depth from event entry")
      
      Because this broke userspace, it was determined that we must
      keep those 4 bytes around.
      
       (commit a3a4a5ac "Regression: partial revert "tracing: Remove lock_depth from event entry"")
      
      This unfortunately wastes space in the ring buffer. 4 bytes per
      event, where a lot of events are just 24 bytes. That's 16% of the
      buffer wasted. A million events will add 4 megs of white space
      into the buffer.
      
      It was later noticed that PowerTop v2beta could not work on systems
      where the kernel was 64 bit but the userspace was 32 bit.
      The reason was that the offsets differ between the two, and the
      hardcoded offsets of one would not work with the other.
      
      PowerTop v2 final implemented the same interface that both
      perf and trace-cmd use: it reads the format file of
      the event to find the offsets of the fields it needs. This fixes
      the problem of running PowerTop with a 32-bit userspace on
      a 64-bit kernel. It also no longer requires the 4-byte padding.
      
      As PowerTop v2 has been out for a while and is included in all
      major distributions, it is now safe to remove the 4 bytes of
      padding. Users of PowerTop v2beta should upgrade to
      PowerTop v2 final.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>