1. 07 Jul 2011, 1 commit
    • tracing: Fix bug when reading system filters on module removal · e9dbfae5
      Steven Rostedt committed
      The event system is freed when its nr_events count drops to zero. This
      happens when a module creates an event system and the module is later
      removed. Modules may share systems, so a system is allocated when it
      is created and freed when the modules are unloaded and all the events
      under the system are removed (nr_events set to zero).
      
      The problem arises when a task has opened the "filter" file for the
      system. If the module is unloaded and it removes the last event for
      that system, the system structure is freed. If the task that opened
      the filter file accesses it after the system has been freed, it will
      dereference an invalid pointer.
      
      By adding a ref_count, and using it to keep track of what
      is using the event system, we can free it after all users
      are finished with the event system.
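
      As a rough illustration, here is a minimal userspace model of the
      ref_count scheme (structure and function names are illustrative, not
      the actual kernel code): the system is freed only after both the
      module and the open "filter" file have dropped their references.

       #include <stdlib.h>

       struct event_system {
               int nr_events;
               int ref_count;
       };

       static void get_system(struct event_system *sys)
       {
               sys->ref_count++;
       }

       static void put_system(struct event_system *sys)
       {
               if (--sys->ref_count == 0)
                       free(sys);      /* last user gone: safe to free */
       }

       int main(void)
       {
               struct event_system *sys = calloc(1, sizeof(*sys));
               get_system(sys);        /* module creates the system */
               get_system(sys);        /* a task opens the "filter" file */
               put_system(sys);        /* module removed: system survives */
               put_system(sys);        /* filter file closed: now freed */
               return 0;
       }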
      
      Cc: <stable@kernel.org>
      Reported-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e9dbfae5
  2. 15 Jun 2011, 2 commits
    • tracing/kprobes: Fix kprobe-tracer to support stack trace · 1fd8df2c
      Masami Hiramatsu committed
      Fix the kprobe tracer to support kernel stack traces correctly.
      Since the execution path of kprobe-based dynamic events differs from
      other tracepoint-based events, the normal ftrace_trace_stack() doesn't
      work correctly. To fix that, this introduces ftrace_trace_stack_regs(),
      which traces the stack via pt_regs instead of the current stack register.
      
      e.g.
      
       # echo p schedule+4 > /sys/kernel/debug/tracing/kprobe_events
       # echo 1 > /sys/kernel/debug/tracing/options/stacktrace
       # echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
       # head -n 20 /sys/kernel/debug/tracing/trace
                  bash-2968  [000] 10297.050245: p_schedule_4: (schedule+0x4/0x4ca)
                  bash-2968  [000] 10297.050247: <stack trace>
       => schedule_timeout
       => n_tty_read
       => tty_read
       => vfs_read
       => sys_read
       => system_call_fastpath
           kworker/0:1-2940  [000] 10297.050265: p_schedule_4: (schedule+0x4/0x4ca)
           kworker/0:1-2940  [000] 10297.050266: <stack trace>
       => worker_thread
       => kthread
       => kernel_thread_helper
                  sshd-1132  [000] 10297.050365: p_schedule_4: (schedule+0x4/0x4ca)
                  sshd-1132  [000] 10297.050365: <stack trace>
       => sysret_careful
      
      Note: Even with this fix, the first entry will be skipped
      if the probe is put on the function entry area before
      the frame pointer is set up (usually 4 bytes,
      push %bp; mov %sp,%bp on x86), because the stack unwinder
      depends on the frame pointer.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Link: http://lkml.kernel.org/r/20110608070934.17777.17116.stgit@fedora15
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1fd8df2c
    • tracing: Add disable_on_free option · cf30cf67
      Steven Rostedt committed
      Add a trace option to disable tracing on free. When this option is
      set, a write into the free_buffer file will not only shrink the
      ring buffer down to zero, but it will also disable tracing.
      
      Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      cf30cf67
  3. 26 May 2011, 1 commit
    • ftrace: Add internal recursive checks · b1cff0ad
      Steven Rostedt committed
      Witold reported a reboot caused by the selftests of the dynamic function
      tracer. He sent me a config and I used ktest to do a config_bisect on it
      (as my config did not cause the crash). It pointed out that the problem
      config was CONFIG_PROVE_RCU.
      
      What happened was that if multiple callbacks are attached to the
      function tracer, we iterate a list of callbacks. Because the list is
      managed by synchronize_sched() and preempt_disable, the access to the
      pointers uses rcu_dereference_raw().
      
      When PROVE_RCU is enabled, the rcu_dereference_raw() calls some
      debugging functions, which happen to be traced. The tracing of the debug
      function would then call rcu_dereference_raw() which would then call the
      debug function and then... well you get the idea.
      
      I first wrote two different patches to solve this bug.
      
      1) add a __rcu_dereference_raw() that would not do any checks.
      2) add notrace to the offending debug functions.
      
      Both of these patches worked.
      
      Talking with Paul McKenney on IRC, he suggested adding recursion
      detection instead. This seemed to be a better solution, so I decided
      to implement it. As the task_struct already has a trace_recursion
      field to detect recursion in the ring buffer, and that field only
      uses a small number of bits, I decided to use the same variable to
      add flags that can detect recursion inside the infrastructure of the
      function tracer.
      
      I plan to change it so that the task struct bit can be checked in
      mcount, but as that requires changes to all archs, I will hold that off
      to the next merge window.
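
      A sketch of the recursion guard (the bit position and names here are
      assumptions for illustration; the real trace_recursion layout differs):

       #include <stdbool.h>

       struct task { unsigned long trace_recursion; };

       #define TRACE_INTERNAL_BIT (1UL << 11)   /* hypothetical bit */

       /* Returns false if we are already inside the function tracer. */
       static bool trace_test_and_set_recursion(struct task *t)
       {
               if (t->trace_recursion & TRACE_INTERNAL_BIT)
                       return false;   /* recursion detected: bail out */
               t->trace_recursion |= TRACE_INTERNAL_BIT;
               return true;
       }

       static void trace_clear_recursion(struct task *t)
       {
               t->trace_recursion &= ~TRACE_INTERNAL_BIT;
       }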
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
      Reported-by: Witold Baryluk <baryluk@smp.if.uj.edu.pl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b1cff0ad
  4. 19 May 2011, 1 commit
  5. 10 Mar 2011, 2 commits
  6. 08 Feb 2011, 7 commits
    • tracing/filter: Increase the max preds to 2^14 · bf93f9ed
      Steven Rostedt committed
      Now that the filter logic does not require saving the pred results
      on the stack, we can increase the max number of preds we allow.
      As the preds are indexed by a short value, and we use the MSBs as
      flags, we can increase the max preds to 2^14 (16384), which should
      be more than enough.
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bf93f9ed
    • tracing/filter: Move MAX_FILTER_PRED to local tracing directory · 4a3d27e9
      Steven Rostedt committed
      The MAX_FILTER_PRED is only needed by the kernel/trace/*.c files.
      Move it to kernel/trace/trace.h.
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4a3d27e9
    • tracing/filter: Optimize filter by folding the tree · 43cd4145
      Steven Rostedt committed
      There are many cases where a filter will contain multiple ORs or
      ANDs together near the leaves. Walking up and down the tree to get
      to the next compare can be a waste.

      If there are several ORs or ANDs together, fold them into a single
      pred and allocate an array of the conditions that they check.
      This speeds up the filter by walking an array linearly, and it
      can still break out if a short-circuit condition is met.
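
      A sketch of a folded OR evaluation, assuming simplified structures
      (names are illustrative, not the exact kernel types):

       struct filter_pred;
       typedef int (*pred_fn_t)(struct filter_pred *pred, void *rec);

       struct filter_pred {
               pred_fn_t fn;
               struct filter_pred **ops;   /* folded children, or NULL */
               int nr_ops;
       };

       /* Linear scan of the folded children, short-circuiting on a match. */
       static int process_ops_or(struct filter_pred *pred, void *rec)
       {
               for (int i = 0; i < pred->nr_ops; i++) {
                       struct filter_pred *op = pred->ops[i];
                       if (op->fn(op, rec))
                               return 1;
               }
               return 0;
       }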
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      43cd4145
    • tracing/filter: Use a tree instead of stack for filter_match_preds() · 61e9dea2
      Steven Rostedt committed
      Currently, filter_match_preds() requires a stack to push
      and pop the preds to determine whether the filter matches the record.
      This has two drawbacks:
      
      1) It requires a stack to store state information. As this is done
         in fast paths we can't allocate the storage for this stack, and
         we can't use a global as it must be re-entrant. The stack is stored
         on the kernel stack and this greatly limits how many preds we
         may allow.
      
      2) All conditions are calculated even when a short circuit exists.
         a || b  will always calculate a and b even though a was determined
         to be true.
      
      Using a tree we can walk a constant structure that will save
      the state as we go. The algorithm is simply:
      
        pred = root;
        do {
      	switch (move) {
      	case MOVE_DOWN:
      		if (OR or AND) {
      			pred = left;
      			continue;
      		}
      		if (pred == root)
      			break;
      		match = pred->fn();
      		pred = pred->parent;
      		move = left child ? MOVE_UP_FROM_LEFT : MOVE_UP_FROM_RIGHT;
      		continue;
      
      	case MOVE_UP_FROM_LEFT:
      		/* Only OR or AND can be a parent */
      		if ((match && OR) || (!match && AND)) {
      			/* short circuit */
      			if (pred == root)
      				break;
      			pred = pred->parent;
      			move = left child ?
      				MOVE_UP_FROM_LEFT :
      				MOVE_UP_FROM_RIGHT;
      			continue;
      		}
      		pred = pred->right;
      		move = MOVE_DOWN;
      		continue;
      
      	case MOVE_UP_FROM_RIGHT:
      		if (pred == root)
      			break;
      		pred = pred->parent;
      		move = left child ? MOVE_UP_FROM_LEFT : MOVE_UP_FROM_RIGHT;
      		continue;
      	}
      	done = 1;
        } while (!done);
      
      This way there's no strict limit to how many preds we allow
      and it also will short circuit the logical operations when possible.
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      61e9dea2
    • tracing/filter: Allocate the preds in an array · 74e9e58c
      Steven Rostedt committed
      Currently we allocate an array of pointers to filter_preds, and then
      allocate a separate filter_pred for each item in the array.
      This adds slight overhead in the filters, as it needs to dereference
      twice to get to the op condition.

      Allocating the preds themselves in a single array removes a
      dereference, as well as helping the cache footprint.
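
      A sketch of the two layouts (simplified types, illustrative only):

       #include <stdlib.h>

       struct filter_pred { int op; /* ... */ };

       void alloc_preds(int n)
       {
               /* Before: pointer array plus one allocation per pred;
                * reaching pred->op costs two dereferences. */
               struct filter_pred **by_ptr = calloc(n, sizeof(*by_ptr));
               for (int i = 0; i < n; i++)
                       by_ptr[i] = calloc(1, sizeof(struct filter_pred));

               /* After: one contiguous array; a single dereference and
                * better cache locality. (Frees omitted in this sketch.) */
               struct filter_pred *flat = calloc(n, sizeof(*flat));
               (void)flat;
       }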
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      74e9e58c
    • tracing/filter: Dynamically allocate preds · c9c53ca0
      Steven Rostedt committed
      For every filter that is made, we create predicates to hold every
      operation within the filter. We have a max of 32 predicates that we
      can hold. Currently, we allocate all 32 even if we only need to
      use one.
      
      Part of the reason we do this is that the filter can be used at
      any moment by any event. Fortunately, the filter is only used
      with preemption disabled. By resetting the count of preds used,
      n_preds, to zero and then performing a synchronize_sched(), we can
      safely free and reallocate a new array of preds.
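
      The ordering, modeled in a small sketch (synchronize_sched_model()
      stands in for the kernel's synchronize_sched(); names are illustrative):

       #include <stdlib.h>

       struct filter_pred { int op; };

       struct event_filter {
               struct filter_pred *preds;
               int n_preds;
       };

       /* Stand-in: returns only after every CPU has left its
        * preempt-disabled region, so no reader still walks the array. */
       static void synchronize_sched_model(void) { }

       static void reset_preds(struct event_filter *filter)
       {
               filter->n_preds = 0;        /* readers now see zero preds */
               synchronize_sched_model();  /* wait out in-flight readers */
               free(filter->preds);        /* safe to free and reallocate */
               filter->preds = NULL;
       }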
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c9c53ca0
    • tracing/filter: Move OR and AND logic out of fn() method · 58d9a597
      Steven Rostedt committed
      The OR and AND ops act differently from the other ops, as they
      are the only ones to take other ops as their arguments.
      These ops also change the logic of filter_match_preds().

      By removing the OR and AND fn's, we can also remove the val1 and val2
      arguments that are passed to all other fn's and are unused.
      
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      58d9a597
  7. 18 Oct 2010, 1 commit
  8. 05 Aug 2010, 1 commit
  9. 21 Jul 2010, 2 commits
    • tracing: Shrink max latency ringbuffer if unnecessary · ef710e10
      KOSAKI Motohiro committed
      Documentation/trace/ftrace.txt says
      
        buffer_size_kb:
      
              This sets or displays the number of kilobytes each CPU
              buffer can hold. The tracer buffers are the same size
              for each CPU. The displayed number is the size of the
              CPU buffer and not total size of all buffers. The
              trace buffers are allocated in pages (blocks of memory
              that the kernel uses for allocation, usually 4 KB in size).
              If the last page allocated has room for more bytes
              than requested, the rest of the page will be used,
              making the actual allocation bigger than requested.
              ( Note, the size may not be a multiple of the page size
                due to buffer management overhead. )
      
              This can only be updated when the current_tracer
              is set to "nop".
      
      But it's incorrect. Currently, total memory consumption is
      'buffer_size_kb x CPUs x 2'.

      Why the factor of two? Because ftrace implicitly allocates a second
      buffer for max latency tracing.

      This gives a sad result when an admin wants to use a large buffer
      (for full logging and detailed analysis). For example, if an admin
      has a 24-CPU machine and writes 200MB to buffer_size_kb, the system
      consumes ~10GB of memory (200MB x 24 x 2). Wasting 5GB of memory is
      usually unacceptable.

      Fortunately, almost all users don't use the max latency feature,
      and the max latency buffer can be disabled easily.

      This patch shrinks the buffer size of the max latency buffer when it
      is unnecessary.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ef710e10
    • tracing: Allow to disable cmdline recording · e870e9a1
      Li Zefan committed
      We found that even enabling a single trace event that will rarely be
      triggered can add significant overhead to context switching.
      
      (lmbench context switch test)
       -------------------------------------------------
       2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
       ctxsw  ctxsw  ctxsw ctxsw  ctxsw   ctxsw   ctxsw
      ------ ------ ------ ------ ------ ------- -------
        2.19   2.3   2.21   2.56   2.13     2.54    2.07
        2.39   2.51  2.35   2.75   2.27     2.81    2.24
      
      The overhead is 6% ~ 11%.
      
      It's because when a trace event is enabled, 3 tracepoints (sched_switch,
      sched_wakeup, sched_wakeup_new) are activated to map pid to cmdname.

      We'd like to avoid this overhead, so add a trace option '(no)record-cmd'
      to allow disabling cmdline recording.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4C2D57F4.2050204@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e870e9a1
  10. 20 Jul 2010, 2 commits
  11. 16 Jul 2010, 1 commit
    • tracing: Remove ksym tracer · 5d550467
      Frederic Weisbecker committed
      The ksym (breakpoint) ftrace plugin has been superseded by the perf
      tools, which are much more powerful for using the CPU breakpoints.
      This tracer doesn't bring any additional features. It has been
      deprecated for a while now; let's remove it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      5d550467
  12. 29 Jun 2010, 1 commit
  13. 09 Jun 2010, 2 commits
  14. 04 Jun 2010, 1 commit
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Steven Rostedt committed
      The ftrace_preempt_disable/enable functions were to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions, which makes it easily susceptible to recursion.
      One area was preempt_enable(). This would call the scheduler, and
      the scheduler would call the function tracer and loop.
      (So it was thought.)
      
      The ftrace_preempt_disable/enable was made to protect against recursion
      inside the scheduler by storing the NEED_RESCHED flag. If it was
      set before the ftrace_preempt_disable(), it would not call schedule
      on ftrace_preempt_enable(), thinking that if it was set before then
      it would have already scheduled unless it was already in the scheduler.

      This worked fine except in the case of SMP, where another task would set
      the NEED_RESCHED flag for a task on another CPU, and then kick off an
      IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
      ftrace_preempt_disable() but the IPI to arrive in the preempt-disabled
      section. The ftrace_preempt_enable() would not call the scheduler
      because the flag was already set before entering the section.
      
      This bug would cause a missed preemption check and thus higher latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule() is completely annotated with notrace, the
      recursion is no longer an issue.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5168ae50
  15. 15 May 2010, 3 commits
    • tracing: Fix function declarations if !CONFIG_STACKTRACE · e1f7992e
      Li Zefan committed
      ftrace_trace_stack() and ftrace_trace_userstack() take a
      struct ring_buffer argument, not struct trace_array. Commit
      e77405ad ("tracing: pass around ring buffer instead of tracer")
      made this change.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4BE77C14.5010806@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e1f7992e
    • tracing: Combine event filter_active and enable into single flags field · 553552ce
      Steven Rostedt committed
      The filter_active and enable both use an int (4 bytes each) to
      set a single flag. We can save 4 bytes per event by combining the
      two into a single integer.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
      4894871	1012292	 861512	6768675	 674823	vmlinux.flags
      
      This gives us another 5K in savings.
      
      The modification of both the enable and filter fields is done
      under the event_mutex, so it is still safe to combine the two.
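
      Roughly the shape of the change (flag names modeled on the patch,
      surrounding structure simplified):

       enum {
               TRACE_EVENT_FL_ENABLED_BIT,
               TRACE_EVENT_FL_FILTERED_BIT,
       };

       #define TRACE_EVENT_FL_ENABLED  (1 << TRACE_EVENT_FL_ENABLED_BIT)
       #define TRACE_EVENT_FL_FILTERED (1 << TRACE_EVENT_FL_FILTERED_BIT)

       struct event_call_model {
               unsigned int flags;     /* replaces the two separate ints */
       };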
      
      Note: Although Mathieu gave his Acked-by, he would like it documented
       that the reads of flags are not protected by the mutex. The way the
       code works, these reads will not break anything, but will have a
       residual effect. Since this behavior is the same even before this
       patch, describing this situation is left to another patch, as this
       patch does not change the behavior, but just brought it to Mathieu's
       attention.
      
      v2: Updated the event trace self test for this change.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      553552ce
    • tracing: Move fields from event to class structure · 2e33af02
      Steven Rostedt committed
      Move the defined fields from the event to the class structure.
      Since the fields of the event are defined by the class they belong
      to, it makes sense to have the class hold the information instead
      of the individual events. The events of the same class would just
      hold duplicate information.
      
      After this change the size of the kernel dropped another 3K:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900252	1057412	 861512	6819176	 680d68	vmlinux.regs
      4900375	1053380	 861512	6815267	 67fe23	vmlinux.fields
      
      Although the text increased, this was mainly due to the C files
      having to adapt to the change. This is a one-time increase; new
      tracepoints will not increase the text size. The big drop is in
      the data size (as well as in the allocations needed to hold the
      fields). This will give even more savings as more tracepoints are
      created.
      
      Note, if just TRACE_EVENT()s are used and not DECLARE_EVENT_CLASS()
      with several DEFINE_EVENT()s, then the savings will be lost. But
      we are pushing developers to consolidate events with DEFINE_EVENT()
      so this should not be an issue.
      
      The kprobes define a unique class for every new event, but they are
      dynamic, so it should not be an issue.
      
      The syscalls, however, have a single class, but the fields for the
      individual events differ. The syscalls use metadata to define the
      fields. I moved the fields list from the event to the metadata and
      added a "get_fields()" function to the class. This function is used
      to find the fields. For normal events and kprobes, get_fields() just
      returns a pointer to the fields list_head in the class. For syscall
      events, it returns the fields list_head in the metadata for the event.
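
      A sketch of the indirection (types pared down for illustration):

       struct list_head { struct list_head *next, *prev; };
       struct event_call;      /* one event instance */

       struct event_class {
               struct list_head fields;   /* shared by the whole class */
               /* Normal events and kprobes return &class->fields; syscall
                * events return the list kept in their per-event metadata. */
               struct list_head *(*get_fields)(struct event_call *call);
       };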
      
      v2:  Fixed the syscall fields. The syscall metadata needs a list
           of fields for both enter and exit.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2e33af02
  16. 28 Apr 2010, 1 commit
  17. 27 Apr 2010, 1 commit
  18. 15 Apr 2010, 1 commit
    • tracing/kprobes: Support basic types on dynamic events · 93ccae7a
      Masami Hiramatsu committed
      Support basic integer types (u8, u16, u32, u64, s8, s16, s32, s64) in
      the kprobe tracer. With this patch, users can specify one of the above
      basic types for each argument after ':'. If omitted, the argument type
      is set to unsigned long (u32 or u64, arch-dependent).
      
       e.g.
        echo 'p account_system_time+0 hardirq_offset=%si:s32' > kprobe_events
      
        adds a probe recording hardirq_offset in signed-32bits value on the
        entry of account_system_time.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100412171708.3790.18599.stgit@localhost6.localdomain6>
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      93ccae7a
  19. 26 Mar 2010, 1 commit
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Peter Zijlstra committed
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS,
      as Linus noticed it not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its only potential users are ptrace-block-step and ptrace-bts;
      ptrace-bts was never used, and ptrace-block-step can be implemented
      using a much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      faa4602e
  20. 06 Mar 2010, 1 commit
    • function-graph: Add tracing_thresh support to function_graph tracer · 0e950173
      Tim Bird committed
      Add support for tracing_thresh to the function_graph tracer.  This
      version of this feature isolates the checks into new entry and
      return functions, to avoid adding more conditional code into the
      main function_graph paths.
      
      When the tracing_thresh is set and the function graph tracer is
      enabled, only the functions that took longer than the time in
      microseconds that was set in tracing_thresh are recorded. To do this
      efficiently, only the function exits are recorded:
      
       [tracing]# echo 100 > tracing_thresh
       [tracing]# echo function_graph > current_tracer
       [tracing]# cat trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
        1) ! 119.214 us  |  } /* smp_apic_timer_interrupt */
        1)   <========== |
        0) ! 101.527 us  |              } /* __rcu_process_callbacks */
        0) ! 126.461 us  |            } /* rcu_process_callbacks */
        0) ! 145.111 us  |          } /* __do_softirq */
        0) ! 149.667 us  |        } /* do_softirq */
        0) ! 168.817 us  |      } /* irq_exit */
        0) ! 248.254 us  |    } /* smp_apic_timer_interrupt */
      
      Also, add support for specifying tracing_thresh on the kernel
      command line.  When used like so: "tracing_thresh=200 ftrace=function_graph"
      this can be used to analyse system startup.  It is important to disable
      tracing soon after boot, in order to avoid losing the trace data.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Tim Bird <tim.bird@am.sony.com>
      LKML-Reference: <4B87098B.4040308@am.sony.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0e950173
  21. 25 Feb 2010, 1 commit
    • tracing: Fix ftrace_event_call alignment for use with gcc 4.5 · 86c38a31
      Jeff Mahoney committed
      GCC 4.5 introduces behavior that forces the alignment of structures
      to use the largest possible value. The default value is 32 bytes, so
      if some structures are defined with a 4-byte alignment and others
      aren't declared with an alignment constraint at all, they will be
      aligned to 32 bytes.

      For things like the ftrace events, this results in a non-standard
      array. When initializing the ftrace subsystem, we traverse the
      _ftrace_events section and call the initialization callback for each
      event. When the structures are misaligned, we could be treating
      another part of the structure (or the zeroed-out space between them)
      as a function pointer.

      This patch forces the alignment for all the ftrace_event_call
      structures to 4 bytes.

      Without this patch, the kernel fails to boot very early when built
      with gcc 4.5.
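
      The general technique (illustrative; the kernel wraps this in a macro):

       /* Pin the alignment so the linker-built _ftrace_events array has a
        * uniform stride regardless of the compiler's default. */
       struct event_call_model {
               void *init;     /* placeholder member */
               /* ... */
       } __attribute__((aligned(4)));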
      
      It's trivial to check the alignment of the members of the array, so
      it might be worthwhile to add something to the build system to do
      that automatically. Unfortunately, that only covers this case. I've
      asked one of the gcc developers about adding a warning when this
      condition is seen.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      LKML-Reference: <4B85770B.6010901@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      86c38a31
  22. 12 Feb 2010, 1 commit
  23. 05 Feb 2010, 1 commit
  24. 29 Jan 2010, 1 commit
    • tracing: Simplify test for function_graph tracing start point · ea2c68a0
      Lai Jiangshan committed
      In the function graph tracer, a calling function is to be traced
      only when it is enabled through the set_graph_function file,
      or when it is nested in an enabled function.
      
      Current code uses TSK_TRACE_FL_GRAPH to test whether it is nested
      or not. Looking at the code, we can get this:
      (trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)
      
      trace->depth is more explicit to tell that it is nested.
      So we use trace->depth directly and simplify the code.
      
      No functionality is changed.
      TSK_TRACE_FL_GRAPH is not removed yet, it is left for future usage.
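
      A sketch of the simplified check (helper name is hypothetical):

       struct ftrace_graph_ent { unsigned long func; int depth; };

       int is_set_graph_function(unsigned long func);  /* assumed helper */

       static int want_to_trace(struct ftrace_graph_ent *trace)
       {
               /* nested inside an enabled function, or explicitly enabled */
               return trace->depth > 0 || is_set_graph_function(trace->func);
       }
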
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <4B4DB0B6.7040607@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      ea2c68a0
  25. 14 Dec 2009, 1 commit
  26. 08 Dec 2009, 1 commit
    • tracing: Add pipe_close interface · c521efd1
      Steven Rostedt committed
      An ftrace plugin can add a pipe_open callback that runs when the user
      opens trace_pipe. But if the plugin allocates something within
      pipe_open, it cannot free it, because no pipe_close exists. The hook
      for the trace file open has a corresponding close; the closing of the
      trace_pipe file should also have a corresponding close.
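
      The paired callbacks, sketched (surrounding struct simplified from
      the tracer ops):

       struct trace_iterator;

       struct tracer_model {
               void (*pipe_open)(struct trace_iterator *iter);   /* may allocate */
               void (*pipe_close)(struct trace_iterator *iter);  /* new: frees it */
       };
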
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c521efd1
  27. 25 Nov 2009, 1 commit
    • trace/syscalls: Change ret param in struct syscall_trace_exit to long · 99df5a6a
      Tom Zanussi committed
      Commit ee949a86 ("tracing/syscalls:
      Use long for syscall ret format and field definitions") changed the
      syscall exit return type to long, but forgot to change it in the
      struct.
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1259133299-23594-3-git-send-email-tzanussi@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99df5a6a