1. 15 Mar 2013 (6 commits)
    • tracing: Add function probe to trigger stack traces · dd42cd3e
      Committed by Steven Rostedt (Red Hat)
      Add a function probe that will cause a stack trace to be traced in
      the ring buffer when the given function(s) are called.
      
      The format is:
      
       <function>:stacktrace[:<count>]
      
       echo 'schedule:stacktrace' > /debug/tracing/set_ftrace_filter
       cat /debug/tracing/trace_pipe
           kworker/2:0-4329  [002] ...2  2933.558007: <stack trace>
       => kthread
       => ret_from_fork
                <idle>-0     [000] .N.2  2933.558019: <stack trace>
       => rest_init
       => start_kernel
       => x86_64_start_reservations
       => x86_64_start_kernel
           kworker/2:0-4329  [002] ...2  2933.558109: <stack trace>
       => kthread
       => ret_from_fork
      [...]
      
      This can be set to trigger only a specific number of times:
      
       echo 'schedule:stacktrace:3' > /debug/tracing/set_ftrace_filter
       cat /debug/tracing/trace_pipe
                 <...>-58    [003] ...2   841.801694: <stack trace>
       => kthread
       => ret_from_fork
                <idle>-0     [001] .N.2   841.801697: <stack trace>
       => start_secondary
                 <...>-2059  [001] ...2   841.801736: <stack trace>
       => wait_for_common
       => wait_for_completion
       => flush_work
       => tty_flush_to_ldisc
       => input_available_p
       => n_tty_poll
       => tty_poll
       => do_select
       => core_sys_select
       => sys_select
       => system_call_fastpath
      
      To remove these:
      
       echo '!schedule:stacktrace' > /debug/tracing/set_ftrace_filter
       echo '!schedule:stacktrace:0' > /debug/tracing/set_ftrace_filter
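      Under the hood, the command registers a probe whose callback dumps the
      current stack into the ring buffer. A minimal sketch of the two callback
      flavors, modeled on kernel/trace/trace_functions.c of that era (names and
      the STACK_SKIP depth are assumptions, not a verbatim copy):

       static void
       ftrace_stacktrace_probe(unsigned long ip, unsigned long parent_ip,
                               void **data)
       {
               trace_dump_stack(STACK_SKIP);   /* record a stack trace in the ring buffer */
       }

       static void
       ftrace_stacktrace_count(unsigned long ip, unsigned long parent_ip,
                               void **data)
       {
               long *count = (long *)data;

               if (!*count)
                       return;                 /* countdown exhausted */
               if (*count != -1)               /* -1 encodes "unlimited" */
                       (*count)--;
               trace_dump_stack(STACK_SKIP);
       }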
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Separate unlimited probes from count limited probes · 8380d248
      Committed by Steven Rostedt (Red Hat)
      The function tracing probes that trigger traceon or traceoff can be
      set to unlimited, or given a count of the number of times to execute.
      
      By separating these two types of probes, we can then use the dynamic
      ftrace function filtering directly, and remove the brute force
      "check if this function called is my probe" routines in ftrace.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Consolidate ftrace_trace_onoff_unreg() into callback · 8b8fa62c
      Committed by Steven Rostedt (Red Hat)
      The only thing ftrace_trace_onoff_unreg() does is a strcmp() against
      the cmd parameter to determine which op to unregister. But the same
      compare is also done right after the point where this function is
      called (and returns). By moving the check for '!' (unregister) to after
      that strcmp(), the callback function itself can do the unregister and
      we can get rid of the helper function.
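      A minimal sketch of the consolidated callback (the signature follows
      that era's kernel/trace/trace_functions.c; details are assumptions):

       static int
       ftrace_trace_onoff_callback(struct ftrace_hash *hash, char *glob,
                                   char *cmd, char *param, int enable)
       {
               struct ftrace_probe_ops *ops;

               /* the strcmp() that used to live in the helper picks the ops */
               if (strcmp(cmd, "traceon") == 0)
                       ops = param ? &traceon_count_probe_ops : &traceon_probe_ops;
               else
                       ops = param ? &traceoff_count_probe_ops : &traceoff_probe_ops;

               /* a leading '!' now unregisters right here - no helper needed */
               if (glob[0] == '!') {
                       unregister_ftrace_function_probe_func(glob + 1, ops);
                       return 0;
               }

               /* ... parse the count and register_ftrace_function_probe() ... */
               return 0;
       }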
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Consolidate updating of count for traceon/off · 1c317143
      Committed by Steven Rostedt (Red Hat)
      Remove some duplicate code and replace it with a helper function.
      This makes the code a bit cleaner.
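      A minimal sketch of the kind of helper this introduces (the name and
      exact shape are assumptions based on the commit's description):

       static int update_count(void **data)
       {
               long *count = (long *)data;

               if (!*count)
                       return 0;       /* counter exhausted: do nothing */
               if (*count != -1)       /* -1 means unlimited */
                       (*count)--;
               return 1;
       }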
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Committed by Steven Rostedt (Red Hat)
      Currently, the way the latency tracers and snapshot feature works
      is to have a separate trace_array called "max_tr" that holds the
      snapshot buffer. For latency tracers, this snapshot buffer is used
      to swap the running buffer with this buffer to save the current max
      latency.
      
      The only items the max_tr really needs are a copy of the buffer
      itself, the per_cpu data pointers, the time_start timestamp that records
      when the max latency was triggered, and the cpu that the max latency
      was triggered on. All other fields in trace_array are unused by the
      max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely and adds a new structure
      called trace_buffer that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers: one for the normal trace and
      one for the max trace or snapshot. By doing this, not only do we remove
      the bloat from the max_tr, but trace instances can now use their own
      snapshot feature, rather than only the top-level global_trace having
      the snapshot feature and latency tracers for itself.
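      The new structure, reconstructed from the description above (the tr
      back-pointer and the exact type names are assumptions):

       struct trace_buffer {
               struct trace_array              *tr;
               struct ring_buffer              *buffer;
               struct trace_array_cpu __percpu *data;
               cycle_t                         time_start; /* when the max was hit */
               int                             cpu;        /* cpu that hit the max */
       };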
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Replace the static global per_cpu arrays with allocated per_cpu · a7603ff4
      Committed by Steven Rostedt
      The global and max-tr currently use static per_cpu arrays for the CPU data
      descriptors. But in order to support newly allocated trace_arrays, these
      need to be allocated per_cpu arrays. Instead of using the static arrays,
      switch the global and max-tr to use allocated data.
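      A minimal before/after sketch of the switch (variable names are
      illustrative; the "after" line lives in an allocation path):

       /* before: one static array, shared by the global tracer */
       static DEFINE_PER_CPU(struct trace_array_cpu, global_trace_cpu);

       /* after: each trace_array allocates its own per-CPU data */
       tr->data = alloc_percpu(struct trace_array_cpu);
       if (!tr->data)
               return -ENOMEM;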
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 24 Jan 2013 (1 commit)
  3. 23 Jan 2013 (1 commit)
    • ftrace: Use only the preempt version of function tracing · 897f68a4
      Committed by Steven Rostedt
      The function tracer had two different versions of function tracing:
      the irq-disabling version and the preempt-disabling version.
      
      As function tracing is very intrusive and can cause nasty recursion
      issues, it has its own recursion protection. But the old method was
      a flat layer: if it detected that a recursion was happening, it would
      just return without recording.
      
      This made the preempt version (much faster than the irq-disabling one)
      not very useful, because if an interrupt occurred after the
      recursion flag was set, the interrupt would not be traced at all;
      every function that was traced would think it had recursed on
      itself (due to the context it preempted having set the recursion flag).
      
      Now that we have a recursion flag for every context level, we
      no longer need to worry about that. We can disable preemption,
      set the current context recursion check bit, and go on. If an
      interrupt were to come along, it would check its own context bit
      and happily continue to trace.
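      A minimal sketch of the preempt-only path with a per-context recursion
      bit (helper and symbol names follow the kernel of that era but are not
      a verbatim copy):

       static void
       function_trace_call(unsigned long ip, unsigned long parent_ip,
                           struct ftrace_ops *op, struct pt_regs *pt_regs)
       {
               int bit;

               preempt_disable_notrace();
               bit = trace_test_and_set_recursion(TRACE_FTRACE_START,
                                                  TRACE_FTRACE_MAX);
               if (bit < 0)
                       goto out;       /* already tracing in this context */

               /* ... record the function entry into the ring buffer ... */

               trace_clear_recursion(bit);
       out:
               preempt_enable_notrace();
       }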
      
      As the preempt version is faster than the irq-disable version,
      there's no more reason to keep the irq-disable version around.
      And the irq-disable version still had an issue with missing
      out on tracing NMI code.
      
      Remove the irq disable function tracer version and have the
      preempt disable version be the default (and only version).
      
      Before this patch, running:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 12.028
      Time: 11.945
      Time: 11.925
      Time: 11.964
      Time: 12.002
      Time: 11.910
      Time: 11.944
      Time: 11.929
      Time: 11.941
      Time: 11.924
      
      (average: 11.9512)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
       a 13.8% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 06 Dec 2012 (1 commit)
  5. 01 Nov 2012 (2 commits)
  6. 07 Sep 2012 (1 commit)
    • pstore/ftrace: Convert to its own enable/disable debugfs knob · 65f8c95e
      Committed by Anton Vorontsov
      With this patch we no longer reuse the function tracer infrastructure;
      instead we register our own tracer back-end via a debugfs knob (a sketch
      follows below).
      
      It's a bit more code, but that is the only downside. On the bright side we
      have:
      
      - Ability to make persistent_ram module removable (when needed, we can
        move ftrace_ops struct into a module). Note that persistent_ram is still
        not removable for other reasons, but with this patch it's just one
        thing less to worry about;
      
      - The pstore part is more isolated from the generic function tracer. We
        already tried registering our own tracer in available_tracers, but that
        way we lose the ability to see the traces while we record them to
        pstore. This solution is somewhere in the middle: we only register the
        "internal ftracer" back-end, but not the "front-end";
      
      - When only pstore tracing is enabled, the kernel will write only
        to the pstore buffer, omitting the function tracer buffer (which, of
        course, can still be enabled via 'echo function > current_tracer').
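      A minimal sketch of the back-end registration described above, modeled
      loosely on fs/pstore/ftrace.c (the pstore_ftrace_set() helper is an
      assumed name for the knob's write path):

       static struct ftrace_ops pstore_ftrace_ops = {
               .func = pstore_ftrace_call,     /* writes records into the pstore buffer */
       };

       /* called from the debugfs knob's write handler */
       static void pstore_ftrace_set(bool on)
       {
               if (on)
                       register_ftrace_function(&pstore_ftrace_ops);
               else
                       unregister_ftrace_function(&pstore_ftrace_ops);
       }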
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
  7. 31 Jul 2012 (1 commit)
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Committed by Steven Rostedt
      As more users of the function tracer utility are added, they do
      not always add the necessary recursion protection. To protect against
      function recursion due to tracing, if the callback ftrace_ops does not
      specifically state that it protects against recursion (by setting
      the FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be
      called by the mcount trampoline, which adds recursion protection.
      
      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
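      A minimal sketch of an ops that opts out of the added protection (the
      callback name is illustrative):

       static struct ftrace_ops my_ops = {
               .func  = my_callback,
               /* "I handle recursion myself" - skip the protective list op */
               .flags = FTRACE_OPS_FL_RECURSION_SAFE,
       };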
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 20 Jul 2012 (2 commits)
  9. 18 Jul 2012 (2 commits)
  10. 07 Jul 2011 (1 commit)
    • ftrace: Fix regression of :mod:module function enabling · 43dd61c9
      Committed by Steven Rostedt
      The new code that allows different utilities to pick and choose
      what functions they trace broke the :mod: hook that allows users
      to trace only functions of a particular module.
      
      The reason is that the :mod: hook bypasses the hash that is set up
      to allow individual users to trace their own functions, and uses
      the global hash directly. But if the global hash has not been
      set up, this causes a bug:
      
      echo '*:mod:radeon' > /sys/kernel/debug/set_ftrace_filter
      
      produces:
      
       [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
       [drm:radeon_crtc_page_flip] *ERROR* failed to reserve new rbo buffer before flip
       BUG: unable to handle kernel paging request at ffffffff8160ec90
       IP: [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
       PGD 1a05067 PUD 1a09063 PMD 80000000016001e1
       Oops: 0003 [#1] SMP Jul  7 04:02:28 phyllis kernel: [55303.858604] CPU 1
       Modules linked in: cryptd aes_x86_64 aes_generic binfmt_misc rfcomm bnep ip6table_filter hid radeon r8169 ahci libahci mii ttm drm_kms_helper drm video i2c_algo_bit intel_agp intel_gtt
      
       Pid: 10344, comm: bash Tainted: G        WC  3.0.0-rc5 #1 Dell Inc. Inspiron N5010/0YXXJJ
       RIP: 0010:[<ffffffff810d9136>]  [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
       RSP: 0018:ffff88003a96bda8  EFLAGS: 00010246
       RAX: ffff8801301735c0 RBX: ffffffff8160ec80 RCX: 0000000000306ee0
       RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880137c92940
       RBP: ffff88003a96bdb8 R08: ffff880137c95680 R09: 0000000000000000
       R10: 0000000000000001 R11: 0000000000000000 R12: ffffffff81c9df78
       R13: ffff8801153d1000 R14: 0000000000000000 R15: 0000000000000000
       FS: 00007f329c18a700(0000) GS:ffff880137c80000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: ffffffff8160ec90 CR3: 000000003002b000 CR4: 00000000000006e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
       Process bash (pid: 10344, threadinfo ffff88003a96a000, task ffff88012fcfc470)
       Stack:
        0000000000000fd0 00000000000000fc ffff88003a96be38 ffffffff810d92f5
        ffff88011c4c4e00 ffff880000000000 000000000b69f4d0 ffffffff8160ec80
        ffff8800300e6f06 0000000081130295 0000000000000282 ffff8800300e6f00
       Call Trace:
        [<ffffffff810d92f5>] match_records+0x155/0x1b0
        [<ffffffff810d940c>] ftrace_mod_callback+0xbc/0x100
        [<ffffffff810dafdf>] ftrace_regex_write+0x16f/0x210
        [<ffffffff810db09f>] ftrace_filter_write+0xf/0x20
        [<ffffffff81166e48>] vfs_write+0xc8/0x190
        [<ffffffff81167001>] sys_write+0x51/0x90
        [<ffffffff815c7e02>] system_call_fastpath+0x16/0x1b
       Code: 48 8b 33 31 d2 48 85 f6 75 33 49 89 d4 4c 03 63 08 49 8b 14 24 48 85 d2 48 89 10 74 04 48 89 42 08 49 89 04 24 4c 89 60 08 31 d2
       RIP [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
        RSP <ffff88003a96bda8>
       CR2: ffffffff8160ec90
       ---[ end trace a5d031828efdd88e ]---
      Reported-by: Brian Marete <marete@toshnix.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 19 May 2011 (1 commit)
    • ftrace: Implement separate user function filtering · b848914c
      Committed by Steven Rostedt
      ftrace_ops that are registered to trace functions can now be
      agnostic to each other with respect to which functions they trace.
      Each ops has its own hash of the functions it wants to trace,
      and a hash of the functions it does not want to trace. An empty hash
      for the functions to trace denotes that all functions not in the
      notrace hash should be traced.
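      A minimal sketch of the resulting per-ops check (helper names are
      assumptions; the era's real code wires this through ftrace_ops_test()):

       static bool ops_traces_function(struct ftrace_ops *ops, unsigned long ip)
       {
               /* an empty filter hash means "trace everything"... */
               if (!ftrace_hash_empty(ops->filter_hash) &&
                   !ftrace_lookup_ip(ops->filter_hash, ip))
                       return false;

               /* ...except whatever appears in the notrace hash */
               if (ftrace_lookup_ip(ops->notrace_hash, ip))
                       return false;

               return true;
       }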
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 04 Jun 2010 (1 commit)
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Committed by Steven Rostedt
      The ftrace_preempt_disable/enable functions were added to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions, which makes it easily susceptible to recursion.
      One suspect area was preempt_enable(): this would call the scheduler,
      the scheduler would call the function tracer, and it would loop.
      (Or so it was thought.)
      
      The ftrace_preempt_disable/enable pair was made to protect against
      recursion inside the scheduler by storing the NEED_RESCHED flag. If it
      was set before ftrace_preempt_disable(), then ftrace_preempt_enable()
      would not call schedule(), on the theory that if NEED_RESCHED was
      already set, a schedule had either already happened or we were already
      inside the scheduler.
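      A minimal sketch of the removed helpers, modeled on the old
      kernel/trace/trace.h (treat as illustrative, not a verbatim copy):

       static inline int ftrace_preempt_disable(void)
       {
               int resched = need_resched();   /* remember NEED_RESCHED */

               preempt_disable_notrace();
               return resched;
       }

       static inline void ftrace_preempt_enable(int resched)
       {
               if (resched)
                       /* flag was set before: skip the schedule check */
                       preempt_enable_no_resched_notrace();
               else
                       preempt_enable_notrace();
       }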
      
      This worked fine except on SMP, where another CPU could set the
      NEED_RESCHED flag for a task and then kick off an IPI to trigger it.
      The NEED_RESCHED flag could be saved at ftrace_preempt_disable() while
      the IPI arrived inside the preempt-disabled section.
      ftrace_preempt_enable() would then not call the scheduler, because the
      flag had already been set before entering the section.
      
      This bug would cause a missed preemption check, resulting in higher latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule() is completely annotated with notrace, the recursion
      is no longer an issue.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  13. 18 Sep 2009 (1 commit)
  14. 17 Jul 2009 (2 commits)
    • tracing/function: Cleanup for function tracer · 6f2f3cf0
      Committed by Xiao Guangrong
      We can directly use the %pf input format instead of kallsyms_lookup()
      and the %s input format.
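      A hedged before/after sketch of the cleanup (variable names are
      illustrative):

       char str[KSYM_SYMBOL_LEN];

       /* before: resolve the symbol by hand */
       kallsyms_lookup(ip, NULL, NULL, NULL, str);
       seq_printf(m, "%s", str);

       /* after: let the printk core resolve the function name */
       seq_printf(m, "%pf", (void *)ip);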
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • tracing/function: Fix the return value of ftrace_trace_onoff_callback() · 04aef32d
      Committed by Xiao Guangrong
      ftrace_trace_onoff_callback() returns an error even when we perform a
      valid operation, for example:
      
       # echo _spin_*:traceon:10 > set_ftrace_filter
       -bash: echo: write error: Invalid argument
       # cat set_ftrace_filter
       #### all functions enabled ####
       _spin_trylock_bh:traceon:count=10
       _spin_unlock_irq:traceon:count=10
       _spin_unlock_bh:traceon:count=10
       _spin_lock_irq:traceon:count=10
       _spin_unlock:traceon:count=10
       _spin_trylock:traceon:count=10
       _spin_unlock_irqrestore:traceon:count=10
       _spin_lock_irqsave:traceon:count=10
       _spin_lock_bh:traceon:count=10
       _spin_lock:traceon:count=10
      
      We want to set _spin_*:traceon:10 in set_ftrace_filter; it complains
      with "Invalid argument", but the operation actually succeeded.
      
      This is because ftrace_process_regex() returns the number of functions that
      matched the pattern. If the number is not 0, this value is returned
      by ftrace_regex_write(), whereas we want to return the number of bytes
      virtually written. Also, the file offset pointer is not updated in this case.
      
      If the number of matched functions is lower than the number of bytes written
      by the user, this results in a reprocessing of the string given by the user
      with a smaller size, leading to a malformed ftrace regex and then -EINVAL
      being returned.
      
      So, this patch fixes it by returning 0 if no error occurred.
      The fix also applies to 2.6.30.
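      A minimal sketch of the fix in the callback (the surrounding code is
      abbreviated): propagate real errors, but report success as 0 rather
      than the count of matched functions:

       ret = register_ftrace_function_probe(glob, ops, count);

       return ret < 0 ? ret : 0;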
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: stable@kernel.org
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  15. 25 Jun 2009 (1 commit)
    • ftrace: Remove duplicate newline · 00e54d08
      Committed by Li Zefan
      Before:
        # echo 'sys_open:traceon:' > set_ftrace_filter
        # echo 'sys_close:traceoff:5' > set_ftrace_filter
        # cat set_ftrace_filter
        #### all functions enabled ####
        sys_open:traceon:unlimited
      
        sys_close:traceoff:count=0
      
      After:
        # cat set_ftrace_filter
        #### all functions enabled ####
        sys_open:traceon:unlimited
        sys_close:traceoff:count=0
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <4A4313A7.7030105@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 20 Jun 2009 (1 commit)
    • tracing/urgent: fix unbalanced ftrace_start_up · c85a17e2
      Committed by Frederic Weisbecker
      Perfcounter reports the following stats for a system-wide
      profiling:
      
       #
       # (2364 samples)
       #
       # Overhead  Symbol
       # ........  ......
       #
          15.40%  [k] mwait_idle_with_hints
           8.29%  [k] read_hpet
           5.75%  [k] ftrace_caller
           3.60%  [k] ftrace_call
           [...]
      
      This snapshot has been taken while neither the function tracer nor
      the function graph tracer was running.
      With dynamic ftrace, such results indicate broken ftrace behaviour,
      because all calls to ftrace_caller or ftrace_graph_caller (the patched
      calls to mcount) are supposed to be patched into nops when neither of
      those tracers is running.
      
      The problem occurs after the first run of the function tracer. Once we
      launch it a second time, the callsites will never be nopped back,
      unless you set custom filters.
      For example it happens during the self tests at boot time.
      The function tracer selftest runs, and then the dynamic tracing is
      tested too. After that, the callsites are left un-nopped.
      
      This is because the reset callback of the function tracer tries to
      unregister two ftrace callbacks at once: the common function tracer
      and the function tracer with stack backtrace, regardless of which
      one is currently in use.
      This creates an imbalance in the ftrace_start_up value, which is expected
      to be zero when the last ftrace callback is unregistered. When it
      reaches zero, FTRACE_DISABLE_CALLS is set on the next ftrace
      command, triggering the patching into nops. But since the count becomes
      unbalanced, i.e. drops below zero, if the kernel functions
      are patched again (as on every further function tracer run), they
      won't ever be nopped back.
      
      Note that ftrace_call and ftrace_graph_call are still patched back
      to ftrace_stub in the off case, but not the callers of ftrace_call
      and ftrace_graph_caller. It means the tracing is properly deactivated,
      but we waste a useless call into every kernel function.
      
      This patch just unregisters the right ftrace_ops for the function
      tracer in its reset callback and ignores the one which is
      not registered, fixing the imbalance. The problem also exists
      in 2.6.30.
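      A minimal sketch of the fix (names follow kernel/trace/trace_functions.c
      of that era; treat as illustrative):

       static void tracing_stop_function_trace(void)
       {
               ftrace_function_enabled = 0;

               /* unregister only the callback that was actually registered */
               if (func_flags.val & TRACE_FUNC_OPT_STACK)
                       unregister_ftrace_function(&trace_stack_ops);
               else
                       unregister_ftrace_function(&trace_ops);
       }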
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: stable@kernel.org
  17. 18 Feb 2009 (4 commits)
    • tracing/core: use appropriate waiting on trace_pipe · 6eaaa5d5
      Committed by Frederic Weisbecker
      Impact: api and pipe waiting change
      
      Currently, the waiting in tracing_read_pipe() is done through a
      100 msec schedule_timeout() loop which periodically checks whether there
      are traces in the buffer.
      
      This can cause small latencies for programs reading the incoming
      events.
      
      This patch makes the reader wait on the trace_wait waitqueue, except
      for a few tracers, such as the sched and function tracers, which might
      already hold the runqueue lock while waking up the reader.
      
      This is performed through a new callback, wait_pipe(), on struct tracer.
      If a specific tracer does not implement it, the default waiting on the
      trace_wait queue is used.
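      A minimal sketch of the polling variant kept for those tracers (modeled
      on that era's kernel/trace/trace.c; treat as illustrative):

       void poll_wait_pipe(struct trace_iterator *iter)
       {
               set_current_state(TASK_INTERRUPTIBLE);
               /* sleep for 100 msecs, then let the caller recheck the buffer */
               schedule_timeout(HZ / 10);
       }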
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: show unlimited when traceon or traceoff has no counter · 35ebf1ca
      Committed by Steven Rostedt
      Impact: clean up
      
      The traceon and traceoff function probes are confusing to developers
      regarding what happens when a counter is not specified. This should help
      clear things up.
      
       # echo "*:traceoff" > set_ftrace_filter
       # cat /debug/tracing/set_ftrace_filter
      
        #### all functions enabled ####
        do_fork:traceoff:unlimited
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    • ftrace: rename _hook to _probe · b6887d79
      Committed by Steven Rostedt
      Impact: clean up
      
      Ingo Molnar did not like the _hook naming convention used by the
      select-function tracer. Luis Claudio R. Goncalves suggested using
      the "_probe" extension. This patch renames the "_hook" functions
      and variables to "_probe".
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    • ftrace: clean up coding style · 6a24a244
      Committed by Steven Rostedt
      Ingo Molnar pointed out some coding style issues with the recent ftrace
      updates. This patch cleans them up.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  18. 17 Feb 2009 (2 commits)
    • ftrace: add pretty print function for traceon and traceoff hooks · e110e3d1
      Committed by Steven Rostedt
      This patch adds a pretty print version of traceon and traceoff
      output for set_ftrace_filter.
      
        # echo 'sys_open:traceon:4' > set_ftrace_filter
        # cat set_ftrace_filter
      
       #### all functions enabled ####
       sys_open:traceon:count=4
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    • ftrace: add traceon traceoff commands to enable/disable the buffers · 23b4ff3a
      Committed by Steven Rostedt
      This patch adds the new function-selection commands traceon and
      traceoff. traceon sets the function to enable the ring buffers,
      while traceoff disables them. You can pass in the number of times
      you want the command to be executed when the function is hit. It
      will only execute if the state of the buffers is not already the
      requested one.
      
      Example:
      
       # echo do_fork:traceon:4
      
      Will enable the ring buffers if they are disabled every time it
      hits do_fork, up to 4 times.
      
       # echo sys_close:traceoff
      
      This will disable the ring buffers every time (unlimited) when
      sys_close is called.
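      A minimal sketch of what the traceon probe body boils down to (modeled
      on the commit's description; not a verbatim copy):

       static void
       ftrace_traceon(unsigned long ip, unsigned long parent_ip, void **data)
       {
               if (tracing_is_on())
                       return;         /* only act on an actual state change */
               tracing_on();
       }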
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  19. 06 Feb 2009 (1 commit)
  20. 05 Feb 2009 (1 commit)
  21. 16 Jan 2009 (4 commits)
    • ftrace: remove static from function tracer functions · a225cdd2
      Committed by Steven Rostedt
      Impact: clean up
      
      After reorganizing the functions in trace.c and trace_functions.c,
      they no longer need to be in the global context. This patch makes the
      functions and one variable static.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: combine stack trace in function call · 3eb36aa0
      Committed by Steven Rostedt
      Impact: less likely to interleave function and stack traces
      
      This patch replaces the separate stack trace on a function with a
      single record of the function and its stack trace together. It
      switches between function-only recording and combined function and
      stack recording.
      
      Also some whitespace fixups.
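      A minimal sketch of the combined record (names, signatures, and the
      skip depth of 5 follow that era's kernel/trace/trace_functions.c, but
      this is an abbreviated reconstruction):

       static void
       function_stack_trace_call(unsigned long ip, unsigned long parent_ip)
       {
               struct trace_array *tr = func_trace;    /* assumed globals */
               struct trace_array_cpu *data = tr->data[raw_smp_processor_id()];
               unsigned long flags;
               int pc = preempt_count();

               local_irq_save(flags);
               /* the function entry and its stack are written back to back,
                * so they can no longer interleave with other events */
               trace_function(tr, data, ip, parent_ip, flags, pc);
               __trace_stack(tr, data, flags, 5, pc);
               local_irq_restore(flags);
       }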
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: move function tracer functions out of trace.c · bb3c3c95
      Committed by Steven Rostedt
      Impact: clean up of trace.c
      
      The function tracer functions were put in trace.c because they needed
      to share static variables that lived in trace.c. Since then, those
      variables have become global for various reasons. This patch moves the
      function tracer functions into trace_functions.c where they belong.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: add stack trace to function tracer · 53614991
      Committed by Steven Rostedt
      Impact: new feature to stack trace any function
      
      Chris Mason asked about being able to pick and choose a function
      and get a stack trace from it. This feature enables his request.
      
       # echo io_schedule > /debug/tracing/set_ftrace_filter
       # echo function > /debug/tracing/current_tracer
       # echo func_stack_trace > /debug/tracing/trace_options
      
      Produces the following in /debug/tracing/trace:
      
             kjournald-702   [001]   135.673060: io_schedule <-sync_buffer
             kjournald-702   [002]   135.673671:
       <= sync_buffer
       <= __wait_on_bit
       <= out_of_line_wait_on_bit
       <= __wait_on_buffer
       <= sync_dirty_buffer
       <= journal_commit_transaction
       <= kjournald
      
      Note, be careful about turning this on without filtering the functions.
      You may find that you have a 10 second lag between typing and seeing
      what you typed. This is why the stack trace for the function tracer
      does not use the same stack_trace flag as the other tracers use.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 19 Dec 2008 (1 commit)
  23. 16 Nov 2008 (1 commit)
  24. 08 Nov 2008 (1 commit)