1. 16 Nov 2008, 1 commit
  2. 08 Nov 2008, 2 commits
  3. 06 Nov 2008, 1 commit
    • ftrace: restructure tracing start/stop infrastructure · 9036990d
      Authored by Steven Rostedt
      Impact: change where tracing is started up and stopped
      
      Currently, when a new tracer is selected via echo'ing a tracer name into
      the current_tracer file, the startup is only done if tracing_enabled is
      set to one. If tracing_enabled is changed to zero (by echo'ing 0 into
      the tracing_enabled file) a full shutdown is performed.
      
      The full startup and shutdown of a tracer can be expensive, and the
      user can lose traces when echo'ing 0 into the tracing_enabled file
      because the process takes too long. There are also cases where the
      user would like to start and stop the tracer several times, and doing
      the full startup and shutdown each time might be too expensive.
      
      This patch performs the full startup and shutdown when a tracer is
      selected. It also adds a way to do a quick start or stop of a tracer.
      The quick version is just a flag that prevents the tracing from
      taking place, but the overhead of the code is still there.
      
      For example, the startup of a tracer may enable tracepoints, or enable
      the function tracer.  The stop and start will just set a flag to
      have the tracer ignore the calls when the tracepoint or function trace
      is called.  The overhead of the tracer may still be present when
      the tracer is stopped, but no tracing will occur. Setting the tracer
      to the 'nop' tracer (or any other tracer) will perform the shutdown
      of the tracer which will disable the tracepoint or disable the
      function tracer.
      
      The tracing_enabled file will simply start or stop tracing.
      
      This change is all internal. The end result for the user should be the
      same as before: if tracing_enabled is not set, no trace will happen,
      and if tracing_enabled is set, the trace will happen. The
      tracing_enabled setting persists across tracers: enabling
      tracing_enabled and switching to another tracer keeps it enabled, and
      the same is true for disabling it.
      
      This patch thus provides the user with a fast start/stop method for
      enabling or disabling tracing.
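      The split between the expensive select-time path and the cheap flag
      can be sketched in userspace C. This is only an illustrative model of
      the scheme described above; the function names and the bool flags are
      stand-ins, not the kernel implementation:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Two-level scheme: selecting a tracer does the expensive
       * init/shutdown once, while start/stop just flips a cheap flag. */
      static bool tracer_installed;  /* set by the expensive path        */
      static bool tracing_on;        /* flipped by the cheap path        */
      static int  events_recorded;

      static void tracer_init(void)     { tracer_installed = true;  } /* e.g. enable tracepoints  */
      static void tracer_shutdown(void) { tracer_installed = false; } /* e.g. disable tracepoints */

      static void tracing_start(void) { tracing_on = true;  } /* quick start */
      static void tracing_stop(void)  { tracing_on = false; } /* quick stop  */

      /* The probe is still called while stopped (its overhead remains),
       * but the flag prevents any recording. */
      static void trace_event(void)
      {
          if (tracer_installed && tracing_on)
              events_recorded++;
      }
      ```

      The point of the model: tracing_stop() leaves the tracepoint hooked
      up (overhead present, nothing recorded), while tracer_shutdown()
      removes it entirely.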
      
      Note: struct tracer had two methods, start and stop, that were never
       used. They were intended as hooks into the reading of the trace
       output, but ended up not being necessary. These two methods are now
       used to implement the start and stop of each tracer, in case the
       tracer needs to do more than just stop writing into the buffer. For
       example, the irqsoff tracer must stop recording max latencies when
       tracing is stopped.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9036990d
  4. 04 Nov 2008, 1 commit
  5. 21 Oct 2008, 1 commit
  6. 14 Oct 2008, 3 commits
    • ftrace: preempt disable over interrupt disable · 38697053
      Authored by Steven Rostedt
      With the new ring buffer infrastructure in ftrace, I'm trying to make
      ftrace a little more lightweight.
      
      This patch converts a lot of the local_irq_save/restore into
      preempt_disable/enable.  The original preempt count in a lot of cases
      has to be sent in as a parameter so that it can be recorded correctly.
      Some places were recording it incorrectly before anyway.
      
      This is also laying the ground work to make ftrace a little bit
      more reentrant, and remove all locking. The function tracers must
      still protect from reentrancy.
      
      Note: All function tracers must be careful when using preempt_disable.
        They must do the following:
      
        resched = need_resched();
        preempt_disable_notrace();
        [...]
        if (resched)
                preempt_enable_no_resched_notrace();
        else
                preempt_enable_notrace();
      
      The reason is that if this function traces schedule() itself, the
      preempt_enable_notrace() would cause a schedule, which would lead
      us into a recursive failure.
      
      If we needed to reschedule before calling preempt_disable, we
      should have already scheduled. Since we did not, we most likely
      should not reschedule now, and are probably inside a schedule
      function ourselves.
      
      If resched was not set, we still need to catch the need-resched
      flag being set while preemption was off, and the final
      preempt_enable_notrace() will catch that for us.
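      As a rough userspace model of why the pattern matters, with
      stubbed-out primitives (every name below is an illustrative stand-in,
      not the real kernel implementation):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Userspace stand-ins for the kernel primitives (illustrative only). */
      static bool need_resched_flag;
      static int  preempt_count;
      static int  resched_calls;

      static bool need_resched(void)                      { return need_resched_flag; }
      static void preempt_disable_notrace(void)           { preempt_count++; }
      static void preempt_enable_no_resched_notrace(void) { preempt_count--; }
      static void preempt_enable_notrace(void)
      {
          preempt_count--;
          if (preempt_count == 0 && need_resched_flag)
              resched_calls++;  /* the real primitive would call schedule() here */
      }

      /* The pattern from the changelog: skip the reschedule on enable if
       * need_resched was already set before we disabled preemption, since
       * we may be tracing schedule() itself. */
      static void traced_section(void)
      {
          bool resched = need_resched();
          preempt_disable_notrace();
          /* ... record the trace entry ... */
          if (resched)
              preempt_enable_no_resched_notrace();
          else
              preempt_enable_notrace();
      }
      ```

      With need_resched already pending, the no-resched variant is taken
      and no recursive schedule happens; otherwise the plain enable path
      handles any need-resched that arose while preemption was off.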
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38697053
    • ftrace: make work with new ring buffer · 3928a8a2
      Authored by Steven Rostedt
      This patch ports ftrace over to the new ring buffer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3928a8a2
    • ftrace: port to tracepoints · b07c3f19
      Authored by Mathieu Desnoyers
      Port the trace_mark() calls used by ftrace over to tracepoints. (cleanup)
      
      Changelog:
      - Change error messages: marker -> tracepoint
      
      [ mingo@elte.hu: conflict resolutions ]
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b07c3f19
  7. 18 Jul 2008, 1 commit
    • ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 check_flags (ftrace) · e59494f4
      Authored by Steven Rostedt
      On Wed, 16 Jul 2008, Vegard Nossum wrote:
      
      > When booting 4d3702b6, I got this huge thing:
      >
      > Testing tracer wakeup: <4>------------[ cut here ]------------
      > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
      > Modules linked in:
      > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
      >  [<c015c349>] warn_on_slowpath+0x59/0xb0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c012d800>] ? native_read_tsc+0x0/0x20
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c0182fbc>] ? __lock_acquire+0x2cc/0x1120
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c01276af>] ? mcount_call+0x5/0xa
      >  [<c017ff53>] check_flags+0x123/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01a8d45>] ? ftrace_record_ip+0xf5/0x220
      >  [<c02d5413>] ? debug_locks_off+0x3/0x50
      >  [<c01a8d45>] ftrace_record_ip+0xf5/0x220
      >  [<c01276af>] mcount_call+0x5/0xa
      >  [<c02d5418>] ? debug_locks_off+0x8/0x50
      >  [<c017ff27>] check_flags+0xf7/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01affcd>] ? wakeup_tracer_call+0x6d/0xf0
      >  [<c01625e2>] ? _local_bh_enable+0x62/0xb0
      >  [<c0158ddd>] ? sub_preempt_count+0xd/0xf0
      >  [<c01affcd>] wakeup_tracer_call+0x6d/0xf0
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c01afff1>] ? wakeup_tracer_call+0x91/0xf0
      >  [<c01276c6>] ftrace_call+0x5/0x8
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01625e2>] _local_bh_enable+0x62/0xb0
      >  [<c0162724>] __do_softirq+0xf4/0x110
      >  [<c01627ed>] do_softirq+0xad/0xb0
      >  [<c0162a15>] irq_exit+0xa5/0xb0
      >  [<c013a506>] smp_apic_timer_interrupt+0x66/0xa0
      >  [<c02d3fac>] ? trace_hardirqs_off_thunk+0xc/0x10
      >  [<c0127449>] apic_timer_interrupt+0x2d/0x34
      >  [<c018007b>] ? find_usage_backwards+0xb/0xf0
      >  [<c0613a09>] ? _spin_unlock_irqrestore+0x69/0x80
      >  [<c014ef32>] tg_shares_up+0x132/0x1d0
      >  [<c014d2a2>] walk_tg_tree+0x62/0xa0
      >  [<c014ee00>] ? tg_shares_up+0x0/0x1d0
      >  [<c014a860>] ? tg_nop+0x0/0x10
      >  [<c015499d>] update_shares+0x5d/0x80
      >  [<c0154a2f>] try_to_wake_up+0x6f/0x280
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c0154c94>] wake_up_process+0x14/0x20
      >  [<c01725f6>] kthread_create+0x66/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c0195320>] ? __stop_machine_run+0x30/0xb0
      >  [<c0195340>] __stop_machine_run+0x50/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c061242d>] ? mutex_unlock+0xd/0x10
      >  [<c01953cc>] stop_machine_run+0x2c/0x60
      >  [<c01a94d3>] unregister_ftrace_function+0x103/0x180
      >  [<c01b0517>] stop_wakeup_tracer+0x17/0x60
      >  [<c01b056f>] wakeup_tracer_ctrl_update+0xf/0x30
      >  [<c01ab8d5>] trace_selftest_startup_wakeup+0xb5/0x130
      >  [<c01ab950>] ? trace_wakeup_test_thread+0x0/0x70
      >  [<c01aadf5>] register_tracer+0x135/0x1b0
      >  [<c0877d02>] init_wakeup_tracer+0xd/0xf
      >  [<c085d437>] kernel_init+0x1a9/0x2ce
      >  [<c061397b>] ? _spin_unlock_irq+0x3b/0x60
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c0877cf5>] ? init_wakeup_tracer+0x0/0xf
      >  [<c0182646>] ? trace_hardirqs_on_caller+0x126/0x180
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c01269c8>] ? restore_nocheck_notrace+0x0/0xe
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c01275fb>] kernel_thread_helper+0x7/0x10
      >  =======================
      > ---[ end trace a7919e7f17c0a725 ]---
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > PASSED
      >
      > Incidentally, the kernel also hung while I was typing in this report.
      
      Things get weird between lockdep and ftrace because ftrace can be
      called from within lockdep's internal code (via the mcount pointer)
      and lockdep can be called from within ftrace (via spin_locks).
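      A common way to break this kind of mutual recursion is a per-context
      guard flag that turns re-entry into a no-op. A minimal sketch of the
      idea (illustrative only, not the literal fix in this patch):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Recursion guard: if the trace callback is re-entered (e.g.
       * ftrace -> lockdep -> spinlock -> ftrace), the inner call bails
       * out immediately instead of recursing forever. */
      static bool in_tracer;
      static int  recorded;
      static int  lock_calls;

      static void trace_callback(void);

      /* Simulated lock_acquire: lockdep-side code that calls back into
       * the tracer, as the spinlock path does in the report above. */
      static void lock_acquire_stub(void)
      {
          lock_calls++;
          if (lock_calls < 5)
              trace_callback();   /* re-entry attempt */
      }

      static void trace_callback(void)
      {
          if (in_tracer)
              return;             /* recursive entry: do nothing */
          in_tracer = true;
          recorded++;
          lock_acquire_stub();    /* may call back into the tracer */
          in_tracer = false;
      }
      ```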
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e59494f4
  8. 11 Jul 2008, 1 commit
  9. 27 May 2008, 1 commit
  10. 24 May 2008, 10 commits
    • Port ftrace to markers · 5b82a1b0
      Authored by Mathieu Desnoyers
      Port ftrace to the marker infrastructure.
      
      There is no need to chain from the sched tracer to the wakeup tracer,
      because markers support multiple connected probes.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5b82a1b0
    • ftrace: printk and trace irqsoff and wakeups · 4fe8c304
      Authored by Steven Rostedt
      A printk called from the wakeup and irqsoff critical-timing code can
      cause a deadlock, since printk might itself do a wakeup. If the call
      to printk happens with the runqueue lock held, it can deadlock.
      
      This patch protects the printk in the irqsoff tracer with a test of
      whether the runqueue of the current CPU is locked. If it is locked,
      the printk is skipped.
      
      The wakeup tracer always holds the runqueue lock, so its printk is
      simply removed.
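      The guard can be modeled like this. The names (runqueue_locked,
      safe_trace_printk) are illustrative stand-ins for the actual
      per-CPU runqueue-lock test, not the kernel API:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Skip the diagnostic printk when the current CPU's runqueue lock
       * is held: printk may wake up a task and try to take that same
       * lock, which would deadlock. */
      static bool runqueue_locked;
      static int  printks_done;
      static int  printks_skipped;

      static void safe_trace_printk(const char *msg)
      {
          if (runqueue_locked) {
              printks_skipped++;   /* would deadlock: drop the message */
              return;
          }
          printks_done++;          /* a real version would printk(msg) here */
          (void)msg;
      }
      ```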
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4fe8c304
    • ftrace: sched tracer fix · 8ac0fca4
      Authored by Ingo Molnar
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8ac0fca4
    • ftrace: add wakeup events to sched tracer · 57422797
      Authored by Ingo Molnar
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      57422797
    • ftrace: remove notrace · e309b41d
      Authored by Ingo Molnar
      Now that we have a kbuild method for notrace, there is no need to
      pollute the C code with the annotations.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e309b41d
    • ftrace: add trace_function api for other tracers to use · 6fb44b71
      Authored by Steven Rostedt
      A new check was added to the ftrace function that won't trace if the
      CPU trace buffer is disabled. Unfortunately, other tracers used
      ftrace() to write to the buffer after they disabled it, and the new
      check turns those calls into a nop.
      
      This patch changes the __ftrace that is called without the check into
      a new API for the other tracers to use, called "trace_function". The
      other tracers use this interface instead when the per-CPU trace
      buffer is already disabled.
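      The resulting split can be sketched as follows (illustrative
      signatures; the real kernel functions take more arguments):

      ```c
      #include <assert.h>

      /* ftrace() performs the per-CPU disabled check; trace_function()
       * writes unconditionally, so a tracer that has already disabled
       * the buffer itself can still log through it. */
      static int cpu_buffer_disabled;  /* nonzero while a tracer owns the buffer */
      static int entries;

      static void trace_function(unsigned long ip)  /* unchecked: direct write */
      {
          (void)ip;
          entries++;
      }

      static void ftrace(unsigned long ip)          /* checked entry point */
      {
          if (cpu_buffer_disabled)
              return;                               /* nop when disabled */
          trace_function(ip);
      }
      ```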
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6fb44b71
    • ftrace: timestamp syncing, prepare · 750ed1a4
      Authored by Ingo Molnar
      Rename now() to ftrace_now() and uninline it.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      750ed1a4
    • ftrace: cleanups · c7aafc54
      Authored by Ingo Molnar
      Factor out common code and clean it up.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c7aafc54
    • ftrace: add self-tests · 60a11774
      Authored by Steven Rostedt
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      60a11774
    • ftrace: tracer for scheduler wakeup latency · 352ad25a
      Authored by Steven Rostedt
      This patch adds the tracer that tracks the wakeup latency of the
      highest priority waking task.
      
        "wakeup" is added to /debugfs/tracing/available_tracers
      
      Also added to /debugfs/tracing
      
        tracing_max_latency
           holds the current max latency for the wakeup
      
        wakeup_thresh
           if set to a value other than zero, a log will be recorded
           for every wakeup that takes longer than that number
           (in usecs, as for all counters)
           (deletes the previous trace)
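      A typical session with these knobs might look like the following
      usage sketch (assuming debugfs is mounted at /debugfs, as in the
      paths above; values are examples):

      ```shell
      # Select the wakeup tracer and configure its knobs
      echo wakeup > /debugfs/tracing/current_tracer
      echo 0      > /debugfs/tracing/tracing_max_latency   # reset the recorded max
      echo 100    > /debugfs/tracing/wakeup_thresh         # log wakeups slower than 100 usecs
      cat /debugfs/tracing/trace                           # read the latency trace
      ```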
      
      Examples:
      
        (with ftrace_enabled = 0)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      --------------------------------------------------------------------
       latency: 26 us, #2/2, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/0-3 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
         quilt-8551  0d..3    0us+: wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
         quilt-8551  0d..4   26us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
        (with ftrace_enabled = 1)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      --------------------------------------------------------------------
       latency: 36 us, #45/45, CPU#0 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/1-5 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
          bash-10653 1d..3    0us : wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
          bash-10653 1d..3    1us : try_to_wake_up+0x271/0x2e7 <ffffffff80233dcf> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..2    2us : try_to_wake_up+0x296/0x2e7 <ffffffff80233df4> (update_rq_clock+0x9/0x20 <ffffffff802303f3>)
          bash-10653 1d..2    2us : update_rq_clock+0x1e/0x20 <ffffffff80230408> (__update_rq_clock+0xc/0x90 <ffffffff80230366>)
          bash-10653 1d..2    3us : __update_rq_clock+0x1b/0x90 <ffffffff80230375> (sched_clock+0x9/0x29 <ffffffff80214529>)
          bash-10653 1d..2    4us : try_to_wake_up+0x2a6/0x2e7 <ffffffff80233e04> (activate_task+0xc/0x3f <ffffffff8022ffca>)
          bash-10653 1d..2    4us : activate_task+0x2d/0x3f <ffffffff8022ffeb> (enqueue_task+0xe/0x66 <ffffffff8022ff66>)
          bash-10653 1d..2    5us : enqueue_task+0x5b/0x66 <ffffffff8022ffb3> (enqueue_task_rt+0x9/0x3c <ffffffff80233351>)
          bash-10653 1d..2    6us : try_to_wake_up+0x2ba/0x2e7 <ffffffff80233e18> (check_preempt_wakeup+0x12/0x99 <ffffffff80234f84>)
      [...]
          bash-10653 1d..5   33us : tracing_record_cmdline+0xcf/0xd4 <ffffffff80338aad> (_spin_unlock+0x9/0x33 <ffffffff8048d3ec>)
          bash-10653 1d..5   34us : _spin_unlock+0x19/0x33 <ffffffff8048d3fc> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..4   35us : wakeup_sched_switch+0x65/0x2ff <ffffffff80339f66> (_spin_lock_irqsave+0xc/0xa9 <ffffffff8048d08b>)
          bash-10653 1d..4   35us : _spin_lock_irqsave+0x19/0xa9 <ffffffff8048d098> (add_preempt_count+0xe/0x77 <ffffffff8023311a>)
          bash-10653 1d..4   36us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
      The [...] was added here to avoid wasting your mailbox space.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      352ad25a