1. 04 June 2010 (1 commit)
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Committed by Steven Rostedt
      The ftrace_preempt_disable/enable functions were to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions which makes it easily susceptible to recursion.
      One area was preempt_enable(). This would call the scheduler, and the
      scheduler would call the function tracer, and loop.
      (Or so it was thought.)
      
      The ftrace_preempt_disable/enable was made to protect against recursion
      inside the scheduler by storing the NEED_RESCHED flag. If it was
      set before ftrace_preempt_disable(), it would not call schedule()
      on ftrace_preempt_enable(), thinking that if it was set before, then
      it would have already scheduled unless it was already in the scheduler.
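      
      For reference, the removed helpers looked roughly like this (a minimal
      sketch of the pattern; the open-coded version appears verbatim in
      commit 38697053 below):
      
      	/* save NEED_RESCHED before disabling preemption */
      	static inline int ftrace_preempt_disable(void)
      	{
      		int resched = need_resched();
      
      		preempt_disable_notrace();
      		return resched;
      	}
      
      	/* on enable, skip the reschedule if the flag was already set */
      	static inline void ftrace_preempt_enable(int resched)
      	{
      		if (resched)
      			preempt_enable_no_resched_notrace();
      		else
      			preempt_enable_notrace();
      	}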
      
      This worked fine except in the case of SMP, where another task would set
      the NEED_RESCHED flag for a task on another CPU, and then kick off an
      IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
      ftrace_preempt_disable() but the IPI to arrive in the preempt
      disabled section. The ftrace_preempt_enable() would not call the scheduler
      because the flag was already set before entering the section.
      
      This bug would cause a missed preemption check, resulting in lower
      reported latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule() is completely annotated with notrace, the
      recursion is no longer an issue.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 14 May 2010 (1 commit)
    • tracing: Let tracepoints have data passed to tracepoint callbacks · 38516ab5
      Committed by Steven Rostedt
      This patch adds data to be passed to tracepoint callbacks.
      
      The functions created by DECLARE_TRACE() now take a mandatory data
      parameter. For example:
      
      DECLARE_TRACE(mytracepoint, int value, value)
      
      Will create the register function:
      
      int register_trace_mytracepoint((void(*)(void *data, int value))probe,
                                      void *data);
      
      All callbacks (probes) must now take a (void *data) parameter as
      their first argument. So a callback for the above tracepoint will look like:
      
      void myprobe(void *data, int value)
      {
      }
      
      The callback may choose to ignore the data parameter.
      
      This change allows callbacks to register a private data pointer along
      with the function probe.
      
      	void mycallback(void *data, int value);
      
      	register_trace_mytracepoint(mycallback, mydata);
      
      Then mycallback() will receive "mydata" as the first parameter,
      before the args.
      
      A more detailed example:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        /* In the C file */
      
        DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        [...]
      
             trace_mytracepoint(status);
      
        /* In a file registering this tracepoint */
      
        int my_callback(void *data, int status)
        {
      	struct my_struct *my_data = data;
      	[...]
        }
      
        [...]
      	my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
      	init_my_data(my_data);
      	register_trace_mytracepoint(my_callback, my_data);
      
      The same callback can also be registered to the same tracepoint as long
      as the data registered is different. Note, the data must also be used
      to unregister the callback:
      
      	unregister_trace_mytracepoint(my_callback, my_data);
      
      Because of the data parameter, tracepoints declared this way can no
      longer be declared with no arguments at all. That is:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
      
      will cause an error.
      
      If no arguments are needed, a new macro can be used instead:
      
        DECLARE_TRACE_NOARGS(mytracepoint);
      
      Since there are no arguments, the proto and args fields are left out.
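      
      As a sketch (assuming the register function keeps the same naming as
      the args variant; my_noargs_probe is a hypothetical name), a probe
      for such a tracepoint takes only the private data pointer:
      
      	void my_noargs_probe(void *data)
      	{
      		/* only the registered private data is passed in */
      	}
      
      	register_trace_mytracepoint(my_noargs_probe, my_data);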
      
      This is part of a series to make the tracepoint footprint smaller:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint
      
      Again, this patch also increases the size of the kernel, but
      lays the groundwork for decreasing it.
      
       v5: Fixed net/core/drop_monitor.c to handle these updates.
      
       v4: Moved DECLARE_TRACE() and DECLARE_TRACE_NOARGS() out of the
           #ifdef CONFIG_TRACEPOINTS, since the two are the same in both
           cases. The __DECLARE_TRACE() is what changes.
           Thanks to Frederic Weisbecker for pointing this out.
      
       v3: Made all register_* functions require data to be passed and
           all callbacks to take a void * parameter as their first argument.
           This makes the calling functions comply with C standards.
      
           Also added more comments to the modifications of DECLARE_TRACE().
      
       v2: Made the DECLARE_TRACE() have the ability to pass arguments
           and added a new DECLARE_TRACE_NOARGS() for tracepoints that
           do not need any arguments.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 07 May 2010 (1 commit)
  4. 15 December 2009 (3 commits)
  5. 13 September 2009 (2 commits)
  6. 10 September 2009 (1 commit)
    • tracing: do not grab lock in wakeup latency function tracing · 478142c3
      Committed by Steven Rostedt
      The wakeup tracer, when enabled, has its own function tracer.
      It only traces functions on the CPU where the task it is following
      is running. If a task is woken on one CPU but then migrates to another
      CPU before it wakes up, the latency tracer will then start tracing
      functions on the other CPU.
      
      To find which CPU the task is on, the wakeup function tracer performs
      a task_cpu(wakeup_task). But to make sure the task does not disappear,
      it grabs the wakeup_lock, which is also taken when the task wakes up.
      By taking this lock, the function tracer does not need to worry about
      the task being freed as it checks its CPU.
      
      Jan Blunck found a problem with this approach on his 32 CPU box. When
      a task is being traced by the wakeup tracer, all functions take this
      lock. That means that on all 32 CPUs, each function call is taking
      this one lock to see if the task is on that CPU. This lock has just
      serialized all functions on all 32 CPUs. Needless to say, this caused
      major issues on that box. It would even lock up.
      
      This patch changes the wakeup latency tracer to insert a probe on the
      migrate-task tracepoint. When a task changes the CPU it will run on, the
      probe takes note. Now the wakeup function tracer no longer needs
      to take the lock. It only compares the current CPU with a variable that
      holds the CPU the task is currently on. We don't worry about races, since
      it is OK to add or miss a function trace.
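      
      A sketch of the idea (names are illustrative, not necessarily the
      exact ones in the patch):
      
      	static int wakeup_current_cpu;
      
      	/* probe attached to the migrate-task tracepoint */
      	static void probe_wakeup_migrate_task(struct task_struct *p, int cpu)
      	{
      		if (p != wakeup_task)
      			return;
      		/* no lock: a stale value only adds or misses one trace */
      		wakeup_current_cpu = cpu;
      	}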
      Reported-by: Jan Blunck <jblunck@suse.de>
      Tested-by: Jan Blunck <jblunck@suse.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 05 September 2009 (1 commit)
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Committed by Steven Rostedt
      Currently the latency tracers reset the ring buffer. Unfortunately,
      if a commit is in progress (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer will detect
      the corruption and then permanently disable itself.
      
      The bug does not crash the system, but it does prevent further tracing
      after the bug is hit.
      
      Instead of resetting the trace buffers, the timestamp of the start of
      the trace is used. The buffers will still contain the previous
      data, but the output will not count any data that is before the
      timestamp of the trace.
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
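      
      Conceptually, the change looks like this (a sketch, not the exact
      code):
      
      	/* on starting the tracer: record a timestamp, do not reset */
      	tr->time_start = ftrace_now(cpu);
      
      	/* when printing the static output: skip older events */
      	if (ts < tr->time_start)
      		continue;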
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 24 April 2009 (1 commit)
    • tracing/wakeup: move access to wakeup_cpu into spinlock · 9be24414
      Committed by Steven Rostedt
      The code had the following outside the lock:
      
              if (next != wakeup_task)
                      return;
      
              pc = preempt_count();
      
              /* The task we are waiting for is waking up */
              data = wakeup_trace->data[wakeup_cpu];
      
      On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
      is not under a lock. If wakeup_task is set on another CPU as that
      task is waking up, we can see the wakeup_task before wakeup_cpu is
      set. If we read wakeup_cpu while it is still -1, then we will have
      a bad data pointer.
      
      This patch moves the reading of wakeup_cpu within the protection of
      the spinlock used to protect the writing of wakeup_cpu and wakeup_task.
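      
      After the patch, the wakeup_cpu read happens only with the lock held,
      roughly (sketch):
      
      	__raw_spin_lock(&wakeup_lock);
      
      	/* re-check the task under the lock */
      	if (unlikely(!tracer_enabled || next != wakeup_task))
      		goto out_unlock;
      
      	/* The task we are waiting for is waking up */
      	data = wakeup_trace->data[wakeup_cpu];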
      
      [ Impact: remove possible race causing invalid pointer dereference ]
      Reported-by: Maneesh Soni <maneesh@in.ibm.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  9. 15 April 2009 (1 commit)
    • tracing/events: move trace point headers into include/trace/events · ad8d75ff
      Committed by Steven Rostedt
      Impact: clean up
      
      Create a subdirectory in include/trace called events to keep the
      tracepoint headers in their own separate directory. Only headers that
      declare tracepoints should be placed in this directory.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. 07 April 2009 (1 commit)
    • tracing: remove CALLER_ADDR2 from wakeup tracer · 301fd748
      Committed by Steven Rostedt
      Maneesh Soni was getting a crash when running the wakeup tracer.
      We debugged it down to the recording of the function with the
      CALLER_ADDR2 macro.  This is used to get the location of the caller
      to schedule.
      
      But the problem comes when schedule is called from assembly. In the case
      that Maneesh had, retint_careful would call schedule. But retint_careful
      does not set up a proper frame pointer. CALLER_ADDR2 is defined as
      __builtin_return_address(2). This produces the following assembly in
      the wakeup tracer code.
      
         mov    0x0(%rbp),%rcx  <--- get the frame pointer of the caller
         mov    %r14d,%r8d
         mov    0xf2de8e(%rip),%rdi
      
         mov    0x8(%rcx),%rsi  <-- this is __builtin_return_address(1)
         mov    0x28(%rdi,%rax,8),%rbx
      
         mov    (%rcx),%rax  <-- get the frame pointer of the caller's caller
         mov    %r12,%rcx
         mov    0x8(%rax),%rdx <-- this is __builtin_return_address(2)
      
      At the reading of 0x8(%rax) Maneesh's machine would take a fault.
      The reason is that retint_careful did not set up the return address
      and the content of %rax here was zero.
      
      To verify this, I sent Maneesh a patch to create a frame pointer
      in retint_careful. He ran the test again but this time he would take
      the same type of fault from sysret_careful. The retint_careful was no
      longer an issue, but there are other callers that still have issues.
      
      Instead of adding frame pointers for all callers to schedule (in possibly
      all archs), it is much safer to simply not use CALLER_ADDR2. This
      loses out on knowing what called schedule, but the function tracer
      will help there if needed.
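      
      For context, these macros are thin wrappers around a GCC builtin
      (shown slightly simplified here):
      
      	#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
      	#define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
      	#define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))
      
      __builtin_return_address(n) for n > 0 walks saved frame pointers,
      which is why a caller without a proper frame pointer makes it fault.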
      Reported-by: Maneesh Soni <maneesh@in.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 05 March 2009 (1 commit)
  12. 18 February 2009 (2 commits)
  13. 05 February 2009 (1 commit)
  14. 29 January 2009 (1 commit)
  15. 22 January 2009 (3 commits)
    • wakeup-tracer: show scheduling data in output · f8ec1062
      Committed by Steven Rostedt
      Impact: better data for wakeup tracer
      
      This patch adds the wakeup and schedule calls that are used by
      the scheduler tracer to make the wakeup tracer more readable.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • trace: separate out rt tasks from wakeup tracer · 3244351c
      Committed by Steven Rostedt
      Impact: add option to trace all tasks or just RT tasks
      
      The current wakeup tracer only traces RT task wakeups. This is
      fine for those interested in the wake-up timings of RT tasks, but
      it is useless for those interested in the causes of long wakeups
      for non-RT tasks.
      
      This patch creates a "wakeup_rt" tracer that traces only RT tasks
      (as the current "wakeup" does), and makes "wakeup" trace all tasks,
      as an average developer would expect.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • trace: do not disable wake up tracer on output of trace · 5bc4564b
      Committed by Steven Rostedt
      Impact: fix to erased trace output
      
      To try not to have the outputting of a trace interfere with the wakeup
      tracer, it would disable tracing while the output was printing. But
      if a trace had started when tracing was disabled, it could show a partial
      trace. To try to solve this, on closing of the tracer, it would
      clear the trace buffer.
      
      The latency tracers (wakeup and irqsoff) have two buffers. One for
      recording and one for holding the max trace that is printed. The
      clearing of the trace above should only affect the recording buffer.
      But for some reason it would move the erased trace to the print
      buffer, probably due to a race between the closing of the trace and
      the saving of the max trace.
      
      The above is all pretty useless, and if the user does not want the
      printing of the trace to be traced itself, then the user can manually
      disable tracing. This patch removes all the code that tries to keep
      the output of the tracer from modifying the trace.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 21 January 2009 (1 commit)
    • trace: set max latency variable to zero on default · 1092307d
      Committed by Steven Rostedt
      Impact: trace max latencies on start of latency tracing
      
      This patch sets the max latency to zero whenever one of the
      irq variant tracers or the wakeup tracer is set to current tracer.
      
      Most developers expect to see output when starting up a latency
      tracer. But since the max_latency is already set to max, and
      it takes a latency greater than max_latency to be recorded, there
      is no trace. This is not the expected behavior, and it has even
      confused me.
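      
      The change itself amounts to one line in each affected tracer's init
      path (sketch; tracing_max_latency is the variable holding the
      recorded maximum):
      
      	/* start from zero instead of "max", so new maxima are recorded */
      	tracing_max_latency = 0;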
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 16 January 2009 (1 commit)
    • trace: set max latency variable to zero on default · 745b1626
      Committed by Steven Rostedt
      Impact: trace max latencies on start of latency tracing
      
      This patch sets the max latency to zero whenever one of the
      irq variant tracers or the wakeup tracer is set to current tracer.
      
      Most developers expect to see output when starting up a latency
      tracer. But since the max_latency is already set to max, and
      it takes a latency greater than max_latency to be recorded, there
      is no trace. This is not the expected behavior, and it has even
      confused me.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 25 December 2008 (1 commit)
  19. 16 November 2008 (1 commit)
  20. 08 November 2008 (2 commits)
  21. 06 November 2008 (1 commit)
    • ftrace: restructure tracing start/stop infrastructure · 9036990d
      Committed by Steven Rostedt
      Impact: change where tracing is started up and stopped
      
      Currently, when a new tracer is selected via echo'ing a tracer name into
      the current_tracer file, the startup is only done if tracing_enabled is
      set to one. If tracing_enabled is changed to zero (by echo'ing 0 into
      the tracing_enabled file) a full shutdown is performed.
      
      The full startup and shutdown of a tracer can be expensive, and the
      user can lose traces when echo'ing 0 into the tracing_enabled file
      because the process takes too long. There can also be places where
      the user would like to start and stop the tracer several times, and
      doing the full startup and shutdown of a tracer might be too expensive.
      
      This patch performs the full startup and shutdown when a tracer is
      selected. It also adds a way to do a quick start or stop of a tracer.
      The quick version is just a flag that prevents the tracing from
      taking place, but the overhead of the code is still there.
      
      For example, the startup of a tracer may enable tracepoints, or enable
      the function tracer.  The stop and start will just set a flag to
      have the tracer ignore the calls when the tracepoint or function trace
      is called.  The overhead of the tracer may still be present when
      the tracer is stopped, but no tracing will occur. Setting the tracer
      to the 'nop' tracer (or any other tracer) will perform the shutdown
      of the tracer which will disable the tracepoint or disable the
      function tracer.
      
      The tracing_enabled file will simply start or stop tracing.
      
      This change is all internal. The end result for the user should be the same
      as before. If tracing_enabled is not set, no trace will happen.
      If tracing_enabled is set, then the trace will happen. The tracing_enabled
      variable is static between tracers. Enabling tracing_enabled and
      switching to another tracer will keep tracing_enabled enabled. The
      same is true for disabling tracing_enabled.
      
      This patch will now provide a fast start/stop method to the users
      for enabling or disabling tracing.
      
      Note: There were two methods in struct tracer that were never
       used: start and stop. These were to be used as a hook
       into the reading of the trace output, but ended up not being
       necessary. These two methods are now used to enable the start
       and stop of each tracer, in case the tracer needs to do more than
       just not write into the buffer. For example, the irqsoff tracer
       must stop recording max latencies when tracing is stopped.
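      
      In sketch form (flag and function names illustrative), the quick
      stop is just a flag check in the hot path, while the hooks stay
      installed:
      
      	static int trace_stopped;	/* quick start/stop flag */
      
      	void tracing_start(void) { trace_stopped = 0; }
      	void tracing_stop(void)  { trace_stopped = 1; }
      
      	/* called by tracepoints / the function tracer */
      	static void my_tracer_func(unsigned long ip, unsigned long parent_ip)
      	{
      		if (trace_stopped)
      			return;	/* overhead remains, nothing is recorded */
      		/* ... write the event into the ring buffer ... */
      	}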
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 04 November 2008 (1 commit)
  23. 21 October 2008 (1 commit)
  24. 14 October 2008 (3 commits)
    • ftrace: preempt disable over interrupt disable · 38697053
      Committed by Steven Rostedt
      With the new ring buffer infrastructure in ftrace, I'm trying to make
      ftrace a little more lightweight.
      
      This patch converts a lot of the local_irq_save/restore into
      preempt_disable/enable.  The original preempt count in a lot of cases
      has to be sent in as a parameter so that it can be recorded correctly.
      Some places were recording it incorrectly before anyway.
      
      This is also laying the groundwork to make ftrace a little bit
      more reentrant and to remove all locking. The function tracers must
      still protect against reentrancy.
      
      Note: All the function tracers must be careful when using preempt_disable.
        It must do the following:
      
        resched = need_resched();
        preempt_disable_notrace();
        [...]
        if (resched)
      	preempt_enable_no_resched_notrace();
        else
      	preempt_enable_notrace();
      
      The reason is that if this function traces schedule() itself, the
      preempt_enable_notrace() will cause a schedule, which will lead
      us into a recursive failure.
      
      If we needed to reschedule before calling preempt_disable, we
      should have already scheduled. Since we did not, this is most
      likely that we should not and are probably inside a schedule
      function.
      
      If resched was not set, we still need to catch the need resched
      flag being set when preemption was off and the if case at the
      end will catch that for us.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: make work with new ring buffer · 3928a8a2
      Committed by Steven Rostedt
      This patch ports ftrace over to the new ring buffer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: port to tracepoints · b07c3f19
      Committed by Mathieu Desnoyers
      Porting the trace_mark() used by ftrace to tracepoints. (cleanup)
      
      Changelog:
      - Change error messages: marker -> tracepoint
      
      [ mingo@elte.hu: conflict resolutions ]
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 18 July 2008 (1 commit)
    • ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 check_flags (ftrace) · e59494f4
      Committed by Steven Rostedt
      On Wed, 16 Jul 2008, Vegard Nossum wrote:
      
      > When booting 4d3702b6, I got this huge thing:
      >
      > Testing tracer wakeup: <4>------------[ cut here ]------------
      > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
      > Modules linked in:
      > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
      >  [<c015c349>] warn_on_slowpath+0x59/0xb0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c012d800>] ? native_read_tsc+0x0/0x20
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c0182fbc>] ? __lock_acquire+0x2cc/0x1120
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c01276af>] ? mcount_call+0x5/0xa
      >  [<c017ff53>] check_flags+0x123/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01a8d45>] ? ftrace_record_ip+0xf5/0x220
      >  [<c02d5413>] ? debug_locks_off+0x3/0x50
      >  [<c01a8d45>] ftrace_record_ip+0xf5/0x220
      >  [<c01276af>] mcount_call+0x5/0xa
      >  [<c02d5418>] ? debug_locks_off+0x8/0x50
      >  [<c017ff27>] check_flags+0xf7/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01affcd>] ? wakeup_tracer_call+0x6d/0xf0
      >  [<c01625e2>] ? _local_bh_enable+0x62/0xb0
      >  [<c0158ddd>] ? sub_preempt_count+0xd/0xf0
      >  [<c01affcd>] wakeup_tracer_call+0x6d/0xf0
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c01afff1>] ? wakeup_tracer_call+0x91/0xf0
      >  [<c01276c6>] ftrace_call+0x5/0x8
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01625e2>] _local_bh_enable+0x62/0xb0
      >  [<c0162724>] __do_softirq+0xf4/0x110
      >  [<c01627ed>] do_softirq+0xad/0xb0
      >  [<c0162a15>] irq_exit+0xa5/0xb0
      >  [<c013a506>] smp_apic_timer_interrupt+0x66/0xa0
      >  [<c02d3fac>] ? trace_hardirqs_off_thunk+0xc/0x10
      >  [<c0127449>] apic_timer_interrupt+0x2d/0x34
      >  [<c018007b>] ? find_usage_backwards+0xb/0xf0
      >  [<c0613a09>] ? _spin_unlock_irqrestore+0x69/0x80
      >  [<c014ef32>] tg_shares_up+0x132/0x1d0
      >  [<c014d2a2>] walk_tg_tree+0x62/0xa0
      >  [<c014ee00>] ? tg_shares_up+0x0/0x1d0
      >  [<c014a860>] ? tg_nop+0x0/0x10
      >  [<c015499d>] update_shares+0x5d/0x80
      >  [<c0154a2f>] try_to_wake_up+0x6f/0x280
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c0154c94>] wake_up_process+0x14/0x20
      >  [<c01725f6>] kthread_create+0x66/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c0195320>] ? __stop_machine_run+0x30/0xb0
      >  [<c0195340>] __stop_machine_run+0x50/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c061242d>] ? mutex_unlock+0xd/0x10
      >  [<c01953cc>] stop_machine_run+0x2c/0x60
      >  [<c01a94d3>] unregister_ftrace_function+0x103/0x180
      >  [<c01b0517>] stop_wakeup_tracer+0x17/0x60
      >  [<c01b056f>] wakeup_tracer_ctrl_update+0xf/0x30
      >  [<c01ab8d5>] trace_selftest_startup_wakeup+0xb5/0x130
      >  [<c01ab950>] ? trace_wakeup_test_thread+0x0/0x70
      >  [<c01aadf5>] register_tracer+0x135/0x1b0
      >  [<c0877d02>] init_wakeup_tracer+0xd/0xf
      >  [<c085d437>] kernel_init+0x1a9/0x2ce
      >  [<c061397b>] ? _spin_unlock_irq+0x3b/0x60
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c0877cf5>] ? init_wakeup_tracer+0x0/0xf
      >  [<c0182646>] ? trace_hardirqs_on_caller+0x126/0x180
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c01269c8>] ? restore_nocheck_notrace+0x0/0xe
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c01275fb>] kernel_thread_helper+0x7/0x10
      >  =======================
      > ---[ end trace a7919e7f17c0a725 ]---
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > PASSED
      >
      > Incidentally, the kernel also hung while I was typing in this report.
      
      Things get weird between lockdep and ftrace because ftrace can be called
      from within lockdep internal code (via the mcount pointer) and lockdep
      can be called from within ftrace (via spin_locks).
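      
      The fix sketched here (hedged; assuming wakeup_lock was converted to
      a raw spinlock, which lockdep does not track) keeps the tracer's own
      locking out of lockdep's sight:
      
      	static raw_spinlock_t wakeup_lock =
      		(raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED;
      
      	local_irq_save(flags);
      	__raw_spin_lock(&wakeup_lock);
      	/* ... record the trace ... */
      	__raw_spin_unlock(&wakeup_lock);
      	local_irq_restore(flags);
      
      This breaks the lockdep -> spin_lock -> mcount -> ftrace -> spin_lock
      recursion described above.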
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 11 July 2008 (1 commit)
  27. 27 May 2008 (1 commit)
  28. 24 May 2008 (4 commits)