1. 10 May, 2013: 2 commits
  2. 02 Nov, 2012: 1 commit
    • tracing: Use irq_work for wake ups and remove *_nowake_*() functions · 0d5c6e1c
      Authored by Steven Rostedt
      Have the ring buffer commit function use the irq_work infrastructure to
      wake up any waiters waiting on the ring buffer for new data. The irq_work
      mechanism was created for exactly this purpose: doing the actual wake up
      at the time of adding data is too dangerous, as an event or function trace
      may occur in the midst of the work queue locks and cause deadlocks. The
      irq_work will either delay the action to the next timer interrupt or
      trigger an IPI to itself, forcing an interrupt to do the work in a safe
      location.
      
      With irq_work, all ring buffer commits can safely do wakeups, removing
      the need for the ring buffer commit "nowake" variants, which were used
      by events and function tracing. All commits can now safely use the
      normal commit, and the "nowake" variants can be removed.
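      
      A minimal sketch of this pattern is below (symbol names are illustrative,
      not the actual trace.c identifiers): the commit path only queues an
      irq_work, and the wake up itself runs later from a safe context.
      
       #include <linux/irq_work.h>
       #include <linux/wait.h>
      
       static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
       static struct irq_work trace_wakeup_work;
      
       /* Runs later, from a safe context (timer interrupt or self-IPI) */
       static void trace_wakeup_func(struct irq_work *work)
       {
               wake_up_all(&trace_wait);
       }
      
       static void trace_wakeup_init(void)
       {
               init_irq_work(&trace_wakeup_work, trace_wakeup_func);
       }
      
       /* Called from the ring buffer commit path: queue only, never wake here */
       static void trace_wakeup_request(void)
       {
               irq_work_queue(&trace_wakeup_work);
       }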
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 01 Nov, 2012: 1 commit
  4. 31 Jul, 2012: 1 commit
  5. 07 May, 2012: 3 commits
  6. 22 Feb, 2012: 2 commits
  7. 11 Oct, 2011: 1 commit
  8. 16 Jul, 2011: 3 commits
  9. 15 Jul, 2011: 1 commit
  10. 15 Jun, 2011: 1 commit
    • tracing/kprobes: Fix kprobe-tracer to support stack trace · 1fd8df2c
      Authored by Masami Hiramatsu
      Fix kprobe-tracer to support kernel stack traces correctly.
      Since the execution path of kprobe-based dynamic events differs
      from that of other tracepoint-based events, the normal
      ftrace_trace_stack() doesn't work correctly. To fix that, this
      introduces ftrace_trace_stack_regs(), which traces the stack via
      pt_regs instead of the current stack pointer register.
      
      e.g.
      
       # echo p schedule+4 > /sys/kernel/debug/tracing/kprobe_events
       # echo 1 > /sys/kernel/debug/tracing/options/stacktrace
       # echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
       # head -n 20 /sys/kernel/debug/tracing/trace
                  bash-2968  [000] 10297.050245: p_schedule_4: (schedule+0x4/0x4ca)
                  bash-2968  [000] 10297.050247: <stack trace>
       => schedule_timeout
       => n_tty_read
       => tty_read
       => vfs_read
       => sys_read
       => system_call_fastpath
           kworker/0:1-2940  [000] 10297.050265: p_schedule_4: (schedule+0x4/0x4ca)
           kworker/0:1-2940  [000] 10297.050266: <stack trace>
       => worker_thread
       => kthread
       => kernel_thread_helper
                  sshd-1132  [000] 10297.050365: p_schedule_4: (schedule+0x4/0x4ca)
                  sshd-1132  [000] 10297.050365: <stack trace>
       => sysret_careful
      
      Note: Even with this fix, the first entry will be skipped
      if the probe is put on the function entry area before
      the frame pointer is set up (usually, that is the first 4 bytes
      (push %bp; mov %sp,%bp) on x86), because the stack unwinder
      depends on the frame pointer.
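      
      For illustration only, here is a rough module-side sketch of the same
      idea: unwind from the probed context's pt_regs rather than from the
      current stack pointer. The stacktrace helpers used here
      (save_stack_trace_regs/print_stack_trace) are assumed from this kernel
      era and are not part of this patch.
      
       #include <linux/kprobes.h>
       #include <linux/stacktrace.h>
      
       #define MAX_STACK_ENTRIES 16
       static unsigned long stack_entries[MAX_STACK_ENTRIES];
      
       static int sched_pre_handler(struct kprobe *p, struct pt_regs *regs)
       {
               struct stack_trace trace = {
                       .entries     = stack_entries,
                       .max_entries = MAX_STACK_ENTRIES,
               };
      
               /* Unwind from the probed frame (regs), not from this handler */
               save_stack_trace_regs(regs, &trace);
               print_stack_trace(&trace, 0);
               return 0;
       }
      
       /* Registered with register_kprobe(&sched_probe) in module init */
       static struct kprobe sched_probe = {
               .symbol_name = "schedule",
               .pre_handler = sched_pre_handler,
       };
      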
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Link: http://lkml.kernel.org/r/20110608070934.17777.17116.stgit@fedora15
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 08 Jun, 2011: 1 commit
  12. 24 Apr, 2011: 1 commit
  13. 31 Mar, 2011: 1 commit
  14. 14 Feb, 2011: 1 commit
  15. 07 Feb, 2011: 3 commits
  16. 28 Oct, 2010: 1 commit
  17. 24 Oct, 2010: 1 commit
  18. 08 Sep, 2010: 3 commits
  19. 09 Aug, 2010: 1 commit
  20. 04 Aug, 2010: 1 commit
  21. 06 Jul, 2010: 1 commit
    • tracing/kprobes: Support "string" type · e09c8614
      Authored by Masami Hiramatsu
      Support string type tracing and printing in kprobe-tracer.
      
      This allows users to trace string data in the kernel, including __user
      data. Note that __user data sometimes cannot be accessed if it is
      paged out (sorry, but kprobes operations must run in atomic context,
      so we cannot wait for a page-in).
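      
      A rough sketch of that constraint follows (not the tracer's actual fetch
      routine; the helper name is made up): user memory is copied with page
      faults disabled, so a paged-out string simply fails instead of sleeping
      for a page-in.
      
       #include <linux/uaccess.h>
       #include <linux/errno.h>
      
       /* Copy a user string without sleeping; -EFAULT if it is paged out */
       static long copy_user_string_atomic(char *dst, const char __user *src,
                                           long max)
       {
               long i, ret = 0;
      
               pagefault_disable();
               for (i = 0; i < max; i++) {
                       char c;
      
                       if (__copy_from_user_inatomic(&c, src + i, 1)) {
                               ret = -EFAULT;  /* not present: give up, no page-in */
                               break;
                       }
                       dst[i] = c;
                       if (c == '\0')
                               break;
               }
               pagefault_enable();
               return ret;
       }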
      
      Committer note: Fixed up conflicts with b7e2ecef.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100519195724.2885.18788.stgit@localhost6.localdomain6>
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  22. 29 Jun, 2010: 1 commit
  23. 31 May, 2010: 1 commit
  24. 21 May, 2010: 2 commits
  25. 19 May, 2010: 1 commit
  26. 15 May, 2010: 4 commits
    • tracing: Combine event filter_active and enable into single flags field · 553552ce
      Authored by Steven Rostedt
      The filter_active and enable fields each use an int (4 bytes) to
      hold a single flag. We can save 4 bytes per event by combining the
      two into a single integer.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
      4894871	1012292	 861512	6768675	 674823	vmlinux.flags
      
      This gives us another 5K in savings.
      
      The modification of both the enable and filter fields are done
      under the event_mutex, so it is still safe to combine the two.
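      
      As a rough illustration of the combined layout (flag names here are
      approximations, not necessarily the exact ones added by this patch):
      
       enum {
               TRACE_EVENT_FL_ENABLED_BIT,
               TRACE_EVENT_FL_FILTERED_BIT,
       };
      
       enum {
               TRACE_EVENT_FL_ENABLED  = (1 << TRACE_EVENT_FL_ENABLED_BIT),
               TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
       };
      
       struct ftrace_event_call {
               /* other fields elided */
               /*
                * Was: int enabled; int filter_active;  (two 4-byte fields)
                * Now: both bits share one word, modified under event_mutex.
                */
               unsigned int    flags;
       };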
      
      Note: Although Mathieu gave his Acked-by, he would like it documented
       that the reads of flags are not protected by the mutex. The way the
       code works, these reads will not break anything, but will have a
       residual effect. Since this behavior is the same even before this
       patch, describing this situation is left to another patch, as this
       patch does not change the behavior, but just brought it to Mathieu's
       attention.
      
      v2: Updated the event trace self test for this change.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove duplicate id information in event structure · 32c0edae
      Authored by Steven Rostedt
      Now that the trace_event structure is embedded in the ftrace_event_call
      structure, there is no need for the ftrace_event_call id field.
      The id field is the same as the trace_event type field.
      
      Removing the id and re-arranging the structure brings down the tracepoint
      footprint by another 5K.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4895024	1023812	 861512	6780348	 6775bc	vmlinux.print
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
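      
      An illustrative accessor (assuming the post-patch layout and this era's
      header name): readers now use the embedded trace_event's type where
      they previously read the duplicate id.
      
       #include <linux/ftrace_event.h>
      
       static inline int call_event_type(struct ftrace_event_call *call)
       {
               return call->event.type;        /* previously: call->id */
       }
      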
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Move print functions into event class · 80decc70
      Authored by Steven Rostedt
      Currently, every event has its own trace_event structure. This is
      fine since the structure is needed anyway. But the print function
      structure (trace_event_functions) is now separate. Since the output
      of the trace event is done by the class (with the exception of events
      defined by DEFINE_EVENT_PRINT), it makes sense to have the class
      define the print functions that all events in the class can use.
      
      This matters most for the syscall events, since all syscall events
      use the same class. The savings here are another 30K.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900382	1048964	 861512	6810858	 67ecea	vmlinux.init
      4900446	1049028	 861512	6810986	 67ed6a	vmlinux.preprint
      4895024	1023812	 861512	6780348	 6775bc	vmlinux.print
      
      To accomplish this, and to let the class know what event is being
      printed, the event structure is embedded in the ftrace_event_call
      structure. This should not be an issue since the event structure
      was created for each event anyway.
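      
      A condensed, illustrative sketch of the resulting layout (field sets
      trimmed, names approximate): every event embeds a trace_event, and all
      events of a class point it at one shared print-function table.
      
       struct trace_event_functions;   /* per-class print table; see the next entry below */
      
       struct trace_event {
               int                             type;
               struct trace_event_functions    *funcs; /* shared by all events of the class */
       };
      
       struct ftrace_event_call {
               struct ftrace_event_class       *class;
               const char                      *name;
               struct trace_event              event;  /* embedded; tells the class which event is printing */
       };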
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Allow events to share their print functions · a9a57763
      Authored by Steven Rostedt
      Multiple events may use the same method to print their data.
      Instead of having every event hold a pointer to its print functions,
      the trace_event structure now points to a trace_event_functions structure
      that holds the way to print out the event.
      
      The event itself is now passed to the print function to let the print
      function know what kind of event it should print.
      
      This opens the door to consolidating the way several events print
      their output.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900382	1048964	 861512	6810858	 67ecea	vmlinux.init
      4900446	1049028	 861512	6810986	 67ed6a	vmlinux.preprint
      
      This change slightly increases the size but is needed for the next change.
      
      v3: Fix the branch tracer events to handle this change.
      
      v2: Fix the new function graph tracer event calls to handle this change.
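      
      A condensed, illustrative sketch of the shared table and the new
      callback shape (approximate; int stands in for the kernel's
      enum print_line_t): passing the event lets one function serve several
      event types, and struct trace_event simply gains a pointer to such a
      table in place of four per-event function pointers.
      
       struct trace_iterator;
       struct trace_event;
      
       /* Each print callback now receives the event being printed */
       typedef int (*trace_print_func)(struct trace_iterator *iter, int flags,
                                       struct trace_event *event);
      
       /* One table can be shared by every event that prints the same way */
       struct trace_event_functions {
               trace_print_func        trace;          /* human-readable output */
               trace_print_func        raw;
               trace_print_func        hex;
               trace_print_func        binary;
       };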
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>