1. 24 Nov 2016, 2 commits
  2. 23 Nov 2016, 1 commit
  3. 16 Nov 2016, 1 commit
  4. 15 Nov 2016, 1 commit
  5. 26 Sep 2016, 1 commit
  6. 25 Sep 2016, 1 commit
  7. 12 Sep 2016, 1 commit
  8. 03 Sep 2016, 1 commit
    • tracing: Added hardware latency tracer · e7c15cd8
      Committed by Steven Rostedt (Red Hat)
      The hardware latency tracer has been in the PREEMPT_RT patch for some time.
      It is used to detect possible SMIs (System Management Interrupts) or any
      other hardware interruptions that the kernel is unaware of. NMIs may also
      be detected; when they occur during a sample, the tracer notes them as well.
      
      The logic is pretty simple. It simply creates a thread that spins on a
      single CPU for a specified amount of time (width) within a periodic window
      (window). These numbers may be adjusted by their corresponding names in
      
         /sys/kernel/tracing/hwlat_detector/
      
      The defaults are window = 1000000 us (1 second)
                       width  =  500000 us (1/2 second)
      
      The loop consists of:
      
      	t1 = trace_clock_local();
      	t2 = trace_clock_local();
      
      Where trace_clock_local() is a variant of sched_clock().
      
      The difference t2 - t1 is recorded as the "inner" timestamp, and
      t1 - prev_t2 (the gap since the previous iteration's t2) is recorded as the
      "outer" timestamp. If either of these differences is greater than the time
      set in /sys/kernel/tracing/tracing_thresh, the event is recorded.
      
      When this tracer is started and tracing_thresh is zero, the threshold
      changes to the default of 10 us.
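      
      In outline, the sampler looks something like the sketch below. The helper
      names and structure here are illustrative assumptions, not the actual
      kernel implementation:
      
      	/* Illustrative sketch of one sampling window. */
      	static u64 prev_t2;
      
      	static void hwlat_sample_window(u64 width_ns, u64 thresh_ns)
      	{
      		u64 start = trace_clock_local();
      
      		while (trace_clock_local() - start < width_ns) {
      			u64 t1 = trace_clock_local();
      			u64 t2 = trace_clock_local();
      
      			/* "inner": gap between two back-to-back reads */
      			u64 inner = t2 - t1;
      			/* "outer": gap since the previous iteration's t2 */
      			u64 outer = prev_t2 ? t1 - prev_t2 : 0;
      
      			if (inner > thresh_ns || outer > thresh_ns)
      				record_hwlat_event(inner, outer); /* hypothetical */
      
      			prev_t2 = t2;
      		}
      	}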
      
      The hwlat tracer in the PREEMPT_RT patch was originally written by
      Jon Masters. I have modified it quite a bit and turned it into a
      tracer.
      Based-on-code-by: Jon Masters <jcm@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 24 Aug 2016, 1 commit
  10. 05 Jul 2016, 2 commits
  11. 24 Jun 2016, 1 commit
    • tracing: Skip more functions when doing stack tracing of events · be54f69c
      Committed by Steven Rostedt (Red Hat)
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [002] d..2  1982.525169: <stack trace>
       => save_stack_trace
       => __ftrace_trace_stack
       => trace_buffer_unlock_commit_regs
       => event_trigger_unlock_commit
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The above shows that we are seeing 6 functions before ever making it to the
      caller of the sched_switch event.
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  2146.335208: <stack trace>
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The stacktrace trigger isn't as bad, because it adds its own skip to the
      stacktracing, but it still shows two extra functions.
      
      One issue is that if the stacktrace passes its own "regs" then there should
      be no addition to the skip, as the regs will not include the functions being
      called. This was fixed by commit 7717c6be ("tracing: Fix stacktrace skip
      depth in trace_buffer_unlock_commit_regs()"), because adding the skip number
      for kprobes left the probes with no stack at all.
      
      But since this is only an issue when regs is being used, a skip should be
      added if regs is NULL. Now we have:
      
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [000] d..2  1297.676333: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => rest_init
       => start_kernel
       => x86_64_start_reservations
       => x86_64_start_kernel
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  1370.759745: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      And kprobes are not touched.
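      
      The core of the fix reduces to one conditional skip. The sketch below is an
      approximation of the idea; the skip count and surrounding code are assumed,
      not quoted from the patch:
      
      	/*
      	 * If regs is NULL, the unwinder starts from here, so skip the
      	 * internal tracing frames (trace_buffer_unlock_commit_regs,
      	 * event_trigger_unlock_commit, trace_event_buffer_commit, and
      	 * the trace_event_raw_event_* caller). If regs was supplied,
      	 * for example by a kprobe, the walk starts from the probed
      	 * context and none of those frames are on it, so skip nothing.
      	 */
      	ftrace_trace_stack(tr, buffer, flags, regs ? 0 : 4, pc, regs);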
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 20 Jun 2016, 5 commits
  13. 04 May 2016, 1 commit
    • tracing: Use temp buffer when filtering events · 0fc1b09f
      Committed by Steven Rostedt (Red Hat)
      Filtering of events requires the data to be written to the ring buffer
      before it can be decided whether to filter the event out. This is because
      the filter operates on the result that is written to the ring buffer and
      not on the parameters that are passed into the trace functions.
      
      The ftrace ring buffer is optimized for writing into the ring buffer and
      committing. The discard procedure used when the filter decides an event
      should be dropped is much more heavyweight. Thus, using a temporary buffer
      when filtering events can speed things up drastically.
      
      Without a temp buffer we have:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.790706626 seconds time elapsed ( +-  0.71% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.566904059 seconds time elapsed ( +-  0.27% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.690598511 seconds time elapsed ( +-  0.19% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.707486364 seconds time elapsed ( +-  0.30% )
      
      The first run above is without any tracing, just to get a baseline figure.
      hackbench takes ~0.79 seconds to run on the system.
      
      The second run enables tracing all events where nothing is filtered. This
      increases the time by 100% and hackbench takes 1.57 seconds to run.
      
      The third run filters all events where the preempt count equals "20" (this
      should never happen), thus all events are discarded. This takes 1.69
      seconds to run, which is 10% slower than just committing the events!
      
      The last run enables all events with a filter that commits every event, and
      this takes 1.70 seconds to run. The filtering overhead is approximately
      10%. Thus, discarding an event from the ring buffer may take about the
      same time as committing it.
      
      With this patch, the numbers change:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.778233033 seconds time elapsed ( +-  0.38% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.582102692 seconds time elapsed ( +-  0.28% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.309230710 seconds time elapsed ( +-  0.22% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.786001924 seconds time elapsed ( +-  0.20% )
      
      The first run is again the base with no tracing.
      
      The second run is all tracing with no filtering. It is a little slower, but
      that may be well within the noise.
      
      The third run shows that discarding all events only took 1.3 seconds. This
      is a speedup of 23%! The discard is now much faster than even the commit.
      
      The one downside is shown in the last run. Events that are not discarded by
      the filter take longer to add; this is due to the extra copy of the event.
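      
      Conceptually, the change looks like the sketch below. The helper names are
      hypothetical, and the real code also deals with per-CPU buffers, nesting,
      and size limits:
      
      	/* Sketch: with a filter active, build the event in scratch
      	 * space instead of reserving ring-buffer space up front. */
      	void *event_reserve(struct trace_event_file *file, int len)
      	{
      		if (file_has_filter(file))		/* hypothetical */
      			return temp_buffer_alloc(len);	/* hypothetical */
      		return ring_buffer_reserve(len);
      	}
      
      	void event_commit(struct trace_event_file *file, void *entry, int len)
      	{
      		if (file_has_filter(file)) {
      			if (!filter_match(file->filter, entry))
      				return;	/* discard: nothing was reserved */
      			/* Filter passed: pay for one extra copy. */
      			void *slot = ring_buffer_reserve(len);
      			memcpy(slot, entry, len);
      			ring_buffer_commit(slot);
      			return;
      		}
      		ring_buffer_commit(entry);
      	}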
      
      Cc: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 30 Apr 2016, 5 commits
  15. 27 Apr 2016, 2 commits
  16. 26 Apr 2016, 1 commit
    • tracing: Do not inherit event-fork option for instances · 20550622
      Committed by Steven Rostedt (Red Hat)
      As the event-fork option requires doing work when enabled and disabled, it
      cannot simply be passed down to created instances. An instance must have
      this flag cleared when it is created, and must clear it when it is removed.
      
      As more options may be created with this need, a macro ZEROED_TRACE_FLAGS
      is created that holds the flags that must not be inherited from the
      top-level instance and that must be cleared when an instance is removed.
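      
      In outline (a sketch based on the description above; the helper names and
      exact details are assumptions, not the patch itself):
      
      	/* Flags a new instance must not inherit, and must clear
      	 * (running their disable work) before it is removed. */
      	#define ZEROED_TRACE_FLAGS	(1 << TRACE_ITER_EVENT_FORK)
      
      	static void init_instance_flags(struct trace_array *tr)
      	{
      		/* Inherit the top-level flags, minus the zeroed ones. */
      		tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS;
      	}
      
      	static void remove_instance_flags(struct trace_array *tr)
      	{
      		/* Turn the flags off properly so disable work runs. */
      		set_tracer_flag(tr, ZEROED_TRACE_FLAGS, 0);
      	}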
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  17. 20 Apr 2016, 13 commits