1. 25 Mar, 2017 · 1 commit
  2. 15 Feb, 2017 · 1 commit
  3. 11 Feb, 2017 · 1 commit
  4. 21 Jan, 2017 · 2 commits
  5. 25 Dec, 2016 · 1 commit
  6. 09 Dec, 2016 · 1 commit
    • tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too · 1a414428
      Committed by Steven Rostedt (Red Hat)
      Currently both the wakeup and irqsoff tracers do not handle set_graph_notrace
      well. The ftrace infrastructure will ignore the return paths of all such
      functions, leaving them hanging without an end:
      
        # echo '*spin*' > set_graph_notrace
        # cat trace
        [...]
                _raw_spin_lock() {
                  preempt_count_add() {
                  do_raw_spin_lock() {
                update_rq_clock();
      
      Where the '*spin*' functions should have looked like this:
      
                _raw_spin_lock() {
                  preempt_count_add();
                  do_raw_spin_lock();
                }
                update_rq_clock();
      
      Instead, have the wakeup and irqsoff tracers ignore the functions set by
      set_graph_notrace, just as the function_graph tracer does. Move the logic
      from the function_graph tracer into a header to allow the wakeup and
      irqsoff tracers to use it as well.
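
      Conceptually, the shared helper lets each tracer's graph-entry callback
      skip notrace'd functions up front. A minimal sketch, assuming the helper
      is named ftrace_graph_ignore_func(); the callback body here is
      illustrative, not the exact kernel code:

        /* Sketch: skip entries for functions matched by set_graph_notrace,
         * so no return event is ever expected and nothing is left hanging. */
        static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
        {
                if (ftrace_graph_ignore_func(trace))
                        return 0;       /* do not record this entry */
                /* ... record the function entry as usual ... */
                return 1;
        }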
      
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1a414428
  7. 24 Nov, 2016 · 1 commit
  8. 16 Nov, 2016 · 1 commit
  9. 15 Nov, 2016 · 1 commit
  10. 12 Sep, 2016 · 1 commit
  11. 03 Sep, 2016 · 1 commit
    • tracing: Added hardware latency tracer · e7c15cd8
      Committed by Steven Rostedt (Red Hat)
      The hardware latency tracer has been in the PREEMPT_RT patch for some time.
      It is used to detect possible SMIs or any other hardware interruptions that
      the kernel is unaware of. Note that NMIs may also be detected, but that is
      good to know as well.
      
      The logic is pretty simple. It creates a thread that spins on a single CPU
      for a specified amount of time (width) within a periodic window (window).
      These numbers may be adjusted via their corresponding names in
      
         /sys/kernel/tracing/hwlat_detector/
      
      The defaults are window = 1000000 us (1 second)
                       width  =  500000 us (1/2 second)
      
      The loop consists of:
      
      	t1 = trace_clock_local();
      	t2 = trace_clock_local();
      
      Where trace_clock_local() is a variant of sched_clock().
      
      The difference t2 - t1 is recorded as the "inner" timestamp, and the
      difference t1 - prev_t2 is recorded as the "outer" timestamp. If either of
      these differences is greater than the time denoted in
      /sys/kernel/tracing/tracing_thresh, then it records the event.
      
      When this tracer is started, and tracing_thresh is zero, it changes to the
      default threshold of 10 us.
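
      Putting those pieces together, the sampling thread behaves roughly like
      the sketch below (a sketch under assumed helper names; the real code
      reads the width, window, and threshold from the files above):

        /* Rough sketch; record_hwlat_event() is a hypothetical stand-in.
         * trace_clock_local() returns nanoseconds. */
        static u64 prev_t2;

        static void hwlat_sample_window(u64 width_ns, u64 thresh_ns)
        {
                u64 start = trace_clock_local();
                u64 t1, t2;

                do {
                        t1 = trace_clock_local();
                        t2 = trace_clock_local();

                        /* "outer": gap since the previous sample pair */
                        if (prev_t2 && t1 - prev_t2 > thresh_ns)
                                record_hwlat_event(t1 - prev_t2);

                        /* "inner": gap between two back-to-back reads */
                        if (t2 - t1 > thresh_ns)
                                record_hwlat_event(t2 - t1);

                        prev_t2 = t2;
                } while (t2 - start < width_ns);
        }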
      
      The hwlat tracer in the PREEMPT_RT patch was originally written by
      Jon Masters. I have modified it quite a bit and turned it into a
      tracer.
      Based-on-code-by: Jon Masters <jcm@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e7c15cd8
  12. 06 Jul, 2016 · 1 commit
  13. 05 Jul, 2016 · 1 commit
  14. 20 Jun, 2016 · 4 commits
  15. 04 May, 2016 · 1 commit
    • tracing: Use temp buffer when filtering events · 0fc1b09f
      Committed by Steven Rostedt (Red Hat)
      Filtering of events requires the data to be written to the ring buffer
      before it can be decided whether to filter it or not. This is because the
      parameters of the filter are based on the result that is written to the
      ring buffer and not on the parameters that are passed into the trace
      functions.

      The ftrace ring buffer is optimized for writing into the ring buffer and
      committing. The discard procedure used when filtering decides an event
      should be discarded is much more heavyweight. Thus, using a temporary
      buffer when filtering events can speed things up drastically.
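
      In sketch form the idea is: build the event in a cheap scratch buffer,
      run the filter there, and only touch the ring buffer for events that
      pass. All names below are hypothetical, not the actual tracing API:

        /* Sketch only; filter_match() and ring_buffer_write_commit()
         * are hypothetical stand-ins. */
        void trace_event_filtered(struct trace_event_file *file,
                                  const void *entry, int len)
        {
                char tmp[256];          /* cheap per-event scratch buffer */

                /* Form the full event outside the ring buffer first;
                 * this copy is the one extra cost of the scheme. */
                memcpy(tmp, entry, len);

                /* The filter sees the final event data, as if committed. */
                if (!filter_match(file, tmp))
                        return;         /* discarded: ring buffer untouched */

                /* Only passing events pay for the ring-buffer write. */
                ring_buffer_write_commit(file, tmp, len);
        }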
      
      Without a temp buffer we have:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.790706626 seconds time elapsed ( +-  0.71% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.566904059 seconds time elapsed ( +-  0.27% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.690598511 seconds time elapsed ( +-  0.19% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.707486364 seconds time elapsed ( +-  0.30% )
      
      The first run above is without any tracing, just to get a base figure.
      hackbench takes ~0.79 seconds to run on the system.
      
      The second run enables tracing of all events where nothing is filtered.
      This increases the time by 100%, and hackbench takes 1.57 seconds to run.
      
      The third run filters all events where the preempt count will equal "20"
      (this should never happen), thus all events are discarded. This takes 1.69
      seconds to run. This is 10% slower than just committing the events!
      
      The last run enables all events with a filter that commits all events,
      and this takes 1.70 seconds to run. The filtering overhead is
      approximately 10%. Thus, discarding and committing an event from the ring
      buffer may take about the same time.
      
      With this patch, the numbers change:
      
       # trace-cmd start -p nop
       # perf stat -r 10 hackbench 50
             0.778233033 seconds time elapsed ( +-  0.38% )
      
       # trace-cmd start -e all
       # perf stat -r 10 hackbench 50
             1.582102692 seconds time elapsed ( +-  0.28% )
      
       # trace-cmd start -e all -f 'common_preempt_count==20'
       # perf stat -r 10 hackbench 50
             1.309230710 seconds time elapsed ( +-  0.22% )
      
       # trace-cmd start -e all -f 'common_preempt_count!=20'
       # perf stat -r 10 hackbench 50
             1.786001924 seconds time elapsed ( +-  0.20% )
      
      The first run is again the base with no tracing.
      
      The second run is all tracing with no filtering. It is a little slower, but
      that may be well within the noise.
      
      The third run shows that discarding all events only took 1.3 seconds. This
      is a speedup of 23%! The discard is much faster than even the commit.
      
      The one downside is shown in the last run. Events that are not discarded
      by the filter will take longer to add; this is due to the extra copy of
      the event.
      
      Cc: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0fc1b09f
  16. 30 Apr, 2016 · 3 commits
  17. 27 Apr, 2016 · 2 commits
  18. 20 Apr, 2016 · 3 commits
    • tracing: Add support for named triggers · db1388b4
      Committed by Tom Zanussi
      Named triggers are sets of triggers that share a common set of trigger
      data.  One example of functionality that could benefit from this type
      of capability is a set of inlined probes that each contribute event
      counts to a shared counter data structure.
      
      The first named trigger registered with a given name owns the common
      trigger data that the others subsequently registered with the same
      name will reference.  The functions defined here allow users to add,
      delete, and find named triggers.
      
      It also adds functions to pause and unpause named triggers; since
      named triggers act upon common data, they should also be paused and
      unpaused as a group.
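
      A minimal sketch of such a registry (illustrative names only; the
      actual kernel structures and API differ in detail):

        /* Illustrative sketch of a named-trigger registry. */
        struct named_trigger {
                struct list_head        list;
                const char              *name;
                void                    *shared_data;   /* owned by the first registrant */
                bool                    paused;
        };

        static LIST_HEAD(named_triggers);

        static struct named_trigger *find_named_trigger(const char *name)
        {
                struct named_trigger *t;

                list_for_each_entry(t, &named_triggers, list)
                        if (strcmp(t->name, name) == 0)
                                return t;
                return NULL;
        }

        /* Triggers sharing data must be paused and unpaused as a group. */
        static void set_named_triggers_paused(const char *name, bool paused)
        {
                struct named_trigger *t;

                list_for_each_entry(t, &named_triggers, list)
                        if (strcmp(t->name, name) == 0)
                                t->paused = paused;
        }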
      
      Link: http://lkml.kernel.org/r/c09ff648360f65b10a3e321eddafe18060b4a04f.1457029949.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      db1388b4
    • tracing: Add enable_hist/disable_hist triggers · d0bad49b
      Committed by Tom Zanussi
      Similar to enable_event/disable_event triggers, these triggers enable
      and disable the aggregation of events into maps rather than enabling
      and disabling their writing into the trace buffer.
      
      They can be used to automatically start and stop hist triggers based
      on a matching filter condition.
      
      If there's a paused hist trigger on system:event, the following would
      start it when the filter condition was hit:
      
        # echo enable_hist:system:event [ if filter] > event/trigger
      
      And the following would disable a running system:event hist trigger:
      
        # echo disable_hist:system:event [ if filter] > event/trigger
      
      See Documentation/trace/events.txt for real examples.
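
      Internally this amounts to flipping a paused flag on the matching hist
      trigger when the triggering event (and its optional filter) fires. A
      sketch with hypothetical names, not the exact kernel callback:

        /* Sketch only; the real trigger callback differs in detail. */
        static void hist_enable_trigger(struct event_trigger_data *data,
                                        bool enable)
        {
                struct hist_trigger *hist = data->private_data;

                /* enable_hist unpauses the aggregation,
                 * disable_hist pauses it. */
                hist->paused = !enable;
        }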
      
      Link: http://lkml.kernel.org/r/f812f086e52c8b7c8ad5443487375e03c96a601f.1457029949.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d0bad49b
    • tracing: Add 'hist' event trigger command · 7ef224d1
      Committed by Tom Zanussi
      'hist' triggers allow users to continually aggregate trace events,
      which can then be viewed afterwards by simply reading a 'hist' file
      containing the aggregation in a human-readable format.
      
      The basic idea is very simple and boils down to a mechanism whereby
      trace events, rather than being exhaustively dumped in raw form and
      viewed directly, are automatically 'compressed' into meaningful tables
      completely defined by the user.
      
      This is done strictly via single-line command-line commands and
      without the aid of any kind of programming language or interpreter.
      
      A surprising number of typical use cases can be accomplished by users
      via this simple mechanism.  In fact, a large number of the tasks that
      users typically do using the more complicated script-based tracing
      tools, at least during the initial stages of an investigation, can be
      accomplished by simply specifying a set of keys and values to be used
      in the creation of a hash table.
      
      The Linux kernel trace event subsystem happens to provide an extensive
      list of keys and values ready-made for such a purpose in the form of
      the event format files associated with each trace event.  By simply
      consulting the format file for field names of interest and by plugging
      them into the hist trigger command, users can create an endless number
      of useful aggregations to help with investigating various properties
      of the system.  See Documentation/trace/events.txt for examples.
      
      hist triggers are implemented on top of the existing event trigger
      infrastructure, and as such are consistent with the existing triggers
      from a user's perspective as well.
      
      The basic syntax follows the existing trigger syntax.  Users start an
      aggregation by writing a 'hist' trigger to the event of interest's
      trigger file:
      
        # echo hist:keys=xxx [ if filter] > event/trigger
      
      Once a hist trigger has been set up, by default it continually
      aggregates every matching event into a hash table using the event key
      and a value field named 'hitcount'.
      
      To view the aggregation at any point in time, simply read the 'hist'
      file in the same directory as the 'trigger' file:
      
        # cat event/hist
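
      Under the hood, each matching event conceptually just bumps a bucket
      keyed by the chosen field values. A sketch with hypothetical map
      helpers, not the kernel implementation:

        /* Conceptual sketch; hist_map_lookup() and hist_map_insert()
         * are hypothetical. */
        struct hist_entry {
                u64     key;            /* value of the user-chosen key field */
                u64     hitcount;       /* the default value field */
        };

        static void hist_trigger_hit(struct hist_map *map, u64 key)
        {
                struct hist_entry *e = hist_map_lookup(map, key);

                if (!e)
                        e = hist_map_insert(map, key);  /* new bucket */
                e->hitcount++;  /* aggregate instead of logging raw events */
        }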
      
      The detailed syntax provides additional options for user control, and
      is described exhaustively in Documentation/trace/events.txt and in the
      virtual tracing/README file in the tracing subsystem.
      
      Link: http://lkml.kernel.org/r/72d263b5e1853fe9c314953b65833c3aa75479f2.1457029949.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7ef224d1
  19. 19 Apr, 2016 · 2 commits
    • tracing: Add infrastructure to allow set_event_pid to follow children · c37775d5
      Committed by Steven Rostedt
      Add the infrastructure needed to have the PIDs in set_event_pid
      automatically include the PIDs of children of the tasks whose PIDs are in
      set_event_pid. This will also remove PIDs from set_event_pid when a task
      exits.
      
      This is implemented by adding hooks into the fork and exit tracepoints. On
      fork, the PIDs are added to the list, and on exit, they are removed.
      
      Add a new option called event_fork. When it is set, tasks with PIDs in
      set_event_pid will automatically have their children's PIDs added when
      they fork, and any task that exits will have its PID removed from
      set_event_pid.
      
      This works for instances as well.
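
      In sketch form, the fork and exit tracepoint hooks might look like this
      (hypothetical names; the real callbacks also carry per-instance state):

        /* Sketch only; the pid_list_* helpers are hypothetical. */
        static void event_fork_hook(void *data, struct task_struct *parent,
                                    struct task_struct *child)
        {
                if (pid_list_test(parent->pid))
                        pid_list_set(child->pid);       /* follow the child */
        }

        static void event_exit_hook(void *data, struct task_struct *task)
        {
                pid_list_clear(task->pid);      /* task is gone; drop its PID */
        }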
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c37775d5
    • tracing: Use pid bitmap instead of a pid array for set_event_pid · f4d34a87
      Committed by Steven Rostedt
      In order to add the ability to let tasks that are filtered by the events
      have their children also be traced on fork (and then not traced on exit),
      convert the array into a pid bitmask. Most of the time the maximum number
      of pids is 32768, which is a 4 KB bitmask; that is the same size as the
      current default list, and the list could grow if more pids are listed.
      
      This also greatly simplifies the code.
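
      The arithmetic behind the 4 KB figure: 32768 pids at one bit each is
      32768 / 8 = 4096 bytes. A sketch of the bitmap operations (illustrative,
      not the exact kernel code):

        /* Illustrative sketch: one bit per pid, 32768 bits = 4 KB. */
        #define TRACE_MAX_PIDS  32768

        static unsigned long pid_bitmap[TRACE_MAX_PIDS / BITS_PER_LONG];

        static inline bool trace_pid_traced(pid_t pid)
        {
                return pid < TRACE_MAX_PIDS && test_bit(pid, pid_bitmap);
        }

        static inline void trace_pid_add(pid_t pid)
        {
                if (pid < TRACE_MAX_PIDS)
                        set_bit(pid, pid_bitmap);       /* O(1), no list scan */
        }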
      Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f4d34a87
  20. 23 Mar, 2016 · 1 commit
  21. 09 Mar, 2016 · 9 commits
  22. 24 Dec, 2015 · 1 commit