1. 04 December 2008, 1 commit
    • ftrace: graph of a single function · ea4e2bc4
      Committed by Steven Rostedt
      This patch adds the file:
      
         /debugfs/tracing/set_graph_function
      
      which can be used along with the function graph tracer.
      
      When this file is empty, the function graph tracer will act as
      usual. When the file has a function in it, the function graph
      tracer will only trace that function.
      
      For example:
      
       # echo blk_unplug > /debugfs/tracing/set_graph_function
       # cat /debugfs/tracing/trace
       [...]
       ------------------------------------------
       | 2)  make-19003  =>  kjournald-2219
       ------------------------------------------
      
       2)               |  blk_unplug() {
       2)               |    dm_unplug_all() {
       2)               |      dm_get_table() {
       2)      1.381 us |        _read_lock();
       2)      0.911 us |        dm_table_get();
       2)      1. 76 us |        _read_unlock();
       2) +   12.912 us |      }
       2)               |      dm_table_unplug_all() {
       2)               |        blk_unplug() {
       2)      0.778 us |          generic_unplug_device();
       2)      2.409 us |        }
       2)      5.992 us |      }
       2)      0.813 us |      dm_table_put();
       2) +   29. 90 us |    }
       2) +   34.532 us |  }
      
      You can add up to 32 functions into this file. Currently we limit it
      to 32, but this may change with later improvements.
      
      To add another function, use the append '>>':
      
        # echo sys_read >> /debugfs/tracing/set_graph_function
        # cat /debugfs/tracing/set_graph_function
        blk_unplug
        sys_read
      
      Using '>' will clear out the existing functions and write anew:
      
        # echo sys_write > /debug/tracing/set_graph_function
        # cat /debug/tracing/set_graph_function
        sys_write
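
      For scripting, the same append-versus-replace behaviour maps onto the
      open(2) flags a program would use. A minimal user-space sketch (the
      /debugfs mount point and the function names are just the ones from the
      examples above; error handling is reduced to perror):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Write one function name into set_graph_function.
         * append = 1 behaves like '>>', append = 0 behaves like '>'. */
        static int set_graph_func(const char *name, int append)
        {
                int flags = O_WRONLY | (append ? O_APPEND : O_TRUNC);
                int fd = open("/debugfs/tracing/set_graph_function", flags);

                if (fd < 0) {
                        perror("open");
                        return -1;
                }
                if (write(fd, name, strlen(name)) < 0)
                        perror("write");
                return close(fd);
        }

        int main(void)
        {
                set_graph_func("blk_unplug", 0);   /* like 'echo blk_unplug >'  */
                set_graph_func("sys_read", 1);     /* like 'echo sys_read >>'   */
                return 0;
        }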
      
      Note, if you have the function graph tracer running while doing this, the
      small window between clearing the filter and updating it will cause the
      graph to record all functions. This should not be an issue, because once
      the filter is set, only those functions will be recorded from then on.
      If you need to record only a particular function, set this file before
      starting the function graph tracer. In the future this side effect may
      be corrected.
      
      The set_graph_function file is similar to set_ftrace_filter, but
      it does not take wildcards, nor does it allow more than one
      function to be set with a single write. There is no technical reason why
      this is the case; I just have not had the time yet to implement that.
      
      Note, dynamic ftrace must be enabled for this file to appear, because it
      uses the dynamic ftrace records to match the names to the mcount
      call sites.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea4e2bc4
  2. 03 December 2008, 2 commits
  3. 28 November 2008, 1 commit
  4. 26 November 2008, 4 commits
    • ftrace: add thread comm to function graph tracer · 660c7f9b
      Committed by Steven Rostedt
      Impact: enhancement to function graph tracer
      
      Export the trace_find_cmdline so the function graph tracer can
      use it to print the comms of the threads.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      660c7f9b
    • tracing/function-return-tracer: set a more human readable output · 287b6e68
      Committed by Frederic Weisbecker
      Impact: feature
      
      This patch sets a C-like output for function graph tracing.
      To this end, we now call two handlers for each function: one on entry
      and one on return. This way we can draw a well-ordered call stack.
      
      The pid of the previous trace is loosely stored to be compared against
      the one of the current trace to see if there was a context switch.
      
      Without this little feature, the call tree would seem broken at
      some locations.
      We could use the sched_tracer to capture these sched_events, but this
      way of processing is much simpler.
      
      Two spaces have been chosen for indentation so that deep call chains
      still fit on the screen. The execution time in nanoseconds is printed just
      after the closing brace; it is easier this way to find the corresponding
      function. If the time were printed in a first column, it would not be so
      easy to find the corresponding function when it is called at a deep depth.
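
      As a rough user-space illustration of the entry/return handler pairing
      and the 2-space indentation described above (a toy model with made-up
      names, not the kernel implementation):

        #include <stdio.h>
        #include <time.h>

        static int depth;

        static long long now_ns(void)
        {
                struct timespec ts;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        /* handler called on function entry: print the open brace, go deeper */
        static long long trace_entry(const char *func)
        {
                printf("%*s%s() {\n", 2 * depth, "", func);
                depth++;
                return now_ns();
        }

        /* handler called on return: close the brace and print the duration */
        static void trace_return(long long entry_time)
        {
                depth--;
                printf("%*s} %lld\n", 2 * depth, "", now_ns() - entry_time);
        }

        static void child(void)
        {
                long long t = trace_entry("child");

                trace_return(t);
        }

        static void parent(void)
        {
                long long t = trace_entry("parent");

                child();
                trace_return(t);
        }

        int main(void)
        {
                parent();
                return 0;
        }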
      
      I plan to output the return value, but on 32-bit CPUs the return value
      can be 32 or 64 bits, and it is difficult to guess which case we are in.
      I don't know what the better solution on x86-32 would be: print only
      eax (low part) or also edx (high part).
      
      Actually it's the same problem when a function returns an 8-bit value: the
      high part of eax could contain junk values...
      
      Here is an example of trace:
      
      sys_read() {
        fget_light() {
        } 526
        vfs_read() {
          rw_verify_area() {
            security_file_permission() {
              cap_file_permission() {
              } 519
            } 1564
          } 2640
          do_sync_read() {
            pipe_read() {
              __might_sleep() {
              } 511
              pipe_wait() {
                prepare_to_wait() {
                } 760
                deactivate_task() {
                  dequeue_task() {
                    dequeue_task_fair() {
                      dequeue_entity() {
                        update_curr() {
                          update_min_vruntime() {
                          } 504
                        } 1587
                        clear_buddies() {
                        } 512
                        add_cfs_task_weight() {
                        } 519
                        update_min_vruntime() {
                        } 511
                      } 5602
                      dequeue_entity() {
                        update_curr() {
                          update_min_vruntime() {
                          } 496
                        } 1631
                        clear_buddies() {
                        } 496
                        update_min_vruntime() {
                        } 527
                      } 4580
                      hrtick_update() {
                        hrtick_start_fair() {
                        } 488
                      } 1489
                    } 13700
                  } 14949
                } 16016
                msecs_to_jiffies() {
                } 496
                put_prev_task_fair() {
                } 504
                pick_next_task_fair() {
                } 489
                pick_next_task_rt() {
                } 496
                pick_next_task_fair() {
                } 489
                pick_next_task_idle() {
                } 489
      
      ------------8<---------- thread 4 ------------8<----------
      
      finish_task_switch() {
      } 1203
      do_softirq() {
        __do_softirq() {
          __local_bh_disable() {
          } 669
          rcu_process_callbacks() {
            __rcu_process_callbacks() {
              cpu_quiet() {
                rcu_start_batch() {
                } 503
              } 1647
            } 3128
            __rcu_process_callbacks() {
            } 542
          } 5362
          _local_bh_enable() {
          } 587
        } 8880
      } 9986
      kthread_should_stop() {
      } 669
      deactivate_task() {
        dequeue_task() {
          dequeue_task_fair() {
            dequeue_entity() {
              update_curr() {
                calc_delta_mine() {
                } 511
                update_min_vruntime() {
                } 511
              } 2813
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      287b6e68
    • tracing/function-return-tracer: change the name into function-graph-tracer · fb52607a
      Committed by Frederic Weisbecker
      Impact: cleanup
      
      This patch changes the name of the "return function tracer" to
      function-graph-tracer, which is a more suitable name for a tracer
      that makes one able to retrieve the ordered call stack during
      the code flow.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fb52607a
    • x86, ftrace: call trace->open() before stopping tracing; add trace->print_header() · 8bba1bf5
      Committed by Markus Metzger
      Add a callback to allow an ftrace plug-in to write its own header.
      
      Move the call to trace->open() up a few lines.
      
      The changes are required by the BTS ftrace plug-in.
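
      An illustrative sketch of the callback pattern being added, assuming the
      header hook is optional and a default header is used when it is absent
      (the structure and names below are invented, not the actual ftrace ones):

        #include <stdio.h>

        struct plugin {
                const char *name;
                void (*open)(void);             /* optional setup hook      */
                void (*print_header)(void);     /* optional custom header   */
        };

        static void bts_header(void)
        {
                printf("# a BTS-specific header would go here\n");
        }

        static void start_output(struct plugin *p)
        {
                if (p->open)                    /* let the plug-in set up first */
                        p->open();

                if (p->print_header)            /* plug-in writes its own header */
                        p->print_header();
                else                            /* otherwise use the default one */
                        printf("#           TASK-PID    CPU#    TIMESTAMP  FUNCTION\n");
        }

        int main(void)
        {
                struct plugin bts = { "bts", NULL, bts_header };
                struct plugin function = { "function", NULL, NULL };

                start_output(&bts);
                start_output(&function);
                return 0;
        }
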
      Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8bba1bf5
  5. 23 November 2008, 6 commits
  6. 19 November 2008, 2 commits
    • ftrace: fix selftest locking · 86fa2f60
      Committed by Ingo Molnar
      Impact: fix self-test boot crash
      
      A self-test failure path forgot to re-lock the BKL, crashing the next
      initcall:
      
      Testing tracer irqsoff: .. no entries found ..FAILED!
      initcall init_irqsoff_tracer+0x0/0x11 returned 0 after 3906 usecs
      calling  init_mmio_trace+0x0/0xf @ 1
      ------------[ cut here ]------------
      Kernel BUG at c0c0a915 [verbose debug info unavailable]
      invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
      last sysfs file:
      
      Pid: 1, comm: swapper Not tainted (2.6.28-rc5-tip #53704)
      EIP: 0060:[<c0c0a915>] EFLAGS: 00010286 CPU: 1
      EIP is at unlock_kernel+0x10/0x2b
      EAX: ffffffff EBX: 00000000 ECX: 00000000 EDX: f7030000
      ESI: c12da19c EDI: 00000000 EBP: f7039f54 ESP: f7039f54
       DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
      Process swapper (pid: 1, ti=f7038000 task=f7030000 task.ti=f7038000)
      Stack:
       f7039f6c c0164d30 c013fed8 a7d8d7b4 00000000 00000000 f7039f74 c12fb78a
       f7039fd0 c0101132 c12fb77d 00000000 6f727200 6f632072 2d206564 c1002031
       0000000f f7039fa2 f7039fb0 3531b171 00000000 00000000 0000002f c12ca480
      Call Trace:
       [<c0164d30>] ? register_tracer+0x66/0x13f
       [<c013fed8>] ? ktime_get+0x19/0x1b
       [<c12fb78a>] ? init_mmio_trace+0xd/0xf
       [<c0101132>] ? do_one_initcall+0x4a/0x111
       [<c12fb77d>] ? init_mmio_trace+0x0/0xf
       [<c015c7e6>] ? init_irq_proc+0x46/0x59
       [<c12e851d>] ? kernel_init+0x104/0x152
       [<c12e8419>] ? kernel_init+0x0/0x152
       [<c01038b7>] ? kernel_thread_helper+0x7/0x10
      Code: 58 14 43 75 0a b8 00 9b 2d c1 e8 51 43 7a ff 64 a1 00 a0 37 c1 89 58 14 5b 5d c3 55 64 8b 15 00 a0 37 c1 83 7a 14 00 89 e5 79 04 <0f> 0b eb fe 8b 42 14 48 85 c0 89 42 14 79 0a b8 00 9b 2d c1 e8
      EIP: [<c0c0a915>] unlock_kernel+0x10/0x2b SS:ESP 0068:f7039f54
      ---[ end trace a7919e7f17c0a725 ]---
      Kernel panic - not syncing: Attempted to kill init!
      
      So clean up the flow a bit.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      86fa2f60
    • ftrace: preemptoff selftest not working · a2250634
      Committed by Heiko Carstens
      Impact: fix preemptoff and preemptirqsoff tracer self-tests
      
      I was wondering why the preemptoff and preemptirqsoff tracer selftests
      don't work on s390. After all, it's just that they get called from a
      non-preemptible context:
      
      kernel_init() will execute all initcalls; however, the first line in
      kernel_init() is lock_kernel(), which causes the preempt_count to be
      increased. Any later calls to add_preempt_count() (especially those
      from the selftests) will therefore not result in a call to
      trace_preempt_off(), since the check below in add_preempt_count()
      will be false:
      
              if (preempt_count() == val)
                      trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
      
      Hence the trace buffer will be empty.
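
      A toy model of why the hook stays silent while the BKL is held: a plain
      counter standing in for the real preempt_count machinery, with the printed
      string as a marker only (illustrative, not kernel code):

        #include <stdio.h>

        static int preempt_count;

        static void add_preempt_count(int val)
        {
                preempt_count += val;
                /* the tracing hook only fires on the 0 -> val transition */
                if (preempt_count == val)
                        printf("trace_preempt_off()\n");
        }

        static void sub_preempt_count(int val)
        {
                preempt_count -= val;
        }

        int main(void)
        {
                add_preempt_count(1);  /* lock_kernel() already raised the count */
                add_preempt_count(1);  /* selftest preempt_disable(): no hook     */
                sub_preempt_count(1);
                sub_preempt_count(1);  /* unlock_kernel()                         */

                add_preempt_count(1);  /* with the BKL released first: hook fires */
                sub_preempt_count(1);
                return 0;
        }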
      
      Fix this by releasing the BKL during the self-tests.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a2250634
  7. 18 November 2008, 3 commits
    • tracing: kernel/trace/trace.c: introduce missing kfree() · 0bb943c7
      Committed by Julia Lawall
      Impact: fix memory leak
      
      Error handling code following a kzalloc should free the allocated data.
      
      The semantic match that finds the problem is as follows:
      (http://www.emn.fr/x-info/coccinelle/)
      
      // <smpl>
      @r exists@
      local idexpression x;
      statement S;
      expression E;
      identifier f,l;
      position p1,p2;
      expression *ptr != NULL;
      @@
      
      (
      if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
      |
      x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
      ...
      if (x == NULL) S
      )
      <... when != x
           when != if (...) { <+...x...+> }
      x->f = E
      ...>
      (
       return \(0\|<+...x...+>\|ptr\);
      |
       return@p2 ...;
      )
      
      @script:python@
      p1 << r.p1;
      p2 << r.p2;
      @@
      
      print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
      // </smpl>
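
      The class of bug the semantic patch above flags, as a stripped-down
      user-space analogue (calloc/strdup/free standing in for kzalloc and
      kfree; the struct and function names are made up):

        #include <stdlib.h>
        #include <string.h>

        struct thing {
                char *name;
        };

        /* Buggy shape: an error after the allocation returns without freeing. */
        static struct thing *make_thing_leaky(const char *name)
        {
                struct thing *t = calloc(1, sizeof(*t));

                if (!t)
                        return NULL;
                t->name = strdup(name);
                if (!t->name)
                        return NULL;            /* leaks t */
                return t;
        }

        /* Fixed shape: the error path frees what was already allocated. */
        static struct thing *make_thing(const char *name)
        {
                struct thing *t = calloc(1, sizeof(*t));

                if (!t)
                        return NULL;
                t->name = strdup(name);
                if (!t->name) {
                        free(t);                /* the kind of kfree() this patch adds */
                        return NULL;
                }
                return t;
        }

        int main(void)
        {
                struct thing *a = make_thing_leaky("x");
                struct thing *b = make_thing("y");

                if (a) { free(a->name); free(a); }
                if (b) { free(b->name); free(b); }
                return 0;
        }
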
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0bb943c7
    • tracing/function-return-tracer: add the overrun field · 0231022c
      Committed by Frederic Weisbecker
      Impact: help to find the better depth of trace
      
      We arbitrarily defined the depth of the function return trace as
      "20". Perhaps this is not enough. To help find an optimal depth, we
      now measure the overrun: the number of functions that have been missed
      for the current thread. By default this is not displayed; we have to
      set a particular flag on the return tracer:
      
        echo overrun > /debug/tracing/trace_options
      
      The overrun will then be printed on the right.
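
      A sketch of the idea behind the overrun counter, under the assumption
      that it simply counts calls that would exceed the fixed depth limit
      (made-up names, not the kernel code):

        #include <stdio.h>

        #define MAX_DEPTH 20              /* the arbitrary depth discussed above */

        static int depth;
        static unsigned long overrun;     /* calls missed because we were too deep */

        static int push_return_trace(void)
        {
                if (depth >= MAX_DEPTH) {
                        overrun++;        /* too deep: count the miss, don't trace */
                        return -1;
                }
                depth++;
                return 0;
        }

        static void pop_return_trace(void)
        {
                depth--;
        }

        int main(void)
        {
                int traced = 0, i;

                for (i = 0; i < 25; i++)          /* simulate a call chain 25 deep */
                        if (push_return_trace() == 0)
                                traced++;
                while (traced-- > 0)
                        pop_return_trace();

                printf("Overruns: %lu\n", overrun);   /* 5 calls were missed */
                return 0;
        }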
      
      As the trace shows below, the current 20 depth is not enough.
      
      update_wall_time+0x37f/0x8c0 -> update_xtime_cache (345 ns) (Overruns: 2838)
      update_wall_time+0x384/0x8c0 -> clocksource_get_next (1141 ns) (Overruns: 2838)
      do_timer+0x23/0x100 -> update_wall_time (3882 ns) (Overruns: 2838)
      tick_do_update_jiffies64+0xbf/0x160 -> do_timer (5339 ns) (Overruns: 2838)
      tick_sched_timer+0x6a/0xf0 -> tick_do_update_jiffies64 (7209 ns) (Overruns: 2838)
      vgacon_set_cursor_size+0x98/0x120 -> native_io_delay (2613 ns) (Overruns: 274)
      vgacon_cursor+0x16e/0x1d0 -> vgacon_set_cursor_size (33151 ns) (Overruns: 274)
      set_cursor+0x5f/0x80 -> vgacon_cursor (36432 ns) (Overruns: 274)
      con_flush_chars+0x34/0x40 -> set_cursor (38790 ns) (Overruns: 274)
      release_console_sem+0x1ec/0x230 -> up (721 ns) (Overruns: 274)
      release_console_sem+0x225/0x230 -> wake_up_klogd (316 ns) (Overruns: 274)
      con_flush_chars+0x39/0x40 -> release_console_sem (2996 ns) (Overruns: 274)
      con_write+0x22/0x30 -> con_flush_chars (46067 ns) (Overruns: 274)
      n_tty_write+0x1cc/0x360 -> con_write (292670 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2a/0x90 -> native_apic_mem_write (330 ns) (Overruns: 274)
      irq_enter+0x17/0x70 -> idle_cpu (413 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2f/0x90 -> irq_enter (1525 ns) (Overruns: 274)
      ktime_get_ts+0x40/0x70 -> getnstimeofday (465 ns) (Overruns: 274)
      ktime_get_ts+0x60/0x70 -> set_normalized_timespec (436 ns) (Overruns: 274)
      ktime_get+0x16/0x30 -> ktime_get_ts (2501 ns) (Overruns: 274)
      hrtimer_interrupt+0x77/0x1a0 -> ktime_get (3439 ns) (Overruns: 274)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0231022c
    • tracing/ftrace: implement a set_flag callback for tracers · adf9f195
      Committed by Frederic Weisbecker
      Impact: give a way to send specific messages to tracers
      
      The current implementation of tracing uses some flags to control the
      output of general tracers. But we have no way to implement custom
      flag handling for a specific tracer. This patch proposes a new
      callback for struct tracer, called set_flag, and a structure
      that represents a 32-bit flags variable.
      
      A tracer can implement a struct tracer_flags in which it puts the
      initial value of the flags integer. Then it can place a range of flags,
      each with a name and a flag mask, on the flags integer. The structure
      that implements a single flag is called struct tracer_opt.
      
      These custom flags will be available through the trace_options file
      like the general tracing flags. Changing their value is done like the
      other general flags. For example, if you have a flag called "foo",
      you can activate it by writing "foo" or "nofoo" to trace_options.
      
      Note that the set_flag callback is optional and is only needed if you
      want flag changes to be signaled to your tracer, letting it
      accept or refuse their assignment.
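
      A hedged user-space analogue of the flag plumbing described above. The
      struct and callback names mirror the commit's description, but this is
      not the kernel API; the "foo" option and its bit value are invented:

        #include <stdio.h>
        #include <string.h>

        /* one selectable option: a name and its bit in the flags word */
        struct tracer_opt {
                const char *name;
                unsigned int bit;
        };

        /* the per-tracer flags value plus the options that act on it */
        struct tracer_flags {
                unsigned int val;
                struct tracer_opt *opts;
        };

        #define FOO_BIT 0x1

        static struct tracer_opt my_opts[] = {
                { "foo", FOO_BIT },
                { NULL, 0 },
        };

        static struct tracer_flags my_flags = { .val = 0, .opts = my_opts };

        /* optional callback: the tracer may refuse a change by returning non-zero */
        static int my_set_flag(unsigned int old_flags, unsigned int bit, int set)
        {
                (void)old_flags;
                printf("set_flag: bit 0x%x -> %d\n", bit, set);
                return 0;
        }

        /* roughly what writing "foo" or "nofoo" to trace_options would do */
        static int handle_option(const char *buf)
        {
                int set = strncmp(buf, "no", 2) != 0;
                const char *name = set ? buf : buf + 2;
                struct tracer_opt *opt;

                for (opt = my_opts; opt->name; opt++) {
                        if (strcmp(opt->name, name))
                                continue;
                        if (my_set_flag(my_flags.val, opt->bit, set))
                                return -1;              /* tracer refused */
                        if (set)
                                my_flags.val |= opt->bit;
                        else
                                my_flags.val &= ~opt->bit;
                        return 0;
                }
                return -1;                              /* unknown option */
        }

        int main(void)
        {
                handle_option("foo");
                handle_option("nofoo");
                printf("flags now 0x%x\n", my_flags.val);
                return 0;
        }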
      
      V2: Some arrangements in coding style....
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      adf9f195
  8. 17 November 2008, 1 commit
  9. 16 November 2008, 3 commits
  10. 13 November 2008, 6 commits
    • ftrace: CPU buffer start annotation clean ups · 12ef7d44
      Committed by Steven Rostedt
      Impact: better handling of CPU buffer start annotation
      
      Because the per-CPU buffers wrap, one CPU might be more active at the
      end of the trace than the other CPUs, causing that one CPU to have a
      shorter history. Kernel developers were confused by the "missing" data
      of that one CPU at the beginning of the trace output, so an annotation
      was added to the trace output to show that the buffer had started:
      
       # tracer: function
       #
       #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
       #              | |       |          |         |
       ##### CPU 3 buffer started ####
                <idle>-0     [003]   158.192959: smp_apic_timer_interrupt
       [...]
                 <idle>-0     [003]   161.556520: default_idle
       ##### CPU 1 buffer started ####
                 <idle>-0     [001]   161.592494: hrtimer_force_reprogram
       [etc]
      
      But this annotation gets a bit messy when tracers do not fill the
      buffers. This patch does a couple of things:
      
       One) it adds a flag to trace_options to disable these annotations
      
       Two) it does not annotate if the tracer did not overflow its buffer.
      
      This makes the output much cleaner.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      12ef7d44
    • ftrace: rename iter_ctrl to trace_options · ee6bce52
      Committed by Steven Rostedt
      Impact: rename file /debug/tracing/iter_ctrl to /debug/tracing/trace_options
      
      The original ftrace had a file called "iter_ctrl" that would control
      the way the output was iterated. But this file grew into a catch-all
      for different trace options. This patch renames the file from iter_ctrl
      to trace_options to reflect this change.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ee6bce52
    • ftrace: show buffer size in kilobytes · 1696b2b0
      Committed by Steven Rostedt
      Impact: change the units of buffer_size_kb to kilobytes
      
      This patch changes the units of the buffer_size_kb file to kilobytes.
      Reading from and writing to the file uses kilobytes as the unit. To help
      users know what units are used, the output of the file now
      looks like:
      
        # cat /debug/tracing/buffer_size_kb
        1408
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1696b2b0
    • ftrace: rename trace_entries to buffer_size_kb · a94c80e7
      Committed by Steven Rostedt
      Impact: rename of debugfs file trace_entries to buffer_size_kb
      
      The original ftrace had fixed-size entries, and the number of entries
      was shown and modified via the file called trace_entries. By converting
      to the unified trace buffer, we now allow for variable-size entries,
      which makes the meaning of trace_entries pointless.
      
      Since trace_size might be confused with the size of the trace, this patch
      names it "buffer_size_kb" (thanks to Arjan van de Ven for this idea).
      
      [ mingo@elte.hu: changed from buffer_size to buffer_size_kb ]
      
      ( Note, the units are still bytes - the next patch changes that,
        to keep the wide rename patch separate from the unit-change patch. )
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a94c80e7
    • ftrace: rename unlikely iter_ctrl to branch · 9f029e83
      Committed by Steven Rostedt
      Impact: rename of iter_ctrl unlikely to branch
      
      The unlikely name is ugly. This patch converts the iter_ctrl commands
      "unlikely" and "nounlikely" to "branch" and "nobranch", respectively.
      
      It also renames a lot of internal functions to use "branch" instead
      of "unlikely".
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9f029e83
    • trace: rename unlikely profiler to branch profiler · 2ed84eeb
      Committed by Steven Rostedt
      Impact: name change of unlikely tracer and profiler
      
      Ingo Molnar suggested changing the config from UNLIKELY_PROFILE
      to BRANCH_PROFILING. I never did like the "unlikely" name, so I
      went one step further and renamed all the unlikely configurations
      to a "BRANCH" variant.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2ed84eeb
  11. 12 November 2008, 3 commits
    • tracing: finetune branch-tracer output · 68d119f0
      Committed by Ingo Molnar
      Steve suggested changing the output from this:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [ .... ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [ .... ] calc_delta_fair:sched_fair.c:411
      
      to this:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [  ok  ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [  ok  ] calc_delta_fair:sched_fair.c:411
      
      as it makes it clearer to the user what it means exactly.
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      68d119f0
    • tracing: branch tracer, tweak output · f88c4ae9
      Committed by Ingo Molnar
      Impact: modify the tracer output, to make it a bit easier to read
      
      Change the output from:
      
      >  bash-3471  [003]   357.014755: [INCORRECT] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [correct] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [correct] calc_delta_fair:sched_fair.c:411
      
      to:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [ .... ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [ .... ] calc_delta_fair:sched_fair.c:411
      
      It's good to have fields aligned vertically, and the only important
      information is a prediction miss, so display only that information.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f88c4ae9
    • tracing: likely/unlikely branch annotation tracer · 52f232cb
      Committed by Steven Rostedt
      Impact: new likely/unlikely branch tracer
      
      This patch adds a way to record the instances of the likely() and unlikely()
      branch condition annotations.
      
      When "unlikely" is set in /debugfs/tracing/iter_ctrl the unlikely conditions
      will be added to any of the ftrace tracers. The change takes effect when
      a new tracer is passed into the current_tracer file.
      
      For example:
      
       bash-3471  [003]   357.014755: [INCORRECT] sched_info_dequeued:sched_stats.h:177
       bash-3471  [003]   357.014756: [correct] update_curr:sched_fair.c:489
       bash-3471  [003]   357.014758: [correct] calc_delta_fair:sched_fair.c:411
       bash-3471  [003]   357.014759: [correct] account_group_exec_runtime:sched_stats.h:356
       bash-3471  [003]   357.014761: [correct] update_curr:sched_fair.c:489
       bash-3471  [003]   357.014763: [INCORRECT] calc_delta_fair:sched_fair.c:411
       bash-3471  [003]   357.014765: [correct] calc_delta_mine:sched.c:1279
      
      This shows the normal tracer heading, as well as whether the condition was
      correct ("[correct]") or mistaken ("[INCORRECT]"), followed by the function,
      file name, and line number.
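
      A self-contained user-space sketch of the idea: wrapper macros that record
      whether each likely()/unlikely() annotation matched the branch actually
      taken. The kernel does this per call site with file and line information;
      the names below are made up:

        #include <stdio.h>

        static unsigned long correct, incorrect;

        /* record whether the branch went the way the annotation predicted */
        static int profile_branch(int cond, int expect, const char *func, int line)
        {
                if (cond == expect) {
                        correct++;
                } else {
                        incorrect++;
                        printf("[INCORRECT] %s:%d\n", func, line);
                }
                return cond;
        }

        #define likely(x)   profile_branch(!!(x), 1, __func__, __LINE__)
        #define unlikely(x) profile_branch(!!(x), 0, __func__, __LINE__)

        int main(void)
        {
                int hits = 0, i;

                for (i = 0; i < 10; i++) {
                        if (unlikely(i == 3))   /* taken once: one incorrect prediction */
                                hits++;
                        if (likely(i < 10))     /* always taken: always correct */
                                hits++;
                }
                printf("hits=%d correct=%lu incorrect=%lu\n", hits, correct, incorrect);
                return 0;
        }
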
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      52f232cb
  12. 11 November 2008, 3 commits
    • tracing: add a tracer to catch execution time of kernel functions · 15e6cb36
      Committed by Frederic Weisbecker
      Impact: add new tracing plugin which can trace full (entry+exit) function calls
      
      This tracer uses the low-level function return ftrace plugin to
      measure the execution time of kernel functions.
      
      The first field is the caller of the function, the second is the
      measured function, and the last one is the execution time in
      nanoseconds.
      
      - v3:
      
      - HAVE_FUNCTION_RET_TRACER has been added. Each arch that supports function
        return tracing should enable it.
      - ftrace_return_stub becomes ftrace_stub.
      - CONFIG_FUNCTION_RET_TRACER now depends on CONFIG_FUNCTION_TRACER
      - Return trace printing can be used by other tracers in trace.c
      - Adapt to the new tracing API (no more ctrl_update callback)
      - Correct the check of "disabled" during insertion.
      - Minor changes...
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15e6cb36
    • ftrace: prevent ftrace_special from recursion · 5aa1ba6a
      Committed by Steven Rostedt
      Impact: stop ftrace_special from recursion
      
      ftrace_special is used to help debug areas of the kernel.
      Because of this, if it is put in certain locations, the fact that
      it allows recursion can become a problem if the kernel developer
      using it does not realize that.
      
      This patch changes ftrace_special to not allow recursion into itself,
      to make it more robust.
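
      A rough sketch of a recursion guard of the kind described, assuming a
      simple "already inside" flag (the real code would use per-CPU state and,
      as noted below, disable interrupts; everything here is illustrative):

        #include <stdio.h>

        static int in_trace_special;    /* stand-in for "already tracing" state */

        static void record(long arg)
        {
                printf("special: %ld\n", arg);
        }

        static void my_trace_special(long arg)
        {
                if (in_trace_special)   /* refuse to recurse into ourselves */
                        return;
                in_trace_special = 1;

                record(arg);            /* if this path called my_trace_special()
                                         * again, the guard above would stop it */

                in_trace_special = 0;
        }

        int main(void)
        {
                my_trace_special(42);
                return 0;
        }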
      
      It also changes from disabling preemption to disabling interrupts, to
      prevent any loss of trace entries.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5aa1ba6a
    • ftrace: disable tracing on resize · bf5e6519
      Committed by Steven Rostedt
      Impact: fix for bug on resize
      
      This patch addresses the bug found here:
      
       http://bugzilla.kernel.org/show_bug.cgi?id=11996
      
      When ftrace was converted to the new unified trace buffer, the resizing of
      the buffer was not protected as much as it was originally. If tracing
      is performed while the resize occurs, then the buffer can be corrupted.
      
      This patch disables all ftrace buffer modifications before a resize
      takes place.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      bf5e6519
  13. 08 November 2008, 4 commits
    • ftrace: display start of CPU buffer in trace output · a309720c
      Committed by Steven Rostedt
      Impact: change in trace output
      
      Because the trace buffers are per-CPU ring buffers, the start of
      the trace can be confusing. If one CPU is very active at the
      end of the trace, its history will not go as far back as the
      other CPUs' traces. This means that output for a particular CPU
      may not appear for the first part of a trace.
      
      To help annotate what is happening, and to prevent any more
      confusion, this patch adds a line that annotates the start of
      a CPU buffer output.
      
      For example:
      
             automount-3495  [001]   184.596443: dnotify_parent <-vfs_write
      [...]
             automount-3495  [001]   184.596449: dput <-path_put
             automount-3496  [002]   184.596450: down_read_trylock <-do_page_fault
      [...]
                 sshd-3497  [001]   184.597069: up_read <-do_page_fault
                <idle>-0     [000]   184.597074: __exit_idle <-exit_idle
      [...]
             automount-3496  [002]   184.597257: filemap_fault <-__do_fault
                <idle>-0     [003]   184.597261: exit_idle <-smp_apic_timer_interrupt
      
      Note, parsers of a trace output should always ignore any lines that
      start with a '#'.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a309720c
    • ftrace: remove trace array ctrl · c76f0694
      Committed by Steven Rostedt
      Impact: remove obsolete variable in trace_array structure
      
      With the new start / stop method of ftrace, the ctrl variable
      in the trace_array structure is now obsolete. Remove it.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c76f0694
    • ftrace: remove ctrl_update method · bbf5b1a0
      Committed by Steven Rostedt
      Impact: Remove the ctrl_update tracer method
      
      With the new quick start/stop method of tracing, the ctrl_update
      method is out of date.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bbf5b1a0
    • ftrace: enable trace_printk by default · 49833fc2
      Committed by Steven Rostedt
      Impact: have the ftrace_printk enabled on startup
      
      It is confusing to have to "echo trace_printk > /debug/tracing/iter_ctrl"
      after adding ftrace_printk in the kernel.
      
      Currently trace_printk is set to off by default. ftrace_printk
      should only be in open kernel code when used for debugging, and thus
      it should be enabled by default.
      
      It may also be used to record data within a tracer, but those ftrace_printks
      should be within wrappers that are either enabled by tracepoints or
      have a variable protecting the code path from being entered when the
      tracer is disabled.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      49833fc2
  14. 06 November 2008, 1 commit
    • ftrace: restructure tracing start/stop infrastructure · 9036990d
      Committed by Steven Rostedt
      Impact: change where tracing is started up and stopped
      
      Currently, when a new tracer is selected by echoing a tracer name into
      the current_tracer file, the startup is only done if tracing_enabled is
      set to one. If tracing_enabled is changed to zero (by echoing 0 into
      the tracing_enabled file) a full shutdown is performed.
      
      The full startup and shutdown of a tracer can be expensive, and the
      user can lose traces when echoing 0 into the tracing_enabled file,
      because the process takes too long. There can also be cases where
      the user would like to start and stop the tracer several times, and
      doing the full startup and shutdown of a tracer might be too expensive.
      
      This patch performs the full startup and shutdown when a tracer is
      selected. It also adds a way to do a quick start or stop of a tracer.
      The quick version is just a flag that prevents the tracing from
      taking place, but the overhead of the code is still there.
      
      For example, the startup of a tracer may enable tracepoints, or enable
      the function tracer.  The stop and start will just set a flag to
      have the tracer ignore the calls when the tracepoint or function trace
      is called.  The overhead of the tracer may still be present when
      the tracer is stopped, but no tracing will occur. Setting the tracer
      to the 'nop' tracer (or any other tracer) will perform the shutdown
      of the tracer which will disable the tracepoint or disable the
      function tracer.
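
      A schematic of the two-level scheme described above, using made-up
      user-space structures (not the kernel's): selecting a tracer does the
      expensive init/reset, while the tracing_enabled flag only gates
      recording cheaply and calls the optional start/stop hooks.

        #include <stdio.h>

        struct tracer {
                const char *name;
                void (*init)(void);     /* expensive: e.g. enable tracepoints   */
                void (*reset)(void);    /* expensive: tear everything back down */
                void (*start)(void);    /* optional extra work on quick start   */
                void (*stop)(void);     /* optional extra work on quick stop    */
        };

        static int tracing_on;                  /* the quick start/stop flag */
        static struct tracer *current_tracer;

        static void select_tracer(struct tracer *t)
        {
                if (current_tracer && current_tracer->reset)
                        current_tracer->reset();        /* full shutdown */
                current_tracer = t;
                if (t && t->init)
                        t->init();                      /* full startup  */
        }

        static void tracing_enabled_write(int on)
        {
                tracing_on = on;                        /* cheap flag flip */
                if (!current_tracer)
                        return;
                if (on && current_tracer->start)
                        current_tracer->start();
                else if (!on && current_tracer->stop)
                        current_tracer->stop();
        }

        static void trace_hot_path(const char *what)
        {
                if (!tracing_on)        /* overhead remains, recording stops */
                        return;
                printf("trace: %s\n", what);
        }

        static void irqsoff_init(void)  { printf("irqsoff: full startup\n"); }
        static void irqsoff_reset(void) { printf("irqsoff: full shutdown\n"); }
        static void irqsoff_stop(void)  { printf("irqsoff: stop recording max latency\n"); }

        static struct tracer irqsoff = { "irqsoff", irqsoff_init, irqsoff_reset,
                                         NULL, irqsoff_stop };

        int main(void)
        {
                select_tracer(&irqsoff);        /* echo irqsoff > current_tracer  */
                tracing_enabled_write(1);       /* echo 1 > tracing_enabled       */
                trace_hot_path("event A");
                tracing_enabled_write(0);       /* quick stop via tracing_enabled */
                trace_hot_path("event B");      /* not recorded                   */
                select_tracer(NULL);            /* switching away: full shutdown  */
                return 0;
        }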
      
      The tracing_enabled file will simply start or stop tracing.
      
      This change is all internal. The end result for the user should be the same
      as before. If tracing_enabled is not set, no trace will happen.
      If tracing_enabled is set, then the trace will happen. The tracing_enabled
      setting persists across tracers: enabling tracing_enabled and
      going to another tracer will keep tracing_enabled enabled. The same
      is true for disabling tracing_enabled.
      
      This patch will now provide a fast start/stop method to the users
      for enabling or disabling tracing.
      
      Note: There were two methods in struct tracer that were never
       used: start and stop. These were meant to be a hook into
       the reading of the trace output, but ended up not being
       necessary. These two methods are now used to enable the start
       and stop of each tracer, in case the tracer needs to do more than
       just not write into the buffer. For example, the irqsoff tracer
       must stop recording max latencies when tracing is stopped.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9036990d