1. 15 Jun 2011, 1 commit
    • tracing, function_graph: Remove dependency of abstime and duration fields on latency · 321e68b0
      Authored by Jiri Olsa
      The display of absolute time and duration fields is based on the
      latency field. This was added during the irqsoff/wakeup tracers
      graph support changes.
      
      It's causing confusion about which fields will be displayed for the
      function_graph tracer itself. So I'm removing this dependency and
      adding the absolute time and duration fields to the preemptirqsoff,
      preemptoff, irqsoff and wakeup tracers.
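
      The fix amounts to passing the absolute-time and duration flags
      explicitly for these tracers instead of keying them off the latency
      format. A minimal sketch in C, using the function_graph flag names
      (the exact macro below is illustrative, not necessarily this
      commit's code):

      	#define GRAPH_TRACER_FLAGS (TRACE_GRAPH_PRINT_CPU | \
      				    TRACE_GRAPH_PRINT_PROC | \
      				    TRACE_GRAPH_PRINT_ABS_TIME | \
      				    TRACE_GRAPH_PRINT_DURATION)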
      
      With the following commands:
      	# echo function_graph > ./current_tracer
      	# cat trace
      
      This is what it looked like before:
      # tracer: function_graph
      #
      #     TIME        CPU  DURATION                  FUNCTION CALLS
      #      |          |     |   |                     |   |   |   |
       0)   0.068 us    |          } /* page_add_file_rmap */
       0)               |          _raw_spin_unlock() {
      ...
      
      This is what it looks like now:
      # tracer: function_graph
      #
      # CPU  DURATION                  FUNCTION CALLS
      # |     |   |                     |   |   |   |
       0)   0.068 us    |                } /* add_preempt_count */
       0)   0.993 us    |              } /* vfsmount_lock_local_lock */
      ...
      
      For the preemptirqsoff, preemptoff, irqsoff and wakeup tracers,
      this is what it looked like before:
      SNIP
      #                       _-----=> irqs-off
      #                      / _----=> need-resched
      #                     | / _---=> hardirq/softirq
      #                     || / _--=> preempt-depth
      #                     ||| / _-=> lock-depth
      #                     |||| /
      # CPU  TASK/PID       |||||  DURATION                  FUNCTION CALLS
      # |     |    |        |||||   |   |                     |   |   |   |
       1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      
      This is what it looks like now:
      SNIP
      #
      #                                       _-----=> irqs-off
      #                                      / _----=> need-resched
      #                                     | / _---=> hardirq/softirq
      #                                     || / _--=> preempt-depth
      #                                     ||| /
      #     TIME        CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
      #      |          |     |    |        ||||   |   |                     |   |   |   |
         19.847735 |   1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      321e68b0
  2. 19 May 2011, 1 commit
    • ftrace: Implement separate user function filtering · b848914c
      Authored by Steven Rostedt
      ftrace_ops that are registered to trace functions can now be
      agnostic to each other with respect to which functions they trace.
      Each ops has its own hash of the functions it wants to trace and a
      hash of the functions it does not want to trace. An empty hash for
      the functions to trace means that every function not in the notrace
      hash should be traced.
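
      A minimal sketch of that per-ops decision in C (ops_wants_ip and
      the hash helpers are illustrative names, not necessarily the exact
      kernel API):

      	/* Should this ops trace the function at ip? */
      	static bool ops_wants_ip(struct ftrace_ops *ops, unsigned long ip)
      	{
      		/* An empty filter hash means "trace every function"... */
      		if (!hash_empty(ops->filter_hash) &&
      		    !hash_lookup(ops->filter_hash, ip))
      			return false;
      		/* ...except those listed in the notrace hash. */
      		if (hash_lookup(ops->notrace_hash, ip))
      			return false;
      		return true;
      	}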
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b848914c
  3. 31 Mar 2011, 1 commit
  4. 20 Jan 2011, 1 commit
    • lockdep: Move early boot local IRQ enable/disable status to init/main.c · 2ce802f6
      Authored by Tejun Heo
      During early boot, local IRQs are disabled until the IRQ subsystem
      is properly initialized.  During this time, no one should enable
      local IRQs, and some operations that are usually not allowed with
      IRQs disabled, e.g. operations that might sleep or require
      communication with other processors, are allowed.

      lockdep tracked this with the early_boot_irqs_off/on() callbacks.
      As other subsystems need this information too, move it to
      init/main.c and make it generally available.  While at it, invert
      the boolean to early_boot_irqs_disabled instead of enabled, so that
      it can be initialized with %false and %true indicates the
      exceptional condition.
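
      A sketch of the resulting flag and how boot code toggles it,
      assuming a simplified start_kernel() (the placement of the
      assignments is illustrative):

      	/* init/main.c: generally available instead of lockdep-private */
      	bool early_boot_irqs_disabled __read_mostly;

      	asmlinkage void __init start_kernel(void)
      	{
      		...
      		early_boot_irqs_disabled = true;  /* the exceptional state */
      		...
      		/* IRQ subsystem is up; leaving the exceptional state */
      		early_boot_irqs_disabled = false;
      		local_irq_enable();
      		...
      	}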
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110120110635.GB6036@htj.dyndns.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2ce802f6
  5. 18 Oct 2010, 2 commits
  6. 21 Jul 2010, 1 commit
    • tracing: Shrink max latency ringbuffer if unnecessary · ef710e10
      Authored by KOSAKI Motohiro
      Documentation/trace/ftrace.txt says
      
        buffer_size_kb:
      
              This sets or displays the number of kilobytes each CPU
              buffer can hold. The tracer buffers are the same size
              for each CPU. The displayed number is the size of the
              CPU buffer and not total size of all buffers. The
              trace buffers are allocated in pages (blocks of memory
              that the kernel uses for allocation, usually 4 KB in size).
              If the last page allocated has room for more bytes
              than requested, the rest of the page will be used,
              making the actual allocation bigger than requested.
              ( Note, the size may not be a multiple of the page size
                due to buffer management overhead. )
      
              This can only be updated when the current_tracer
              is set to "nop".
      
      But this is incorrect: the current total memory consumption is
      'buffer_size_kb x CPUs x 2'.

      Why the factor of two? Because ftrace implicitly allocates a second
      buffer of the same size for the max latency trace.

      That makes for a sad result when an admin wants to use a large
      buffer for full logging and detailed analysis. For example, on a
      machine with 24 CPUs, writing 200MB to buffer_size_kb makes the
      system consume ~10GB of memory (200MB x 24 x 2). Wasting ~5GB of
      memory is usually unacceptable.

      Fortunately, almost all users don't use the max latency feature,
      and the max latency buffer can be disabled easily.

      This patch shrinks the max latency buffer when it is not needed.
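
      A hedged sketch of the idea: each tracer declares whether it uses
      the max latency buffer, and switching to one that does not lets the
      buffer shrink to a single page (the use_max_tr field and the
      surrounding code are simplified, not necessarily this commit's
      exact diff):

      	/* in tracing_set_tracer(), when switching tracers: */
      	if (current_trace && current_trace->use_max_tr && !t->use_max_tr) {
      		/* shrink the max latency buffer to one page */
      		ring_buffer_resize(max_tr.buffer, 1);
      		max_tr.entries = 1;
      	}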
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ef710e10
  7. 28 Apr 2010, 1 commit
  8. 12 Dec 2009, 1 commit
    • tracing: Add stack trace to irqsoff tracer · cc51a0fc
      Authored by Steven Rostedt
      The irqsoff tracer and friends help in finding causes of latency in
      the kernel. They also work with the function tracer to show what
      was happening when interrupts or preemption were disabled. But the
      function tracer has a bit of overhead and can cause exaggerated
      readings.

      Currently, when tracing with /proc/sys/kernel/ftrace_enabled = 0,
      where the function tracer is disabled, the information provided can
      end up being useless. For example, a two-and-a-half millisecond
      latency showed only:
      
       # tracer: preemptirqsoff
       #
       # preemptirqsoff latency trace v1.1.5 on 2.6.32
       # --------------------------------------------------------------------
       # latency: 2463 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -4242 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: _spin_lock_irqsave
       #  => ended at:   remove_wait_queue
       #
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /_--=> lock-depth
       #                |||||/     delay
       #  cmd     pid   |||||| time  |   caller
       #     \   /      ||||||   \   |   /
       hackbenc-4242    2d....    0us!: trace_hardirqs_off <-_spin_lock_irqsave
       hackbenc-4242    2...1. 2463us+: _spin_unlock_irqrestore <-remove_wait_queue
       hackbenc-4242    2...1. 2466us : trace_preempt_on <-remove_wait_queue
      
      The above lets us know that hackbench with pid 4242 grabbed a spin
      lock somewhere and enabled preemption at remove_wait_queue. This
      helps a little, but where this actually happened is not shown.
      
      This patch adds the stack dump to the end of the irqsoff tracer. This provides
      the following output:
      
       hackbenc-4242    2d....    0us!: trace_hardirqs_off <-_spin_lock_irqsave
       hackbenc-4242    2...1. 2463us+: _spin_unlock_irqrestore <-remove_wait_queue
       hackbenc-4242    2...1. 2466us : trace_preempt_on <-remove_wait_queue
       hackbenc-4242    2...1. 2467us : <stack trace>
        => sub_preempt_count
        => _spin_unlock_irqrestore
        => remove_wait_queue
        => free_poll_entry
        => poll_freewait
        => do_sys_poll
        => sys_poll
        => system_call_fastpath
      
      Now we see that the culprit of this latency was the free_poll_entry code.
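
      A sketch of where the dump is taken: at the point
      check_critical_timing() records the new maximum (the skip depth and
      surrounding arguments are illustrative):

      	/* in check_critical_timing(), after logging the latency: */
      	trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc);
      	__trace_stack(tr, flags, 5, pc);  /* new: record the call chain */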
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      cc51a0fc
  9. 13 Sep 2009, 2 commits
  10. 05 Sep 2009, 1 commit
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Authored by Steven Rostedt
      Currently the latency tracers reset the ring buffer. Unfortunately,
      if a commit is in progress (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer will detect
      the corruption and then permanently disable itself.

      The bug does not crash the system, but it does prevent further
      tracing after the bug is hit.

      Instead of resetting the trace buffers, the timestamp of the start
      of the trace is used. The buffers will still contain the previous
      data, but the output will not count any data that is before the
      timestamp of the trace.
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
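
      A minimal sketch of the approach (names illustrative): remember
      when the trace started, and have the static output skip anything
      older:

      	/* on trace start, instead of resetting the ring buffer: */
      	tr->time_start = ftrace_now(cpu);

      	/* in the output path, for each entry's timestamp ts: */
      	if (ts <= tr->time_start)
      		continue;  /* entry predates this trace; don't count it */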
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2f26ebd5
  11. 05 Mar 2009, 1 commit
  12. 18 Feb 2009, 1 commit
  13. 05 Feb 2009, 1 commit
  14. 23 Jan 2009, 1 commit
  15. 21 Jan 2009, 1 commit
    • trace: set max latency variable to zero on default · 1092307d
      Authored by Steven Rostedt
      Impact: trace max latencies on start of latency tracing
      
      This patch sets the max latency to zero whenever one of the irq
      variant tracers or the wakeup tracer is set as the current tracer.

      Most developers expect to see output when starting up a latency
      tracer. But since max_latency is already set to max, and it takes a
      latency greater than max_latency to be recorded, there is no trace.
      This is not the expected behavior and has even confused me.
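
      The change itself is tiny. A sketch of the tracer start path
      (tracing_max_latency is the real variable; the surrounding function
      is elided):

      	/* when an irqsoff variant or the wakeup tracer starts: */
      	tracing_max_latency = 0;  /* record the very first latency seen */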
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1092307d
  16. 16 Jan 2009, 1 commit
    • trace: set max latency variable to zero on default · 745b1626
      Authored by Steven Rostedt
      Impact: trace max latencies on start of latency tracing
      
      This patch sets the max latency to zero whenever one of the irq
      variant tracers or the wakeup tracer is set as the current tracer.

      Most developers expect to see output when starting up a latency
      tracer. But since max_latency is already set to max, and it takes a
      latency greater than max_latency to be recorded, there is no trace.
      This is not the expected behavior and has even confused me.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      745b1626
  17. 16 Nov 2008, 1 commit
  18. 08 Nov 2008, 3 commits
  19. 06 Nov 2008, 1 commit
    • ftrace: restructure tracing start/stop infrastructure · 9036990d
      Authored by Steven Rostedt
      Impact: change where tracing is started up and stopped
      
      Currently, when a new tracer is selected by echoing a tracer name
      into the current_tracer file, the startup is only done if
      tracing_enabled is set to one. If tracing_enabled is changed to
      zero (by echoing 0 into the tracing_enabled file), a full shutdown
      is performed.
      
      The full startup and shutdown of a tracer can be expensive, and the
      user can lose traces when echoing 0 into the tracing_enabled file
      because the process takes too long. There are also cases where the
      user would like to start and stop the tracer several times, and the
      full startup and shutdown might be too expensive.
      
      This patch performs the full startup and shutdown when a tracer is
      selected. It also adds a way to do a quick start or stop of a tracer.
      The quick version is just a flag that prevents the tracing from
      taking place, but the overhead of the code is still there.
      
      For example, the startup of a tracer may enable tracepoints or the
      function tracer.  The stop and start will just set a flag to have
      the tracer ignore the calls when the tracepoint or function tracer
      is invoked.  The overhead of the tracer may still be present when
      the tracer is stopped, but no tracing will occur. Setting the tracer
      to the 'nop' tracer (or any other tracer) will perform the shutdown
      of the tracer which will disable the tracepoint or disable the
      function tracer.
      
      The tracing_enabled file will simply start or stop tracing.
      
      This change is all internal; the end result for the user should be
      the same as before. If tracing_enabled is not set, no trace will
      happen. If tracing_enabled is set, the trace will happen. The
      tracing_enabled variable is static between tracers: enabling
      tracing_enabled and switching to another tracer will keep
      tracing_enabled enabled, and the same is true for disabling it.
      
      This patch thus provides users with a fast start/stop method for
      enabling or disabling tracing.
      
      Note: There were two methods in struct tracer that were never used:
       start and stop. These were meant as hooks into the reading of the
       trace output but ended up not being necessary. They are now used
       to start and stop each tracer, in case the tracer needs to do more
       than just not write into the buffer. For example, the irqsoff
       tracer must stop recording max latencies when tracing is stopped.
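
      A hedged sketch of the quick stop/start, ignoring the locking and
      the ring buffer handling of the real code:

      	static int trace_stop_count;  /* simplified: unguarded here */

      	void tracing_stop(void)
      	{
      		/* nested stops just bump the count */
      		trace_stop_count++;
      	}

      	void tracing_start(void)
      	{
      		if (trace_stop_count > 0)
      			trace_stop_count--;
      		/* tracing resumes once the count drops back to zero */
      	}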
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9036990d
  20. 21 Oct 2008, 1 commit
  21. 14 Oct 2008, 3 commits
    • ftrace: move pc counter in irqtrace · 6450c1d3
      Authored by Steven Rostedt
      The assignment of the pc counter is in the wrong spot in the
      check_critical_timing() function: the pc variable is used on the
      out jump path, so it must be set before that jump can be taken.
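
      A sketch of the corrected ordering (the function body is heavily
      elided; only the placement of the assignment matters):

      	/* in check_critical_timing(): */
      	int pc = preempt_count();  /* moved up: set before any goto out */

      	if (!report_latency(delta))
      		goto out;
      	...
      out:
      	trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc);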
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6450c1d3
    • ftrace: preempt disable over interrupt disable · 38697053
      Authored by Steven Rostedt
      With the new ring buffer infrastructure in ftrace, I'm trying to
      make ftrace a little more lightweight.

      This patch converts a lot of the local_irq_save/restore calls into
      preempt_disable/enable. In many cases the original preempt count
      has to be passed in as a parameter so that it can be recorded
      correctly; some places were recording it incorrectly before anyway.
      
      This is also laying the groundwork to make ftrace a little more
      reentrant and remove all locking. The function tracers must still
      protect against reentrancy.
      
      Note: All the function tracers must be careful when using
        preempt_disable. They must do the following:

        /* note whether a reschedule is already pending */
        resched = need_resched();
        preempt_disable_notrace();
        [...]
        /* if one was pending, don't let the enable path schedule:
           we may be tracing inside schedule() itself */
        if (resched)
      	preempt_enable_no_resched_notrace();
        else
      	preempt_enable_notrace();
      
      The reason is that if this function traces schedule() itself, the
      preempt_enable_notrace() will cause a schedule, which will lead
      us into a recursive failure.
      
      If we needed to reschedule before calling preempt_disable, we
      should have already scheduled. Since we did not, it is most likely
      that we should not, and we are probably inside a schedule function.
      
      If resched was not set, we still need to catch the need-resched
      flag being set while preemption was off, and the check at the end
      will catch that for us.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38697053
    • ftrace: make work with new ring buffer · 3928a8a2
      Authored by Steven Rostedt
      This patch ports ftrace over to the new ring buffer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3928a8a2
  22. 26 Jul 2008, 1 commit
  23. 19 Jul 2008, 1 commit
    • ftrace: only trace preempt off with preempt tracer · 1e01cb0c
      Authored by Steven Rostedt
      When PREEMPT_TRACER and IRQSOFF_TRACER are both configured and the
      irqsoff tracer is running, the preempt_off sections might also be
      traced.
      
      Thanks to Andrew Morton for pointing out, while he was reviewing
      ftrace.txt, my mistake about spin_lock disabling interrupts. It
      seems that the example I used actually hit this bug.
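
      The fix is an extra !irq_trace() test in the preempt hooks, so the
      preempt-off timing is skipped while the irqsoff variant is covering
      the same section. A sketch along the lines of the actual check:

      	#ifdef CONFIG_PREEMPT_TRACER
      	void trace_preempt_off(unsigned long a0, unsigned long a1)
      	{
      		/* let the irqsoff tracer own this section if it's active */
      		if (preempt_trace() && !irq_trace())
      			start_critical_timing(a0, a1);
      	}
      	#endif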
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1e01cb0c
  24. 27 May 2008, 1 commit
    • ftrace: remove printks from irqsoff trace · da89a7a2
      Authored by Steven Rostedt
      Printing out new max latencies was fine for the old RT tracer, but
      for mainline it is a bit messy. We also need to test whether the
      run queue is locked before we can print, which means we may not
      print latencies at all if the run queue is locked on another CPU.
      This produces inconsistencies in the output.
      
      This patch simply removes the print altogether.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: pq@iki.fi
      Cc: proski@gnu.org
      Cc: sandmann@redhat.com
      Cc: a.p.zijlstra@chello.nl
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      da89a7a2
  25. 24 May 2008, 10 commits