1. 24 May 2008, 26 commits
    • ftrace: fix time offset · 4e3c3333
      Ingo Molnar committed
      fix time offset calculations and ordering, plus make code more consistent.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: startup tester on dynamic tracing. · 77a2b37d
      Steven Rostedt committed
      This patch adds a startup self test on dynamic code modification
      and filters. The test filters on a specific function, makes sure that
      no other function is traced, executes the function, then makes sure that
      the function is traced.
      
      This patch also fixes a slight bug with the ftrace selftest, where
      tracer_enabled was not being set.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add README · 7bd2f24c
      Ingo Molnar committed
      make it easier for newbies to find their way around.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: cleanups · c7aafc54
      Ingo Molnar committed
      factor out code and clean it up.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add self-tests · 60a11774
      Steven Rostedt committed
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: force recording · e1c08bdd
      Steven Rostedt committed
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: fix max latency · 57f50be1
      Steven Rostedt committed
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: fix updates to max trace · 89b2f978
      Steven Rostedt committed
      This patch fixes some bugs in the updating of the max trace that
      were caused by implementing the new buffering.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: don't use raw_local_irq_save/restore · 18cef379
      Steven Rostedt committed
      Using raw_local_irq_save/restore confuses lockdep.
      It's fine to use the normal ones.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: lockdep notrace annotations · 0764d23c
      Steven Rostedt committed
      Add notrace annotations to lockdep to keep ftrace from causing
      recursive problems with lock tracing and debugging.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: irqs off smp_processor_id() fix · 361943ad
      Steven Rostedt committed
      The irqsoff function tracer did a __get_cpu_var to determine
      whether it should trace the function or not. The problem is that
      the task can be preempted between getting the CPU and reading
      the per-cpu variable, which means the variable that is read may
      not be the one for the CPU the code is actually running on.
      
      At worst, this can give a false positive, where we trace the
      function when we should not.  It will never give a false negative
      since we only want to trace when interrupts are disabled
      and we never preempt when they are.
      
      This fix adds a check after reading the irq flags to only
      trace if the interrupts are actually disabled. It also changes
      the reading of the cpu variable to use raw_smp_processor_id(),
      since we now don't care if we preempt. We still catch that fact.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
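      A minimal sketch of the fixed check (helper and variable names are
      assumed here, not the commit's literal code):

        static void notrace irqsoff_tracer_call(unsigned long ip,
                                                unsigned long parent_ip)
        {
                unsigned long flags;
                int cpu;

                /* read the flags first: only trace if IRQs are really off */
                local_save_flags(flags);
                if (!irqs_disabled_flags(flags))
                        return;

                /*
                 * A stale CPU id no longer matters: once the flags check
                 * has passed, IRQs are off and we cannot be preempted.
                 */
                cpu = raw_smp_processor_id();
                if (!per_cpu(tracing_cpu, cpu))
                        return;

                /* ... record ip/parent_ip into the trace buffer ... */
        }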
    • ftrace: convert single large buffer into single pages. · 4c11d7ae
      Steven Rostedt committed
      Allocating large buffers for the tracer may fail easily.
      This patch converts the buffer from one large high-order
      allocation to single pages. It uses the struct page LRU field
      to link the pages together.
      
      Later patches may also implement dynamic increasing and decreasing
      of the trace buffers.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
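      The allocation scheme can be sketched as follows (an illustration
      of the description above, with assumed names):

        static int trace_alloc_pages(struct list_head *pages, int nr_pages)
        {
                struct page *page, *tmp;
                int i;

                for (i = 0; i < nr_pages; i++) {
                        /* order-0 allocations are far less likely to fail */
                        page = alloc_page(GFP_KERNEL);
                        if (!page)
                                goto free_pages;
                        /* reuse the otherwise idle LRU field as a link */
                        list_add(&page->lru, pages);
                }
                return 0;

        free_pages:
                list_for_each_entry_safe(page, tmp, pages, lru) {
                        list_del_init(&page->lru);
                        __free_page(page);
                }
                return -ENOMEM;
        }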
    • ftrace: add filter select functions to trace · 5072c59f
      Steven Rostedt committed
      This patch adds two files to the debugfs system:
      
       /debugfs/tracing/available_filter_functions
      
      and
      
       /debugfs/tracing/set_ftrace_filter
      
      The available_filter_functions file lists all functions that have
      been recorded by ftraced through its calls to ftrace_record_ip.
      This lets users see which functions have been converted
      to nops and can be enabled for tracing.
      
      To enable functions, simply echo the names (whitespace delimited)
      into set_ftrace_filter. Simple wildcards are also allowed.
      
      echo 'scheduler' > /debugfs/tracing/set_ftrace_filter
      
      Will have only the scheduler be activated when tracing is enabled.
      
      echo 'sched_*' > /debugfs/tracing/set_ftrace_filter
      
      Will have only the functions starting with 'sched_' be activated.
      
      echo '*lock' > /debugfs/tracing/set_ftrace_filter
      
      Will have only functions ending with 'lock' be activated.
      
      echo '*lock*' > /debugfs/tracing/set_ftrace_filter
      
      Will have only functions with 'lock' in their names be activated.
      
      Note: 'sched*lock' will not work. The only wildcard allowed
      is an asterisk at the beginning and/or end of the string
      passed in.
      
      Multiple names can be passed in, whitespace delimited:
      
      echo 'scheduler *lock *acpi*' > /debugfs/tracing/set_ftrace_filter
      
      is also the same as:
      
      echo 'scheduler' > /debugfs/tracing/set_ftrace_filter
      echo '*lock' >> /debugfs/tracing/set_ftrace_filter
      echo '*acpi*' >> /debugfs/tracing/set_ftrace_filter
      
      Appending does just that. It appends to the list.
      
      To disable all filters simply echo an empty line in:
      
      echo > /debugfs/tracing/set_ftrace_filter
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
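      The limited wildcard rule above amounts to matching with an
      optional leading and/or trailing asterisk only; a sketch (this is
      an illustration, not the commit's matcher):

        static int ftrace_match_name(const char *name, const char *glob)
        {
                size_t glen = strlen(glob), nlen = strlen(name), i;
                int head = glob[0] == '*';
                int tail = glen > (size_t)head && glob[glen - 1] == '*';
                const char *body = glob + head;
                size_t blen = glen - head - tail;

                if (blen > nlen)
                        return 0;
                if (head && tail) {                 /* "*lock*": substring */
                        for (i = 0; i + blen <= nlen; i++)
                                if (!strncmp(name + i, body, blen))
                                        return 1;
                        return 0;
                }
                if (head)                           /* "*lock": suffix    */
                        return !strncmp(name + nlen - blen, body, blen);
                if (tail)                           /* "sched_*": prefix  */
                        return !strncmp(name, body, blen);
                return nlen == blen && !strncmp(name, body, blen);
        }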
    • ftrace: use dynamic patching for updating mcount calls · d61f82d0
      Steven Rostedt committed
      This patch replaces the indirect call to the mcount function
      pointer with a direct call that will be patched by the
      dynamic ftrace routines.
      
      On boot up, the mcount function calls the ftrace_stub function.
      When the dynamic ftrace code is initialized, the ftrace_stub
      is replaced with a call to ftrace_record_ip, which records
      the instruction pointers of the locations that call it.
      
      Later, the ftraced daemon will call kstop_machine and patch all
      the locations to nops.
      
      When ftrace is enabled, the original calls to mcount will now
      be set to call ftrace_caller, which will do a direct call
      to the registered ftrace function. This direct call is also patched
      when the function that should be called is updated.
      
      All patching is performed by a kstop_machine routine to prevent
      any race conditions associated with modifying code
      on the fly.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
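      The expect-and-replace step can be pictured like this (a hedged
      sketch; the real arch code must obey the rules for modifying
      kernel text, and runs entirely under kstop_machine):

        #define MCOUNT_INSN_SIZE 5  /* x86: a call/jmp at an mcount site */

        static int ftrace_modify_code(unsigned long ip,
                                      unsigned char *old_code,
                                      unsigned char *new_code)
        {
                unsigned char cur[MCOUNT_INSN_SIZE];

                /* only patch a site that still contains what we expect */
                memcpy(cur, (void *)ip, MCOUNT_INSN_SIZE);
                if (memcmp(cur, old_code, MCOUNT_INSN_SIZE) != 0)
                        return -EINVAL;

                /* safe only because all CPUs are parked in kstop_machine */
                memcpy((void *)ip, new_code, MCOUNT_INSN_SIZE);
                return 0;
        }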
    • ftrace: move memory management out of arch code · 3c1720f0
      Steven Rostedt committed
      This patch moves the memory management of the ftrace
      records out of the arch code and into the generic code
      making the arch code simpler.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add ftrace_enabled sysctl to disable mcount function · b0fc494f
      Steven Rostedt committed
      This patch adds back the sysctl ftrace_enabled. This time it
      defaults to on, if DYNAMIC_FTRACE is configured. When ftrace_enabled
      is disabled, the ftrace function is set to the stub return.
      
      If DYNAMIC_FTRACE is also configured, then with ftrace_enabled = 0
      the registered call sites will all be set back to jmps, and no new
      calls into the ftrace recording function (used to find the ftrace
      call sites) will be made.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: dynamic enabling/disabling of function calls · 3d083395
      Steven Rostedt committed
      This patch adds a feature to dynamically replace the ftrace code
      with jmps, to allow a kernel with ftrace configured to run
      as fast as one without it configured.
      
      The way this works is that on bootup (if ftrace is enabled), an
      ftrace function is registered to record the instruction pointers
      of all places that call the function.
      
      Later, if there's still any code to patch, a kthread is awoken
      (rate limited to at most once a second) that performs a stop_machine,
      and replaces all the code that was called with a jmp over the call
      to ftrace. It only replaces what was found the previous time. Typically
      the system reaches equilibrium quickly after bootup and there's no code
      patching needed at all.
      
      e.g.
      
        call ftrace  /* 5 bytes */
      
      is replaced with
      
        jmp 3f  /* jmp is 2 bytes and we jump 3 forward */
      3:
      
      When we want to enable ftrace for function tracing, the IP recording
      is removed, and stop_machine is called again to replace all the
      locations that were recorded back to the call of ftrace. When it is
      disabled, we replace the code back to the jmp.
      
      Allocation is done by the kthread. If the ftrace recording function is
      called, and we don't have any record slots available, then we simply
      skip that call. Once a second a new page (if needed) is allocated for
      recording new ftrace function calls.  A large batch is allocated at
      boot up to get most of the calls there.
      
      Because we do this via stop_machine, we don't have to worry about
      another CPU executing an ftrace call as we modify it. But we do need
      to worry about NMIs, so all functions that might be called from an
      NMI must be annotated with notrace_nmi. When this code is configured
      in, the NMI code will not be traced.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
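      The recording side can be sketched as follows (helper names are
      hypothetical):

        static void notrace ftrace_record_ip(unsigned long ip)
        {
                struct dyn_ftrace *rec;

                if (ftrace_ip_recorded(ip))     /* site already known */
                        return;

                rec = ftrace_alloc_dyn_node();  /* from the page pool */
                if (!rec)
                        return;                 /* no free slot: skip */

                rec->ip = ip;                   /* ftraced patches it later */
        }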
    • ftrace: trace preempt off critical timings · 6cd8a4bb
      Steven Rostedt committed
      Add preempt off timings. A lot of kernel core code is taken from the RT patch
      latency trace that was written by Ingo Molnar.
      
      This adds "preemptoff" and "preemptirqsoff" to /debugfs/tracing/available_tracers
      
      Now instead of just tracing irqs off, preemption off can be selected
      to be recorded.
      
      When this is selected, it shares the same files as irqs off timings.
      One can trace preemption off, irqs off, or the combined time that either is off.
      
      By echoing "preemptoff" into /debugfs/tracing/current_tracer, recording
      of preempt off only is performed. "irqsoff" will only record the time
      irqs are disabled, but "preemptirqsoff" will take the total time irqs
      or preemption are disabled. Runtime switching of these options is now
      supported by simply echoing the appropriate tracer name into
      /debugfs/tracing/current_tracer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
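      The tracer hangs off a pair of hooks in the preemption-counter
      code, roughly of this shape (signatures assumed):

        /* called when preemption is first disabled / re-enabled */
        void notrace trace_preempt_off(unsigned long a0, unsigned long a1)
        {
                if (preempt_trace())    /* "preemptoff"/"preemptirqsoff" */
                        start_critical_timing(a0, a1);
        }

        void notrace trace_preempt_on(unsigned long a0, unsigned long a1)
        {
                if (preempt_trace())
                        stop_critical_timing(a0, a1);
        }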
    • ftrace: trace irq disabled critical timings · 81d68a96
      Steven Rostedt committed
      This patch adds latency tracing for critical timings
      (how long interrupts are disabled for).
      
       "irqsoff" is added to /debugfs/tracing/available_tracers
      
      Note:
        tracing_max_latency
          also holds the max latency for irqsoff (in usecs).
       (defaults to a large number, so one must start latency tracing)
      
        tracing_thresh
          threshold (in usecs) to always print out if irqs off
          is detected to be longer than stated here.
          If tracing_thresh is non-zero, then tracing_max_latency
          is ignored.
      
      Here's an example of a trace with ftrace_enabled = 0
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 100 us, #3/3, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1d.s3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1d.s3  100us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1d.s3  100us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      
      And this is a trace with ftrace_enabled == 1
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 102 us, #12/12, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1dNs3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_read_phy_reg+0x16/0x225 [e1000] (e1000_update_stats+0x5e2/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_swfw_sync_acquire+0x10/0x99 [e1000] (e1000_read_phy_reg+0x49/0x225 [e1000])
       swapper-0     1dNs3   46us : e1000_get_hw_eeprom_semaphore+0x12/0xa6 [e1000] (e1000_swfw_sync_acquire+0x36/0x99 [e1000])
       swapper-0     1dNs3   47us : __const_udelay+0x9/0x47 (e1000_read_phy_reg+0x116/0x225 [e1000])
       swapper-0     1dNs3   47us+: __delay+0x9/0x50 (__const_udelay+0x45/0x47)
       swapper-0     1dNs3   97us : preempt_schedule+0xc/0x84 (__delay+0x4e/0x50)
       swapper-0     1dNs3   98us : e1000_swfw_sync_release+0xc/0x55 [e1000] (e1000_read_phy_reg+0x211/0x225 [e1000])
       swapper-0     1dNs3   99us+: e1000_put_hw_eeprom_semaphore+0x9/0x35 [e1000] (e1000_swfw_sync_release+0x50/0x55 [e1000])
       swapper-0     1dNs3  101us : _spin_unlock_irqrestore+0xe/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: tracer for scheduler wakeup latency · 352ad25a
      Steven Rostedt committed
      This patch adds the tracer that tracks the wakeup latency of the
      highest priority waking task.
      
        "wakeup" is added to /debugfs/tracing/available_tracers
      
      Also added to /debugfs/tracing
      
        tracing_max_latency
           holds the current max latency for the wakeup
      
        wakeup_thresh
           if set to anything other than zero, a log will be recorded
           for every wakeup that takes longer than the number
           entered here (usecs for all counters)
           (this deletes the previous trace)
      
      Examples:
      
        (with ftrace_enabled = 0)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      --------------------------------------------------------------------
       latency: 26 us, #2/2, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/0-3 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
         quilt-8551  0d..3    0us+: wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
         quilt-8551  0d..4   26us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
        (with ftrace_enabled = 1)
      
      ============
      preemption latency trace v1.1.5 on 2.6.24-rc8
      --------------------------------------------------------------------
       latency: 36 us, #45/45, CPU#0 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: migration/1-5 (uid:0 nice:-5 policy:1 rt_prio:99)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
          bash-10653 1d..3    0us : wake_up_process+0x15/0x17 <ffffffff80233e80> (sched_exec+0xc9/0x100 <ffffffff80235343>)
          bash-10653 1d..3    1us : try_to_wake_up+0x271/0x2e7 <ffffffff80233dcf> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..2    2us : try_to_wake_up+0x296/0x2e7 <ffffffff80233df4> (update_rq_clock+0x9/0x20 <ffffffff802303f3>)
          bash-10653 1d..2    2us : update_rq_clock+0x1e/0x20 <ffffffff80230408> (__update_rq_clock+0xc/0x90 <ffffffff80230366>)
          bash-10653 1d..2    3us : __update_rq_clock+0x1b/0x90 <ffffffff80230375> (sched_clock+0x9/0x29 <ffffffff80214529>)
          bash-10653 1d..2    4us : try_to_wake_up+0x2a6/0x2e7 <ffffffff80233e04> (activate_task+0xc/0x3f <ffffffff8022ffca>)
          bash-10653 1d..2    4us : activate_task+0x2d/0x3f <ffffffff8022ffeb> (enqueue_task+0xe/0x66 <ffffffff8022ff66>)
          bash-10653 1d..2    5us : enqueue_task+0x5b/0x66 <ffffffff8022ffb3> (enqueue_task_rt+0x9/0x3c <ffffffff80233351>)
          bash-10653 1d..2    6us : try_to_wake_up+0x2ba/0x2e7 <ffffffff80233e18> (check_preempt_wakeup+0x12/0x99 <ffffffff80234f84>)
      [...]
          bash-10653 1d..5   33us : tracing_record_cmdline+0xcf/0xd4 <ffffffff80338aad> (_spin_unlock+0x9/0x33 <ffffffff8048d3ec>)
          bash-10653 1d..5   34us : _spin_unlock+0x19/0x33 <ffffffff8048d3fc> (sub_preempt_count+0xc/0x7a <ffffffff8023309e>)
          bash-10653 1d..4   35us : wakeup_sched_switch+0x65/0x2ff <ffffffff80339f66> (_spin_lock_irqsave+0xc/0xa9 <ffffffff8048d08b>)
          bash-10653 1d..4   35us : _spin_lock_irqsave+0x19/0xa9 <ffffffff8048d098> (add_preempt_count+0xe/0x77 <ffffffff8023311a>)
          bash-10653 1d..4   36us : sched_switch_callback+0x73/0x81 <ffffffff80338d2f> (schedule+0x483/0x6d5 <ffffffff8048b3ee>)
      
      vim:ft=help
      ============
      
      The [...] was added here to not waste your email box space.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add tracing of context switches · 35e8e302
      Steven Rostedt committed
      This patch adds context switch tracing, with output of the format:
      
                        _------=> CPU#
                       / _-----=> irqs-off
                      | / _----=> need-resched
                      || / _---=> hardirq/softirq
                      ||| / _--=> preempt-depth
                      |||| /
                      |||||     delay
          cmd     pid ||||| time  |      pid:prio:state
             \   /    |||||   \   |      /
        swapper-0     1d..3    137us+:  0:140:R --> 2912:120
           sshd-2912  1d..3    216us+:  2912:120:S --> 0:140
        swapper-0     1d..3    261us+:  0:140:R --> 2912:120
           bash-2920  0d..3    267us+:  2920:120:S --> 0:140
           sshd-2912  1d..3    330us!:  2912:120:S --> 0:140
        swapper-0     1d..3   2389us+:  0:140:R --> 2847:120
       yum-upda-2847  1d..3   2411us!:  2847:120:S --> 0:140
        swapper-0     0d..3  11089us+:  0:140:R --> 3139:120
       gdm-bina-3139  0d..3  11113us!:  3139:120:S --> 0:140
        swapper-0     1d..3 102328us+:  0:140:R --> 2847:120
       yum-upda-2847  1d..3 102348us!:  2847:120:S --> 0:140
      
       "sched_switch" is added to /debugfs/tracing/available_tracers
      
      [ Eugene Teo <eugeneteo@kernel.sg>: remove unused tracing_sched_switch_enabled ]
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: function tracer · 1b29b018
      Steven Rostedt committed
      This is a simple trace that uses the ftrace infrastructure. It is
      designed to be fast and small, and easy to use. It is useful to
      record things that happen over a very short period of time, and
      not to analyze the system in general.
      
       Updates:
      
        available_tracers
           "function" is added to this file.
      
        current_tracer
          To enable the function tracer:
      
            echo function > /debugfs/tracing/current_tracer
      
           To disable the tracer:
      
             echo disable > /debugfs/tracing/current_tracer
      
      The output of the function_trace file is as follows
      
        "echo noverbose > /debugfs/tracing/iter_ctrl"
      
      preemption latency trace v1.1.5 on 2.6.24-rc7-tst
      --------------------------------------------------------------------
       latency: 0 us, #419428/4361791, CPU#1 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
          -----------------
          | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     0d.h. 1595128us+: set_normalized_timespec+0x8/0x2d <c043841d> (ktime_get_ts+0x4a/0x4e <c04499d4>)
       swapper-0     0d.h. 1595131us+: _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
      
      Or with verbose turned on:
      
        "echo verbose > /debugfs/tracing/iter_ctrl"
      
      preemption latency trace v1.1.5 on 2.6.24-rc7-tst
      --------------------------------------------------------------------
       latency: 0 us, #419428/4361791, CPU#1 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
          -----------------
          | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
      
               swapper     0 0 9 00000000 00000000 [f3675f41] 1595.128ms (+0.003ms): set_normalized_timespec+0x8/0x2d <c043841d> (ktime_get_ts+0x4a/0x4e <c04499d4>)
               swapper     0 0 9 00000000 00000001 [f3675f45] 1595.131ms (+0.003ms): _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
               swapper     0 0 9 00000000 00000002 [f3675f48] 1595.135ms (+0.003ms): _spin_lock+0x8/0x18 <c0630690> (hrtimer_interrupt+0x6e/0x1b0 <c0449c56>)
      
      The "trace" file is not affected by the verbose mode, but is by the symonly.
      
       echo "nosymonly" > /debugfs/tracing/iter_ctrl
      
      tracer:
      [   81.479967] CPU 0: bash:3154 register_ftrace_function+0x5f/0x66 <ffffffff80337a4d> <-- _spin_unlock_irqrestore+0xe/0x5a <ffffffff8048cc8f>
      [   81.479967] CPU 0: bash:3154 _spin_unlock_irqrestore+0x3e/0x5a <ffffffff8048ccbf> <-- sub_preempt_count+0xc/0x7a <ffffffff80233d7b>
      [   81.479968] CPU 0: bash:3154 sub_preempt_count+0x30/0x7a <ffffffff80233d9f> <-- in_lock_functions+0x9/0x24 <ffffffff8025a75d>
      [   81.479968] CPU 0: bash:3154 vfs_write+0x11d/0x155 <ffffffff8029a043> <-- dnotify_parent+0x12/0x78 <ffffffff802d54fb>
      [   81.479968] CPU 0: bash:3154 dnotify_parent+0x2d/0x78 <ffffffff802d5516> <-- _spin_lock+0xe/0x70 <ffffffff8048c910>
      [   81.479969] CPU 0: bash:3154 _spin_lock+0x1b/0x70 <ffffffff8048c91d> <-- add_preempt_count+0xe/0x77 <ffffffff80233df7>
      [   81.479969] CPU 0: bash:3154 add_preempt_count+0x3e/0x77 <ffffffff80233e27> <-- in_lock_functions+0x9/0x24 <ffffffff8025a75d>
      
       echo "symonly" > /debugfs/tracing/iter_ctrl
      
      tracer:
      [   81.479913] CPU 0: bash:3154 register_ftrace_function+0x5f/0x66 <-- _spin_unlock_irqrestore+0xe/0x5a
      [   81.479913] CPU 0: bash:3154 _spin_unlock_irqrestore+0x3e/0x5a <-- sub_preempt_count+0xc/0x7a
      [   81.479913] CPU 0: bash:3154 sub_preempt_count+0x30/0x7a <-- in_lock_functions+0x9/0x24
      [   81.479914] CPU 0: bash:3154 vfs_write+0x11d/0x155 <-- dnotify_parent+0x12/0x78
      [   81.479914] CPU 0: bash:3154 dnotify_parent+0x2d/0x78 <-- _spin_lock+0xe/0x70
      [   81.479914] CPU 0: bash:3154 _spin_lock+0x1b/0x70 <-- add_preempt_count+0xe/0x77
      [   81.479914] CPU 0: bash:3154 add_preempt_count+0x3e/0x77 <-- in_lock_functions+0x9/0x24
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: latency tracer infrastructure · bc0c38d1
      Steven Rostedt committed
      This patch adds the latency tracer infrastructure. It does
      not add anything that will select and turn it on, but it will
      be used by later patches.
      
      If it were to be compiled, it would add the following files
      to the debugfs:
      
       The root tracing directory:
      
        /debugfs/tracing/
      
      This patch also adds the following files:
      
        available_tracers
           list of available tracers. Currently no tracers are
           available; looking at this file shows only "none",
           which is used to unregister all tracers.
      
        current_tracer
           The tracer that is currently active. Empty on start up.
           To switch to a tracer, simply echo one of the tracers
           listed in available_tracers:
      
         example: (used with later patches)
      
            echo function > /debugfs/tracing/current_tracer
      
           To disable the tracer:
      
             echo disable > /debugfs/tracing/current_tracer
      
        tracing_enabled
           echoing "1" into this file starts the ftrace function tracing
            (if sysctl kernel.ftrace_enabled=1)
           echoing "0" turns it off.
      
        latency_trace
            This file is read-only and holds the result of the trace.
      
        trace
            This file outputs an easier-to-read version of the trace.
      
        iter_ctrl
            Controls the way the output of traces looks.
            So far there are two controls:
              echoing in "symonly" will show only the kallsyms symbols,
                  without the addresses (if kallsyms was configured)
              echoing in "verbose" will change the output to show
                  much more data, though it is not very easy for
                  humans to read.
              echoing in "nosymonly" turns off symonly.
              echoing in "noverbose" turns off verbose.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: add basic support for gcc profiler instrumentation · 16444a8a
      Arnaldo Carvalho de Melo committed
      If CONFIG_FTRACE is selected and /proc/sys/kernel/ftrace_enabled is
      set to a non-zero value, the ftrace routine will be called every time
      we enter a kernel function that is not marked with the "notrace"
      attribute.
      
      The ftrace routine will then call a registered function if a function
      happens to be registered.
      
      [ This code has been highly hacked by Steven Rostedt and Ingo Molnar,
        so don't blame Arnaldo for all of this ;-) ]
      
      Update:
        It is now possible to register more than one ftrace function.
        If only one ftrace function is registered, that will be the
        function that ftrace calls directly. If more than one function
        is registered, then ftrace will call a function that will loop
        through the functions to call.
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
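      Registering a tracer callback with this infrastructure looks
      roughly like the following (a hedged sketch of the API described
      above):

        #include <linux/ftrace.h>

        /* runs on entry to every kernel function not marked notrace */
        static void notrace my_trace_func(unsigned long ip,
                                          unsigned long parent_ip)
        {
                /* ip is the traced function, parent_ip its caller */
        }

        static struct ftrace_ops my_ops = {
                .func = my_trace_func,
        };

        /* in some init path: */
        register_ftrace_function(&my_ops);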
    • ftrace: make the task state char-string visible to all · 7c731e0a
      Steven Rostedt committed
      The tracer wants to be able to convert the state number
      into a user-visible character. This patch pulls that conversion
      string out of the scheduler into the header. This way, if it were
      to ever change, other parts of the kernel will know.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
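      The idea is a single shared string indexed by the task state bit,
      along these lines (the exact string and helper are illustrative):

        /* one character per task state bit, shared via the header */
        #define TASK_STATE_TO_CHAR_STR "RSDTtZX"

        static inline char task_state_char(long state)
        {
                unsigned int bit = state ? __ffs(state) + 1 : 0;

                return bit < sizeof(TASK_STATE_TO_CHAR_STR) - 1 ?
                       TASK_STATE_TO_CHAR_STR[bit] : '?';
        }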
    • sched: add latency tracer callbacks to the scheduler · bd3bff9e
      Ingo Molnar committed
      add 3 lightweight callbacks to the tracer backend.
      
      zero impact if tracing is turned off.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 17 May 2008, 4 commits
  3. 15 May 2008, 2 commits
  4. 12 May 2008, 1 commit
    • Add new 'cond_resched_bkl()' helper function · c3921ab7
      Linus Torvalds committed
      It acts exactly like a regular 'cond_resched()', but will not get
      optimized away when CONFIG_PREEMPT is set.
      
      Normal kernel code is already preemptable in the presence of
      CONFIG_PREEMPT, so cond_resched() is optimized away (see commit
      02b67cc3 "sched: do not do
      cond_resched() when CONFIG_PREEMPT").
      
      But when wanting to conditionally reschedule while holding a lock, you
      need to use "cond_resched_lock(lock)", and the new function is the BKL
      equivalent of that.
      
      Also make fs/locks.c use it.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
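      The helper's shape, under those constraints (a sketch; the point is
      only that no CONFIG_PREEMPT #ifdef compiles it away):

        int __sched cond_resched_bkl(void)
        {
                /* deliberately no CONFIG_PREEMPT short-circuit here */
                if (!need_resched())
                        return 0;
                __cond_resched();
                return 1;
        }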
  5. 11 May 2008, 2 commits
    • BKL: revert back to the old spinlock implementation · 8e3e076c
      Linus Torvalds committed
      The generic semaphore rewrite had a huge performance regression on AIM7
      (and potentially other BKL-heavy benchmarks) because the generic
      semaphores had been rewritten to be simple to understand and fair.  The
      latter, in particular, turns a semaphore-based BKL implementation into a
      mess of scheduling.
      
      The attempt to fix the performance regression failed miserably (see the
      previous commit 00b41ec2 'Revert
      "semaphore: fix"'), and so for now the simple and sane approach is to
      instead just go back to the old spinlock-based BKL implementation that
      never had any issues like this.
      
      This patch also has the advantage of being reported to fix the
      regression completely according to Yanmin Zhang, unlike the semaphore
      hack which still left a couple percentage point regression.
      
      As a spinlock, the BKL obviously has the potential to be a latency
      issue, but it's not really any different from any other spinlock in that
      respect.  We do want to get rid of the BKL asap, but that has been the
      plan for several years.
      
      These days, the biggest users are in the tty layer (open/release in
      particular) and Alan holds out some hope:
      
        "tty release is probably a few months away from getting cured - I'm
         afraid it will almost certainly be the very last user of the BKL in
         tty to get fixed as it depends on everything else being sanely locked."
      
      so while we're not there yet, we do have a plan of action.
      Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Alexander Viro <viro@ftp.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Revert "semaphore: fix" · 00b41ec2
      Linus Torvalds committed
      This reverts commit bf726eab, as it has
      been reported to cause a regression with processes stuck in __down(),
      apparently because of a missing wakeup.
      
      Quoth Sven Wegener:
       "I'm currently investigating a regression that has showed up with my
        last git pull yesterday.  Bisecting the commits showed bf726e
        "semaphore: fix" to be the culprit, reverting it fixed the issue.
      
        Symptoms: During heavy filesystem usage (e.g.  a kernel compile) I get
        several compiler processes in uninterruptible sleep, blocking all i/o
        on the filesystem.  System is an Intel Core 2 Quad running a 64bit
        kernel and userspace.  Filesystem is xfs on top of lvm.  See below for
        the output of sysrq-w."
      
      See
      
      	http://lkml.org/lkml/2008/5/10/45
      
      for full report.
      
      In the meantime, we can just fix the BKL performance regression by
      reverting back to the good old BKL spinlock implementation instead,
      since any sleeping lock will generally perform badly, especially if it
      tries to be fair.
      Reported-by: Sven Wegener <sven.wegener@stealer.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 09 May 2008, 3 commits
  7. 08 May 2008, 2 commits
    • sched: fix weight calculations · 46151122
      Mike Galbraith committed
      The conversion between virtual and real time is as follows:
      
        dvt = rw/w * dt <=> dt = w/rw * dvt
      
      Since we want the fair sleeper granularity to be in real time, we actually
      need to do:
      
        dvt = - rw/w * l
      
      This bug could be related to the regression reported by Yanmin Zhang:
      
      | Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
      | of regressions with 2.6.26-rc1:
      |
      | 1) 8-core stoakley: 28%;
      | 2) 16-core tigerton: 20%;
      | 3) Itanium Montvale: 50%.
      Reported-by: N"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: NMike Galbraith <efault@gmx.de>
      Signed-off-by: NPeter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
      46151122
    • semaphore: fix · bf726eab
      Ingo Molnar committed
      Yanmin Zhang reported:
      
      | Comparing with kernel 2.6.25, AIM7 (use tmpfs) has more than 40%
      | regression under 2.6.26-rc1 on my 8-core stoakley, 16-core tigerton,
      | and Itanium Montecito. Bisect located the patch below:
      |
      | 64ac24e7 is first bad commit
      | commit 64ac24e7
      | Author: Matthew Wilcox <matthew@wil.cx>
      | Date:   Fri Mar 7 21:55:58 2008 -0500
      |
      |     Generic semaphore implementation
      |
      | After I manually reverted the patch against 2.6.26-rc1 while fixing
      | lots of conflicts/errors, aim7 regression became less than 2%.
      
      i reproduced the AIM7 workload and can confirm Yanmin's findings that
      -.26-rc1 regresses over .25 - by over 67% here.
      
      Looking at the workload i found and fixed what i believe to be the real
      bug causing the AIM7 regression: it was inefficient wakeup / scheduling
      / locking behavior of the new generic semaphore code, causing suboptimal
      performance.
      
      The problem comes from the following code. The new semaphore code does
      this on down():
      
              spin_lock_irqsave(&sem->lock, flags);
              if (likely(sem->count > 0))
                      sem->count--;
              else
                      __down(sem);
              spin_unlock_irqrestore(&sem->lock, flags);
      
      and this on up():
      
              spin_lock_irqsave(&sem->lock, flags);
              if (likely(list_empty(&sem->wait_list)))
                      sem->count++;
              else
                      __up(sem);
              spin_unlock_irqrestore(&sem->lock, flags);
      
      where __up() does:
      
              list_del(&waiter->list);
              waiter->up = 1;
              wake_up_process(waiter->task);
      
      and where __down() does this in essence:
      
              list_add_tail(&waiter.list, &sem->wait_list);
              waiter.task = task;
              waiter.up = 0;
              for (;;) {
                      [...]
                      spin_unlock_irq(&sem->lock);
                      timeout = schedule_timeout(timeout);
                      spin_lock_irq(&sem->lock);
                      if (waiter.up)
                              return 0;
              }
      
      the fastpath looks good and obvious, but note the following property of
      the contended path: if there's a task on the ->wait_list, the up() of
      the current owner will "pass over" ownership to that waiting task, in a
      wake-one manner, via the waiter->up flag and by removing the waiter from
      the wait list.
      
      That is all fine in principle, but as implemented in
      kernel/semaphore.c it also creates a nasty, hidden source of contention!
      
      The contention comes from the following property of the new semaphore
      code: the new owner owns the semaphore exclusively, even if it is not
      running yet.
      
      So if the old owner, even if just a few instructions later, does a
      down() [lock_kernel()] again, it will be blocked and will have to wait
      on the new owner to eventually be scheduled (possibly on another CPU)!
      Or if another task gets to lock_kernel() sooner than the "new owner"
      scheduled, it will be blocked unnecessarily and for a very long time
      when there are 2000 tasks running.
      
      I.e. the implementation of the new semaphores code does wake-one and
      lock ownership in a very restrictive way - it does not allow
      opportunistic re-locking of the lock at all and keeps the scheduler from
      picking task order intelligently.
      
      This kind of scheduling, with 2000 AIM7 processes running, creates awful
      cross-scheduling between those 2000 tasks, causes reduced parallelism, a
      throttled runqueue length and a lot of idle time. With increasing number
      of CPUs it causes an exponentially worse behavior in AIM7, as the chance
      for a newly woken new-owner task to actually run anytime soon is less
      and less likely.
      
      Note that it takes just a tiny bit of contention for the 'new-semaphore
      catastrophe' to happen: the wakeup latencies get added to whatever small
      contention there is, and quickly snowball out of control!
      
      I believe Yanmin's findings and numbers support this analysis too.
      
      The best fix for this problem is to use the same scheduling logic that
      the kernel/mutex.c code uses: keep the wake-one behavior (that is OK and
      wanted because we do not want to over-schedule), but also allow
      opportunistic locking of the lock even if a wakee is already "in
      flight".
      
      The patch below implements this new logic. With this patch applied the
      AIM7 regression is largely fixed on my quad testbox:
      
        # v2.6.25 vanilla:
        ..................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    56096.4         91      207.5   789.7   0.4675
        2000    55894.4         94      208.2   792.7   0.4658
      
        # v2.6.26-rc1-166-gc0a18111 vanilla:
        ...................................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    33230.6         83      350.3   784.5   0.2769
        2000    31778.1         86      366.3   783.6   0.2648
      
        # v2.6.26-rc1-166-gc0a18111 + semaphore-speedup:
        ...............................................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    55707.1         92      209.0   795.6   0.4642
        2000    55704.4         96      209.0   796.0   0.4642
      
      i.e. a 67% speedup. We are now back to within 1% of the v2.6.25
      performance levels and have zero idle time during the test, as expected.
      
      Btw., interactivity also improved dramatically with the fix - for
      example console-switching became almost instantaneous during this
      workload (which after all is running 2000 tasks at once!), without the
      patch it was stuck for a minute at times.
      
      There's another nice side-effect of this speedup patch, the new generic
      semaphore code got even smaller:
      
         text    data     bss     dec     hex filename
         1241       0       0    1241     4d9 semaphore.o.before
         1207       0       0    1207     4b7 semaphore.o.after
      
      (because the waiter.up complication got removed.)
      
      Longer-term we should look into using the mutex code for the generic
      semaphore code as well - but it's not easy due to legacies and it's
      outside of the scope of v2.6.26 and outside the scope of this patch as
      well.
      Bisected-by: N"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
      bf726eab
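      In code, the new logic has roughly this shape (a hedged sketch
      following the description above, not the literal patch):

        /* up(): always release the count, then wake one waiter */
        void up(struct semaphore *sem)
        {
                unsigned long flags;

                spin_lock_irqsave(&sem->lock, flags);
                sem->count++;
                if (unlikely(!list_empty(&sem->wait_list)))
                        __up(sem);              /* wake-one, nothing more */
                spin_unlock_irqrestore(&sem->lock, flags);
        }

      and __down() loops until it wins the count itself, so any task -
      the old owner included - may opportunistically re-take the
      semaphore while the wakee is still in flight:

                for (;;) {
                        [...]
                        spin_unlock_irq(&sem->lock);
                        timeout = schedule_timeout(timeout);
                        spin_lock_irq(&sem->lock);
                        if (sem->count > 0) {
                                sem->count--;   /* we won the race */
                                break;
                        }
                }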