1. 15 Sep, 2009 (1 commit)
  2. 11 Sep, 2009 (3 commits)
    • writeback: add name to backing_dev_info · d993831f
      Jens Axboe committed
      This enables us to track who does what and print info. Its main use
      is catching dirty inodes on the default_backing_dev_info, so we can
      fix that up.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      d993831f
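      A rough sketch of the shape of this change (field placement and the
      exact initializers are illustrative, not the verbatim patch):

        struct backing_dev_info {
                char *name;     /* identifies this bdi in debug output */
                /* ... existing fields unchanged ... */
        };

        struct backing_dev_info default_backing_dev_info = {
                .name = "default",  /* so stray dirty inodes can be traced here */
                /* ... */
        };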
    • sched: Fix sched::sched_stat_wait tracepoint field · e1f84508
      Ingo Molnar committed
      This weird perf trace output:
      
        cc1-9943  [001]  2802.059479616: sched_stat_wait: task: as:9944 wait: 2801938766276 [ns]
      
      is caused by setting one component field of the delta to zero
      a bit too early. Move the zeroing to later.
      
      ( Note, this does not affect the NEW_FAIR_SLEEPERS interactivity bug,
        it's just a reporting bug in essence. )
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nikos Chantziaras <realnc@arcor.de>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <4AA93D34.8040500@arcor.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e1f84508
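      A minimal sketch of the bug pattern, assuming the scheduler statistics
      code of that era (names approximate kernel/sched_fair.c):

        static void
        update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
        {
                schedstat_set(se->wait_sum, se->wait_sum +
                                rq_of(cfs_rq)->clock - se->wait_start);
                /* BUG: clearing wait_start here made the tracepoint below
                 * report "clock - 0", i.e. a huge absolute timestamp: */
                /* schedstat_set(se->wait_start, 0); */

                if (entity_is_task(se))
                        trace_sched_stat_wait(task_of(se),
                                rq_of(cfs_rq)->clock - se->wait_start);

                /* FIX: clear it only after the last consumer of the delta */
                schedstat_set(se->wait_start, 0);
        }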
    • sched: Disable NEW_FAIR_SLEEPERS for now · 3f2aa307
      Ingo Molnar committed
      Nikos Chantziaras and Jens Axboe reported that turning off
      NEW_FAIR_SLEEPERS improves desktop interactivity visibly.
      
      Nikos described his experiences the following way:
      
        " With this setting, I can do "nice -n 19 make -j20" and
          still have a very smooth desktop and watch a movie at
          the same time.  Various other annoyances (like the
          "logout/shutdown/restart" dialog of KDE not appearing
          at all until the background fade-out effect has finished)
          are also gone.  So this seems to be the single most
          important setting that vastly improves desktop behavior,
          at least here. "
      
      Jens described it the following way, referring to a 10-seconds
      xmodmap scheduling delay he was trying to debug:
      
        " Then I tried switching NO_NEW_FAIR_SLEEPERS on, and then
          I get:
      
          Performance counter stats for 'xmodmap .xmodmap-carl':
      
               9.009137  task-clock-msecs         #      0.447 CPUs
                     18  context-switches         #      0.002 M/sec
                      1  CPU-migrations           #      0.000 M/sec
                    315  page-faults              #      0.035 M/sec
      
          0.020167093  seconds time elapsed
      
          Woot! "
      
      So disable it for now. In perf trace output I can see weird
      delta timestamps:
      
        cc1-9943  [001]  2802.059479616: sched_stat_wait: task: as:9944 wait: 2801938766276 [ns]
      
      That nsec field is not supposed to be that large. More digging
      is needed - but let's turn it off while the real bug is found.
      Reported-by: Nikos Chantziaras <realnc@arcor.de>
      Tested-by: Nikos Chantziaras <realnc@arcor.de>
      Reported-by: Jens Axboe <jens.axboe@oracle.com>
      Tested-by: Jens Axboe <jens.axboe@oracle.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <4AA93D34.8040500@arcor.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3f2aa307
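      The change itself is a one-line default flip; a sketch of
      kernel/sched_features.h, with the runtime toggle noted in a comment:

        /* was: SCHED_FEAT(NEW_FAIR_SLEEPERS, 1) */
        SCHED_FEAT(NEW_FAIR_SLEEPERS, 0)

        /* Equivalent runtime toggle on a CONFIG_SCHED_DEBUG kernel:
         *   echo NO_NEW_FAIR_SLEEPERS > /sys/kernel/debug/sched_features
         */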
  3. 09 Sep, 2009 (3 commits)
  4. 08 Sep, 2009 (3 commits)
    • sched: Ensure that a child can't gain time over its parent after fork() · b5d9d734
      Mike Galbraith committed
      A fork/exec load is usually "pass the baton", so the child
      should never be placed behind the parent.  With START_DEBIT we
      make room for the new task, but with child_runs_first, that
      room comes out of the _parent's_ hide. There's nothing to say
      that the parent wasn't ahead of min_vruntime at fork() time,
      which means that the "baton carrier", who is essentially the
      parent in drag, can gain time and increase scheduling latencies
      for waiters.
      
      With NEW_FAIR_SLEEPERS + START_DEBIT + child_runs_first
      enabled, we essentially pass the sleeper fairness off to the
      child, which is fine, but if we don't base placement on the
      parent's updated vruntime, we can end up compounding latency
      woes if the child itself then does fork/exec.  The debit
      incurred at fork doesn't hurt the parent who is then going to
      sleep and maybe exit, but the child who acquires the error
      harms all comers.
      
      This improves latencies of make -j<n> kernel build workloads.
      Reported-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b5d9d734
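      A hedged sketch of the idea in the fork path (the exact hook and guard
      conditions are approximate): charge the parent for the time it has
      already run, then base the child's placement on the updated vruntime:

        update_curr(cfs_rq);    /* bring the parent's vruntime up to date */
        if (curr)
                se->vruntime = curr->vruntime;  /* child starts from the
                                                 * *updated* parent value */
        place_entity(cfs_rq, se, 1);    /* START_DEBIT etc. applied on top */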
    • sched: Deal with low-load in wake_affine() · 71a29aa7
      Peter Zijlstra committed
      wake_affine() would always fail under low-load situations where
      both prev and this were idle, because adding a single task will
      always be a significant imbalance, even if there's nothing
      around that could balance it.
      
      Deal with this by allowing imbalance when there's nothing you
      can do about it.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      71a29aa7
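      The idea, as a sketch of the balance test inside wake_affine() (the
      surrounding variables this_load, load, weight, imbalance and tg come
      from that function; treat the exact expression as illustrative): a
      zero this_load now counts as balanced, since moving a task could not
      improve it:

        balanced = !this_load ||
                100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
                imbalance*(load + effective_load(tg, prev_cpu, 0, weight));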
    • sched: Remove short cut from select_task_rq_fair() · cdd2ab3d
      Peter Zijlstra committed
      select_task_rq_fair() incorrectly skips the wake_affine()
      logic; remove this short cut.
      
      When prev_cpu == this_cpu, the code jumps straight to the
      wake_idle() logic, which never gives the wake_affine() logic
      a chance to pin the task to this cpu.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cdd2ab3d
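      The removed short cut looked roughly like this (a sketch, not the
      verbatim diff):

        /* removed: this jumped straight to the wake_idle() path and
         * never let wake_affine() consider pinning the task here */
        if (prev_cpu == this_cpu)
                goto out;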
  5. 07 Sep, 2009 (1 commit)
  6. 05 Sep, 2009 (10 commits)
    • ring-buffer: only enable ring_buffer_swap_cpu when needed · 85bac32c
      Steven Rostedt committed
      Since the ability to swap the cpu buffers adds a small overhead to
      the recording of a trace, we only want to add it when needed.
      
      Only the irqsoff and preemptoff tracers use this feature, and
      neither is recommended for production kernels. This patch disables
      its use when neither irqsoff nor preemptoff is configured.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      85bac32c
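      A sketch of the mechanism, assuming a CONFIG_RING_BUFFER_ALLOW_SWAP
      symbol that the irqsoff/preemptoff tracers select:

        #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
        int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
                                 struct ring_buffer *buffer_b, int cpu)
        {
                /* ... swap the per-cpu buffers; the matching checks in
                 * the write path are what add the recording overhead ... */
        }
        #endif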
    • ring-buffer: check for swapped buffers in start of committing · 62f0b3eb
      Steven Rostedt committed
      Because the irqsoff tracer can swap an internal CPU buffer, it is possible
      that a swap happens between the start of the write and the setting of the
      committing bit (the committing bit disables swapping).
      
      This patch adds a check for this and will fail the write if it detects it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      62f0b3eb
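      A hedged sketch of the check in the reserve path (field names such as
      cpu_buffer->committing approximate the ring buffer code): after marking
      the commit, verify the cpu buffer still belongs to the buffer we were
      asked to write to, and back out if a swap won the race:

        local_inc(&cpu_buffer->committing); /* from here on, swaps block */
        local_inc(&cpu_buffer->commits);

        barrier();
        if (unlikely(ACCESS_ONCE(cpu_buffer->buffer) != buffer)) {
                /* a swap slipped in before committing was set: fail it */
                local_dec(&cpu_buffer->committing);
                local_dec(&cpu_buffer->commits);
                return NULL;
        }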
    • tracing: report error in trace if we fail to swap latency buffer · e8165dbb
      Steven Rostedt committed
      The irqsoff tracer will fail to swap the cpu buffer with the max
      buffer if it preempts a commit. Instead of ignoring this, this patch
      makes the tracer report it if the last max-latency recording failed
      because it preempted a current commit.
      
      The output of the latency tracer will look like this:
      
       # tracer: irqsoff
       #
       # irqsoff latency trace v1.1.5 on 2.6.31-rc5
       # --------------------------------------------------------------------
       # latency: 112 us, #1/1, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -4281 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: save_args
       #  => ended at:   __do_softirq
       #
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /
       #                |||||     delay
       #  cmd     pid   ||||| time  |   caller
       #     \   /      |||||   \   |   /
          bash-4281    1d.s6  265us : update_max_tr_single: Failed to swap buffers due to commit in progress
      
      Note that the latency time and the functions that disabled irqs or
      preemption will still be listed.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e8165dbb
    • tracing: add trace_array_printk for internal tracers to use · 659372d3
      Steven Rostedt committed
      This patch adds trace_array_printk() to allow a tracer to use
      trace_printk()-style output on its own trace array.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      659372d3
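      Hedged usage sketch, assuming the tracer holds a pointer tr to its own
      trace_array (the message here mirrors the one from the previous
      commit's example output):

        trace_array_printk(tr, _THIS_IP_,
                "Failed to swap buffers due to commit in progress\n");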
    • tracing: pass around ring buffer instead of tracer · e77405ad
      Steven Rostedt committed
      The latency tracers (irqsoff and wakeup) can swap trace buffers
      on the fly. If an event is happening and has reserved data on one of
      the buffers, and the latency tracer swaps the global buffer with the
      max buffer, the result is that the event may commit the data to the
      wrong buffer.
      
      This patch changes the trace recording API to receive the
      buffer that was used to reserve a commit. This buffer can then be
      passed in to the commit.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e77405ad
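      A sketch of the reshaped calls (signatures approximate the post-patch
      API; flags and pc come from the caller): the buffer captured at
      reserve time is the one handed to the commit, so a swap in between
      can no longer split the pair:

        struct ring_buffer *buffer = tr->buffer;   /* captured once */
        struct ring_buffer_event *event;
        struct ftrace_entry *entry;

        event = trace_buffer_lock_reserve(buffer, TRACE_FN,
                                          sizeof(*entry), flags, pc);
        if (!event)
                return;
        entry = ring_buffer_event_data(event);
        /* ... fill in *entry ... */
        trace_buffer_unlock_commit(buffer, event, flags, pc);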
    • tracing: make tracing_reset safe for external use · f633903a
      Steven Rostedt committed
      Resetting the trace buffer without first disabling the buffer and
      waiting for any writers to complete can corrupt the ring buffer.
      
      This patch makes the external version of tracing_reset safe from
      corruption by disabling the ring buffer and calling synchronize_sched.
      
      This version can no longer be called from interrupt context. But all those
      callers have been removed.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f633903a
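      The safe sequence, as a sketch (the ring buffer calls are the real
      API; the surrounding function shape is illustrative):

        void tracing_reset(struct trace_array *tr, int cpu)
        {
                struct ring_buffer *buffer = tr->buffer;

                ring_buffer_record_disable(buffer); /* stop new writers */
                synchronize_sched();        /* wait for writers in flight */
                ring_buffer_reset_cpu(buffer, cpu); /* now safe to reset */
                ring_buffer_record_enable(buffer);
        }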
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Steven Rostedt committed
      Currently the latency tracers reset the ring buffer. Unfortunately
      if a commit is in process (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer will detect
      the corruption and then permanently disable the ring buffer.
      
      The bug does not crash the system, but it does prevent further tracing
      after the bug is hit.
      
      Instead of resetting the trace buffers, the timestamp of the start of
      the trace is used. The buffers will still contain the previous
      data, but the output will not count any data that is before the
      timestamp of the trace.
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2f26ebd5
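      A hedged sketch of the approach: stamp the start of the trace, then
      filter at output time instead of destroying data up front:

        /* when the latency trace (re)starts: */
        tr->time_start = ftrace_now(cpu);

        /* when iterating entries for the static "trace" file: */
        if (ts < tr->time_start)
                continue;   /* recorded before this trace began: skip */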
    • tracing/filters: Defer pred allocation, fix memory leak · c58b4321
      Li Zefan committed
      The predicates of an event and their filter structure are allocated
      when we create an event filter for the first time.
      
      These objects should be created only once, but each time we come with
      a new filter we overwrite the pre-existing allocation, if any, leaking
      the old one.
      
      Thus, this patch checks whether the filter has already been allocated
      before going ahead.
      Spotted-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      LKML-Reference: <4A9CB1BA.3060402@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      c58b4321
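      The fix, sketched (the function name follows the event-filter code;
      details are approximate): bail out of the allocation path when a
      filter already exists, instead of overwriting and leaking it:

        static int init_preds(struct ftrace_event_call *call)
        {
                if (call->filter)
                        return 0;   /* already allocated: reuse, don't leak */

                /* ... first-time allocation of the filter and its preds ... */
        }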
    • tracing: remove users of tracing_reset · 76f0d073
      Steven Rostedt committed
      The function tracing_reset is deprecated for use outside of trace.c.
      
      The new function to reset the buffers is tracing_reset_online_cpus.
      
      The reason for this is that resetting the buffers while the event
      trace points are active can corrupt the buffers, because they may
      be writing at the time of reset. The tracing_reset_online_cpus disables
      writes and waits for current writers to finish.
      
      This patch replaces all users of tracing_reset except for the latency
      tracers. Those remaining call sites require more work and will be
      removed in the following patches.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      76f0d073
    • tracing: disable buffers and synchronize_sched before resetting · 621968cd
      Steven Rostedt committed
      Resetting the ring buffers while traces are happening can corrupt
      the ring buffer and disable it (no kernel crash to worry about).
      
      The safest thing to do is disable the ring buffers, call synchronize_sched()
      to wait for all current writers to finish and then reset the buffer.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      621968cd
  7. 04 Sep, 2009 (19 commits)