1. 29 Aug 2011 (2 commits)
  2. 22 Jul 2011 (5 commits)
  3. 21 Jul 2011 (4 commits)
  4. 16 Jul 2011 (1 commit)
  5. 14 Jul 2011 (2 commits)
    • sched: adjust scheduler cpu power for stolen time · 095c0aa8
      Committed by Glauber Costa
      This patch makes update_rq_clock() aware of steal time.
      The mechanism of operation is no different from irq_time,
      and follows the same principles. It lives behind its own
      CONFIG option, and can be compiled out independently of
      the rest of steal time reporting. The effect of disabling it
      is that the scheduler will still report steal time (which cannot be
      disabled), but won't use this information for cpu power adjustments.
      
      Every time update_rq_clock_task() is invoked, we query how much
      time was stolen since the last call, and feed it into
      sched_rt_avg_update().
      
      Although steal time reporting in account_process_tick() keeps
      track of the last time we read the steal clock, in prev_steal_time,
      this patch does so independently using another field,
      prev_steal_time_rq. Otherwise, information about time
      accounted in account_process_tick() would never reach us in update_rq_clock().
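
      In rough outline, the hook looks like the sketch below (a simplified
      reconstruction of the mechanism described above, not the verbatim
      diff; steal_ticks() converts a nanosecond value into whole ticks):

          #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
              if (static_branch(&paravirt_steal_rq_enabled)) {
                  u64 st, steal = paravirt_steal_clock(cpu_of(rq));

                  steal -= rq->prev_steal_time_rq;  /* stolen since last call */
                  if (unlikely(steal > delta))
                      steal = delta;                /* clamp to this window */

                  st = steal_ticks(steal);          /* whole ticks only */
                  steal = st * TICK_NSEC;

                  rq->prev_steal_time_rq += steal;
                  delta -= steal;                   /* the task did not run then */

                  sched_rt_avg_update(rq, steal);   /* fold into cpu power */
              }
          #endif
              rq->clock_task += delta;
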
      Signed-off-by: Glauber Costa <glommer@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Tested-by: Eric B Munson <emunson@mgebm.net>
      CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      CC: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM guest: Steal time accounting · e6e6685a
      Committed by Glauber Costa
      This patch accounts steal time in account_process_tick().
      If one or more ticks are considered stolen in the current
      accounting cycle, user/system accounting is skipped. Idle is fine,
      since the hypervisor does not report steal time if the guest
      is halted.
      
      Accounting steal time from the core scheduler gives us the
      advantage of direct access to the runqueue data. At a later
      point, it can be used to tweak cpu power and make
      the scheduler aware of the time it lost.
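
      Roughly, the accounting helper and its call site look like this (a
      sketch based on the description above; account_steal_time() is the
      existing steal-time reporting path):

          static __always_inline bool steal_account_process_tick(void)
          {
          #ifdef CONFIG_PARAVIRT
              if (static_branch(&paravirt_steal_enabled)) {
                  u64 steal, st;

                  steal = paravirt_steal_clock(smp_processor_id());
                  steal -= this_rq()->prev_steal_time;     /* new steal only */

                  st = steal_ticks(steal);                 /* whole ticks */
                  this_rq()->prev_steal_time += st * TICK_NSEC;

                  account_steal_time(st);
                  return st;    /* nonzero: skip user/system accounting */
              }
          #endif
              return false;
          }

      account_process_tick() then returns early when this helper reports
      stolen ticks, which is what skips user/system accounting for that
      cycle.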
      
      [avi: <asm/paravirt.h> doesn't exist on many archs]
      Signed-off-by: Glauber Costa <glommer@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Tested-by: Eric B Munson <emunson@mgebm.net>
      CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      CC: Anthony Liguori <aliguori@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  6. 09 Jul 2011 (1 commit)
  7. 08 Jul 2011 (1 commit)
  8. 01 Jul 2011 (3 commits)
  9. 23 Jun 2011 (1 commit)
  10. 10 Jun 2011 (1 commit)
  11. 08 Jun 2011 (1 commit)
  12. 07 Jun 2011 (1 commit)
  13. 31 May 2011 (2 commits)
  14. 28 May 2011 (2 commits)
  15. 27 May 2011 (1 commit)
    • cgroups: add per-thread subsystem callbacks · f780bdb7
      Committed by Ben Blum
      Add cgroup subsystem callbacks for per-thread attachment in atomic contexts
      
      Add can_attach_task(), pre_attach(), and attach_task() as new callbacks
      for the cgroup subsystem interface.  Unlike can_attach and attach, these
      are for per-thread operations, to be called potentially many times when
      attaching an entire threadgroup.
      
      Also, the old "bool threadgroup" interface is removed, replaced by
      these.  All subsystems are modified for the new interface; of note is
      cpuset, which requires the from/to nodemasks used for attach to be
      globally scoped (though per-cpuset would work too) so that they persist
      from its pre_attach() through attach_task() and attach().
      
      This is a pre-patch for cgroup-procs-writable.patch.
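
      For reference, the per-thread hooks on struct cgroup_subsys look
      roughly like this (a sketch; signatures abbreviated from the
      interface described above):

          struct cgroup_subsys {
              /* ... whole-group callbacks, called once per attach: */
              int  (*can_attach)(struct cgroup_subsys *ss, struct cgroup *cgrp,
                                 struct task_struct *tsk);
              void (*attach)(struct cgroup_subsys *ss, struct cgroup *cgrp,
                             struct cgroup *old_cgrp, struct task_struct *tsk);

              /* new per-thread callbacks, usable in atomic context and
               * called potentially many times per threadgroup attach: */
              int  (*can_attach_task)(struct cgroup *cgrp, struct task_struct *tsk);
              void (*pre_attach)(struct cgroup *cgrp);
              void (*attach_task)(struct cgroup *cgrp, struct task_struct *tsk);
              /* ... */
          };
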
      Signed-off-by: Ben Blum <bblum@andrew.cmu.edu>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Matt Helsley <matthltc@us.ibm.com>
      Reviewed-by: Paul Menage <menage@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 21 May 2011 (1 commit)
  17. 20 May 2011 (3 commits)
    • sched: Increase SCHED_LOAD_SCALE resolution · c8b28116
      Committed by Nikhil Rao
      Introduce SCHED_LOAD_RESOLUTION, which is added to
      SCHED_LOAD_SHIFT and increases the resolution of
      SCHED_LOAD_SCALE. This patch sets the value of
      SCHED_LOAD_RESOLUTION to 10, scaling up the weights of all
      sched entities by a factor of 1024. With this extra resolution,
      we can handle deeper cgroup hierarchies and the scheduler can do
      better shares distribution and load balancing on larger
      systems (especially for low weight task groups).
      
      This does not change the existing user interface; the scaled
      weights are only used internally. We do not modify the
      prio_to_weight values or their inverses, but use the original weights
      when calculating the inverse, which is used to scale the execution
      time delta in calc_delta_mine(). This ensures we do not lose
      accuracy when accounting time to the sched entities. Thanks to
      Nikunj Dadhania for fixing a bug in c_d_m() that broke fairness.
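
      Concretely, the scaling amounts to something like the following
      sketch (names follow the commit text; scale_load()/scale_load_down()
      convert between user-visible and internal weights):

          #define SCHED_LOAD_RESOLUTION   10
          #define SCHED_LOAD_SHIFT        (10 + SCHED_LOAD_RESOLUTION)
          #define SCHED_LOAD_SCALE        (1L << SCHED_LOAD_SHIFT)   /* 1 << 20 */

          /* user-visible weights (prio_to_weight[], cpu.shares) are scaled
           * up on the way in and back down on the way out: */
          #define scale_load(w)           ((w) << SCHED_LOAD_RESOLUTION)
          #define scale_load_down(w)      ((w) >> SCHED_LOAD_RESOLUTION)

      The inverses (prio_to_wmult[]) stay based on the unscaled weights,
      which is what keeps calc_delta_mine() accurate, as noted above.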
      
      Below is some analysis of the performance costs/improvements of
      this patch.
      
      1. Micro-arch performance costs:
      
      The experiment was to run Ingo's pipe_test_100k 200 times with the
      task pinned to one cpu. I measured instructions, cycles, and
      stalled cycles for the runs. See:
      
         http://thread.gmane.org/gmane.linux.kernel/1129232/focus=1129389
      
      for more info.
      
      -tip (baseline):
      
       Performance counter stats for '/root/load-scale/pipe-test-100k' (200 runs):
      
             964,991,769 instructions             #    0.82  insns per cycle
                                                  #    0.33  stalled cycles per insn
                                                  #    ( +-  0.05% )
           1,171,186,635 cycles                   #    0.000 GHz                      ( +-  0.08% )
             306,373,664 stalled-cycles-backend   #   26.16% backend  cycles idle     ( +-  0.28% )
             314,933,621 stalled-cycles-frontend  #   26.89% frontend cycles idle     ( +-  0.34% )
      
              1.122405684  seconds time elapsed  ( +-  0.05% )
      
      -tip+patches:
      
       Performance counter stats for './load-scale/pipe-test-100k' (200 runs):
      
             963,624,821 instructions             #    0.82  insns per cycle
                                                  #    0.33  stalled cycles per insn
                                                  #    ( +-  0.04% )
           1,175,215,649 cycles                   #    0.000 GHz                      ( +-  0.08% )
             315,321,126 stalled-cycles-backend   #   26.83% backend  cycles idle     ( +-  0.28% )
             316,835,873 stalled-cycles-frontend  #   26.96% frontend cycles idle     ( +-  0.29% )
      
              1.122238659  seconds time elapsed  ( +-  0.06% )
      
      With this patch, instructions decrease by ~0.10% and cycles
      increase by 0.27%. This doesn't look statistically significant.
      The number of stalled cycles in the backend increased from
      26.16% to 26.83%. This can be attributed to the shifts we do in
      c_d_m() and other places. The fraction of stalled cycles in the
      frontend remains about the same, at 26.96% compared to 26.89% in -tip.
      
      2. Balancing low-weight task groups
      
      Test setup: run 50 tasks with random sleep/busy times (biased
      around 100ms) in a low weight container (with cpu.shares = 2).
      Measure %idle as reported by mpstat over a 10s window.
      
      -tip (baseline):
      
      06:47:48 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle    intr/s
      06:47:49 PM  all   94.32    0.00    0.06    0.00    0.00    0.00    0.00    0.00    5.62  15888.00
      06:47:50 PM  all   94.57    0.00    0.62    0.00    0.00    0.00    0.00    0.00    4.81  16180.00
      06:47:51 PM  all   94.69    0.00    0.06    0.00    0.00    0.00    0.00    0.00    5.25  15966.00
      06:47:52 PM  all   95.81    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.19  16053.00
      06:47:53 PM  all   94.88    0.06    0.00    0.00    0.00    0.00    0.00    0.00    5.06  15984.00
      06:47:54 PM  all   93.31    0.00    0.00    0.00    0.00    0.00    0.00    0.00    6.69  15806.00
      06:47:55 PM  all   94.19    0.00    0.06    0.00    0.00    0.00    0.00    0.00    5.75  15896.00
      06:47:56 PM  all   92.87    0.00    0.00    0.00    0.00    0.00    0.00    0.00    7.13  15716.00
      06:47:57 PM  all   94.88    0.00    0.00    0.00    0.00    0.00    0.00    0.00    5.12  15982.00
      06:47:58 PM  all   95.44    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.56  16075.00
      Average:     all   94.49    0.01    0.08    0.00    0.00    0.00    0.00    0.00    5.42  15954.60
      
      -tip+patches:
      
      06:47:03 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle    intr/s
      06:47:04 PM  all  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  16630.00
      06:47:05 PM  all   99.69    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.31  16580.20
      06:47:06 PM  all   99.69    0.00    0.06    0.00    0.00    0.00    0.00    0.00    0.25  16596.00
      06:47:07 PM  all   99.20    0.00    0.74    0.00    0.00    0.06    0.00    0.00    0.00  17838.61
      06:47:08 PM  all  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  16540.00
      06:47:09 PM  all  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  16575.00
      06:47:10 PM  all  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  16614.00
      06:47:11 PM  all   99.94    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.06  16588.00
      06:47:12 PM  all   99.94    0.00    0.06    0.00    0.00    0.00    0.00    0.00    0.00  16593.00
      06:47:13 PM  all   99.94    0.00    0.06    0.00    0.00    0.00    0.00    0.00    0.00  16551.00
      Average:     all   99.84    0.00    0.09    0.00    0.00    0.01    0.00    0.00    0.06  16711.58
      
      We see an improvement in idle% on the system (drops from 5.42%
      on -tip to 0.06% with the patches).
      Signed-off-by: Nikhil Rao <ncrao@google.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
      Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Cc: Stephan Barwolf <stephan.baerwolf@tu-ilmenau.de>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1305754668-18792-1-git-send-email-ncrao@google.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Introduce SCHED_POWER_SCALE to scale cpu_power calculations · 1399fa78
      Committed by Nikhil Rao
      SCHED_LOAD_SCALE is used to increase nice resolution and to
      scale cpu_power calculations in the scheduler. This patch
      introduces SCHED_POWER_SCALE and converts all uses of
      SCHED_LOAD_SCALE for scaling cpu_power to use SCHED_POWER_SCALE
      instead.
      
      This is a preparatory patch for increasing the resolution of
      SCHED_LOAD_SCALE, and there is no need to increase resolution
      for cpu_power calculations.
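
      The new constants are a straightforward split (a sketch; cpu_power
      users are converted mechanically from SCHED_LOAD_SCALE):

          #define SCHED_POWER_SHIFT       10
          #define SCHED_POWER_SCALE       (1L << SCHED_POWER_SHIFT)

          /* e.g. the baseline power of a single cpu is now expressed as: */
          unsigned long power = SCHED_POWER_SCALE;
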
      Signed-off-by: Nikhil Rao <ncrao@google.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
      Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Cc: Stephan Barwolf <stephan.baerwolf@tu-ilmenau.de>
      Cc: Mike Galbraith <efault@gmx.de>
      Link: http://lkml.kernel.org/r/1305738580-9924-3-git-send-email-ncrao@google.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Cleanup set_load_weight() · f05998d4
      Committed by Nikhil Rao
      Avoid using long, repetitious names; make this simpler and nicer
      to read. No functional change is introduced in this patch.
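
      The result has roughly this shape (a reconstruction, not the verbatim
      diff): a local load pointer replaces the repeated p->se.load accesses.

          static void set_load_weight(struct task_struct *p)
          {
              int prio = p->static_prio - MAX_RT_PRIO;
              struct load_weight *load = &p->se.load;

              /* SCHED_IDLE tasks get a minimal weight: */
              if (p->policy == SCHED_IDLE) {
                  load->weight = WEIGHT_IDLEPRIO;
                  load->inv_weight = WMULT_IDLEPRIO;
                  return;
              }

              load->weight = prio_to_weight[prio];
              load->inv_weight = prio_to_wmult[prio];
          }
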
      Signed-off-by: Nikhil Rao <ncrao@google.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Nikunj A. Dadhania <nikunj@linux.vnet.ibm.com>
      Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Cc: Stephan Barwolf <stephan.baerwolf@tu-ilmenau.de>
      Cc: Mike Galbraith <efault@gmx.de>
      Link: http://lkml.kernel.org/r/1305738580-9924-2-git-send-email-ncrao@google.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 16 May 2011 (3 commits)
  19. 12 May 2011 (1 commit)
  20. 06 May 2011 (2 commits)
  21. 26 Apr 2011 (1 commit)
  22. 24 Apr 2011 (1 commit)