1. 26 January 2008 (5 commits)
    • sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups · 6b2d7700
      Committed by Srivatsa Vaddagiri
      The current load balancing scheme isn't good enough for precise
      group fairness.
      
      For example: on a 8-cpu system, I created 3 groups as under:
      
      	a = 8 tasks (cpu.shares = 1024)
      	b = 4 tasks (cpu.shares = 1024)
      	c = 3 tasks (cpu.shares = 1024)
      
      a, b and c are task groups that have equal weight. We would expect each
      of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.
      
      This is what I get with the latest scheduler git tree:
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 277.676 | 57.8% | 54.1%  54.1%  54.1%  54.2%  56.7%  62.2%  62.8% 64.5%
      b     | 116.108 | 24.2% | 47.4%  48.1%  48.7%  49.3%
      c     |  86.326 | 18.0% | 47.5%  47.9%  48.5%
      --------------------------------------------------------------------------------
      
      Explanation of o/p:
      
      Col1 -> Group name
      Col2 -> Cumulative execution time (in seconds) received by all tasks of that
      	group in a 60sec window across 8 cpus
      Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
              percentage. Col3 data is derived as:
      		Col3 = 100 * Col2 / (NR_CPUS * 60)
      Col4 -> CPU bandwidth received by each individual task of the group.
      		Col4 = 100 * cpu_time_recd_by_task / 60
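
      As a worked example from the first table above: group a received
      277.676s of cpu time in the 60sec window across 8 cpus, so
      Col3 = 100 * 277.676 / (8 * 60) = 57.8%.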
      
      [I can share the test case that produces a similar o/p if reqd]
      
      The deviation from desired group fairness (Col3 minus the ideal
      33.33%) is as below:
      
      	a = +24.47%
      	b = -9.13%
      	c = -15.33%
      
      which is quite high.
      
      After the patch below is applied, here are the results:
      
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 163.112 | 34.0% | 33.2%  33.4%  33.5%  33.5%  33.7%  34.4%  34.8% 35.3%
      b     | 156.220 | 32.5% | 63.3%  64.5%  66.1%  66.5%
      c     | 160.653 | 33.5% | 85.8%  90.6%  91.4%
      --------------------------------------------------------------------------------
      
      Deviation from desired group fairness is as below:
      
      	a = +0.67%
      	b = -0.83%
      	c = +0.17%
      
      which is far better IMO. Most of the other runs have yielded a deviation
      within +-2% at the most, which is good.
      
      Why do we see bad (group) fairness with the current scheduler?
      ===============================================================
      
      Currently a cpu's weight is just the summation of individual task weights.
      This can yield incorrect results. For example, consider three groups as
      below on a 2-cpu system:
      
      	CPU0	CPU1
      ---------------------------
      	A (10)  B(5)
      		C(5)
      ---------------------------
      
      Group A has 10 tasks, all on CPU0, Group B and C have 5 tasks each all
      of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
      1024).
      
      The current scheme yields a cpu weight of 10240 (10*1024) for each cpu, so
      the load balancer thinks both CPUs are perfectly balanced and won't move
      any tasks around. This, however, yields this bandwidth split:
      
      	A = 50%
      	B = 25%
      	C = 25%
      
      which is not the desired result: A alone owns CPU0 and thus receives that
      entire cpu (50% of total capacity), while B and C split CPU1 between them.
      
      What's changing in the patch?
      =============================
      
      	- How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
      	  defined (see below)
      	- API Change
      		- Two tunables introduced in sysfs (under SCHED_DEBUG) to
      		  control the frequency at which the load balance monitor
      		  thread runs.
      
      The basic change made in this patch is how cpu weight (rq->load.weight) is
      calculated. It is now calculated as the summation of group weights on a cpu,
      rather than summation of task weights. Weight exerted by a group on a
      cpu is dependent on the shares allocated to it and also the number of
      tasks the group has on that cpu compared to the total number of
      (runnable) tasks the group has in the system.
      
      Let,
      	W(K,i)  = Weight of group K on cpu i
      	T(K,i)  = Task load present in group K's cfs_rq on cpu i
      	T(K)    = Total task load of group K across various cpus
      	S(K) 	= Shares allocated to group K
      	NRCPUS	= Number of online cpus in the scheduler domain to
      	 	  which group K is assigned.
      
      Then,
      	W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
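
      A minimal C sketch of this computation (the helper name and signature
      are illustrative, not the patch's actual code):

      	/* Sketch: group K's weight on cpu i, per the formula above. */
      	static unsigned long group_cpu_weight(unsigned long shares,      /* S(K)   */
      					      unsigned long task_load_i, /* T(K,i) */
      					      unsigned long task_load,   /* T(K)   */
      					      int nr_cpus)               /* NRCPUS */
      	{
      		if (!task_load)
      			return 0;
      		return shares * nr_cpus * task_load_i / task_load;
      	}

      Applied to the 2-cpu example above: CPU0's weight becomes
      W(A,0) = 1024 * 2 * 1 = 2048, while CPU1's becomes
      W(B,1) + W(C,1) = 2048 + 2048 = 4096, so the imbalance is now visible
      to the load balancer.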
      
      A load balance monitor thread is created at bootup, which periodically
      runs and adjusts each group's weight on each cpu. To limit its overhead,
      two min/max tunables are introduced (under SCHED_DEBUG) to control the
      rate at which it runs.
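
      The monitor loop has roughly this shape (a sketch only; the tunable
      and helper names below are hypothetical, not the committed ones):

      	/* hypothetical SCHED_DEBUG tunables, in milliseconds */
      	static unsigned long sysctl_sched_min_bal_interval = 8;
      	static unsigned long sysctl_sched_max_bal_interval = 128;

      	static int load_balance_monitor_fn(void *unused)
      	{
      		unsigned long interval = sysctl_sched_min_bal_interval;

      		while (!kthread_should_stop()) {
      			/* recompute W(K,i) for every group and cpu */
      			rebalance_group_shares();
      			/* back off geometrically up to the max tunable */
      			interval = min(interval * 2, sysctl_sched_max_bal_interval);
      			msleep_interruptible(interval);
      		}
      		return 0;
      	}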
      
      Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>
      
      - don't start the load_balance_monitor when there is only a single cpu.
      - rename the kthread because its name is currently longer than TASK_COMM_LEN
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6b2d7700
    • sched: introduce a mutex and corresponding API to serialize access to doms_cur[] array · a1835615
      Committed by Srivatsa Vaddagiri
      doms_cur[] array represents various scheduling domains which are
      mutually exclusive. Currently cpusets code can modify this array (by
      calling partition_sched_domains()) as a result of user modifying
      sched_load_balance flag for various cpusets.
      
      This patch introduces a mutex and corresponding API (only when
      CONFIG_FAIR_GROUP_SCHED is defined) which allows a reader to safely read
      the doms_cur[] array without worrying about concurrent modifications to
      the array.
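
      A sketch of such an API (a DEFINE_MUTEX-based guard; the names below
      are illustrative of the approach, not necessarily the committed ones):

      	static DEFINE_MUTEX(doms_cur_mutex);

      	static inline void lock_doms_cur(void)
      	{
      		mutex_lock(&doms_cur_mutex);
      	}

      	static inline void unlock_doms_cur(void)
      	{
      		mutex_unlock(&doms_cur_mutex);
      	}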
      
      The fair group scheduler code (introduced in the next patch of this series)
      makes use of this mutex to walk through the doms_cur[] array while
      rebalancing shares of task groups across cpus.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a1835615
    • sched: group scheduling, change how cpu load is calculated · 58e2d4ca
      Committed by Srivatsa Vaddagiri
      This patch changes how the cpu load exerted by fair_sched_class tasks
      is calculated. Load exerted by fair_sched_class tasks on a cpu is now
      a summation of the group weights, rather than summation of task weights.
      Weight exerted by a group on a cpu is dependent on the shares allocated
      to it.
      
      This version of patch has a minor impact on code size, but should have
      no runtime/functional impact for !CONFIG_FAIR_GROUP_SCHED.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      58e2d4ca
    • sched: group scheduling, minor fixes · ec2c507f
      Committed by Srivatsa Vaddagiri
      Minor bug fixes for the group scheduler:
      
      - Use a mutex to serialize add/remove of task groups and also when
        changing shares of a task group. Use the same mutex when printing
        cfs_rq debugging stats for various task groups.
      
      - Use list_for_each_entry_rcu in the for_each_leaf_cfs_rq macro (when
        walking the task group list), as sketched below
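
      A sketch of the macro after this change (close to, but not guaranteed
      identical to, the committed definition):

      	#define for_each_leaf_cfs_rq(rq, cfs_rq) \
      		list_for_each_entry_rcu(cfs_rq, &(rq)->leaf_cfs_rq_list, \
      					leaf_cfs_rq_list)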
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ec2c507f
    • sched: group scheduling code cleanup · 93f992cc
      Committed by Srivatsa Vaddagiri
      Minor cleanups:
      
      - Fix coding style
      - remove obsolete comment
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      93f992cc
  2. 22 January 2008 (1 commit)
  3. 10 January 2008 (1 commit)
  4. 18 December 2007 (2 commits)
  5. 08 December 2007 (1 commit)
    • sched: enable early use of sched_clock() · 8ced5f69
      Committed by Ingo Molnar
      some platforms have sched_clock() implementations that cannot be called
      very early during wakeup. If it's called it might hang or crash in hard
      to debug ways. So only call update_rq_clock() [which calls sched_clock()]
      if sched_init() has already been called. (rq->idle is NULL before the
      scheduler is initialized.)
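
      Concretely, the guard has this shape (a sketch; its exact placement
      in the patch may differ):

      	/* rq->idle is only set once sched_init() has run */
      	if (likely(rq->idle))
      		update_rq_clock(rq);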
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8ced5f69
  6. 05 December 2007 (2 commits)
  7. 03 December 2007 (1 commit)
    • sched: cpu accounting controller (V2) · d842de87
      Committed by Srivatsa Vaddagiri
      Commit cfb52856 removed a useful feature for
      us, which provided a cpu accounting resource controller.  This feature would be
      useful if someone wants to group tasks only for accounting purposes and doesn't
      really want to exercise any control over their cpu consumption.
      
      The patch below reintroduces the feature. It is based on Paul Menage's
      original patch (Commit 62d0df64), with
      these differences:
      
              - Removed load average information. I felt it needs more thought (esp
      	  to deal with SMP and virtualized platforms) and can be added for
      	  2.6.25 after more discussions.
              - Convert group cpu usage to be nanosecond accurate (as rest of the cfs
      	  stats are) and invoke cpuacct_charge() from the respective scheduler
      	  classes
      	- Make accounting scalable on SMP systems by splitting the usage
      	  counter to be per-cpu (see the sketch after this list)
      	- Move the code from kernel/cpu_acct.c to kernel/sched.c (since the
      	  code is not big enough to warrant a new file and also this rightly
      	  needs to live inside the scheduler. Also things like accessing
      	  rq->lock while reading cpu usage becomes easier if the code lived in
      	  kernel/sched.c)
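
      A sketch of the per-cpu scheme (illustrative names, not the patch's
      exact code): charging touches only the local counter, while readers
      sum across cpus.

      	struct cpuacct_sketch {
      		u64 *cpuusage;	/* from alloc_percpu(u64) */
      	};

      	static void cpuacct_charge_sketch(struct cpuacct_sketch *ca,
      					  int cpu, u64 delta_ns)
      	{
      		*per_cpu_ptr(ca->cpuusage, cpu) += delta_ns;
      	}

      	static u64 cpuacct_read_sketch(struct cpuacct_sketch *ca)
      	{
      		u64 total = 0;
      		int cpu;

      		for_each_possible_cpu(cpu)
      			total += *per_cpu_ptr(ca->cpuusage, cpu);
      		return total;
      	}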
      
      The patch also modifies the cpu controller not to provide the same accounting
      information.
      Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      
       Tested the patches on top of 2.6.24-rc3. The patches work fine. Ran
       some simple tests like cpuspin (spin on the cpu), ran several tasks in
       the same group and timed them. Compared their time stamps with
       cpuacct.usage.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d842de87
  8. 28 November 2007 (2 commits)
  9. 16 November 2007 (5 commits)
    • sched: reorder SCHED_FEAT_ bits · 9612633a
      Committed by Ingo Molnar
      reorder SCHED_FEAT_ bits so that the used ones come first. Makes
      tuning instructions easier.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9612633a
    • sched: remove activate_idle_task() · 94bc9a7b
      Committed by Dmitry Adamushko
      cpu_down() code is ok wrt sched_idle_next() placing the 'idle' task not
      at the beginning of the queue.
      
      So get rid of activate_idle_task() and make use of activate_task() instead;
      activate_idle_task() was identical to it, apart from a redundant
      update_rq_clock(rq) call.
      
      Code size goes down:
      
         text    data     bss     dec     hex filename
        47853    3934     336   52123    cb9b sched.o.before
        47828    3934     336   52098    cb82 sched.o.after
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      94bc9a7b
    • sched: fix __set_task_cpu() SMP race · ce96b5ac
      Committed by Dmitry Adamushko
      Grant Wilson has reported rare SCHED_FAIR_USER crashes on his quad-core
      system, crashes that can only be explained by runqueue corruption.
      
      There is a narrow SMP race in __set_task_cpu(): after ->cpu is set up to
      a new value, task_rq_lock(p, ...) can be successfully executed on another
      CPU. We must ensure that updates of per-task data have been completed by
      this moment.
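
      The fix places a write barrier before publishing ->cpu. A sketch,
      close to though not guaranteed identical to the committed code:

      	static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
      	{
      		set_task_cfs_rq(p, cpu);
      	#ifdef CONFIG_SMP
      		/*
      		 * Make prior per-task updates visible before ->cpu is
      		 * published; task_rq_lock(p, ...) on another CPU can
      		 * succeed as soon as the new value is observable.
      		 */
      		smp_wmb();
      		task_thread_info(p)->cpu = cpu;
      	#endif
      	}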
      
      this bug has been hiding in the Linux scheduler for an eternity (we never
      had any explicit barrier for task->cpu in set_task_cpu() - so the bug was
      introduced in 2.5.1), but only became visible via set_task_cfs_rq() being
      accidentally put after the task->cpu update. It also probably needs a
      sufficiently out-of-order CPU to trigger.
      Reported-by: Grant Wilson <grant.wilson@zen.co.uk>
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ce96b5ac
    • sched: fix SCHED_FIFO tasks & FAIR_GROUP_SCHED · dae51f56
      Committed by Oleg Nesterov
      Suppose that the SCHED_FIFO task does
      
      	switch_uid(new_user);
      
      Now, p->se.cfs_rq and p->se.parent both point into the old
      user_struct->tg because sched_move_task() doesn't call set_task_cfs_rq()
      for !fair_sched_class case.
      
      Suppose that old user_struct/task_group is freed/reused, and the task
      does
      
      	sched_setscheduler(SCHED_NORMAL);
      
      __setscheduler() sets fair_sched_class, but doesn't update
      ->se.cfs_rq/parent which point to the freed memory.
      
      This means that check_preempt_wakeup() doing
      
      		while (!is_same_group(se, pse)) {
      			se = parent_entity(se);
      			pse = parent_entity(pse);
      		}
      
      may OOPS in a similar way if rq->curr or p did something like above.
      
      Perhaps we need something like the patch below; note that
      __setscheduler() can't do set_task_cfs_rq().
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dae51f56
    • sched: fix accounting of interrupts during guest execution on s390 · 9778385d
      Committed by Christian Borntraeger
      Currently the scheduler checks for PF_VCPU to decide if this timeslice
      has to be accounted as guest time. On s390, host interrupts are not
      disabled during guest execution. This causes these interrupts to be
      accounted as guest time if CONFIG_VIRT_CPU_ACCOUNTING is set. The
      solution is to check whether an interrupt triggered
      account_system_time. As the tick is timer interrupt based, we have to
      subtract hardirq_offset.
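
      In account_system_time() the check then looks roughly like this
      (a sketch; the surrounding accounting code is elided):

      	/* account as guest time only if no extra hardirq is on the stack */
      	if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0)) {
      		account_guest_time(p, cputime);
      		return;
      	}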
      
      I tested the patch on s390 with CONFIG_VIRT_CPU_ACCOUNTING and on
      x86_64. Seems to work.
      
      CC: Avi Kivity <avi@qumranet.com>
      CC: Laurent Vivier <Laurent.Vivier@bull.net>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9778385d
  10. 15 November 2007 (1 commit)
    • revert "Task Control Groups: example CPU accounting subsystem" · cfb52856
      Committed by Andrew Morton
      Revert 62d0df64.
      
      This was originally intended as a simple initial example of how to create a
      control groups subsystem; it wasn't intended for mainline, but I didn't make
      this clear enough to Andrew.
      
      The CFS cgroup subsystem now has better functionality for the per-cgroup usage
      accounting (based directly on CFS stats) than the "usage" status file in this
      patch, and the "load" status file is rather simplistic - although having a
      per-cgroup load average report would be a useful feature, I don't believe this
      patch actually provides it.  If it gets into the final 2.6.24 we'd probably
      have to support this interface forever.
      
      Cc: Paul Menage <menage@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cfb52856
  11. 10 November 2007 (6 commits)
  12. 30 October 2007 (5 commits)
  13. 25 October 2007 (7 commits)
    • sched: isolate SMP balancing code a bit more · 681f3e68
      Committed by Peter Williams
      At the moment, a lot of load balancing code that is irrelevant to
      non-SMP systems gets included in non-SMP builds.
      
      This patch addresses this issue and reduces the binary size on non
      SMP systems:
      
         text    data     bss     dec     hex filename
        10983      28    1192   12203    2fab sched.o.before
        10739      28    1192   11959    2eb7 sched.o.after
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      681f3e68
    • sched: reduce balance-tasks overhead · e1d1484f
      Committed by Peter Williams
      At the moment, balance_tasks() provides low level functionality for both
      move_tasks() and move_one_task() (indirectly) via the load_balance()
      function (in the sched_class interface), which also provides dual
      functionality.  This dual functionality complicates the interfaces and
      internal mechanisms and adds run time overhead to operations that are
      called with two run queue locks held.
      
      This patch addresses this issue and reduces the overhead of these
      operations.
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e1d1484f
    • sched: clean up some control group code · 2b01dfe3
      Committed by Paul Menage
      - replace "cont" with "cgrp" in a few places in the CFS cgroup code, 
      - use write_uint rather than write for cpu.shares write function
      Signed-off-by: Paul Menage <menage@google.com>
      Acked-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2b01dfe3
    • sched: use show_regs() to improve __schedule_bug() output · 838225b4
      Committed by Satyam Sharma
      A full register dump along with stack backtrace would make the
      "scheduling while atomic" message more helpful. Use show_regs() instead
      of dump_stack() for this. We already know we're atomic in here (that is
      why this function was called) so show_regs()'s atomicity expectations
      are guaranteed.
      
      Also, modify the output of the "BUG: scheduling while atomic:" header a
      bit to keep task->comm and task->pid together and preempt_count() after
      them.
      Signed-off-by: Satyam Sharma <satyam@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      838225b4
    • sched: clean up sched_domain_debug() · 4dcf6aff
      Committed by Ingo Molnar
      clean up sched_domain_debug().
      
      this also shrinks the code a bit:
      
         text    data     bss     dec     hex filename
        50474    4306     480   55260    d7dc sched.o.before
        50404    4306     480   55190    d796 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4dcf6aff
    • sched: fix fastcall mismatch in completion APIs · b15136e9
      Committed by Ingo Molnar
      Jeff Dike noticed that wait_for_completion_interruptible()'s prototype
      had a mismatched fastcall.
      
      Fix this by removing the fastcall attributes from all the completion APIs.
      Found-by: Jeff Dike <jdike@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b15136e9
    • sched: fix sched_domain sysctl registration again · 7378547f
      Committed by Milton Miller
      commit 029190c5 (cpuset
      sched_load_balance flag) was not tested with SCHED_DEBUG enabled as
      committed: it dereferences NULL when used, and it reordered the sysctl
      registration in a way that never shows any domains or their tunables.
      
      Fixes:
      
      1) restore arch_init_sched_domains ordering
      	we can't walk the domains before we build them
      
      	presently we register cpus with empty directories (no domain
      	directories or files).
      
      2) make unregister_sched_domain_sysctl do nothing when already unregistered
      	detach_destroy_domains is now called one set of cpus at a time
      	unregister_sysctl_table dereferences NULL if called with a null
      	(see the sketch after this list).
      
      	While the function would always dereference null if called
      	twice, in the previous code it was always called once and then
      	was followed by a register.  So only the hidden bug of the
      	sysctl_root_table not being allocated followed by an attempt to
      	free it would have shown the error.
      
      3) always call unregister and register in partition_sched_domains
      	The code is "smart" about unregistering only needed domains.
      	Since we aren't guaranteed any calls to unregister, always 
      	unregister.   Without calling register on the way out we
      	will not have a table or any sysctl tree.
      
      4) warn if register is called without unregistering
      	The previous table memory is lost, leaving pointers to the
      	later freed memory in sysctl and leaking the memory of the
      	tables.
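
      A sketch of the fix-2 guard (assuming sd_sysctl_header is the saved
      table handle; the exact committed code may differ):

      	static void unregister_sched_domain_sysctl(void)
      	{
      		/* do nothing when there is nothing registered */
      		if (sd_sysctl_header)
      			unregister_sysctl_table(sd_sysctl_header);
      		sd_sysctl_header = NULL;
      	}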
      
      Before this patch on a 2-core 4-thread box compiled for SMT and NUMA,
      the domains appear empty (there are actually 3 levels per cpu).  And as
      soon as two domains are created, a null pointer is dereferenced (the
      "(unreliable)" entry in the trace below is stack garbage):
      
      bu19a:~# ls -R /proc/sys/kernel/sched_domain/
      /proc/sys/kernel/sched_domain/:
      cpu0  cpu1  cpu2  cpu3
      
      /proc/sys/kernel/sched_domain/cpu0:
      
      /proc/sys/kernel/sched_domain/cpu1:
      
      /proc/sys/kernel/sched_domain/cpu2:
      
      /proc/sys/kernel/sched_domain/cpu3:
      
      bu19a:~# mkdir /dev/cpuset
      bu19a:~# mount -tcpuset cpuset /dev/cpuset/
      bu19a:~# cd /dev/cpuset/
      bu19a:/dev/cpuset# echo 0 > sched_load_balance 
      bu19a:/dev/cpuset# mkdir one
      bu19a:/dev/cpuset# echo 1 > one/cpus               
      bu19a:/dev/cpuset# echo 0 > one/sched_load_balance 
      Unable to handle kernel paging request for data at address 0x00000018
      Faulting instruction address: 0xc00000000006b608
      NIP: c00000000006b608 LR: c00000000006b604 CTR: 0000000000000000
      REGS: c000000018d973f0 TRAP: 0300   Not tainted  (2.6.23-bml)
      MSR: 9000000000009032 <EE,ME,IR,DR>  CR: 28242442  XER: 00000000
      DAR: 0000000000000018, DSISR: 0000000040000000
      TASK = c00000001912e340[1987] 'bash' THREAD: c000000018d94000 CPU: 2
      ..
      NIP [c00000000006b608] .unregister_sysctl_table+0x38/0x110
      LR [c00000000006b604] .unregister_sysctl_table+0x34/0x110
      Call Trace:
      [c000000018d97670] [c000000007017270] 0xc000000007017270 (unreliable)
      [c000000018d97720] [c000000000058710] .detach_destroy_domains+0x30/0xb0
      [c000000018d977b0] [c00000000005cf1c] .partition_sched_domains+0x1bc/0x230
      [c000000018d97870] [c00000000009fdc4] .rebuild_sched_domains+0xb4/0x4c0
      [c000000018d97970] [c0000000000a02e8] .update_flag+0x118/0x170
      [c000000018d97a80] [c0000000000a1768] .cpuset_common_file_write+0x568/0x820
      [c000000018d97c00] [c00000000009d95c] .cgroup_file_write+0x7c/0x180
      [c000000018d97cf0] [c0000000000e76b8] .vfs_write+0xe8/0x1b0
      [c000000018d97d90] [c0000000000e810c] .sys_write+0x4c/0x90
      [c000000018d97e30] [c00000000000852c] syscall_exit+0x0/0x40
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7378547f
  14. 22 October 2007 (1 commit)