1. 26 January 2008, 6 commits
    • sched: remove do_div() from __sched_slice() · 4bf0b771
      Peter Zijlstra authored
      Yanmin Zhang noticed a nice optimization:
      
        p = l * nr / nl, nl = l/g -> p = g * nr
      
      which eliminates a do_div() from __sched_period().
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4bf0b771
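
      A minimal sketch of the simplification, using illustrative variable
      names and default values rather than the exact kernel code: because nl
      is defined as l/g, the division cancels and the period becomes a plain
      multiplication.

        #include <stdint.h>

        /* Illustrative tunables in nanoseconds (values are assumptions). */
        static uint64_t sysctl_sched_latency         = 20000000ULL; /* l */
        static uint64_t sysctl_sched_min_granularity =  4000000ULL; /* g */
        static unsigned long nr_latency              = 5;           /* nl = l/g */

        /* Before: p = l * nr / nl, which needs a 64-bit division (do_div). */
        static uint64_t period_old(unsigned long nr_running)
        {
                uint64_t period = sysctl_sched_latency;
                if (nr_running > nr_latency) {
                        period *= nr_running;
                        period /= nr_latency;   /* the do_div() being removed */
                }
                return period;
        }

        /* After: since nl = l/g, p = g * nr and no division is needed. */
        static uint64_t period_new(unsigned long nr_running)
        {
                uint64_t period = sysctl_sched_latency;
                if (nr_running > nr_latency)
                        period = sysctl_sched_min_granularity * nr_running;
                return period;
        }

      With, say, 8 runnable tasks both versions yield 4ms * 8 = 32ms, but the
      new one avoids the division on the hot path.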
    • sched: no need for 'affine wakeup' balancing · 9ec3b77e
      Dmitry Adamushko authored
      No need to do a check for 'affine wakeup and passive balancing possibilities'
      in select_task_rq_fair() when task_cpu(p) == this_cpu.
      
      I guess this part got missed upon introduction of per-sched_class
      select_task_rq() in try_to_wake_up().
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ec3b77e
    • sched: de-SCHED_OTHER-ize the RT path · e7693a36
      Gregory Haskins authored
      The current wake-up code path tries to determine if it can optimize the
      wake-up to "this_cpu" by computing load calculations.  The problem is that
      these calculations are only relevant to SCHED_OTHER tasks where load is king.
      For RT tasks, priority is king.  So the load calculation is completely wasted
      bandwidth.
      
      Therefore, we create a new sched_class interface to help with
      pre-wakeup routing decisions and move the load calculation into the
      fair (CFS) class, where it belongs.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e7693a36
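
      A simplified sketch of the idea (the structs below are trimmed stand-ins,
      not the kernel's real definitions): each scheduling class supplies its
      own select_task_rq() hook, so only the fair class pays for load
      calculations while the RT class can route purely on priority.

        struct task_struct { int prio; int cpu; };

        struct sched_class {
                int (*select_task_rq)(struct task_struct *p, int sync);
        };

        /* Fair class: keeps the load-based "wake up on this_cpu?" heuristics. */
        static int select_task_rq_fair(struct task_struct *p, int sync)
        {
                (void)sync;
                return p->cpu;          /* placeholder for the load-based choice */
        }

        /* RT class: priority is king, so no load math at all. */
        static int select_task_rq_rt(struct task_struct *p, int sync)
        {
                (void)sync;
                return p->cpu;          /* placeholder for priority-based routing */
        }

        static const struct sched_class fair_sched_class = {
                .select_task_rq = select_task_rq_fair,
        };
        static const struct sched_class rt_sched_class = {
                .select_task_rq = select_task_rq_rt,
        };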
    • sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups · 6b2d7700
      Srivatsa Vaddagiri authored
      The current load balancing scheme isn't good enough for precise
      group fairness.
      
      For example: on an 8-cpu system, I created 3 groups as below:
      
      	a = 8 tasks (cpu.shares = 1024)
      	b = 4 tasks (cpu.shares = 1024)
      	c = 3 tasks (cpu.shares = 1024)
      
      a, b and c are task groups that have equal weight. We would expect each
      of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.
      
      This is what I get with the latest scheduler git tree:
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 277.676 | 57.8% | 54.1%  54.1%  54.1%  54.2%  56.7%  62.2%  62.8% 64.5%
      b     | 116.108 | 24.2% | 47.4%  48.1%  48.7%  49.3%
      c     |  86.326 | 18.0% | 47.5%  47.9%  48.5%
      --------------------------------------------------------------------------------
      
      Explanation of output:
      
      Col1 -> Group name
      Col2 -> Cumulative execution time (in seconds) received by all tasks of that
      	group in a 60sec window across 8 cpus
      Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
              percentage. Col3 data is derived as:
      		Col3 = 100 * Col2 / (NR_CPUS * 60)
      Col4 -> CPU bandwidth received by each individual task of the group.
      		Col4 = 100 * cpu_time_recd_by_task / 60
      
      [I can share the test case that produces a similar output if required]
      
      The deviation from desired group fairness is as below:
      
      	a = +24.47%
      	b = -9.13%
      	c = -15.33%
      
      which is quite high.
      
      After the patch below is applied, here are the results:
      
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 163.112 | 34.0% | 33.2%  33.4%  33.5%  33.5%  33.7%  34.4%  34.8% 35.3%
      b     | 156.220 | 32.5% | 63.3%  64.5%  66.1%  66.5%
      c     | 160.653 | 33.5% | 85.8%  90.6%  91.4%
      --------------------------------------------------------------------------------
      
      Deviation from desired group fairness is as below:
      
      	a = +0.67%
      	b = -0.83%
      	c = +0.17%
      
      which is far better IMO. Most of the other runs have yielded a deviation
      within +/-2% at most, which is good.
      
      Why do we see bad (group) fairness with the current scheduler?
      ===============================================================
      
      Currently a cpu's weight is just the summation of individual task weights.
      This can yield incorrect results. For example, consider three groups as
      below on a 2-cpu system:
      
      	CPU0	CPU1
      ---------------------------
      	A (10)  B(5)
      		C(5)
      ---------------------------
      
      Group A has 10 tasks, all on CPU0, Group B and C have 5 tasks each all
      of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
      1024).
      
      The current scheme yields a cpu weight of 10240 (10*1024) for each cpu, so
      the load balancer thinks both CPUs are perfectly balanced and won't
      move any tasks around. This, however, yields this bandwidth split:
      
      	A = 50%
      	B = 25%
      	C = 25%
      
      which is not the desired result.
      
      What's changing in the patch?
      =============================
      
      	- How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
      	  defined (see below)
      	- API Change
      		- Two tunables introduced in sysfs (under SCHED_DEBUG) to
      		  control the frequency at which the load balance monitor
      		  thread runs.
      
      The basic change made in this patch is how cpu weight (rq->load.weight) is
      calculated. It is now calculated as the summation of group weights on a cpu,
      rather than summation of task weights. Weight exerted by a group on a
      cpu is dependent on the shares allocated to it and also the number of
      tasks the group has on that cpu compared to the total number of
      (runnable) tasks the group has in the system.
      
      Let,
      	W(K,i)  = Weight of group K on cpu i
      	T(K,i)  = Task load present in group K's cfs_rq on cpu i
      	T(K)    = Total task load of group K across various cpus
      	S(K) 	= Shares allocated to group K
      	NRCPUS	= Number of online cpus in the scheduler domain to
      	 	  which group K is assigned.
      
      Then,
      	W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
      
      A load balance monitor thread is created at bootup, which periodically
      runs and adjusts each group's weight on each cpu. To avoid its overhead,
      two min/max tunables are introduced (under SCHED_DEBUG) to control the
      rate at which it runs.
      
      Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>
      
      - don't start the load_balance_monitor when there is only a single cpu.
      - rename the kthread because its name is currently longer than TASK_COMM_LEN
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6b2d7700
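
      A small, self-contained sketch of the weight formula above (the helper
      name and types are illustrative, not the kernel's implementation):

        #include <stdint.h>

        /* W(K,i) = S(K) * NRCPUS * T(K,i) / T(K) */
        static uint64_t group_weight_on_cpu(uint64_t shares,       /* S(K)   */
                                            unsigned int nr_cpus,  /* NRCPUS */
                                            uint64_t load_on_cpu,  /* T(K,i) */
                                            uint64_t total_load)   /* T(K)   */
        {
                if (!total_load)
                        return 0;       /* group has no runnable tasks anywhere */
                return shares * nr_cpus * load_on_cpu / total_load;
        }

      Plugging in the 2-cpu example above (assuming each group has
      cpu.shares = 1024): group A contributes 1024 * 2 * 10240/10240 = 2048 on
      CPU0, while B and C each contribute 2048 on CPU1, so the balancer now
      sees CPU1 (4096) as busier than CPU0 (2048) and redistributes the groups
      instead of treating the two CPUs as already balanced.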
    • sched: group scheduling, change how cpu load is calculated · 58e2d4ca
      Srivatsa Vaddagiri authored
      This patch changes how the cpu load exerted by fair_sched_class tasks
      is calculated. Load exerted by fair_sched_class tasks on a cpu is now
      a summation of the group weights, rather than summation of task weights.
      Weight exerted by a group on a cpu is dependent on the shares allocated
      to it.
      
      This version of patch has a minor impact on code size, but should have
      no runtime/functional impact for !CONFIG_FAIR_GROUP_SCHED.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      58e2d4ca
    • sched: group scheduling, minor fixes · ec2c507f
      Srivatsa Vaddagiri authored
      Minor bug fixes for the group scheduler:
      
      - Use a mutex to serialize add/remove of task groups and also when
        changing shares of a task group. Use the same mutex when printing
        cfs_rq debugging stats for various task groups.
      
      - Use list_for_each_entry_rcu in for_each_leaf_cfs_rq macro (when
        walking task group list)
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ec2c507f
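
      A kernel-side sketch of the serialization described above (the mutex
      name is an assumption, and the snippet needs kernel context rather than
      being standalone user code): one mutex covers group add/remove and
      share changes, while the read-side walk of leaf cfs_rq's uses RCU.

        static DEFINE_MUTEX(task_group_mutex);  /* assumed name: guards add/remove/shares */

        /* Walk the per-rq list of leaf cfs_rq's under RCU, so readers take no lock. */
        #define for_each_leaf_cfs_rq(rq, cfs_rq) \
                list_for_each_entry_rcu(cfs_rq, &(rq)->leaf_cfs_rq_list, leaf_cfs_rq_list)

        static void set_shares_sketch(struct task_group *tg, unsigned long shares)
        {
                mutex_lock(&task_group_mutex);
                /* ... update the group's shares and recompute per-cpu weights ... */
                mutex_unlock(&task_group_mutex);
        }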
  2. 18 December 2007, 1 commit
  3. 05 December 2007, 1 commit
  4. 03 December 2007, 1 commit
    • sched: cpu accounting controller (V2) · d842de87
      Srivatsa Vaddagiri authored
      Commit cfb52856 removed a useful feature for
      us, which provided a cpu accounting resource controller.  This feature would be
      useful if someone wants to group tasks only for accounting purposes and doesn't
      really want to exercise any control over their cpu consumption.
      
      The patch below reintroduces the feature. It is based on Paul Menage's
      original patch (Commit 62d0df64), with
      these differences:
      
              - Removed load average information. I felt it needs more thought
      	  (especially to deal with SMP and virtualized platforms) and can be
      	  added for 2.6.25 after more discussion.
              - Convert group cpu usage to be nanosecond accurate (as the rest of
      	  the cfs stats are) and invoke cpuacct_charge() from the respective
      	  scheduler classes.
      	- Make accounting scalable on SMP systems by splitting the usage
      	  counter to be per-cpu.
      	- Move the code from kernel/cpu_acct.c to kernel/sched.c (the code is
      	  not big enough to warrant a new file, and it rightly needs to live
      	  inside the scheduler; things like accessing rq->lock while reading
      	  cpu usage also become easier if the code lives in kernel/sched.c).
      
      The patch also modifies the cpu controller not to provide the same accounting
      information.
      Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      
       Tested the patches on top of 2.6.24-rc3. The patches work fine. Ran
       some simple tests like cpuspin (spin on the cpu), ran several tasks in
       the same group and timed them. Compared their time stamps with
       cpuacct.usage.
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d842de87
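
      A minimal user-space sketch of the per-cpu usage idea (the names, the
      fixed CPU count, and the layout are assumptions, not the kernel's
      cpuacct code): the charge path only touches the local CPU's counter,
      and the read side sums them, which keeps SMP accounting scalable.

        #include <stdint.h>

        #define SKETCH_NR_CPUS 8        /* assumed fixed count for illustration */

        struct cpuacct_sketch {
                uint64_t cpuusage[SKETCH_NR_CPUS];  /* nanoseconds, one slot per cpu */
        };

        /* Called from the scheduler classes with the time just consumed. */
        static void cpuacct_charge_sketch(struct cpuacct_sketch *ca, int cpu,
                                          uint64_t delta_ns)
        {
                ca->cpuusage[cpu] += delta_ns;      /* nanosecond accurate, cpu-local */
        }

        /* Reading cpuacct.usage sums the per-cpu counters. */
        static uint64_t cpuacct_usage_sketch(const struct cpuacct_sketch *ca)
        {
                uint64_t total = 0;
                for (int cpu = 0; cpu < SKETCH_NR_CPUS; cpu++)
                        total += ca->cpuusage[cpu];
                return total;
        }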
  5. 27 November 2007, 1 commit
  6. 16 November 2007, 1 commit
  7. 10 November 2007, 9 commits
    • sched: fix copy_namespace() <-> sched_fork() dependency in do_fork · 3c90e6e9
      Srivatsa Vaddagiri authored
      Sukadev Bhattiprolu reported a kernel crash with control groups.
      There are a couple of problems discovered by Suka's test:
      
      - The test requires the cgroup filesystem to be mounted with
        at least the cpu and ns options (i.e. both the namespace and cpu
        controllers are active in the same hierarchy).
      
      	# mkdir /dev/cpuctl
      	# mount -t cgroup -ocpu,ns none cpuctl
      	(or simply)
      	# mount -t cgroup none cpuctl -> Will activate all controllers
      					 in same hierarchy.
      
      - The test invokes clone() with CLONE_NEWNS set. This causes a new child
        to be created, and also a new group (do_fork->copy_namespaces->ns_cgroup_clone->
        cgroup_clone), and the child is attached to the new group (cgroup_clone->
        attach_task->sched_move_task). At this point in time, the child's scheduler-
        related fields are uninitialized (including its on_rq field, which it has
        inherited from the parent). As a result sched_move_task thinks it is on a
        runqueue, when it isn't.
      
        As a solution to this problem, I moved sched_fork() call, which
        initializes scheduler related fields on a new task, before
        copy_namespaces(). I am not sure though whether moving up will
        cause other side-effects. Do you see any issue?
      
      - The second problem exposed by this test is that task_new_fair()
        assumes that parent and child will be part of the same group (which
        needn't be the case, as this test shows). As a result, cfs_rq->curr can
        be NULL for the child.
      
        The solution is to test for curr pointer being NULL in
        task_new_fair().
      
      With the patch below, I could run ns_exec() fine w/o a crash.
      Reported-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c90e6e9
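
      A minimal sketch of the second fix, with illustrative stand-in types
      rather than the kernel's structures: the child may be enqueued on a
      cfs_rq whose curr pointer is NULL, so the code can no longer assume
      parent and child share a group.

        struct sched_entity_sketch { unsigned long long vruntime; };
        struct cfs_rq_sketch {
                struct sched_entity_sketch *curr;   /* may be NULL */
                unsigned long long min_vruntime;
        };

        static void task_new_fair_sketch(struct cfs_rq_sketch *cfs_rq,
                                         struct sched_entity_sketch *child)
        {
                unsigned long long base = cfs_rq->min_vruntime;

                if (cfs_rq->curr)       /* NULL when the child joined another group */
                        base = cfs_rq->curr->vruntime;

                child->vruntime = base; /* placeholder for the placement logic */
        }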
    • sched: clean up the wakeup preempt check, #2 · 502d26b5
      Ingo Molnar authored
      clean up the preemption check to not use unnecessary 64-bit
      variables. This improves code size:
      
         text    data     bss     dec     hex filename
        44227    3326      36   47589    b9e5 sched.o.before
        44201    3326      36   47563    b9cb sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      502d26b5
    • sched: clean up the wakeup preempt check · 77d9cc44
      Ingo Molnar authored
      clean up the wakeup preemption check. No code changed:
      
         text    data     bss     dec     hex filename
        44227    3326      36   47589    b9e5 sched.o.before
        44227    3326      36   47589    b9e5 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      77d9cc44
    • sched: wakeup preemption fix · 8bc6767a
      Ingo Molnar authored
      wakeup preemption fix: do not make it dependent on p->prio.
      Preemption purely depends on ->vruntime.
      
      This improves preemption in mixed-nice-level workloads.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8bc6767a
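
      An illustrative sketch of the rule (the names and the granularity check
      are assumptions about the shape of the code, not the exact kernel
      check): the waking task preempts only when its vruntime is sufficiently
      behind the current task's, independent of nice level or priority.

        /* Returns nonzero when the wakee should preempt the running task. */
        static int wakeup_preempts_sketch(long long curr_vruntime,
                                          long long wakee_vruntime,
                                          long long wakeup_gran_vruntime)
        {
                return (curr_vruntime - wakee_vruntime) > wakeup_gran_vruntime;
        }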
    • sched: remove PREEMPT_RESTRICT · 3e3e13f3
      Ingo Molnar authored
      remove PREEMPT_RESTRICT. (this is a separate commit so that any
      regression related to the removal itself is bisectable)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3e3e13f3
    • sched: reintroduce SMP tunings again · 19978ca6
      Ingo Molnar authored
      Yanmin Zhang reported an aim7 regression and bisected it down to:
      
       |  commit 38ad464d
       |  Author: Ingo Molnar <mingo@elte.hu>
       |  Date:   Mon Oct 15 17:00:02 2007 +0200
       |
       |     sched: uniform tunings
       |
       |     use the same defaults on both UP and SMP.
      
      fix this by reintroducing similar SMP tunings again. This resolves
      the regression.
      
      (also update the comments to match the ilog2(nr_cpus) tuning effect)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      19978ca6
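
      A rough sketch of the scaling effect mentioned in the last line (the
      helper, base value, and exact factor are assumptions in the spirit of
      the "1 + ilog2(nr_cpus)" comment, not the kernel's code):

        #include <stdint.h>

        static unsigned int ilog2_sketch(unsigned int x)
        {
                unsigned int r = 0;
                while (x >>= 1)
                        r++;
                return r;
        }

        /* Scale a UP default (in ns) up for SMP machines. */
        static uint64_t scale_for_smp(uint64_t base_ns, unsigned int nr_cpus)
        {
                return base_ns * (1 + ilog2_sketch(nr_cpus));
        }

      For example, a 20ms latency default would stay 20ms on 1 cpu, become
      40ms on 2 cpus and 60ms on 4 cpus under this scheme.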
    • sched: reintroduce the sched_min_granularity tunable · b2be5e96
      Peter Zijlstra authored
      we lost the sched_min_granularity tunable to a clever optimization
      that uses the sched_latency/min_granularity ratio - but the ratio
      is quite unintuitive to users and can also crash the kernel if the
      ratio is set to 0. So reintroduce the min_granularity tunable,
      while keeping the ratio maintained internally.
      
      no functionality changed.
      
      [ mingo@elte.hu: some fixlets. ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b2be5e96
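
      A hedged sketch of how the ratio can be kept internal while both
      tunables stay user-visible (the variable names and defaults are
      assumptions, not the kernel's sysctl handlers):

        #include <stdint.h>

        static uint64_t sched_latency_ns         = 20000000ULL;  /* 20 ms */
        static uint64_t sched_min_granularity_ns =  4000000ULL;  /*  4 ms */
        static unsigned int sched_nr_latency     = 5;            /* latency / min_granularity */

        /* Recompute the internal ratio whenever either tunable changes. */
        static void update_sched_nr_latency(void)
        {
                if (sched_min_granularity_ns == 0)
                        sched_min_granularity_ns = 1;   /* never divide by 0 */
                sched_nr_latency =
                        (unsigned int)(sched_latency_ns / sched_min_granularity_ns);
        }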
    • sched: documentation: place_entity() comments · 2cb8600e
      Peter Zijlstra authored
      Add a few comments to place_entity(). No code changed.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2cb8600e
    • sched: fix vslice · 10b77724
      Peter Zijlstra authored
      vslice was missing a factor of NICE_0_LOAD, since task weights are
      expressed in units of NICE_0_LOAD.
      
      the effect of this bug was larger initial slices and
      thus latency-noisier forks.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      10b77724
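
      A small sketch of the corrected calculation (the constant value and the
      function are illustrative, not the kernel's __sched_vslice()):
      converting a wall-clock slice into vruntime multiplies by the nice-0
      weight unit before dividing by the queue weight.

        #include <stdint.h>

        #define NICE_0_LOAD_SKETCH 1024ULL      /* assumed nice-0 weight unit */

        static uint64_t vslice_sketch(uint64_t period_ns, uint64_t rq_weight)
        {
                if (!rq_weight)
                        return period_ns;       /* illustrative guard only */
                /* Task weights are stored scaled by NICE_0_LOAD, hence the factor. */
                return period_ns * NICE_0_LOAD_SKETCH / rq_weight;
        }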
  8. 30 October 2007, 1 commit
  9. 25 October 2007, 2 commits
    • sched: isolate SMP balancing code a bit more · 681f3e68
      Peter Williams authored
      At the moment, a lot of load balancing code that is irrelevant to
      non-SMP systems gets included in non-SMP builds.
      
      This patch addresses this issue and reduces the binary size on
      non-SMP systems:
      
         text    data     bss     dec     hex filename
        10983      28    1192   12203    2fab sched.o.before
        10739      28    1192   11959    2eb7 sched.o.after
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      681f3e68
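
      An illustrative shape of the change (the struct and its members are a
      trimmed stand-in, not the real sched_class): balancing callbacks only
      exist when CONFIG_SMP is defined, so UP builds drop that code entirely.

        struct sched_class_sketch {
        #ifdef CONFIG_SMP
                unsigned long (*load_balance)(void);    /* only compiled for SMP */
        #endif
                void (*task_tick)(void);                /* always present */
        };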
    • sched: reduce balance-tasks overhead · e1d1484f
      Peter Williams authored
      At the moment, balance_tasks() provides low level functionality for both
      move_tasks() and move_one_task() (indirectly) via the load_balance()
      function (in the sched_class interface), which also provides dual
      functionality.  This dual functionality complicates the interfaces and
      internal mechanisms and adds run time overhead to operations that are
      called with two run queue locks held.
      
      This patch addresses this issue and reduces the overhead of these
      operations.
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e1d1484f
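
      A sketch of the interface split this describes (the names and
      signatures are assumptions, not the actual sched_class members): bulk
      moves and single-task moves become separate operations instead of one
      dual-purpose load_balance() entry point.

        struct balance_ops_sketch {
                /* Move up to max_load_move worth of load; returns load moved. */
                unsigned long (*move_tasks)(unsigned long max_load_move);
                /* Move exactly one task if possible; returns nonzero on success. */
                int (*move_one_task)(void);
        };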
  10. 17 October 2007, 1 commit
  11. 15 October 2007, 16 commits