1. 20 October 2008 (3 commits)
    • sched: revert back to per-rq vruntime · f9c0b095
      Committed by Peter Zijlstra
      Vatsa rightly points out that having the runqueue weight in the vruntime
      calculations can cause unfairness in the face of task joins/leaves.
      
      Suppose: dv = dt * rw / w   (rw = total runqueue weight, w = the task's weight)
      
      Then take 10 tasks t_n, each of equal weight w. If the first task runs for
      1 unit of time, its vruntime increases by 1 * 10w / w = 10. Now, if the next
      8 tasks leave after having run their 1 unit each, then the last task, running
      with only 2 tasks left on the queue, gets a vruntime increase of just
      1 * 2w / w = 2 after having run its 1 unit.
      
      This leaves us with 2 tasks of equal weight and equal runtime, yet with
      vruntimes of 10 and 2. Since, with rw = 2w, the lagging task's vruntime only
      advances by 2 per unit of time, the task at vruntime 10 will not be scheduled
      for 8/2 = 4 units of time (see the sketch below).
      
      Ergo, we cannot do that and must use: dv = dt / w.
      
      This means we cannot have a global vruntime based on effective priority, but
      must instead go back to the vruntime per rq model we started out with.
      
      This patch was lightly tested by starting while-loops at each nice level and
      observing their execution times, and with a simple group scenario of 1:2:3
      pinned to a single CPU.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
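      The scenario above can be reproduced with a minimal standalone C sketch
      (plain user-space arithmetic with made-up variable names, not kernel code),
      assuming 10 tasks of equal weight and 1 unit of runtime each:

      #include <stdio.h>

      int main(void)
      {
          const double w  = 1.0;   /* per-task weight                        */
          const double dt = 1.0;   /* wall time each task runs               */

          /* first task runs while all 10 tasks (rw = 10w) are on the rq     */
          double v_first = dt * (10.0 * w) / w;        /* = 10               */

          /* eight tasks then leave; the last task runs with rw = 2w         */
          double v_last  = dt * (2.0 * w) / w;         /* = 2                */

          /* with rw = 2w, vruntime advances 2 per unit of wall time, so
           * closing the gap of 8 takes 8 / 2 = 4 time units                 */
          double gap   = v_first - v_last;
          double stall = gap / ((2.0 * w) / w);

          printf("dv = dt*rw/w : gap = %.0f, stall = %.0f units\n", gap, stall);

          /* with dv = dt / w both tasks advance by exactly 1: no gap at all */
          printf("dv = dt/w    : gap = %.0f\n", dt / w - dt / w);
          return 0;
      }

      Run as written, the first formula yields a gap of 8 and a stall of 4 time
      units, while dv = dt / w yields no gap, matching the argument above.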
    • sched: fair scheduler should not resched rt tasks · a4c2f00f
      Committed by Peter Zijlstra
      Using ftrace, Steven noticed that some RT tasks got rescheduled due to
      sched_fair interaction.
      
      What happens is that we reprogram the hrtick from enqueue/dequeue_fair_task(),
      because those operations can change nr_running and thus the current task's
      ideal runtime. However, it's possible the current task isn't a
      fair_sched_class task at all, and thus doesn't have an hrtick set to change.
      
      Fix this by wrapping those hrtick_start_fair() calls in an hrtick_update()
      function, which checks for the right conditions first (see the sketch below).
      Reported-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
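      A standalone sketch of that guard (stand-in types and names, not the actual
      kernel patch): the wrapper only touches the hrtick when the currently
      running task really is a fair_sched_class task:

      #include <stdio.h>

      /* stand-in types, not the kernel's */
      struct sched_class { const char *name; };

      static const struct sched_class fair_sched_class = { "fair" };
      static const struct sched_class rt_sched_class   = { "rt"   };

      struct task { const struct sched_class *sched_class; };
      struct rq   { struct task *curr; };

      /* stand-in for reprogramming the high-resolution preemption timer */
      static void hrtick_start_fair(struct rq *rq)
      {
          printf("reprogram hrtick for %s task\n", rq->curr->sched_class->name);
      }

      /* the wrapper: called from the enqueue/dequeue paths instead of
       * calling hrtick_start_fair() directly                            */
      static void hrtick_update(struct rq *rq)
      {
          if (rq->curr->sched_class != &fair_sched_class)
              return;             /* current task is not fair: leave it alone */

          hrtick_start_fair(rq);
      }

      int main(void)
      {
          struct task rt_task   = { &rt_sched_class   };
          struct task fair_task = { &fair_sched_class };
          struct rq rq          = { &rt_task };

          hrtick_update(&rq);     /* no output: the RT task is not disturbed  */

          rq.curr = &fair_task;
          hrtick_update(&rq);     /* reprograms the hrtick                    */
          return 0;
      }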
    • sched: optimize group load balancer · ffda12a1
      Committed by Peter Zijlstra
      I noticed that tg_shares_up() unconditionally takes rq-locks for all cpus
      in the sched_domain. This hurts.
      
      We need the rq-locks whenever we change the weight of the per-cpu group sched
      entities. To alleviate this a little, only change the weight when the new
      weight differs from the old value by at least shares_thresh (see the sketch
      below).
      
      This avoids the rq-lock for the top level entries, since those will never
      be re-weighted, and fuzzes the lower level entries a little to gain performance
      in semi-stable situations.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
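      A standalone sketch of the thresholding idea (hypothetical names and values,
      not the kernel code): the expensive rq-locked reweight is skipped whenever
      the new share value is within shares_thresh of the current one:

      #include <stdio.h>
      #include <stdlib.h>

      static const long shares_thresh = 4;    /* assumed tunable threshold      */

      /* stand-in for the expensive path: take the rq-lock and reweight         */
      static void set_se_shares(long *cur, long new_shares)
      {
          printf("rq-lock taken, reweight %ld -> %ld\n", *cur, new_shares);
          *cur = new_shares;
      }

      /* only pay for the rq-lock when the change is large enough               */
      static void update_group_shares(long *cur, long new_shares)
      {
          if (labs(new_shares - *cur) < shares_thresh)
              return;                          /* fuzz: keep the old weight      */

          set_se_shares(cur, new_shares);
      }

      int main(void)
      {
          long shares = 1024;

          update_group_shares(&shares, 1026);  /* within thresh: skipped         */
          update_group_shares(&shares, 1100);  /* exceeds thresh: lock + reweight */
          return 0;
      }

      Top-level entries whose shares never move stay under the threshold and so
      never pay for the lock, which is the case the commit is optimizing.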
  2. 17 October 2008 (37 commits)