commit 977dda7c authored by: Paul Turner; committed by: Ingo Molnar

sched: Update effective_load() to use global share weights

Previously effective_load would approximate the global load weight present on
a group taking advantage of:

entity_weight = tg->shares * (lw / global_lw), where entity_weight was
provided by tg_shares_up.

This worked (approximately) for an 'empty' (at tg level) cpu since we would
place boost load representative of what a newly woken task would receive.

However, now that load is instantaneously updated this assumption is no longer
true and the load calculation is rather incorrect in this case.

Fix this (and improve the general case) by re-writing effective_load to take
advantage of the new shares distribution code.
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20110115015817.069769529@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent c9b5f501
@@ -1362,27 +1362,27 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		return wl;
 
 	for_each_sched_entity(se) {
-		long S, rw, s, a, b;
+		long lw, w;
 
-		S = se->my_q->tg->shares;
-		s = se->load.weight;
-		rw = se->my_q->load.weight;
+		tg = se->my_q->tg;
+		w = se->my_q->load.weight;
 
-		a = S*(rw + wl);
-		b = S*rw + s*wg;
+		/* use this cpu's instantaneous contribution */
+		lw = atomic_read(&tg->load_weight);
+		lw -= se->my_q->load_contribution;
+		lw += w + wg;
 
-		wl = s*(a-b);
+		wl += w;
 
-		if (likely(b))
-			wl /= b;
+		if (lw > 0 && wl < lw)
+			wl = (wl * tg->shares) / lw;
+		else
+			wl = tg->shares;
 
-		/*
-		 * Assume the group is already running and will
-		 * thus already be accounted for in the weight.
-		 *
-		 * That is, moving shares between CPUs, does not
-		 * alter the group weight.
-		 */
+		/* zero point is MIN_SHARES */
+		if (wl < MIN_SHARES)
+			wl = MIN_SHARES;
 		wl -= se->load.weight;
 
 		wg = 0;
	}