Commit 213c8af6 authored by Ingo Molnar

sched: small schedstat fix

small schedstat fix: the cfs_rq->wait_runtime 'sum of all runtimes'
statistics counters missed newly forked tasks and thus had a constant
negative skew. Fix this.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Parent b77d69db
@@ -1121,8 +1121,10 @@ static void task_new_fair(struct rq *rq, struct task_struct *p)
 	 * The statistical average of wait_runtime is about
 	 * -granularity/2, so initialize the task with that:
 	 */
-	if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
+	if (sysctl_sched_features & SCHED_FEAT_START_DEBIT) {
 		p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);
+		schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
+	}
 
 	__enqueue_entity(cfs_rq, se);
 }
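
As a reading aid, here is a minimal, self-contained C sketch of the accounting issue the commit message describes: a per-entity wait_runtime debit that is never mirrored into the cfs_rq-wide statistic leaves the aggregate out of sync with the sum of the per-entity values. All names in the sketch (rq_model, se_model, stat_add, fork_entity, the granularity of 10) are hypothetical stand-ins; only schedstat_add(), cfs_rq->wait_runtime and the -granularity/2 debit come from the patch above.

/*
 * Minimal stand-alone model of the accounting problem (not kernel code).
 * rq_model/se_model/stat_add/fork_entity are hypothetical names.
 */
#include <stdio.h>

struct rq_model { long wait_runtime; };	/* aggregate statistic, as on cfs_rq */
struct se_model { long wait_runtime; };	/* per-entity value, as on sched_entity */

/* rough analogue of schedstat_add(cfs_rq, wait_runtime, delta) */
static void stat_add(struct rq_model *rq, long delta)
{
	rq->wait_runtime += delta;
}

/*
 * Fork path modelled on task_new_fair(): the child starts with a
 * -granularity/2 debit.  When 'fixed' is 0 the aggregate never hears
 * about that debit, which is the omission this commit repairs.
 */
static void fork_entity(struct rq_model *rq, struct se_model *se,
			long granularity, int fixed)
{
	se->wait_runtime = -(granularity / 2);
	if (fixed)
		stat_add(rq, se->wait_runtime);
}

int main(void)
{
	struct rq_model rq = { 0 };
	struct se_model a, b;
	long true_sum;

	fork_entity(&rq, &a, 10, 0);		/* pre-fix fork path */
	fork_entity(&rq, &b, 10, 0);
	true_sum = a.wait_runtime + b.wait_runtime;
	printf("pre-fix: aggregate=%ld true sum=%ld drift=%ld\n",
	       rq.wait_runtime, true_sum, rq.wait_runtime - true_sum);

	rq.wait_runtime = 0;
	fork_entity(&rq, &a, 10, 1);		/* fixed fork path */
	fork_entity(&rq, &b, 10, 1);
	true_sum = a.wait_runtime + b.wait_runtime;
	printf("fixed:   aggregate=%ld true sum=%ld drift=%ld\n",
	       rq.wait_runtime, true_sum, rq.wait_runtime - true_sum);
	return 0;
}

In kernels of this era, schedstat_add() with CONFIG_SCHEDSTATS enabled likewise just adds the delta to the named field on the runqueue (and compiles away otherwise), which is why the single added line in the hunk is enough to keep the statistic consistent for newly forked tasks.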