Commit 12b04875 authored by Vincent Guittot, committed by Ingo Molnar

sched/pelt: Fix update_blocked_averages() for RT and DL classes

update_blocked_averages() is called to periodically decay the stalled load
of idle CPUs and to sync all loads before running a load balance.

When the cfs rq is idle, pick_next_task_fair() triggers a load balance in
order to potentially pull tasks onto this newly idle CPU. This load balance
happens while the previous task, which belongs to another class, has not
been put yet and its utilization has not been updated. This can cause
running time to be wrongly accounted as idle time for the RT or DL classes.
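
For clarity, the problematic sequence is roughly the following (a
condensed, illustrative sketch of the pick_next_task_fair() path, not the
literal call chain):

    pick_next_task_fair()                /* no runnable CFS task left       */
      -> idle_balance()
           -> update_blocked_averages()  /* rq->curr is still the RT/DL prev
                                          * task, yet its class signal was
                                          * updated with running == 0, i.e.
                                          * decayed as if the CPU were idle */
      -> put_prev_task()                 /* prev's utilization synced here  */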

Test that no RT or DL task is running when updating their utilization in
update_blocked_averages().

We still update the RT and DL utilization instead of simply skipping the
update, to make sure that all metrics are synced when used during load
balance.
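
For reference, the third argument of update_rt_rq_load_avg() and
update_dl_rq_load_avg() is the PELT 'running' flag: when it is zero, the
rq-level signal can only decay. A paraphrased sketch of the RT helper from
kernel/sched/pelt.c (details elided):

    int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
    {
    	/* running == 0: the elapsed time only decays rq->avg_rt;
    	 * running == 1: the elapsed time accrues as RT running time. */
    	if (___update_load_sum(now, rq->cpu, &rq->avg_rt,
    			       running, running, running)) {
    		___update_load_avg(&rq->avg_rt, 1, 1);
    		return 1;
    	}
    	return 0;
    }

Passing curr_class == &rt_sched_class therefore keeps the signal accruing
whenever the not-yet-put prev task is an RT task.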
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 371bf427 ("sched/rt: Add rt_rq utilization tracking")
Fixes: 3727e0e1 ("sched/dl: Add dl_rq utilization tracking")
Link: http://lkml.kernel.org/r/1535728975-22799-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent e5e96faf
kernel/sched/fair.c
@@ -7263,6 +7263,7 @@ static void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq, *pos;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 	bool done = true;
 
@@ -7299,8 +7300,10 @@ static void update_blocked_averages(int cpu)
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
 	}
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_have_blocked(rq))
@@ -7365,13 +7368,16 @@ static inline void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq = &rq->cfs;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 
 	rq_lock_irqsave(rq, &rf);
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
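
As an aside, the cost of wrongly decaying is easy to quantify: PELT is
built so that roughly 32ms of history halves the signal (y^32 = 0.5). A
minimal user-space sketch of that decay (standalone C using the PELT design
constants; not kernel code, compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
    	/* PELT decay factor: 32 one-millisecond periods halve the signal. */
    	double y = pow(0.5, 1.0 / 32.0);
    	double util = 1024.0;	/* fully busy signal, SCHED_CAPACITY_SCALE units */

    	/* If the RT/DL signal is updated with running == 0 while an RT task
    	 * is actually running, each elapsed period only decays it: */
    	for (int ms = 1; ms <= 128; ms *= 2)
    		printf("after %3d ms wrongly treated as idle: util ~ %6.1f\n",
    		       ms, util * pow(y, ms));
    	return 0;
    }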