Commit 5e9c8e83 authored by Frederic Weisbecker, committed by Zheng Zengkai

rcu: Fix callbacks processing time limit retaining cond_resched()

stable inclusion
from stable-v5.10.115
commit 40fb3812d99746f0322ada92fdc8d904a024e6de
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5IZ9C

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=40fb3812d99746f0322ada92fdc8d904a024e6de

--------------------------------

commit 3e61e95e upstream.

The callbacks processing time limit makes sure we are not exceeding a
given amount of time executing the queue.

However, its "continue" clause bypasses the cond_resched() call on
rcuc and NOCB kthreads, delaying it until we reach the limit, which can
be very long...

Make sure the scheduler has a higher priority than the time limit.
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
[UR: backport to 5.10-stable + commit update]
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Parent 148110e5
@@ -2512,10 +2512,22 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		 * Stop only if limit reached and CPU has something to do.
 		 * Note: The rcl structure counts down from zero.
 		 */
-		if (-rcl.len >= bl && !offloaded &&
-		    (need_resched() ||
-		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
-			break;
+		if (in_serving_softirq()) {
+			if (-rcl.len >= bl && (need_resched() ||
+			    (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
+				break;
+		} else {
+			local_bh_enable();
+			lockdep_assert_irqs_enabled();
+			cond_resched_tasks_rcu_qs();
+			lockdep_assert_irqs_enabled();
+			local_bh_disable();
+		}
+
+		/*
+		 * Make sure we don't spend too much time here and deprive other
+		 * softirq vectors of CPU cycles.
+		 */
 		if (unlikely(tlimit)) {
 			/* only call local_clock() every 32 callbacks */
 			if (likely((-rcl.len & 31) || local_clock() < tlimit))
@@ -2523,14 +2535,6 @@ static void rcu_do_batch(struct rcu_data *rdp)
 			/* Exceeded the time limit, so leave. */
 			break;
 		}
-		if (offloaded) {
-			WARN_ON_ONCE(in_serving_softirq());
-			local_bh_enable();
-			lockdep_assert_irqs_enabled();
-			cond_resched_tasks_rcu_qs();
-			lockdep_assert_irqs_enabled();
-			local_bh_disable();
-		}
 	}
 	local_irq_save(flags);