Commit abaf3f9d authored by Lai Jiangshan, committed by Paul E. McKenney

rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion

The patch dfeb9765 ("Allow post-unlock reference for rt_mutex")
ensured that RCU priority boosting stayed safe even when the rt_mutex
carried a post-unlock reference.

But an rt_mutex allowing post-unlock references is definitely a bug, and it
was fixed by commit 27e35715 ("rtmutex: Plug slow unlock race").
That fix made the earlier patch (dfeb9765) unnecessary.

Even worse, the priority inversion introduced by the earlier patch
still exists:

rcu_read_unlock_special() {
	rt_mutex_unlock(&rnp->boost_mtx);
	/* Priority inversion:
	 * The current task is deboosted here and may immediately be
	 * preempted as a low-priority task, so it can wait a long time
	 * before being rescheduled, while the rcu-booster sleeps waiting
	 * for this low-priority task to run complete() below.
	 * This priority inversion keeps the rcu-booster from working
	 * as expected.
	 */
	complete(&rnp->boost_completion);
}
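
For context, here is the booster side as it stood before this revert,
excerpted from the lines this patch removes (see the diff below): after
dropping boost_mtx, rcu_boost() blocks on the completion, so it sleeps
until the now-deboosted, possibly preempted reader gets CPU time to call
complete().

rcu_boost() {			/* excerpt, before this revert */
	...
	t = container_of(tb, struct task_struct, rcu_node_entry);
	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);
	init_completion(&rnp->boost_completion);
	raw_spin_unlock_irqrestore(&rnp->lock, flags);
	/* Lock only for side effect: boosts task t's priority. */
	rt_mutex_lock(&rnp->boost_mtx);
	rt_mutex_unlock(&rnp->boost_mtx); /* Then keep lockdep happy. */
	/* Booster sleeps here until the reader above runs complete(). */
	wait_for_completion(&rnp->boost_completion);
	...
}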

Just revert that patch to avoid the problem.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Parent 3ba4d0e0
@@ -172,11 +172,6 @@ struct rcu_node {
 				/*  queued on this rcu_node structure that */
 				/*  are blocking the current grace period, */
 				/*  there can be no such task. */
-	struct completion boost_completion;
-				/* Used to ensure that the rt_mutex used */
-				/*  to carry out the boosting is fully */
-				/*  released with no future boostee accesses */
-				/*  before that rt_mutex is re-initialized. */
 	struct rt_mutex boost_mtx;
 				/* Used only for the priority-boosting */
 				/*  side effect, not as a lock. */
@@ -429,10 +429,8 @@ void rcu_read_unlock_special(struct task_struct *t)
 
 #ifdef CONFIG_RCU_BOOST
 	/* Unboost if we were boosted. */
-	if (drop_boost_mutex) {
+	if (drop_boost_mutex)
 		rt_mutex_unlock(&rnp->boost_mtx);
-		complete(&rnp->boost_completion);
-	}
 #endif /* #ifdef CONFIG_RCU_BOOST */
 
 	/*
@@ -1081,15 +1079,11 @@ static int rcu_boost(struct rcu_node *rnp)
 	 */
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);
-	init_completion(&rnp->boost_completion);
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	/* Lock only for side effect: boosts task t's priority. */
 	rt_mutex_lock(&rnp->boost_mtx);
 	rt_mutex_unlock(&rnp->boost_mtx); /* Then keep lockdep happy. */
 
-	/* Wait for boostee to be done w/boost_mtx before reinitializing. */
-	wait_for_completion(&rnp->boost_completion);
-
 	return ACCESS_ONCE(rnp->exp_tasks) != NULL ||
 	       ACCESS_ONCE(rnp->boost_tasks) != NULL;
 }