Commit 397335f0 authored by Thomas Gleixner

rtmutex: Fix deadlock detector for real

The current deadlock detection logic does not work reliably due to the
following early exit path:

	/*
	 * Drop out, when the task has no waiters. Note,
	 * top_waiter can be NULL, when we are in the deboosting
	 * mode!
	 */
	if (top_waiter && (!task_has_pi_waiters(task) ||
			   top_waiter != task_top_pi_waiter(task)))
		goto out_unlock_pi;

So this not only exits when the task has no waiters, it also exits
unconditionally when the current waiter is not the top priority waiter
of the task.

So in a nested locking scenario, it might abort the lock chain walk
and therefore miss a potential deadlock.
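
To make the nested case concrete, here is a minimal userspace sketch (not part of the patch; the locker()/futex_pi() helpers are made up for illustration): two threads take two PI futexes in opposite order, so each thread's second FUTEX_LOCK_PI blocks on the in-kernel rt_mutex behind the futex and triggers exactly this chain walk. Assuming the PI-futex slow path runs the walk with deadlock detection enabled, one of the two calls should fail with EDEADLK instead of both threads blocking forever; with the broken early exit above, the walk can abort before the cycle is found.

	/* Hypothetical demo, compile with -pthread */
	#define _GNU_SOURCE
	#include <errno.h>
	#include <linux/futex.h>
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static uint32_t lock_a, lock_b;

	static long futex_pi(uint32_t *uaddr, int op)
	{
		/* val/timeout/uaddr2/val3 are unused by FUTEX_LOCK_PI/FUTEX_UNLOCK_PI */
		return syscall(SYS_futex, uaddr, op, 0, NULL, NULL, 0);
	}

	static void *locker(void *arg)
	{
		uint32_t **order = arg;

		futex_pi(order[0], FUTEX_LOCK_PI);  /* uncontended: word 0 -> our TID */
		sleep(1);                           /* let the other thread take its lock */

		/*
		 * Contended: blocks on the rt_mutex behind the futex and
		 * starts the lock chain walk. With working detection, one
		 * thread gets EDEADLK here instead of hanging forever.
		 */
		if (futex_pi(order[1], FUTEX_LOCK_PI) == -1 && errno == EDEADLK)
			fprintf(stderr, "deadlock detected\n");
		else
			futex_pi(order[1], FUTEX_UNLOCK_PI);

		futex_pi(order[0], FUTEX_UNLOCK_PI);
		return NULL;
	}

	int main(void)
	{
		uint32_t *ab[] = { &lock_a, &lock_b };
		uint32_t *ba[] = { &lock_b, &lock_a };
		pthread_t t1, t2;

		pthread_create(&t1, NULL, locker, ab);
		pthread_create(&t2, NULL, locker, ba);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}
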

Simple fix: Continue the chain walk, when deadlock detection is
enabled.

We also avoid the whole enqueue if we detect the deadlock right away
(A-A). It's an optimization, but it also closes a race: we drop the
locks, so another waiter could come in after the detection and before
the task has undone the damage, observe the situation, detect the
deadlock and return -EDEADLOCK, which is wrong, as that other task is
not in a deadlock situation.
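
The A-A shape itself can be poked with a raw FUTEX_LOCK_PI on a futex word that already carries the caller's own TID (again a hypothetical demo, not part of the patch). The kernel refuses with EDEADLK before a waiter is ever enqueued, which is the same early-detection idea the new check in task_blocks_on_rt_mutex applies to in-kernel rt_mutex waiters:

	#define _GNU_SOURCE
	#include <errno.h>
	#include <linux/futex.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		/* A PI futex word holding our own TID == a lock we already own */
		uint32_t lock = (uint32_t)syscall(SYS_gettid);

		/* Blocking on a lock we own is an A-A deadlock; the kernel
		 * reports it right away instead of enqueueing us on ourselves. */
		if (syscall(SYS_futex, &lock, FUTEX_LOCK_PI, 0, NULL, NULL, 0) == -1)
			printf("FUTEX_LOCK_PI: %s\n", strerror(errno)); /* EDEADLK */
		return 0;
	}
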
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20140522031949.725272460@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Parent f0d71b3d
@@ -343,9 +343,16 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	 * top_waiter can be NULL, when we are in the deboosting
 	 * mode!
 	 */
-	if (top_waiter && (!task_has_pi_waiters(task) ||
-			   top_waiter != task_top_pi_waiter(task)))
-		goto out_unlock_pi;
+	if (top_waiter) {
+		if (!task_has_pi_waiters(task))
+			goto out_unlock_pi;
+		/*
+		 * If deadlock detection is off, we stop here if we
+		 * are not the top pi waiter of the task.
+		 */
+		if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
+			goto out_unlock_pi;
+	}
 
 	/*
 	 * When deadlock detection is off then we check, if further
@@ -361,7 +368,12 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 		goto retry;
 	}
 
-	/* Deadlock detection */
+	/*
+	 * Deadlock detection. If the lock is the same as the original
+	 * lock which caused us to walk the lock chain or if the
+	 * current lock is owned by the task which initiated the chain
+	 * walk, we detected a deadlock.
+	 */
 	if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
 		debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock);
 		raw_spin_unlock(&lock->wait_lock);
@@ -527,6 +539,18 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
 	unsigned long flags;
 	int chain_walk = 0, res;
 
+	/*
+	 * Early deadlock detection. We really don't want the task to
+	 * enqueue on itself just to untangle the mess later. It's not
+	 * only an optimization. We drop the locks, so another waiter
+	 * can come in before the chain walk detects the deadlock. So
+	 * the other will detect the deadlock and return -EDEADLOCK,
+	 * which is wrong, as the other waiter is not in a deadlock
+	 * situation.
+	 */
+	if (detect_deadlock && owner == task)
+		return -EDEADLK;
+
 	raw_spin_lock_irqsave(&task->pi_lock, flags);
 	__rt_mutex_adjust_prio(task);
 	waiter->task = task;