Commit e56fb287 authored by Oleg Nesterov, committed by Linus Torvalds

exec: do not abuse ->cred_guard_mutex in threadgroup_lock()

threadgroup_lock() takes signal->cred_guard_mutex to ensure that
thread_group_leader() is stable.  This doesn't look nice; the scope of
this lock in do_execve() is huge.

And, as Dave pointed out, this can lead to deadlock; we have the
following lock dependencies (sketched in user space below):

	do_execve:		cred_guard_mutex -> i_mutex
	cgroup_mount:		i_mutex -> cgroup_mutex
	attach_task_by_pid:	cgroup_mutex -> cred_guard_mutex
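
To make the cycle concrete, here is a minimal user-space sketch (plain
pthreads, not kernel code; the mutex names and the *_path function names
merely mirror the three kernel paths above, and are illustrative only).
Once each thread has taken its first lock, all three block forever on
the second:

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	/* User-space stand-ins for the three kernel locks in the cycle above. */
	static pthread_mutex_t cred_guard_mutex = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t i_mutex          = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t cgroup_mutex     = PTHREAD_MUTEX_INITIALIZER;

	static void *do_execve_path(void *arg)		/* cred_guard_mutex -> i_mutex */
	{
		(void)arg;
		pthread_mutex_lock(&cred_guard_mutex);
		sleep(1);				/* widen the race window */
		pthread_mutex_lock(&i_mutex);		/* blocks: cgroup_mount_path holds it */
		return NULL;
	}

	static void *cgroup_mount_path(void *arg)	/* i_mutex -> cgroup_mutex */
	{
		(void)arg;
		pthread_mutex_lock(&i_mutex);
		sleep(1);
		pthread_mutex_lock(&cgroup_mutex);	/* blocks: attach_task_path holds it */
		return NULL;
	}

	static void *attach_task_path(void *arg)	/* cgroup_mutex -> cred_guard_mutex */
	{
		(void)arg;
		pthread_mutex_lock(&cgroup_mutex);
		sleep(1);
		pthread_mutex_lock(&cred_guard_mutex);	/* blocks: do_execve_path holds it */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b, c;

		pthread_create(&a, NULL, do_execve_path, NULL);
		pthread_create(&b, NULL, cgroup_mount_path, NULL);
		pthread_create(&c, NULL, attach_task_path, NULL);
		puts("three threads started; the circular wait never resolves");
		pthread_join(a, NULL);			/* never returns */
		return 0;
	}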

Change de_thread() to take threadgroup_change_begin() around the
switch-the-leader code and change threadgroup_lock() to avoid
->cred_guard_mutex.

Note that de_thread() can't sleep with ->group_rwsem held; this can
obviously deadlock with the exiting leader if the writer is active, so it
does threadgroup_change_end() before schedule() (the sketch below shows
the resulting loop shape in user space).
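
For illustration only, the same loop shape as a user-space sketch:
pthread_rwlock_t stands in for group_rwsem, a mutex plus condition
variable stand in for tasklist_lock and schedule(), and leader_exited,
state_lock, and wait_for_leader_exit() are made-up names.  The point is
that the read side is dropped before every sleep and re-taken afterwards:

	#include <pthread.h>
	#include <stdbool.h>

	static pthread_rwlock_t group_rwsem = PTHREAD_RWLOCK_INITIALIZER;
	static pthread_mutex_t  state_lock  = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t   state_cond  = PTHREAD_COND_INITIALIZER;
	static bool leader_exited;			/* stands in for leader->exit_state */

	/* Wait-loop shape of the patched de_thread(): re-take the read side of
	 * group_rwsem on every iteration and drop it before sleeping, so a
	 * queued down_write() cannot wedge the exiting leader. */
	static void wait_for_leader_exit(void)
	{
		for (;;) {
			pthread_rwlock_rdlock(&group_rwsem);	/* threadgroup_change_begin() */
			pthread_mutex_lock(&state_lock);	/* write_lock_irq(&tasklist_lock) */
			if (leader_exited)
				break;			/* leave the loop holding both locks */
			pthread_mutex_unlock(&state_lock);
			pthread_rwlock_unlock(&group_rwsem);	/* threadgroup_change_end() */

			pthread_mutex_lock(&state_lock);	/* sleep without group_rwsem held, */
			while (!leader_exited)			/* as schedule() does in the patch */
				pthread_cond_wait(&state_cond, &state_lock);
			pthread_mutex_unlock(&state_lock);
		}
		/* ...switch-the-leader work would run here, under both locks... */
		pthread_mutex_unlock(&state_lock);
		pthread_rwlock_unlock(&group_rwsem);
	}

	static void leader_exit(void)			/* what the exiting leader would do */
	{
		pthread_mutex_lock(&state_lock);
		leader_exited = true;
		pthread_cond_broadcast(&state_cond);
		pthread_mutex_unlock(&state_lock);
	}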

Reported-by: Dave Jones <davej@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 12eaaf30
fs/exec.c
@@ -898,11 +898,13 @@ static int de_thread(struct task_struct *tsk)
 
 		sig->notify_count = -1;	/* for exit_notify() */
 		for (;;) {
+			threadgroup_change_begin(tsk);
 			write_lock_irq(&tasklist_lock);
 			if (likely(leader->exit_state))
 				break;
 			__set_current_state(TASK_KILLABLE);
 			write_unlock_irq(&tasklist_lock);
+			threadgroup_change_end(tsk);
 			schedule();
 			if (unlikely(__fatal_signal_pending(tsk)))
 				goto killed;
@@ -960,6 +962,7 @@ static int de_thread(struct task_struct *tsk)
 		if (unlikely(leader->ptrace))
 			__wake_up_parent(leader, leader->parent);
 		write_unlock_irq(&tasklist_lock);
+		threadgroup_change_end(tsk);
 
 		release_task(leader);
 	}
...
include/linux/sched.h
@@ -2249,27 +2249,18 @@ static inline void threadgroup_change_end(struct task_struct *tsk)
  *
  * Lock the threadgroup @tsk belongs to. No new task is allowed to enter
  * and member tasks aren't allowed to exit (as indicated by PF_EXITING) or
- * perform exec. This is useful for cases where the threadgroup needs to
- * stay stable across blockable operations.
+ * change ->group_leader/pid. This is useful for cases where the threadgroup
+ * needs to stay stable across blockable operations.
  *
  * fork and exit paths explicitly call threadgroup_change_{begin|end}() for
  * synchronization. While held, no new task will be added to threadgroup
  * and no existing live task will have its PF_EXITING set.
  *
- * During exec, a task goes and puts its thread group through unusual
- * changes. After de-threading, exclusive access is assumed to resources
- * which are usually shared by tasks in the same group - e.g. sighand may
- * be replaced with a new one. Also, the exec'ing task takes over group
- * leader role including its pid. Exclude these changes while locked by
- * grabbing cred_guard_mutex which is used to synchronize exec path.
+ * de_thread() does threadgroup_change_{begin|end}() when a non-leader
+ * sub-thread becomes a new leader.
  */
 static inline void threadgroup_lock(struct task_struct *tsk)
 {
-	/*
-	 * exec uses exit for de-threading nesting group_rwsem inside
-	 * cred_guard_mutex. Grab cred_guard_mutex first.
-	 */
-	mutex_lock(&tsk->signal->cred_guard_mutex);
 	down_write(&tsk->signal->group_rwsem);
 }
@@ -2282,7 +2273,6 @@ static inline void threadgroup_lock(struct task_struct *tsk)
 static inline void threadgroup_unlock(struct task_struct *tsk)
 {
 	up_write(&tsk->signal->group_rwsem);
-	mutex_unlock(&tsk->signal->cred_guard_mutex);
 }
 #else
 static inline void threadgroup_change_begin(struct task_struct *tsk) {}
...