Commit 3a13c4d7 authored by Johannes Weiner, committed by Linus Torvalds

x86: finish user fault error path with fatal signal

The x86 fault handler bails in the middle of error handling when the
task has a fatal signal pending.  For a subsequent patch this is a
problem in OOM situations because it relies on pagefault_out_of_memory()
being called even when the task has been killed, to perform proper
per-task OOM state unwinding.

Shortcutting the fault like this is a rather minor optimization that
saves a few instructions in rare cases.  Just remove it for
user-triggered faults.

Use the opportunity to split the fault retry handling from actual fault
errors and add locking documentation that reads surprisingly similar to
ARM's.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 759496ba
arch/x86/mm/fault.c

@@ -842,23 +842,15 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	force_sig_info_fault(SIGBUS, code, address, tsk, fault);
 }
 
-static noinline int
+static noinline void
 mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	       unsigned long address, unsigned int fault)
 {
-	/*
-	 * Pagefault was interrupted by SIGKILL. We have no reason to
-	 * continue pagefault.
-	 */
-	if (fatal_signal_pending(current)) {
-		if (!(fault & VM_FAULT_RETRY))
-			up_read(&current->mm->mmap_sem);
-		if (!(error_code & PF_USER))
-			no_context(regs, error_code, address, 0, 0);
-		return 1;
+	if (fatal_signal_pending(current) && !(error_code & PF_USER)) {
+		up_read(&current->mm->mmap_sem);
+		no_context(regs, error_code, address, 0, 0);
+		return;
 	}
-	if (!(fault & VM_FAULT_ERROR))
-		return 0;
 
 	if (fault & VM_FAULT_OOM) {
 		/* Kernel mode? Handle exceptions or die: */
@@ -866,7 +858,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 			up_read(&current->mm->mmap_sem);
 			no_context(regs, error_code, address,
 				   SIGSEGV, SEGV_MAPERR);
-			return 1;
+			return;
 		}
 
 		up_read(&current->mm->mmap_sem);
@@ -884,7 +876,6 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 		else
 			BUG();
 	}
-	return 1;
 }
 
 static int spurious_fault_check(unsigned long error_code, pte_t *pte)
@@ -1189,8 +1180,16 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 */
 	fault = handle_mm_fault(mm, vma, address, flags);
 
-	if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
-		if (mm_fault_error(regs, error_code, address, fault))
-			return;
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if (unlikely((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		mm_fault_error(regs, error_code, address, fault);
+		return;
 	}
 
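
In case the hunks above are hard to follow at a glance, here is a compact, self-contained C sketch of the control flow this patch establishes at the tail of __do_page_fault(). It is an illustration only, not kernel code: the flag values, the fault_tail() wrapper, and the fatal_signal_pending()/mm_fault_error() stubs below are simplified stand-ins.

/*
 * Standalone illustration (NOT kernel code) of the post-patch control flow.
 * Flag values and helper stubs are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define VM_FAULT_RETRY 0x1u /* illustrative value, not the kernel's */
#define VM_FAULT_ERROR 0x2u /* illustrative value, not the kernel's */

/* Stub standing in for the kernel's fatal_signal_pending(current). */
static bool fatal_signal_pending(void)
{
	return false;
}

/* Stub standing in for mm_fault_error(); after the patch it only sees real errors. */
static void mm_fault_error(unsigned int fault)
{
	printf("resolving fault error 0x%x (OOM/SIGBUS/SIGSEGV paths)\n", fault);
}

/* Shape of the tail of __do_page_fault() after this patch. */
static void fault_tail(unsigned int fault)
{
	/*
	 * Retry requested while a fatal signal is pending: return and let
	 * the signal be delivered.  No mmap_sem release is needed here
	 * because __lock_page_or_retry() already dropped it.
	 */
	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending())
		return;

	/* Genuine fault errors, including OOM for killed tasks, still get here. */
	if (fault & VM_FAULT_ERROR) {
		mm_fault_error(fault);
		return;
	}

	/* Successful fault (or a retry without a fatal signal) continues. */
}

int main(void)
{
	fault_tail(VM_FAULT_ERROR);
	return 0;
}

The point, per the commit message, is that a killed task taking a user-mode fault no longer short-circuits past mm_fault_error(); only the retry-with-fatal-signal case returns early, which is safe because __lock_page_or_retry() has already released mmap_sem.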