Commit 542ba75d authored by Rik van Riel, committed by Zheng Zengkai

mm,hwpoison: unmap poisoned page before invalidation

stable inclusion
from stable-v5.10.110
commit bc2f58b8e47cc01cb75e13e29930e4e547d6bc5c
bugzilla: https://gitee.com/openeuler/kernel/issues/I574AL

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=bc2f58b8e47cc01cb75e13e29930e4e547d6bc5c

--------------------------------

commit 3149c79f upstream.

In some cases it appears the invalidation of a hwpoisoned page fails
because the page is still mapped in another process.  This can cause a
program to be continuously restarted and die when it page faults on the
page that was not invalidated.  Avoid that problem by unmapping the
hwpoisoned page when we find it.
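
The hunk below implements that with an unmap-before-invalidate step; a
simplified sketch of that part of the change (condensed from the diff,
not the verbatim patch):

	/*
	 * If the hwpoisoned page cache page is still mapped anywhere,
	 * unmap that single file offset in every address space first,
	 * so the invalidation can no longer fail just because another
	 * process still maps the page.
	 */
	if (page_mapped(page))
		unmap_mapping_pages(page_mapping(page),
				    page->index, 1, false);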

Another issue is that sometimes we end up oopsing in finish_fault, if
the code tries to do something with the now-NULL vmf->page.  I did not
hit this error when submitting the previous patch because there are
several opportunities for alloc_set_pte to bail out before accessing
vmf->page, and that apparently happened on those systems, and most of
the time on other systems, too.

However, across several million systems that error does occur a handful
of times a day.  It can be avoided by returning VM_FAULT_NOPAGE which
will cause do_read_fault to return before calling finish_fault.
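
For context, a paraphrased sketch of the caller's logic in do_read_fault()
(not part of this patch; the exact code in mm/memory.c may differ slightly)
shows why VM_FAULT_NOPAGE is enough to keep the NULL vmf->page from being
dereferenced:

	ret = __do_fault(vmf);
	/*
	 * VM_FAULT_NOPAGE is in the early-return mask, so a poisoned
	 * page that was invalidated never reaches finish_fault().
	 */
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	ret |= finish_fault(vmf);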

Link: https://lkml.kernel.org/r/20220325161428.5068d97e@imladris.surriel.com
Fixes: e53ac737 ("mm: invalidate hwpoison page cache page in fault path")
Signed-off-by: Rik van Riel <riel@surriel.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yu Liao <liaoyu15@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 14832a51
@@ -3726,14 +3726,18 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 		return ret;
 
 	if (unlikely(PageHWPoison(vmf->page))) {
+		struct page *page = vmf->page;
 		vm_fault_t poisonret = VM_FAULT_HWPOISON;
 		if (ret & VM_FAULT_LOCKED) {
+			if (page_mapped(page))
+				unmap_mapping_pages(page_mapping(page),
+						    page->index, 1, false);
 			/* Retry if a clean page was removed from the cache. */
-			if (invalidate_inode_page(vmf->page))
-				poisonret = 0;
-			unlock_page(vmf->page);
+			if (invalidate_inode_page(page))
+				poisonret = VM_FAULT_NOPAGE;
+			unlock_page(page);
 		}
-		put_page(vmf->page);
+		put_page(page);
 		vmf->page = NULL;
 		return poisonret;
 	}