Commit 7a4765b9 authored by Kirill A. Shutemov, committed by Yang Yingliang

khugepaged: drain all LRU caches before scanning pages

mainline inclusion
from mainline-v5.8-rc1
commit a980df33
category: bugfix
bugzilla: 36242
CVE: NA

-------------------------------------------------

Having a page in the LRU add cache offsets the page refcount and gives a
false negative on PageLRU().  This reduces the collapse success rate.

Drain all LRU add caches before scanning.  This happens relatively rarely
and should not disturb the system too much.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Link: http://lkml.kernel.org/r/20200416160026.16538-4-kirill.shutemov@linux.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent c9ab6488
@@ -1832,6 +1832,8 @@ static void khugepaged_do_scan(void)
 	barrier(); /* write khugepaged_pages_to_scan to local stack */
 
+	lru_add_drain_all();
+
 	while (progress < pages) {
 		if (!khugepaged_prealloc_page(&hpage, &wait))
 			break;