From 94cd783996b7fce871e7c42eb8adc4f7517fa054 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov"
Date: Thu, 29 Oct 2020 21:04:31 +0800
Subject: [PATCH] khugepaged: drain LRU add pagevec after swapin

mainline inclusion
from mainline-v5.8-rc1
commit ae2c5d8042426b69c5f4a74296d1a20bb769a8ad
category: bugfix
bugzilla: 36222
CVE: NA

-------------------------------------------------

collapse_huge_page() tries to swap in pages that are part of the PMD
range. A just swapped-in page goes through the LRU add cache, which
takes an extra reference on the page. That extra reference can make the
collapse fail: the following __collapse_huge_page_isolate() checks the
refcount and aborts the collapse when it sees an unexpected value.

The fix is to drain the local LRU add cache in
__collapse_huge_page_swapin() if we successfully swapped in any pages.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Andrew Morton
Tested-by: Zi Yan
Reviewed-by: William Kucharski
Reviewed-by: Zi Yan
Acked-by: Yang Shi
Cc: Andrea Arcangeli
Cc: John Hubbard
Cc: Mike Kravetz
Cc: Ralph Campbell
Link: http://lkml.kernel.org/r/20200416160026.16538-5-kirill.shutemov@linux.intel.com
Signed-off-by: Linus Torvalds
Signed-off-by: Liu Shixin
Reviewed-by: Kefeng Wang
Signed-off-by: Yang Yingliang
---
 mm/khugepaged.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a31028773e13..669404342fbe 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -937,6 +937,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	}
 	vmf.pte--;
 	pte_unmap(vmf.pte);
+
+	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
+	if (swapped_in)
+		lru_add_drain();
+
 	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
 	return true;
 }
-- 
GitLab
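
The failure mode the commit message describes can be illustrated in a few lines of plain C. This is a hypothetical userspace model, not kernel code: `struct page`, `swap_in()`, `drain()`, and `isolate()` below are simplified stand-ins for the real per-CPU LRU add pagevec, swapin path, and the refcount check in `__collapse_huge_page_isolate()`.

```c
/* Hypothetical userspace model (not kernel code) of how a local
 * "LRU add" cache's extra pin makes an isolation refcount check
 * fail until the cache is drained. All names are illustrative. */
#include <assert.h>
#include <stddef.h>

struct page { int refcount; };

#define CACHE_SIZE 4

/* Per-"CPU" cache of recently added pages; each slot pins its page. */
struct lru_add_cache {
	struct page *slots[CACHE_SIZE];
	int nr;
};

/* Swapping a page in places it on the local cache, taking a reference. */
static void swap_in(struct lru_add_cache *cache, struct page *p)
{
	p->refcount = 1;              /* reference held by the page table */
	if (cache->nr < CACHE_SIZE) {
		cache->slots[cache->nr++] = p;
		p->refcount++;        /* extra pin held by the cache */
	}
}

/* Draining moves pages to the (modelled-away) LRU list, dropping the pin. */
static void drain(struct lru_add_cache *cache)
{
	for (int i = 0; i < cache->nr; i++)
		cache->slots[i]->refcount--;
	cache->nr = 0;
}

/* Isolation succeeds only when no one else holds a reference. */
static int isolate(struct page *p)
{
	return p->refcount == 1;
}
```

In this model, calling `isolate()` right after `swap_in()` fails because of the cache's extra reference, and succeeds once `drain()` has run, mirroring why `lru_add_drain()` is called before the refcount check in the patch.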