1. 01 Jul 2021 (5 commits)
    • mm/thp: simplify copying of huge zero page pmd when fork · 5fc7a5f6
      Authored by Peter Xu
      Patch series "mm/uffd: Misc fix for uffd-wp and one more test".
      
      This series tries to fix some corner-case bugs for uffd-wp on either THP
      or fork().  It then introduces a new test using pagemap/pageout.
      
      Patch layout:
      
      Patch 1:    cleanup for THP; it'll slightly simplify the follow-up patches
      Patch 2-4:  misc fixes for uffd-wp here and there; please refer to each patch
      Patch 5:    add pagemap support for uffd-wp
      Patch 6:    add pagemap/pageout test for uffd-wp
      
      The last test introduced can also verify some of the fixes in the
      previous patches, as the test will fail without them.  However, it's not
      easy to verify all the changes in patches 2-4, but hopefully they can
      still be properly reviewed.
      
      Note that, considering the ongoing uffd-wp shmem & hugetlbfs work, patch
      5 will be incomplete as it's missing e.g.  the hugetlbfs part and the
      special swap pte detection.  However, that's not needed in this series,
      and since that series is still under review, this series does not depend
      on it (the last test only runs with anonymous memory, not file-backed).
      So this series can be merged even before that one.
      
      This patch (of 6):
      
      The huge zero page is handled via a special path in copy_huge_pmd(), but
      it should share most of the code with a normal THP page.  Share more
      code with it by removing the special path.  The only leftover so far is
      the huge zero page refcounting (mm_get_huge_zero_page()), because that's
      done separately with a global counter.

      This prepares for a future patch that modifies the huge pmd to be
      installed, so that we don't need to duplicate the logic explicitly for
      the huge zero page case too.
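
      A minimal sketch of the shared tail of copy_huge_pmd() after this
      change (abridged from the patch; error handling and uffd-wp details
      omitted), showing the huge zero page joining the common path with only
      its global refcount taken separately:

          if (is_huge_zero_pmd(pmd)) {
                  /*
                   * mm_get_huge_zero_page() never allocates here: a zero
                   * page to copy already exists, so it only takes a
                   * reference on the global huge zero page.
                   */
                  mm_get_huge_zero_page(dst_mm);
                  goto out_zero_page;
          }
          /* ... normal THP page handling ... */
      out_zero_page:
          mm_inc_nr_ptes(dst_mm);
          pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
          pmd = pmd_mkold(pmd_wrprotect(pmd));
          set_pmd_at(dst_mm, addr, dst_pmd, pmd);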
      
      Link: https://lkml.kernel.org/r/20210428225030.9708-1-peterx@redhat.com
      Link: https://lkml.kernel.org/r/20210428225030.9708-2-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Axel Rasmussen <axelrasmussen@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Oliver Upton <oupton@google.com>
      Cc: Shaohua Li <shli@fb.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Wang Qing <wangqing@vivo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/huge_memory.c: don't discard hugepage if other processes are mapping it · babbbdd0
      Authored by Miaohe Lin
      If other processes are mapping any other subpages of the hugepage, i.e.
      in the pte-mapped THP case, page_mapcount() will incorrectly return 1.
      We would then discard the page while other processes are still mapping
      it.  Fix it by using total_mapcount(), which can tell whether other
      processes are still mapping it.
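
      The fix is essentially a one-line change in madvise_free_huge_pmd(); a
      hedged sketch of the new check (context abridged):

          /*
           * total_mapcount() covers pmd and pte mappings of every subpage,
           * whereas page_mapcount() only covered this page's own mappings,
           * so pte-mapped THPs shared with other processes are no longer
           * discarded.
           */
          if (total_mapcount(page) != 1)
                  goto out;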
      
      Link: https://lkml.kernel.org/r/20210511134857.1581273-6-linmiaohe@huawei.com
      Fixes: b8d3c4c3 ("mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called")
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd · 9132a468
      Authored by Miaohe Lin
      Commit aa88b68c ("thp: keep huge zero page pinned until tlb flush")
      introduced tlb_remove_page() for the huge zero page to keep it pinned
      until the flush is complete and to prevent the page from being split
      under us.  But since commit 6fcb52a5 ("thp: reduce usage of huge zero
      page's atomic counter"), the huge zero page is kept pinned until all
      relevant mm_users reach zero, so the tlb_remove_page_size() for the
      huge zero pmd is now unnecessary.
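
      After the change, the huge-zero branch of zap_huge_pmd() looks roughly
      like this (a sketch; surrounding locking and the other branches elided):

          } else if (is_huge_zero_pmd(orig_pmd)) {
                  zap_deposited_table(tlb->mm, pmd);
                  spin_unlock(ptl);
                  /*
                   * No tlb_remove_page_size() here: the huge zero page is
                   * pinned by mm_users now, not by the TLB flush batch.
                   */
          }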
      
      Link: https://lkml.kernel.org/r/20210511134857.1581273-5-linmiaohe@huawei.com
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/huge_memory.c: add missing read-only THP checking in transparent_hugepage_enabled() · e6be37b2
      Authored by Miaohe Lin
      Since commit 99cb0dbd ("mm,thp: add read-only THP support for
      (non-shmem) FS"), read-only THP file mapping is supported.  But it
      forgot to add a corresponding check in transparent_hugepage_enabled().
      To fix it, add a check for read-only THP file mappings, and also
      introduce the helper transhuge_vma_enabled() (sketched below) to check
      whether THP is enabled for a specified vma, reducing duplicated code.
      Rename transparent_hugepage_enabled() to transparent_hugepage_active()
      to make the code easier to follow, as suggested by David Hildenbrand.
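
      The new helper is small; roughly as introduced by the patch:

          static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
                                                   unsigned long vm_flags)
          {
                  /* Explicitly disabled through madvise or prctl. */
                  if ((vm_flags & VM_NOHUGEPAGE) ||
                      test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
                          return false;
                  return true;
          }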
      
      [linmiaohe@huawei.com: define transhuge_vma_enabled next to transhuge_vma_suitable]
        Link: https://lkml.kernel.org/r/20210514093007.4117906-1-linmiaohe@huawei.com
      
      Link: https://lkml.kernel.org/r/20210511134857.1581273-4-linmiaohe@huawei.com
      Fixes: 99cb0dbd ("mm,thp: add read-only THP support for (non-shmem) FS")
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/huge_memory.c: use page->deferred_list · dfe5c51c
      Authored by Miaohe Lin
      Now that we can represent the location of ->deferred_list instead of
      ->mapping + ->index, make use of it to improve readability.
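
      For reference, a sketch of the page_deferred_list() helper this relies
      on, with the dedicated struct page field replacing the old
      ->mapping + ->index punning:

          static inline struct list_head *page_deferred_list(struct page *page)
          {
                  /*
                   * ->lru in the first tail page is occupied by
                   * compound_head; the deferred-split list lives in the
                   * second tail page.
                   */
                  return &page[2].deferred_list;
          }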
      
      Link: https://lkml.kernel.org/r/20210511134857.1581273-3-linmiaohe@huawei.com
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 17 Jun 2021 (4 commits)
    • mm: thp: replace DEBUG_VM BUG with VM_WARN when unmap fails for split · 504e070d
      Authored by Yang Shi
      When debugging the bug reported by Wang Yugui [1], try_to_unmap() may
      fail, but the first VM_BUG_ON_PAGE() just checks page_mapcount(), so it
      may miss the failure when the head page is unmapped but another subpage
      is still mapped.  The second DEBUG_VM BUG(), which checks total
      mapcount, would then catch it.  This can cause some confusion.

      As this is not a fatal issue, consolidate the two DEBUG_VM checks
      into one VM_WARN_ON_ONCE_PAGE(), as sketched below.
      
      [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
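
      A sketch of the consolidated check in unmap_page() (abridged from the
      patch):

          static void unmap_page(struct page *page)
          {
                  enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_SYNC |
                          TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;

                  if (PageAnon(page))
                          ttu_flags |= TTU_SPLIT_FREEZE;

                  try_to_unmap(page, ttu_flags);

                  /* Warn once instead of the old pair of DEBUG_VM BUG()s */
                  VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
          }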
      
      Link: https://lkml.kernel.org/r/d0f0db68-98b8-ebfb-16dc-f29df24cf012@google.com
      Signed-off-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: Zi Yan <ziy@nvidia.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jue Wang <juew@google.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Wang Yugui <wangyugui@e16-tech.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/thp: try_to_unmap() use TTU_SYNC for safe splitting · 732ed558
      Authored by Hugh Dickins
      Stressing huge tmpfs often crashed on unmap_page()'s VM_BUG_ON_PAGE
      (!unmap_success): with dump_page() showing mapcount:1, but then its raw
      struct page output showing _mapcount ffffffff i.e.  mapcount 0.
      
      And even if that particular VM_BUG_ON_PAGE(!unmap_success) is removed,
      it is immediately followed by a VM_BUG_ON_PAGE(compound_mapcount(head)),
      and further down an IS_ENABLED(CONFIG_DEBUG_VM) total_mapcount BUG():
      all indicative of some mapcount difficulty in development here perhaps.
      But the !CONFIG_DEBUG_VM path handles the failures correctly and
      silently.
      
      I believe the problem is that once a racing unmap has cleared pte or
      pmd, try_to_unmap_one() may skip taking the page table lock, and emerge
      from try_to_unmap() before the racing task has reached decrementing
      mapcount.
      
      Instead of abandoning the unsafe VM_BUG_ON_PAGE(), and the ones that
      follow, use PVMW_SYNC in try_to_unmap_one() in this case: adding
      TTU_SYNC to the options, and passing that from unmap_page().
      
      When CONFIG_DEBUG_VM, or for non-debug too? Consensus is to do the same
      for both: the slight overhead added should rarely matter, except perhaps
      if splitting sparsely-populated multiply-mapped shmem.  Once confident
      that bugs are fixed, TTU_SYNC here can be removed, and the race
      tolerated.
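
      A sketch of the mechanism (abridged): unmap_page() adds TTU_SYNC to
      the try_to_unmap() flags, and try_to_unmap_one() maps that onto a
      synchronous page-vma-mapped walk:

          /* try_to_unmap_one(), at walk setup */
          struct page_vma_mapped_walk pvmw = {
                  .page = page,
                  .vma = vma,
                  .address = address,
          };

          /*
           * PVMW_SYNC makes the walk take the page table lock even when
           * pte/pmd is none, waiting out a racing unmapper before the
           * mapcount checks in the split path run.
           */
          if (flags & TTU_SYNC)
                  pvmw.flags = PVMW_SYNC;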
      
      Link: https://lkml.kernel.org/r/c1e95853-8bcd-d8fd-55fa-e7f2488e78f@google.com
      Fixes: fec89c10 ("thp: rewrite freeze_page()/unfreeze_page() with generic rmap walkers")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jue Wang <juew@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Wang Yugui <wangyugui@e16-tech.com>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/thp: make is_huge_zero_pmd() safe and quicker · 3b77e8c8
      Authored by Hugh Dickins
      Most callers of is_huge_zero_pmd() supply a pmd already verified
      present; but a few (notably zap_huge_pmd()) do not - it might be a pmd
      migration entry, in which the pfn is encoded differently from a present
      pmd: which might pass the is_huge_zero_pmd() test (though not on x86,
      since L1TF forced us to protect against that); or perhaps even crash in
      pmd_page() applied to a swap-like entry.
      
      Make it safe by adding pmd_present() check into is_huge_zero_pmd()
      itself; and make it quicker by saving huge_zero_pfn, so that
      is_huge_zero_pmd() will not need to do that pmd_page() lookup each time.
      
      __split_huge_pmd_locked() checked pmd_trans_huge() before: that worked,
      but is unnecessary now that is_huge_zero_pmd() checks present.
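
      The resulting helper, roughly as in the patch (huge_zero_pfn is the
      newly cached global, set when the huge zero page is allocated):

          extern unsigned long huge_zero_pfn;

          static inline bool is_huge_zero_pmd(pmd_t pmd)
          {
                  /*
                   * pmd_present() rejects pmd migration entries, whose pfn
                   * is encoded differently; comparing against the cached
                   * pfn also avoids a pmd_page() lookup on every call.
                   */
                  return READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd) &&
                         pmd_present(pmd);
          }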
      
      Link: https://lkml.kernel.org/r/21ea9ca-a1f5-8b90-5e88-95fb1c49bbfa@google.com
      Fixes: e71769ae ("mm: enable thp migration for shmem thp")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jue Wang <juew@google.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Wang Yugui <wangyugui@e16-tech.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/thp: fix __split_huge_pmd_locked() on shmem migration entry · 99fa8a48
      Authored by Hugh Dickins
      Patch series "mm/thp: fix THP splitting unmap BUGs and related", v10.
      
      Here is v2 batch of long-standing THP bug fixes that I had not got
      around to sending before, but prompted now by Wang Yugui's report
      https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
      
      Wang Yugui has tested a rollup of these fixes applied to 5.10.39, and
      they have done no harm, but have *not* fixed that issue: something more
      is needed and I have no idea of what.
      
      This patch (of 7):
      
      Stressing huge tmpfs page migration racing hole punch often crashed on
      the VM_BUG_ON(!pmd_present) in pmdp_huge_clear_flush(), with DEBUG_VM=y
      kernel; or shortly afterwards, on a bad dereference in
      __split_huge_pmd_locked() when DEBUG_VM=n.  They forgot to allow for pmd
      migration entries in the non-anonymous case.
      
      Full disclosure: those particular experiments were on a kernel with more
      relaxed mmap_lock and i_mmap_rwsem locking, and were not repeated on the
      vanilla kernel: it is conceivable that stricter locking happens to avoid
      those cases, or makes them less likely; but __split_huge_pmd_locked()
      already allowed for pmd migration entries when handling anonymous THPs,
      so this commit brings the shmem and file THP handling into line.
      
      And while there: use old_pmd rather than _pmd, as in the following
      blocks; and make it clearer to the eye that the !vma_is_anonymous()
      block is self-contained, making an early return after accounting for
      unmapping.
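
      A sketch of the reworked !vma_is_anonymous() block in
      __split_huge_pmd_locked() (abridged; dirty/referenced propagation and
      the special-huge case elided):

          if (!vma_is_anonymous(vma)) {
                  old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
                  if (unlikely(is_pmd_migration_entry(old_pmd))) {
                          swp_entry_t entry = pmd_to_swp_entry(old_pmd);

                          page = migration_entry_to_page(entry);
                  } else {
                          page = pmd_page(old_pmd);
                          page_remove_rmap(page, true);
                          put_page(page);
                  }
                  add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
                  return;         /* self-contained: early return */
          }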
      
      Link: https://lkml.kernel.org/r/af88612-1473-2eaa-903-8d1a448b26@google.com
      Link: https://lkml.kernel.org/r/dd221a99-efb3-cd1d-6256-7e646af29314@google.com
      Fixes: e71769ae ("mm: enable thp migration for shmem thp")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Wang Yugui <wangyugui@e16-tech.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jue Wang <juew@google.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 07 May 2021 (1 commit)
  4. 06 May 2021 (9 commits)
  5. 14 Mar 2021 (2 commits)
  6. 27 Feb 2021 (1 commit)
    • mm,thp,shmem: limit shmem THP alloc gfp_mask · 164cc4fe
      Authored by Rik van Riel
      Patch series "mm,thp,shm: limit shmem THP alloc gfp_mask", v6.
      
      The allocation flags of anonymous transparent huge pages can be controlled
      through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
      help keep the system from getting bogged down in the page reclaim and
      compaction code when many THPs are getting allocated simultaneously.

      However, the gfp_mask for shmem THP allocations was not limited by those
      configuration settings, and some workloads ended up with all CPUs stuck on
      the LRU lock in the page reclaim code, trying to allocate dozens of THPs
      simultaneously.

      This patch applies the same configured limitation of THPs to shmem
      hugepage allocations, to prevent that from happening.
      
      This way a THP defrag setting of "never" or "defer+madvise" will result in
      quick allocation failures without direct reclaim when no 2MB free pages
      are available.
      
      With this patch applied, THP allocations for tmpfs will be a little more
      aggressive than today for files mmapped with MADV_HUGEPAGE, and a little
      less aggressive for files that are not mmapped or mapped without that
      flag.
      
      This patch (of 4):
      
      The allocation flags of anonymous transparent huge pages can be controlled
      through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
      help keep the system from getting bogged down in the page reclaim and
      compaction code when many THPs are getting allocated simultaneously.

      However, the gfp_mask for shmem THP allocations was not limited by those
      configuration settings, and some workloads ended up with all CPUs stuck on
      the LRU lock in the page reclaim code, trying to allocate dozens of THPs
      simultaneously.

      This patch applies the same configured limitation of THPs to shmem
      hugepage allocations, to prevent that from happening.
      
      Controlling the gfp_mask of THP allocations through the knobs in sysfs
      allows users to determine the balance between how aggressively the system
      tries to allocate THPs at fault time, and how much the application may end
      up stalling attempting those allocations.
      
      This way a THP defrag setting of "never" or "defer+madvise" will result in
      quick allocation failures without direct reclaim when no 2MB free pages
      are available.
      
      With this patch applied, THP allocations for tmpfs will be a little more
      aggressive than today for files mmapped with MADV_HUGEPAGE, and a little
      less aggressive for files that are not mmapped or mapped without that
      flag.
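
      A sketch of the shmem side of the change (hedged; names per the
      series): the anonymous-THP gfp computation is exposed as
      vma_thp_gfp_mask() and used for the huge allocation in
      shmem_getpage_gfp():

          /* shmem_getpage_gfp(), huge allocation path */
          alloc_huge:
                  /* same defrag-sysfs-derived mask as anonymous THP faults */
                  huge_gfp = vma_thp_gfp_mask(vma);
                  page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);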
      
      Link: https://lkml.kernel.org/r/20201124194925.623931-1-riel@surriel.com
      Link: https://lkml.kernel.org/r/20201124194925.623931-2-riel@surriel.com
      Signed-off-by: Rik van Riel <riel@surriel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xu Yu <xuyu@linux.alibaba.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 25 Feb 2021 (7 commits)
  8. 06 Feb 2021 (1 commit)
  9. 16 Dec 2020 (10 commits)