1. 14 Jul 2021: 3 commits
  2. 03 Nov 2020: 1 commit
  3. 14 Oct 2020: 2 commits
  4. 15 Aug 2020: 1 commit
  5. 13 Aug 2020: 5 commits
  6. 17 Jul 2020: 1 commit
  7. 10 Jun 2020: 3 commits
  8. 04 Jun 2020: 1 commit
  9. 08 Apr 2020: 6 commits
  10. 03 Apr 2020: 4 commits
  11. 18 Feb 2020: 2 commits
  12. 01 Feb 2020: 1 commit
  13. 14 Jan 2020: 1 commit
    •
      mm, thp: tweak reclaim/compaction effort of local-only and all-node allocations · cc638f32
      Committed by Vlastimil Babka
      THP page faults now attempt a __GFP_THISNODE allocation first, which
      should only compact existing free memory, followed by another attempt
      that can allocate from any node using reclaim/compaction effort
      specified by global defrag setting and madvise.
      
      This patch makes the following changes to the scheme:
      
        - Before the patch, the first allocation relies on a check for
          pageblock order and __GFP_IO to prevent excessive reclaim. This,
          however, also affects the second attempt, which is not limited to a
          single node.
      
         Instead of that, reuse the existing check for costly order
         __GFP_NORETRY allocations, and make sure the first THP attempt uses
         __GFP_NORETRY. As a side-effect, all costly order __GFP_NORETRY
         allocations will bail out if compaction needs reclaim, while
         previously they only bailed out when compaction was deferred due to
         previous failures.
      
          This should still be acceptable within the __GFP_NORETRY semantics.
      
       - Before the patch, the second allocation attempt (on all nodes) was
         passing __GFP_NORETRY. This is redundant as the check for pageblock
         order (discussed above) was stronger. It's also contrary to
         madvise(MADV_HUGEPAGE) which means some effort to allocate THP is
         requested.
      
         After this patch, the second attempt doesn't pass __GFP_THISNODE nor
         __GFP_NORETRY.
      
      To sum up, THP page faults now try the following attempts:
      
      1. local node only THP allocation with no reclaim, just compaction.
       2. for madvised VMAs, or always when synchronous compaction is enabled:
          THP allocation from any node, with effort determined by the global
          defrag setting and VMA madvise
      3. fallback to base pages on any node
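
The three-attempt scheme above can be modeled as a small simulation. This is a hypothetical Python sketch of the policy only, not kernel code; the function name `thp_fault_alloc` and its boolean inputs are invented for illustration:

```python
# Toy model of the THP page-fault allocation scheme described above.
# All names and the node-state booleans are hypothetical illustrations.

def thp_fault_alloc(local_huge_available, remote_huge_available,
                    madvised, defrag_always):
    """Return which attempt satisfies the fault."""
    # Attempt 1: local node only (__GFP_THISNODE | __GFP_NORETRY),
    # compacting existing free memory, no reclaim.
    if local_huge_available:
        return "local THP"
    # Attempt 2: any node, but only for madvised VMAs or when the global
    # defrag setting enables synchronous compaction ("always").
    if (madvised or defrag_always) and remote_huge_available:
        return "remote THP"
    # Attempt 3: fall back to base pages on any node.
    return "base page"

print(thp_fault_alloc(True, False, False, False))   # local THP
print(thp_fault_alloc(False, True, True, False))    # remote THP
print(thp_fault_alloc(False, True, False, False))   # base page
```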
      
      Link: http://lkml.kernel.org/r/08a3f4dd-c3ce-0009-86c5-9ee51aba8557@suse.cz
      Fixes: b39d0ee2 ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed")
       Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
       Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: David Rientjes <rientjes@google.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 02 12月, 2019 2 次提交
    •
      mm/mempolicy.c: fix checking unmapped holes for mbind · f18da660
      Committed by Li Xinhai
       mbind() is required to report EFAULT if the range, specified by addr
       and len, contains unmapped holes.  In the current implementation, the
       following rules apply to this check:

        1: Unmapped holes at any part of the specified range should be
           reported as EFAULT if mbind() is called for non-MPOL_DEFAULT cases;

        2: Unmapped holes at any part of the specified range should be ignored
           (do not report EFAULT) if mbind() is called for the MPOL_DEFAULT
           case;

        3: A range lying entirely within an unmapped hole should be reported
           as EFAULT;

       Note that rule 2 does not fulfill the mbind() API definition, but since
       that behavior has existed for a long time (the internal flag
       MPOL_MF_DISCONTIG_OK exists for this purpose), this patch does not plan
       to change it.

       In the current code, applications observe inconsistent behavior for
       rule 1 and rule 2 respectively.  That inconsistency is fixed as
       detailed below.
      
      Cases of rule 1:
      
        - Hole at head side of range. Current code reports EFAULT, no change
          by this patch.
      
          [  vma  ][ hole ][  vma  ]
                      [  range  ]
      
        - Hole at middle of range. Current code reports EFAULT, no change by
         this patch.
      
          [  vma  ][ hole ][ vma ]
             [     range      ]
      
        - Hole at tail side of range. Current code does not report EFAULT;
          this patch fixes it.
      
          [  vma  ][ hole ][ vma ]
             [  range  ]
      
      Cases of rule 2:
      
       - Hole at head side of range. Current code reports EFAULT, this patch
         fixes it.
      
          [  vma  ][ hole ][  vma  ]
                      [  range  ]
      
       - Hole at middle of range. Current code does not report EFAULT, no
         change by this patch.
      
          [  vma  ][ hole ][ vma]
             [     range      ]
      
       - Hole at tail side of range. Current code does not report EFAULT, no
         change by this patch.
      
          [  vma  ][ hole ][ vma]
             [  range  ]
      
      This patch has no changes to rule 3.
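
The rules and cases above can be summarized in a small model. This is a hypothetical Python sketch, not the kernel implementation; `check_range` and the interval representation of vmas are invented for illustration:

```python
# Toy model of mbind()'s EFAULT rules for unmapped holes, reflecting the
# behavior after this patch.  vmas are half-open [start, end) intervals.
EFAULT = -14

def check_range(vmas, start, end, mpol_default=False):
    """Return 0 or EFAULT for mbind(start, end - start, ...)."""
    overlapping = [(max(s, start), min(e, end))
                   for s, e in vmas if s < end and e > start]
    # Rule 3: the whole range lies in an unmapped hole (both cases).
    if not overlapping:
        return EFAULT
    # Rule 2: MPOL_DEFAULT ignores holes covering only part of the range.
    if mpol_default:
        return 0
    # Rule 1: any hole (head, middle, or tail) is reported.
    pos = start
    for s, e in sorted(overlapping):
        if s > pos:
            return EFAULT              # hole at head or middle
        pos = max(pos, e)
    return 0 if pos >= end else EFAULT  # hole at tail

vmas = [(0, 10), (20, 30)]
```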
      
       The unmapped hole checking could also be handled by using .pte_hole()
       instead of .test_walk().  But .pte_hole() is called for holes both
       inside and outside vmas, which costs more, so this patch keeps the
       original design with .test_walk().
      
      Link: http://lkml.kernel.org/r/1573218104-11021-3-git-send-email-lixinhai.lxh@gmail.com
      Fixes: 6f4576e3 ("mempolicy: apply page table walker on queue_pages_range()")
       Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
       Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: linux-man <linux-man@vger.kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    •
      mm/mempolicy.c: check range first in queue_pages_test_walk · a18b3ac2
      Committed by Li Xinhai
       Patch series "mm: Fix checking unmapped holes for mbind", v4.

       This patchset fixes the checking of unmapped holes for mbind().

       The first patch makes sure the vma is correctly tracked in
       .test_walk(), so that each time .test_walk() is called, the
       neighborhood of the two vmas is correct.

       The current problem is that the !vma_migratable() check could return
       immediately without updating the vma tracking.

       The second patch fixes the inconsistent reporting of EFAULT when
       mbind() is called for the MPOL_DEFAULT and non-MPOL_DEFAULT cases, so
       that applications do not need workaround code to handle this special
       behavior.  Currently there are two problems: one is that .test_walk()
       cannot know there is a hole at the tail side of the range, because
       .test_walk() is only called for vmas, not for holes.  The other is
       that mbind_range() checks for a hole at the head side of the range but
       does not consider the MPOL_MF_DISCONTIG_OK flag as .test_walk() does.
      
      This patch (of 2):
      
       Checking the unmapped hole and updating the previous vma must be
       handled first; otherwise the unmapped hole could be calculated from a
       wrong previous vma.

       Several commits are relevant to this error:

        - commit 6f4576e3 ("mempolicy: apply page table walker on
          queue_pages_range()")

          This commit was correct: the VM_PFNMAP check came after updating
          the previous vma.

        - commit 48684a65 ("mm: pagewalk: fix misbehavior of
          walk_page_range for vma(VM_PFNMAP)")

          This commit added a VM_PFNMAP check before updating the previous
          vma, so there were two VM_PFNMAP checks doing the same thing.

        - commit acda0c33 ("mm/mempolicy.c: get rid of duplicated check for
          vma(VM_PFNMAP) in queue_pages_range()")

          This commit tried to fix the duplicated VM_PFNMAP check, but it
          wrongly removed the one that came after updating the vma.
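
The ordering bug can be demonstrated with a toy walker. This is a hypothetical Python sketch; `walk_reports_hole` and the vma tuples are invented, modeling only the previous-vma tracking logic of queue_pages_test_walk():

```python
# Toy model of the .test_walk() ordering bug: if a vma is skipped
# (e.g. VM_PFNMAP) before the previous-vma tracking is updated, a later
# hole check compares against a stale end address.  Names hypothetical.

def walk_reports_hole(vmas, update_prev_first):
    """vmas: list of (start, end, pfnmap); True if a hole is reported."""
    prev_end = None
    hole = False
    for start, end, pfnmap in vmas:
        if prev_end is not None and start > prev_end:
            hole = True            # gap between tracked vma and this one
        if update_prev_first:      # fixed order: track vma, then skip
            prev_end = end
            if pfnmap:
                continue
        else:                      # buggy order: skip before tracking
            if pfnmap:
                continue
            prev_end = end
        # ... real code would queue this vma's pages here ...
    return hole

# Three adjacent vmas, the middle one VM_PFNMAP: there is no real hole,
# but the buggy order reports one because prev_end stays stale.
vmas = [(0, 10, False), (10, 20, True), (20, 30, False)]
```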
      
      Link: http://lkml.kernel.org/r/1573218104-11021-2-git-send-email-lixinhai.lxh@gmail.com
       Fixes: acda0c33 ("mm/mempolicy.c: get rid of duplicated check for vma(VM_PFNMAP) in queue_pages_range()")
       Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
       Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: linux-man <linux-man@vger.kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 16 Nov 2019: 1 commit
  16. 29 Sep 2019: 3 commits
    •
      mm, page_alloc: allow hugepage fallback to remote nodes when madvised · 76e654cc
      Committed by David Rientjes
       For systems configured to always try hard to allocate transparent
       hugepages (thp defrag setting of "always"), or for memory that has
       been explicitly madvised with MADV_HUGEPAGE, it is often better to
       fall back to remote memory to allocate the hugepage if the local
       allocation fails first.
      
       The point is to allow the initial call to __alloc_pages_node() to
       attempt to defragment local memory to make a hugepage available, if
       possible, rather than immediately falling back to remote memory.
       Local hugepages will always have a better access latency than remote
       (huge)pages, so an attempt to make a hugepage available locally is
       always preferred.
      
       If memory compaction cannot succeed locally, however, it is likely
       better to fall back to remote memory.  This could take one of two
       forms: either allow immediate fallback to remote memory, or do
       per-zone watermark checks.  It would be possible to fall back only
       when per-zone watermarks fail for order-0 memory, since that would
       require local reclaim for all subsequent faults, so a remote huge
       allocation is likely better than thrashing the local zone for large
       workloads.
      
       In this case, it is assumed that, because the system is configured to
       try hard to allocate hugepages (or the vma is explicitly advised to),
       remote allocation is better once local allocation and memory
       compaction have both failed.
       Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    •
      Revert "Revert "Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"" · 19deb769
      Committed by David Rientjes
      This reverts commit 92717d42.
      
      Since commit a8282608 ("Revert "mm, thp: restore node-local hugepage
      allocations"") is reverted in this series, it is better to restore the
      previous 5.2 behavior between the thp allocation and the page allocator
      rather than to attempt any consolidation or cleanup for a policy that is
      now reverted.  It's less risky during an rc cycle and subsequent patches
      in this series further modify the same policy that the pre-5.3 behavior
      implements.
      
      Consolidation and cleanup can be done subsequent to a sane default page
      allocation strategy, so this patch reverts a cleanup done on a strategy
      that is now reverted and thus is the least risky option.
       Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    •
      Revert "Revert "mm, thp: restore node-local hugepage allocations"" · ac79f78d
      Committed by David Rientjes
      This reverts commit a8282608.
      
      The commit references the original intended semantic for MADV_HUGEPAGE
      which has subsequently taken on three unique purposes:
      
       - enables or disables thp for a range of memory depending on the system's
         config (is thp "enabled" set to "always" or "madvise"),
      
       - determines the synchronous compaction behavior for thp allocations at
         fault (is thp "defrag" set to "always", "defer+madvise", or "madvise"),
         and
      
       - reverts a previous MADV_NOHUGEPAGE (there is no madvise mode to only
         clear previous hugepage advice).
      
      These are the three purposes that currently exist in 5.2 and over the
      past several years that userspace has been written around.  Adding a
      NUMA locality preference adds a fourth dimension to an already conflated
      advice mode.
      
      Based on the semantic that MADV_HUGEPAGE has provided over the past
      several years, there exist workloads that use the tunable based on these
      principles: specifically that the allocation should attempt to
      defragment a local node before falling back.  It is agreed that remote
      hugepages typically (but not always) have a better access latency than
      remote native pages, although on Naples this is at parity for
      intersocket.
      
      The revert commit that this patch reverts allows hugepage allocation to
      immediately allocate remotely when local memory is fragmented.  This is
      contrary to the semantic of MADV_HUGEPAGE over the past several years:
      that is, memory compaction should be attempted locally before falling
      back.
      
      The performance degradation of remote hugepages over local hugepages on
      Rome, for example, is 53.5% increased access latency.  For this reason,
      the goal is to revert back to the 5.2 and previous behavior that would
      attempt local defragmentation before falling back.  With the patch that
      is reverted by this patch, we see performance degradations at the tail
      because the allocator happily allocates the remote hugepage rather than
      even attempting to make a local hugepage available.
      
      zone_reclaim_mode is not a solution to this problem since it does not
      only impact hugepage allocations but rather changes the memory
      allocation strategy for *all* page allocations.
       Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 26 Sep 2019: 1 commit
  18. 25 Sep 2019: 1 commit
  19. 07 Sep 2019: 1 commit