1. 27 Feb 2021, 1 commit
  2. 16 Dec 2020, 2 commits
  3. 17 Oct 2020, 5 commits
  4. 14 Oct 2020, 1 commit
  5. 15 Aug 2020, 2 commits
  6. 13 Aug 2020, 3 commits
    • mm/mempolicy: use a standard migration target allocation callback · a0976311
      Authored by Joonsoo Kim
      There is a well-defined migration target allocation callback.  Use it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Roman Gushchin <guro@fb.com>
      Link: http://lkml.kernel.org/r/1594622517-20681-7-git-send-email-iamjoonsoo.kim@lge.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/migrate: introduce a standard migration target allocation function · 19fc7bed
      Authored by Joonsoo Kim
      There are some similar functions for migration target allocation.  Since
      there is no fundamental difference, it's better to keep just one rather
      than keeping all the variants.  This patch implements a base migration target
      allocation function.  In the following patches, the variants will be converted
      to use this function.
      
      Changes should be mechanical, but, unfortunately, there are some
      differences.  First, some callers' nodemask is assigned to NULL since a NULL
      nodemask will be considered as all available nodes, that is,
      &node_states[N_MEMORY].  Second, for hugetlb page allocation, gfp_mask is
      redefined as the regular hugetlb allocation gfp_mask plus __GFP_THISNODE if
      the user-provided gfp_mask has it.  This is because a future caller of this
      function requires this node constraint to be set.  Lastly, if the provided nodeid
      is NUMA_NO_NODE, nodeid is set to the node where the migration source
      lives.  This helps to remove simple wrappers for setting up the nodeid.
      
      Note that the PageHighmem() call in the previous function is changed to
      open-coded "is_highmem_idx()" since it provides more readability.
      
      [akpm@linux-foundation.org: tweak patch title, per Vlastimil]
      [akpm@linux-foundation.org: fix typo in comment]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Roman Gushchin <guro@fb.com>
      Link: http://lkml.kernel.org/r/1594622517-20681-6-git-send-email-iamjoonsoo.kim@lge.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
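      A minimal sketch of what such a unified allocation callback looks like, based
      on the description in this entry.  The struct and function names follow the
      series' migration_target_control/alloc_migration_target naming, but the body
      is simplified and omits the hugetlb and THP special cases:

          struct migration_target_control {
                  int nid;                /* preferred node id */
                  nodemask_t *nmask;      /* NULL means all nodes, &node_states[N_MEMORY] */
                  gfp_t gfp_mask;
          };

          static struct page *alloc_migration_target(struct page *page,
                                                     unsigned long private)
          {
                  struct migration_target_control *mtc = (void *)private;
                  gfp_t gfp_mask = mtc->gfp_mask;
                  int nid = mtc->nid;

                  /* NUMA_NO_NODE means "allocate near the migration source" */
                  if (nid == NUMA_NO_NODE)
                          nid = page_to_nid(page);

                  /* a highmem source may be placed in a highmem target as well */
                  if (is_highmem_idx(page_zonenum(page)))
                          gfp_mask |= __GFP_HIGHMEM;

                  return __alloc_pages_nodemask(gfp_mask, 0, nid, mtc->nmask);
          }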
    • mm: proactive compaction · facdaa91
      Authored by Nitin Gupta
      For some applications, we need to allocate almost all memory as hugepages.
      However, on a running system, higher-order allocations can fail if the
      memory is fragmented.  The Linux kernel currently does on-demand compaction as
      we request more hugepages, but this style of compaction incurs very high
      latency.  Experiments with one-time full memory compaction (followed by
      hugepage allocations) show that the kernel is able to restore a highly
      fragmented memory state to a fairly compacted memory state within <1 sec
      for a 32G system.  Such data suggests that more proactive compaction can
      help us allocate a large fraction of memory as hugepages while keeping
      allocation latencies low.
      
      For a more proactive compaction, the approach taken here is to define a
      new sysctl called 'vm.compaction_proactiveness' which dictates bounds for
      external fragmentation which kcompactd tries to maintain.
      
      The tunable takes a value in range [0, 100], with a default of 20.
      
      Note that a previous version of this patch [1] was found to introduce too
      many tunables (per-order extfrag{low, high}), but this one reduces them to
      just one sysctl.  Also, the new tunable is an opaque value instead of
      asking for specific bounds of "external fragmentation", which would have
      been difficult to estimate.  The internal interpretation of this opaque
      value allows for future fine-tuning.
      
      Currently, we use a simple translation from this tunable to [low, high]
      "fragmentation score" thresholds (low=100-proactiveness, high=low+10%).
      The score for a node is defined as weighted mean of per-zone external
      fragmentation.  A zone's present_pages determines its weight.
      
      To periodically check per-node scores, we reuse the per-node kcompactd threads,
      which are woken up every 500 milliseconds for this check.  If a node's
      score exceeds its high threshold (as derived from the user-provided
      proactiveness value), proactive compaction is started until its score
      reaches its low threshold value.  By default, proactiveness is set to 20,
      which implies threshold values of low=80 and high=90.
      
      This patch is largely based on ideas from Michal Hocko [2].  See also the
      LWN article [3].
      
      Performance data
      ================
      
      System: x86_64, 1T RAM, 80 CPU threads.
      Kernel: 5.6.0-rc3 + this patch
      
      echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
      echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
      
      Before starting the driver, the system was fragmented from a userspace
      program that allocates all memory and then for each 2M aligned section,
      frees 3/4 of base pages using munmap.  The workload is mainly anonymous
      userspace pages, which are easy to move around.  I intentionally avoided
      unmovable pages in this test to see how much latency we incur when
      hugepage allocations hit direct compaction.
      
      1. Kernel hugepage allocation latencies
      
      With the system in such a fragmented state, a kernel driver then allocates
      as many hugepages as possible and measures allocation latency:
      
      (all latency values are in microseconds)
      
      - With vanilla 5.6.0-rc3
      
        percentile latency
        –––––––––– –––––––
      	   5    7894
      	  10    9496
      	  25   12561
      	  30   15295
      	  40   18244
      	  50   21229
      	  60   27556
      	  75   30147
      	  80   31047
      	  90   32859
      	  95   33799
      
      Total 2M hugepages allocated = 383859 (749G worth of hugepages out of 762G
      total free => 98% of free memory could be allocated as hugepages)
      
      - With 5.6.0-rc3 + this patch, with proactiveness=20
      
      sysctl -w vm.compaction_proactiveness=20
      
        percentile latency
        –––––––––– –––––––
      	   5       2
      	  10       2
      	  25       3
      	  30       3
      	  40       3
      	  50       4
      	  60       4
      	  75       4
      	  80       4
      	  90       5
      	  95     429
      
      Total 2M hugepages allocated = 384105 (750G worth of hugepages out of 762G
      total free => 98% of free memory could be allocated as hugepages)
      
      2. JAVA heap allocation
      
      In this test, we first fragment memory using the same method as for (1).
      
      Then, we start a Java process with a heap size set to 700G and request the
      heap to be allocated with THP hugepages.  We also set THP to madvise to
      allow hugepage backing of this heap.
      
      /usr/bin/time
       java -Xms700G -Xmx700G -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch
      
      The above command allocates 700G of Java heap using hugepages.
      
      - With vanilla 5.6.0-rc3
      
      17.39user 1666.48system 27:37.89elapsed
      
      - With 5.6.0-rc3 + this patch, with proactiveness=20
      
      8.35user 194.58system 3:19.62elapsed
      
      Elapsed time remains around 3:15, as proactiveness is further increased.
      
      Note that proactive compaction happens throughout the runtime of these
      workloads.  The situation of one-time compaction, sufficient to supply
      hugepages for following allocation stream, can probably happen for more
      extreme proactiveness values, like 80 or 90.
      
      In the above Java workload, proactiveness is set to 20.  The test starts
      with a node's score of 80 or higher, depending on the delay between the
      fragmentation step and starting the benchmark, which gives more-or-less
      time for the initial round of compaction.  As the benchmark consumes
      hugepages, the node's score quickly rises above the high threshold (90) and
      proactive compaction starts again, which brings down the score to the low
      threshold level (80).  Repeat.
      
      bpftrace also confirms proactive compaction running 20+ times during the
      runtime of this Java benchmark.  A kcompactd thread consumes 100% of one
      CPU while it tries to bring its node's score within the thresholds.
      
      Backoff behavior
      ================
      
      The above workloads produce a memory state which is easy to compact.  However,
      if memory is filled with unmovable pages, proactive compaction should
      essentially back off.  To test this aspect:
      
      - Created a kernel driver that allocates almost all memory as hugepages
        followed by freeing first 3/4 of each hugepage.
      - Set proactiveness=40
      - Note that proactive_compact_node() is deferred maximum number of times
        with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
        (=> ~30 seconds between retries).
      
      [1] https://patchwork.kernel.org/patch/11098289/
      [2] https://lore.kernel.org/linux-mm/20161230131412.GI13301@dhcp22.suse.cz/
      [3] https://lwn.net/Articles/817905/
      Signed-off-by: Nitin Gupta <nigupta@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Oleksandr Natalenko <oleksandr@redhat.com>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
      Reviewed-by: Oleksandr Natalenko <oleksandr@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nitin Gupta <ngupta@nitingupta.dev>
      Cc: Oleksandr Natalenko <oleksandr@redhat.com>
      Link: http://lkml.kernel.org/r/20200616204527.19185-1-nigupta@nvidia.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
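      The score and threshold computation described in this entry can be sketched
      roughly as follows.  The helper names, the choice of hugepage order, and the
      use of populated_zone()/extfrag_for_order() are illustrative assumptions for
      this sketch, not the verbatim mainline code:

          /* thresholds derived from vm.compaction_proactiveness (default 20) */
          static unsigned int frag_score_low(void)
          {
                  return 100U - sysctl_compaction_proactiveness;        /* 80 */
          }

          static unsigned int frag_score_high(void)
          {
                  return frag_score_low() + 10;                         /* 90 */
          }

          /* weighted mean of per-zone external fragmentation, weight = present_pages */
          static unsigned int fragmentation_score_node(pg_data_t *pgdat)
          {
                  unsigned int score = 0;
                  int zoneid;

                  for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
                          struct zone *zone = &pgdat->node_zones[zoneid];

                          if (!populated_zone(zone))
                                  continue;

                          score += extfrag_for_order(zone, HPAGE_PMD_ORDER) *
                                   zone->present_pages / pgdat->node_present_pages;
                  }
                  return score;
          }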
  7. 10 Jun 2020, 2 commits
  8. 05 Jun 2020, 1 commit
  9. 04 Jun 2020, 3 commits
    • mm/vmscan.c: change prototype for shrink_page_list · 730ec8c0
      Authored by Maninder Singh
      commit 3c710c1a ("mm, vmscan: extract shrink_page_list reclaim counters
      into a struct") changed the data type used by the function, so change the
      return type of the function and its caller accordingly.
      Signed-off-by: Vaneet Narang <v.narang@samsung.com>
      Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Amit Sahrawat <a.sahrawat@samsung.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: http://lkml.kernel.org/r/1588168259-25604-1-git-send-email-maninder1.s@samsung.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc: integrate classzone_idx and high_zoneidx · 97a225e6
      Authored by Joonsoo Kim
      classzone_idx is just a different name for high_zoneidx now.  So, integrate
      them and add some comments to struct alloc_context in order to reduce
      future confusion about the meaning of this variable.

      The accessor, ac_classzone_idx(), is also removed since it isn't needed
      after the integration.

      In addition to the integration, this patch also renames high_zoneidx to
      highest_zoneidx since that name represents its meaning more precisely.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/1587095923-7515-3-git-send-email-iamjoonsoo.kim@lge.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc: use ac->high_zoneidx for classzone_idx · 3334a45e
      Authored by Joonsoo Kim
      Patch series "integrate classzone_idx and high_zoneidx", v5.
      
      This patchset is a follow-up to the problem reported and discussed two years
      ago [1, 2].  The problem it solves is related to the classzone_idx on NUMA
      systems.  It causes a problem when the lowmem reserve protection exists for
      some zones on a node that do not exist on other nodes.

      This problem was reported two years ago, and, at that time, the solution
      got general agreement [2].  But it was not upstreamed.
      
      [1]: http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop
      [2]: http://lkml.kernel.org/r/1525408246-14768-1-git-send-email-iamjoonsoo.kim@lge.com
      
      This patch (of 2):
      
      Currently, we use classzone_idx to calculate the lowmem reserve protection
      for an allocation request.  This classzone_idx causes a problem on NUMA
      systems when the lowmem reserve protection exists for some zones on a node
      that do not exist on other nodes.
      
      Before further explanation, I should first clarify how to compute the
      classzone_idx and the high_zoneidx.
      
      - ac->high_zoneidx is computed via the arcane gfp_zone(gfp_mask) and
        represents the index of the highest zone the allocation can use
      
      - classzone_idx was supposed to be the index of the highest zone on the
        local node that the allocation can use, that is actually available in
        the system
      
      Think about the following example.  Node 0 has 4 populated zones,
      DMA/DMA32/NORMAL/MOVABLE.  Node 1 has 1 populated zone, NORMAL.  Some
      zones, such as MOVABLE, don't exist on node 1, and this makes the following
      difference.

      Assume that there is an allocation request whose gfp_zone(gfp_mask) is the
      MOVABLE zone.  Then, its high_zoneidx is 3.  If this allocation is
      initiated on node 0, its classzone_idx is 3 since the actually
      available/usable zone on the local node (node 0) is MOVABLE.  If this
      allocation is initiated on node 1, its classzone_idx is 2 since the actually
      available/usable zone on the local node (node 1) is NORMAL.
      
      You can see that the classzone_idx values of the allocation requests differ
      according to their starting node, even though their high_zoneidx is the same.
      
      Think more about these two allocation requests.  If they are processed
      locally, there is no problem.  However, if the allocation initiated on node 1
      is processed remotely, in this example at the NORMAL zone on node 0 due to
      memory shortage, a problem occurs.  Their different classzone_idx values
      lead to different lowmem reserves and then different min watermarks.  See
      the following example.
      
      root@ubuntu:/sys/devices/system/memory# cat /proc/zoneinfo
      Node 0, zone      DMA
        per-node stats
      ...
        pages free     3965
              min      5
              low      8
              high     11
              spanned  4095
              present  3998
              managed  3977
              protection: (0, 2961, 4928, 5440)
      ...
      Node 0, zone    DMA32
        pages free     757955
              min      1129
              low      1887
              high     2645
              spanned  1044480
              present  782303
              managed  758116
              protection: (0, 0, 1967, 2479)
      ...
      Node 0, zone   Normal
        pages free     459806
              min      750
              low      1253
              high     1756
              spanned  524288
              present  524288
              managed  503620
              protection: (0, 0, 0, 4096)
      ...
      Node 0, zone  Movable
        pages free     130759
              min      195
              low      326
              high     457
              spanned  1966079
              present  131072
              managed  131072
              protection: (0, 0, 0, 0)
      ...
      Node 1, zone      DMA
        pages free     0
              min      0
              low      0
              high     0
              spanned  0
              present  0
              managed  0
              protection: (0, 0, 1006, 1006)
      Node 1, zone    DMA32
        pages free     0
              min      0
              low      0
              high     0
              spanned  0
              present  0
              managed  0
              protection: (0, 0, 1006, 1006)
      Node 1, zone   Normal
        per-node stats
      ...
        pages free     233277
              min      383
              low      640
              high     897
              spanned  262144
              present  262144
              managed  257744
              protection: (0, 0, 0, 0)
      ...
      Node 1, zone  Movable
        pages free     0
              min      0
              low      0
              high     0
              spanned  262144
              present  0
              managed  0
              protection: (0, 0, 0, 0)
      
      - static min watermark for the NORMAL zone on node 0 is 750.
      
      - lowmem reserve for the request with classzone idx 3 at the NORMAL on
        node 0 is 4096.
      
      - lowmem reserve for the request with classzone idx 2 at the NORMAL on
        node 0 is 0.
      
      So, overall min watermark is:
      allocation initiated on node 0 (classzone_idx 3): 750 + 4096 = 4846
      allocation initiated on node 1 (classzone_idx 2): 750 + 0 = 750
      
      Allocation initiated on node 1 will take some precedence over allocation
      initiated on node 0 because the min watermark of the former allocation is
      lower than the other.  So, an allocation initiated on node 1 could succeed on
      node 0 when an allocation initiated on node 0 could not, and this could
      cause too many numa_miss allocations.  Performance could then be degraded.
      
      Recently, there was a regression report about this problem on the CMA patches
      since CMA memory is placed in ZONE_MOVABLE by those patches.  I checked
      that the problem disappears with this fix that uses high_zoneidx for
      classzone_idx.
      
      http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop
      
      Using high_zoneidx for classzone_idx is a more consistent approach than the
      previous one because the system's memory layout doesn't affect it at all.
      With this patch, both classzone_idx values in the above example will be 3, so
      they will have the same min watermark.
      
      allocation initiated on node 0: 750 + 4096 = 4846
      allocation initiated on node 1: 750 + 4096 = 4846
      
      One could wonder if there is a side effect whereby an allocation initiated on
      node 1 would use a higher bar when the allocation is handled locally, since its
      classzone_idx could be higher than before.  This will not happen because a
      zone without managed pages doesn't contribute to lowmem_reserve at all.
      Reported-by: Ye Xiaolong <xiaolong.ye@intel.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Ye Xiaolong <xiaolong.ye@intel.com>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Link: http://lkml.kernel.org/r/1587095923-7515-1-git-send-email-iamjoonsoo.kim@lge.com
      Link: http://lkml.kernel.org/r/1587095923-7515-2-git-send-email-iamjoonsoo.kim@lge.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
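      A simplified sketch of why the zone index matters for the watermark check in
      this entry.  It condenses the page allocator's zone_watermark_ok() logic and
      is not the verbatim implementation:

          static bool watermark_ok(struct zone *z, unsigned long mark,
                                   int highest_zoneidx)
          {
                  unsigned long free = zone_page_state(z, NR_FREE_PAGES);

                  /*
                   * The lowmem reserve protecting this zone depends on the highest
                   * zone index the request may use; with classzone_idx replaced by
                   * ac->highest_zoneidx, that index no longer depends on which
                   * node the allocation started from.
                   */
                  return free > mark + z->lowmem_reserve[highest_zoneidx];
          }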
  10. 03 Jun 2020, 2 commits
  11. 08 Apr 2020, 1 commit
  12. 03 Apr 2020, 4 commits
    • mm,compaction,cma: add alloc_contig flag to compact_control · b06eda09
      Authored by Rik van Riel
      Patch series "fix THP migration for CMA allocations", v2.
      
      Transparent huge pages are allocated with __GFP_MOVABLE, and can end up in
      CMA memory blocks.  Transparent huge pages also have most of the
      infrastructure in place to allow migration.
      
      However, a few pieces were missing, causing THP migration to fail when
      attempting to use CMA to allocate 1GB hugepages.
      
      With these patches in place, THP migration from CMA blocks seems to work,
      both for anonymous THPs and for tmpfs/shmem THPs.
      
      This patch (of 2):
      
      Add information to struct compact_control to indicate that the allocator
      would really like to clear out this specific part of memory, as used by,
      for example, CMA.
      Signed-off-by: Rik van Riel <riel@surriel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Link: http://lkml.kernel.org/r/20200227213238.1298752-1-riel@surriel.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
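      A minimal sketch of the flag described above, assuming it is a plain boolean
      on compact_control set by the contiguous-range allocator; the surrounding
      fields are elided and the initializer is illustrative:

          /* new field added to the existing struct (other members elided) */
          struct compact_control {
                  /* ... existing scanner state, counters, order/gfp fields ... */
                  bool alloc_contig;      /* this is an alloc_contig_range() request */
          };

          /* e.g. in alloc_contig_range(), before compacting the requested range */
          struct compact_control cc = {
                  .alloc_contig = true,   /* really clear out this specific range */
          };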
    • mm, pagealloc: micro-optimisation: save two branches on hot page allocation path · 736838e9
      Authored by Mateusz Nosek
      This patch makes ALLOC_KSWAPD equal to __GFP_KSWAPD_RECLAIM (cast to int).
      
      Thanks to that code like:
      
          if (gfp_mask & __GFP_KSWAPD_RECLAIM)
      	    alloc_flags |= ALLOC_KSWAPD;
      
      can be changed to:
      
          alloc_flags |= (__force int) (gfp_mask &__GFP_KSWAPD_RECLAIM);
      
      Thanks to this, one branch less is generated in the assembly.

      In the case of the ALLOC_KSWAPD flag, two branches are saved: the first in
      code that always executes at the beginning of page allocation, and the second
      in a loop in the page allocator slowpath.
      Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Link: http://lkml.kernel.org/r/20200304162118.14784-1-mateusznosek0@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
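      A small sketch of the branchless pattern described above.  The helper name and
      the BUILD_BUG_ON guard are illustrative additions, not claimed to be part of
      the patch itself:

          /*
           * Valid only because the patch makes ALLOC_KSWAPD equal to
           * __GFP_KSWAPD_RECLAIM; the guard documents that dependency.
           */
          static inline unsigned int gfp_to_alloc_flags_kswapd(gfp_t gfp_mask)
          {
                  BUILD_BUG_ON(ALLOC_KSWAPD != (__force int)__GFP_KSWAPD_RECLAIM);

                  /* branchless: copy the bit instead of testing it */
                  return (__force unsigned int)(gfp_mask & __GFP_KSWAPD_RECLAIM);
          }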
    • mm: allow VM_FAULT_RETRY for multiple times · 4064b982
      Authored by Peter Xu
      The idea comes from a discussion between Linus and Andrea [1].
      
      Before this patch we only allowed a page fault to retry once.  We achieved
      this by clearing the FAULT_FLAG_ALLOW_RETRY flag when doing
      handle_mm_fault() the second time.  This was mainly used to avoid
      unexpected starvation of the system by looping over forever to handle the
      page fault on a single page.  However, that should hardly happen, and after
      all, for each code path to return VM_FAULT_RETRY we'll first wait for a
      condition to happen (during which time we should possibly yield the cpu)
      before VM_FAULT_RETRY is really returned.
      
      This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY
      flag when we receive VM_FAULT_RETRY.  It means that the page fault handler
      can now retry the page fault multiple times if necessary without the
      need to generate another page fault event.  Meanwhile we still keep the
      FAULT_FLAG_TRIED flag so the page fault handler can still identify whether a
      page fault is the first attempt or not.
      
      Then we'll have these combinations of fault flags (only considering
      ALLOW_RETRY flag and TRIED flag):
      
        - ALLOW_RETRY and !TRIED:  this means the page fault allows retry,
                                   and this is the first try

        - ALLOW_RETRY and TRIED:   this means the page fault allows retry,
                                   and this is not the first try

        - !ALLOW_RETRY and !TRIED: this means the page fault does not allow
                                   retry at all

        - !ALLOW_RETRY and TRIED:  this is forbidden and should never be used
      
      In existing code we have multiple places that have taken special care of
      the first condition above by checking against (fault_flags &
      FAULT_FLAG_ALLOW_RETRY).  This patch introduces a simple helper to detect
      the first retry of a page fault by checking against both (fault_flags &
      FAULT_FLAG_ALLOW_RETRY) and !(fault_flags & FAULT_FLAG_TRIED), because now
      even the 2nd try will have ALLOW_RETRY set, then uses that helper in
      all existing special paths.  One example is in __lock_page_or_retry(): now
      we'll drop the mmap_sem only in the first attempt of the page fault and we'll
      keep it in follow-up retries, so the old locking behavior will be retained.
      
      This will be a nice enhancement for current code [2] at the same time a
      supporting material for the future userfaultfd-writeprotect work, since in
      that work there will always be an explicit userfault writeprotect retry
      for protected pages, and if that cannot resolve the page fault (e.g., when
      userfaultfd-writeprotect is used in conjunction with swapped pages) then
      we'll possibly need a 3rd retry of the page fault.  It might also benefit
      other potential users who will have similar requirement like userfault
      write-protection.
      
      GUP code is not touched yet and will be covered in follow up patch.
      
      Please read the thread below for more information.
      
      [1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/
      [2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Brian Geffon <bgeffon@google.com>
      Cc: Bobby Powers <bobbypowers@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Denis Plotnikov <dplotnikov@virtuozzo.com>
      Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
      Cc: Martin Cracauer <cracauer@cons.org>
      Cc: Marty McFadden <mcfadden8@llnl.gov>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Maya Gokhale <gokhale2@llnl.gov>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Link: http://lkml.kernel.org/r/20200220160246.9790-1-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
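      The first-attempt helper described above boils down to a check like the
      following; this is a simplified sketch mirroring the logic of the kernel's
      fault_flag_allow_retry_first():

          static inline bool fault_flag_allow_retry_first(unsigned int fault_flags)
          {
                  /* first attempt: retry is allowed and we have not tried yet */
                  return (fault_flags & FAULT_FLAG_ALLOW_RETRY) &&
                         !(fault_flags & FAULT_FLAG_TRIED);
          }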
    • mm: swap: make page_evictable() inline · 1eb6234e
      Authored by Yang Shi
      When backporting commit 9c4e6b1a ("mm, mlock, vmscan: no more skipping
      pagevecs") to our 4.9 kernel, our test bench noticed around a 10% drop with
      a couple of vm-scalability's test cases (lru-file-readonce,
      lru-file-readtwice and lru-file-mmap-read).  I didn't see that much of a drop
      on my VM (32c-64g-2nodes).  It might be caused by the test configuration,
      which is 32c-256g with NUMA disabled, and the tests were run in the root memcg,
      so the tests actually stress only one inactive and one active lru.  That is
      not very usual in a modern production environment.
      
      That commit did two major changes:
      1. Call page_evictable()
      2. Use smp_mb to force the PG_lru set visible

      It looks like they contribute most of the overhead.  page_evictable() is an
      out-of-line function with a prologue and epilogue, and it was used by the
      page reclaim path only.  However, lru add is a very hot path, so it makes
      sense to inline it.  It calls page_mapping(), which is not inlined either,
      but the disassembly shows that page_mapping() doesn't do push and pop
      operations, and inlining it doesn't look very straightforward.
      
      Other than this, smp_mb() appears unnecessary for x86 since SetPageLRU is
      an atomic operation which already enforces a memory barrier there, so replace
      it with smp_mb__after_atomic() in the following patch.

      With the two fixes applied, the tests can get back around 5% on that test
      bench and get back to normal on my VM.  Since the test bench configuration is
      not that usual, and I also saw around a 6% gain on the latest upstream, this
      sounds good enough IMHO.
      
      The below is test data (lru-file-readtwice throughput) against the v5.6-rc4:
      	mainline	w/ inline fix
                150MB            154MB
      
      With this patch the throughput goes up by 2.67%.  The data with
      smp_mb__after_atomic() is shown in the following patch.
      
      Shakeel Butt did the below test:
      
      On a real machine, with the 'dd' limited to a single node and reading a 100
      GiB sparse file (less than a single node).  Just ran a single instance to
      not cause lru lock contention.  The cmdline used is "dd if=file-100GiB
      of=/dev/null bs=4k".  Ran the cmd 10 times with drop_caches in between and
      measured the time it took.
      
      Without patch: 56.64143 +- 0.672 sec
      
      With patches: 56.10 +- 0.21 sec
      
      [akpm@linux-foundation.org: move page_evictable() to internal.h]
      Fixes: 9c4e6b1a ("mm, mlock, vmscan: no more skipping pagevecs")
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Shakeel Butt <shakeelb@google.com>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Link: http://lkml.kernel.org/r/1584500541-46817-1-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
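      A sketch of the inlined helper, following the description above.  This is the
      general shape of page_evictable() once moved into a header so callers can
      inline it; treat it as illustrative rather than the exact final code:

          static inline bool page_evictable(struct page *page)
          {
                  bool ret;

                  /* keep the address_space of inode/swap cache from being freed */
                  rcu_read_lock();
                  ret = !mapping_unevictable(page_mapping(page)) &&
                        !PageMlocked(page);
                  rcu_read_unlock();
                  return ret;
          }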
  13. 02 Dec 2019, 1 commit
  14. 01 Dec 2019, 3 commits
  15. 26 Sep 2019, 1 commit
    • mm: introduce MADV_COLD · 9c276cc6
      Authored by Minchan Kim
      Patch series "Introduce MADV_COLD and MADV_PAGEOUT", v7.
      
      - Background
      
      The Android terminology used for forking a new process and starting an app
      from scratch is a cold start, while resuming an existing app is a hot
      start.  While we continually try to improve the performance of cold
      starts, hot starts will always be significantly less power hungry as well
      as faster so we are trying to make hot start more likely than cold start.
      
      To increase hot start, Android userspace manages the order that apps
      should be killed in a process called ActivityManagerService.
      ActivityManagerService tracks every Android app or service that the user
      could be interacting with at any time and translates that into a ranked
      list for lmkd(low memory killer daemon).  They are likely to be killed by
      lmkd if the system has to reclaim memory.  In that sense they are similar
      to entries in any other cache.  Those apps are kept alive for
      opportunistic performance improvements but those performance improvements
      will vary based on the memory requirements of individual workloads.
      
      - Problem
      
      Naturally, cached apps were dominant consumers of memory on the system.
      However, they were not significant consumers of swap even though they are
      good candidates for swap.  Under investigation, swapping out only begins
      once the low zone watermark is hit and kswapd wakes up, but the overall
      allocation rate in the system might trip lmkd thresholds and cause a
      cached process to be killed (we measured the performance of swapping out vs.
      zapping the memory by killing a process; unsurprisingly, zapping is 10x
      faster even though we use zram, which is much faster than real storage), so
      a kill from lmkd will often satisfy the high zone watermark, resulting in
      very few pages actually being moved to swap.
      
      - Approach
      
      The approach we chose was to use a new interface to allow userspace to
      proactively reclaim entire processes by leveraging platform information.
      This allowed us to bypass the inaccuracy of the kernel’s LRUs for pages
      that are known to be cold from userspace and to avoid races with lmkd by
      reclaiming apps as soon as they entered the cached state.  Additionally,
      it could give the platform many chances to use its own information to
      optimize memory efficiency.
      
      To achieve the goal, the patchset introduces two new options for madvise.
      One is MADV_COLD, which will deactivate activated pages, and the other is
      MADV_PAGEOUT, which will reclaim private pages instantly.  These new
      options complement MADV_DONTNEED and MADV_FREE by adding non-destructive
      ways to gain some free memory space.  MADV_PAGEOUT is similar to
      MADV_DONTNEED in that it hints to the kernel that the memory region is not
      currently needed and should be reclaimed immediately; MADV_COLD is similar
      to MADV_FREE in that it hints to the kernel that the memory region is not
      currently needed and should be reclaimed when memory pressure rises.
      
      This patch (of 5):
      
      When a process expects no accesses to a certain memory range, it could
      give a hint to the kernel that the pages can be reclaimed when memory pressure
      happens but the data should be preserved for future use.  This could reduce
      workingset eviction, so it ends up increasing performance.
      
      This patch introduces the new MADV_COLD hint to the madvise(2) syscall.
      MADV_COLD can be used by a process to mark a memory range as not expected
      to be used in the near future.  The hint can help the kernel in deciding which
      pages to evict early during memory pressure.
      
      It works for every LRU page, like MADV_[DONTNEED|FREE]. IOW, it moves

      	active file page -> inactive file LRU
      	active anon page -> inactive anon LRU
      
      Unlike MADV_FREE, it doesn't move active anonymous pages to the inactive file
      LRU's head because MADV_COLD has slightly different semantics.
      MADV_FREE means it's okay to discard under memory pressure because the
      content of the page is *garbage*, so freeing such pages has almost zero
      overhead since we don't need to swap them out, and a later access causes only
      a minor fault.  Thus, it makes sense to put those freeable pages on the
      inactive file LRU to compete with other used-once pages.  It makes sense from
      an implementation point of view, too, because they are no longer swap-backed
      memory until they are re-dirtied.  It could even give a bonus of making them
      reclaimable on a swapless system.  However, MADV_COLD doesn't mean garbage,
      so reclaiming such pages eventually requires swap-out/in, which is a bigger
      cost.  Since we have designed VM LRU aging based on a cost model, anonymous
      cold pages are better positioned on the inactive anon LRU list, not the file
      LRU.  Furthermore, it would help to avoid unnecessary scanning if the system
      doesn't have a swap device.  Let's start with the simpler way without adding
      complexity at this moment.  However, keep in mind the caveat that workloads
      with a lot of page cache are likely to see MADV_COLD ignored on anonymous
      memory because we rarely age the anonymous LRU lists.
      
      * man-page material
      
      MADV_COLD (since Linux x.x)
      
      Pages in the specified regions will be treated as less-recently-accessed
      compared to pages in the system with similar access frequencies.  In
      contrast to MADV_FREE, the contents of the region are preserved regardless
      of subsequent writes to pages.
      
      MADV_COLD cannot be applied to locked pages, Huge TLB pages, or VM_PFNMAP
      pages.
      
      [akpm@linux-foundation.org: resolve conflicts with hmm.git]
      Link: http://lkml.kernel.org/r/20190726023435.214162-2-minchan@kernel.org
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reported-by: kbuild test robot <lkp@intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Daniel Colascione <dancol@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Hillf Danton <hdanton@sina.com>
      Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Oleksandr Natalenko <oleksandr@redhat.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Sonny Rao <sonnyrao@google.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Tim Murray <timmurray@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
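      A minimal userspace sketch of the new hint, based on the man-page material
      above; error handling is omitted, and MADV_COLD is only available with
      headers/kernels that carry this patch:

          #include <stddef.h>
          #include <sys/mman.h>

          int main(void)
          {
                  size_t len = 64 << 20;          /* a 64 MiB anonymous region */
                  void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                  /* ... touch buf, then tell the kernel it is cold but still needed ... */
                  madvise(buf, len, MADV_COLD);

                  /* contents are preserved; a later access may fault them back in */
                  munmap(buf, len);
                  return 0;
          }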
  16. 31 May 2019, 1 commit
  17. 06 Mar 2019, 7 commits
    • mm, compaction: capture a page under direct compaction · 5e1f0f09
      Authored by Mel Gorman
      Compaction is inherently race-prone as a suitable page freed during
      compaction can be allocated by any parallel task.  This patch uses a
      capture_control structure to isolate a page immediately when it is freed
      by a direct compactor in the slow path of the page allocator.  The
      intent is to avoid redundant scanning.
      
                                           5.0.0-rc1              5.0.0-rc1
                                     selective-v3r17          capture-v3r19
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2582.11 (   0.00%)     2563.68 (   0.71%)
      Amean     fault-both-5      4500.26 (   0.00%)     4233.52 (   5.93%)
      Amean     fault-both-7      5819.53 (   0.00%)     6333.65 (  -8.83%)
      Amean     fault-both-12     9321.18 (   0.00%)     9759.38 (  -4.70%)
      Amean     fault-both-18     9782.76 (   0.00%)    10338.76 (  -5.68%)
      Amean     fault-both-24    15272.81 (   0.00%)    13379.55 *  12.40%*
      Amean     fault-both-30    15121.34 (   0.00%)    16158.25 (  -6.86%)
      Amean     fault-both-32    18466.67 (   0.00%)    18971.21 (  -2.73%)
      
      Latency is only moderately affected, but the devil is in the details.  A
      closer examination indicates that base page fault latency is reduced but
      latency of huge pages is increased as it takes greater care to succeed.
      Part of the "problem" is that allocation success rates are close to 100%
      even when under pressure and compaction gets harder.
      
                                      5.0.0-rc1              5.0.0-rc1
                                selective-v3r17          capture-v3r19
      Percentage huge-3        96.70 (   0.00%)       98.23 (   1.58%)
      Percentage huge-5        96.99 (   0.00%)       95.30 (  -1.75%)
      Percentage huge-7        94.19 (   0.00%)       97.24 (   3.24%)
      Percentage huge-12       94.95 (   0.00%)       97.35 (   2.53%)
      Percentage huge-18       96.74 (   0.00%)       97.30 (   0.58%)
      Percentage huge-24       97.07 (   0.00%)       97.55 (   0.50%)
      Percentage huge-30       95.69 (   0.00%)       98.50 (   2.95%)
      Percentage huge-32       96.70 (   0.00%)       99.27 (   2.65%)
      
      And scan rates are reduced as expected by 6% for the migration scanner
      and 29% for the free scanner indicating that there is less redundant
      work.
      
      Compaction migrate scanned    20815362    19573286
      Compaction free scanned       16352612    11510663
      
      [mgorman@techsingularity.net: remove redundant check]
        Link: http://lkml.kernel.org/r/20190201143853.GH9565@techsingularity.net
      Link: http://lkml.kernel.org/r/20190118175136.31341-23-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
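      A condensed sketch of the capture mechanism: a capture_control is registered
      by the direct compactor and filled in from the page freeing path.  The real
      code wires this up via current->capture_control and performs additional
      suitability checks; the names and checks here are simplified:

          struct capture_control {
                  struct compact_control *cc;
                  struct page *page;      /* page captured for the direct compactor */
          };

          /* called from the page freeing path when a suitable page shows up */
          static bool compaction_capture(struct capture_control *capc,
                                         struct page *page, int order)
          {
                  if (!capc || capc->page)
                          return false;
                  if (order < capc->cc->order)    /* too small for the request */
                          return false;

                  capc->page = page;              /* hand it straight to the allocator */
                  return true;
          }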
    • mm, compaction: round-robin the order while searching the free lists for a target · dbe2d4e4
      Authored by Mel Gorman
      As compaction proceeds and creates high-order blocks, the free list
      search gets less efficient as the larger blocks are used as compaction
      targets.  Eventually, the larger blocks will be behind the migration
      scanner for partially migrated pageblocks and the search fails.  This
      patch round-robins what orders are searched so that larger blocks can be
      ignored and find smaller blocks that can be used as migration targets.
      
      The overall impact was small on 1-socket, but it avoids corner cases
      where the migration/free scanners meet prematurely or situations where
      many of the pageblocks encountered by the free scanner are almost full
      instead of being properly packed.  Previous testing had indicated that
      without this patch there were occasional large spikes in the free
      scanner.
      
      [dan.carpenter@oracle.com: fix static checker warning]
      Link: http://lkml.kernel.org/r/20190118175136.31341-20-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
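      A sketch of the round-robin step described above, assuming compact_control
      keeps a search_order field recording where the previous free-list search
      stopped; this is simplified from the actual helper:

          /* pick the next order to search, wrapping around below the request order */
          static int next_search_order(struct compact_control *cc, int order)
          {
                  order--;
                  if (order < 0)
                          order = cc->order - 1;

                  /* stop once we come back to the order we started from */
                  if (order == cc->search_order)
                          return -1;
                  return order;
          }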
    • mm, compaction: avoid rescanning the same pageblock multiple times · 804d3121
      Authored by Mel Gorman
      Pageblocks are marked for skip when no pages are isolated after a scan.
      However, it's possible to hit corner cases where the migration scanner
      gets stuck near the boundary between the source and target scanner.  Due
      to pages being migrated in blocks of COMPACT_CLUSTER_MAX, pages that are
      migrated can be reallocated before the pageblock is complete.  The
      pageblock is not necessarily skipped so it can be rescanned multiple
      times.  Similarly, a pageblock with some dirty/writeback pages may fail
      to migrate and be rescanned until writeback completes which is wasteful.
      
      This patch tracks if a pageblock is being rescanned.  If so, then the
      entire pageblock will be migrated as one operation.  This narrows the
      race window during which pages can be reallocated during migration.
      Secondly, if there are pages that cannot be isolated then the pageblock
      will still be fully scanned and marked for skipping.  On the second
      rescan, the pageblock skip is set and the migration scanner makes
      progress.
      
                                           5.0.0-rc1              5.0.0-rc1
                                      findfree-v3r16         norescan-v3r16
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3200.68 (   0.00%)     3002.07 (   6.21%)
      Amean     fault-both-5      4847.75 (   0.00%)     4684.47 (   3.37%)
      Amean     fault-both-7      6658.92 (   0.00%)     6815.54 (  -2.35%)
      Amean     fault-both-12    11077.62 (   0.00%)    10864.02 (   1.93%)
      Amean     fault-both-18    12403.97 (   0.00%)    12247.52 (   1.26%)
      Amean     fault-both-24    15607.10 (   0.00%)    15683.99 (  -0.49%)
      Amean     fault-both-30    18752.27 (   0.00%)    18620.02 (   0.71%)
      Amean     fault-both-32    21207.54 (   0.00%)    19250.28 *   9.23%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                 findfree-v3r16         norescan-v3r16
      Percentage huge-3        96.86 (   0.00%)       95.00 (  -1.91%)
      Percentage huge-5        93.72 (   0.00%)       94.22 (   0.53%)
      Percentage huge-7        94.31 (   0.00%)       92.35 (  -2.08%)
      Percentage huge-12       92.66 (   0.00%)       91.90 (  -0.82%)
      Percentage huge-18       91.51 (   0.00%)       89.58 (  -2.11%)
      Percentage huge-24       90.50 (   0.00%)       90.03 (  -0.52%)
      Percentage huge-30       91.57 (   0.00%)       89.14 (  -2.65%)
      Percentage huge-32       91.00 (   0.00%)       90.58 (  -0.46%)
      
      Negligible difference, but this was likely a case where the specific
      corner case was not hit.  A previous run of the same patch based on an
      earlier iteration of the series showed large differences where migration
      rates could be halved when the corner case was hit.

      The specific corner case where migration scan rates go through the roof,
      caused by a dirty/writeback pageblock located at the boundary of the
      migration/free scanner, did not happen in this case.  When it does
      happen, the scan rates are multiplied by massive margins.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-13-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: use free lists to quickly locate a migration source · 70b44595
      Authored by Mel Gorman
      The migration scanner is a linear scan of a zone with a potentially large
      search space.  Furthermore, many pageblocks are unusable, such as those
      filled with reserved pages or partially filled with pages that cannot
      migrate.  These still get scanned in the common case of allocating a THP
      and the cost accumulates.
      
      The patch uses a partial search of the free lists to locate a migration
      source candidate that is marked as MOVABLE when allocating a THP.  It
      prefers picking a block with a larger number of free pages already on
      the basis that there are fewer pages to migrate to free the entire
      block.  The lowest PFN found during searches is tracked as the basis of
      the start for the linear search after the first search of the free list
      fails.  After the search, the free list is shuffled so that the next
      search will not encounter the same page.  If the search fails then the
      subsequent searches will be shorter and the linear scanner is used.
      
      If this search fails, or if the request is for a small or
      unmovable/reclaimable allocation then the linear scanner is still used.
      It is somewhat pointless to use the list search in those cases.  Small
      free pages must be used for the search and there is no guarantee that
      movable pages are located within that block that are contiguous.
      
                                           5.0.0-rc1              5.0.0-rc1
                                       noboost-v3r10          findmig-v3r15
      Amean     fault-both-3      3771.41 (   0.00%)     3390.40 (  10.10%)
      Amean     fault-both-5      5409.05 (   0.00%)     5082.28 (   6.04%)
      Amean     fault-both-7      7040.74 (   0.00%)     7012.51 (   0.40%)
      Amean     fault-both-12    11887.35 (   0.00%)    11346.63 (   4.55%)
      Amean     fault-both-18    16718.19 (   0.00%)    15324.19 (   8.34%)
      Amean     fault-both-24    21157.19 (   0.00%)    16088.50 *  23.96%*
      Amean     fault-both-30    21175.92 (   0.00%)    18723.42 *  11.58%*
      Amean     fault-both-32    21339.03 (   0.00%)    18612.01 *  12.78%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                  noboost-v3r10          findmig-v3r15
      Percentage huge-3        86.50 (   0.00%)       89.83 (   3.85%)
      Percentage huge-5        92.52 (   0.00%)       91.96 (  -0.61%)
      Percentage huge-7        92.44 (   0.00%)       92.85 (   0.44%)
      Percentage huge-12       92.98 (   0.00%)       92.74 (  -0.25%)
      Percentage huge-18       91.70 (   0.00%)       91.71 (   0.02%)
      Percentage huge-24       91.59 (   0.00%)       92.13 (   0.60%)
      Percentage huge-30       90.14 (   0.00%)       93.79 (   4.04%)
      Percentage huge-32       90.03 (   0.00%)       91.27 (   1.37%)
      
      This shows an improvement in allocation latencies with similar
      allocation success rates.  While not presented, there was a 31%
      reduction in migration scanning and a 8% reduction on system CPU usage.
      A 2-socket machine showed similar benefits.
      
      [mgorman@techsingularity.net: several fixes]
        Link: http://lkml.kernel.org/r/20190204120111.GL9565@techsingularity.net
      [vbabka@suse.cz: migrate block that was found-fast, some optimisations]
      Link: http://lkml.kernel.org/r/20190118175136.31341-10-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: always finish scanning of a full pageblock · efe771c7
      Authored by Mel Gorman
      When compaction is finishing, it uses a flag to ensure the pageblock is
      complete but it makes sense to always complete migration of a pageblock.
      Minimally, skip information is based on a pageblock and partially
      scanned pageblocks may incur more scanning in the future.  The pageblock
      skip handling also becomes more strict later in the series and the hint
      is more useful if a complete pageblock was always scanned.
      
      This potentially impacts latency as more scanning is done, but it's not a
      consistent win or loss as the scanning is not always a high percentage
      of the pageblock and sometimes it is offset by future reductions in
      scanning.  Hence, the results are not presented this time due to a
      misleading mix of gains/losses without any clear pattern.  However, full
      scanning of the pageblock is important for later patches.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-8-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: remove last_migrated_pfn from compact_control · 566e54e1
      Authored by Mel Gorman
      It is dubious whether the last_migrated_pfn field really helps, but either
      way the information it carries can be inferred without increasing the size
      of compact_control, so remove the field.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-4-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: rearrange compact_control · c5943b9c
      Authored by Mel Gorman
      compact_control spans two cache lines with write-intensive fields on
      both.  Rearrange it so the most write-intensive fields are in the same
      cache line.  This has a negligible impact on the overall performance of
      compaction and is more a tidying exercise than anything.
      
      Link: http://lkml.kernel.org/r/20190118175136.31341-3-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>