1. 20 October 2008, 12 commits
    • mlock: mlocked pages are unevictable · b291f000
      Authored by Nick Piggin
      Make sure that mlocked pages also live on the unevictable LRU, so kswapd
      will not scan them over and over again.
      
      This is achieved through various strategies:
      
      1) add yet another page flag--PG_mlocked--to indicate that
         the page is locked for efficient testing in vmscan and,
         optionally, fault path.  This allows early culling of
         unevictable pages, preventing them from getting to
         page_referenced()/try_to_unmap().  Also allows separate
         accounting of mlock'd pages, as Nick's original patch
         did.
      
         Note:  Nick's original mlock patch used a PG_mlocked
         flag.  I had removed this in favor of the PG_unevictable
         flag + an mlock_count [new page struct member].  I
         restored the PG_mlocked flag to eliminate the new
         count field.
      
      2) add the mlock/unevictable infrastructure to mm/mlock.c,
         with internal APIs in mm/internal.h.  This is a rework
         of Nick's original patch to these files, taking into
         account that mlocked pages are now kept on unevictable
         LRU list.
      
      3) update vmscan.c:page_evictable() to check PageMlocked()
         and, if a vma is passed in, the vm_flags.  Note that the vma
         will only be passed in for new pages in the fault path, and
         then only if the "cull unevictable pages in fault path"
         patch is included.  See the sketch after this list.
      
      4) add try_to_unlock() to rmap.c to walk a page's rmap and
         ClearPageMlocked() if no other vmas have it mlocked.
         Reuses as much of try_to_unmap() as possible.  This
         effectively replaces the use of one of the lru list links
         as an mlock count.  If this mechanism lets pages in mlocked
         vmas leak through w/o PG_mlocked set [I don't know that it
         does], we should catch them later in try_to_unmap().  One
         hopes this will be rare, as it will be relatively expensive.
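
      For reference, the combined evictability test described in points 1)
      and 3) above boils down to something like this minimal sketch
      (simplified; the helper names mirror the ones introduced by this
      series, but treat the exact signatures as illustrative rather than the
      final kernel code):

          /*
           * Sketch: a page is evictable unless its mapping is marked
           * unevictable, it carries PG_mlocked, or the faulting vma is
           * VM_LOCKED.
           */
          static int page_evictable_sketch(struct page *page,
                                           struct vm_area_struct *vma)
          {
                  struct address_space *mapping = page_mapping(page);

                  if (mapping && mapping_unevictable(mapping))
                          return 0;       /* e.g. SHM_LOCKED shm or ramfs */
                  if (PageMlocked(page))
                          return 0;       /* culled early by vmscan */
                  if (vma && (vma->vm_flags & VM_LOCKED))
                          return 0;       /* fault-path culling, when enabled */
                  return 1;
          }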
      
      Original mm/internal.h, mm/rmap.c and mm/mlock.c changes:
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      splitlru: introduce __get_user_pages():
      
        The new munlock processing needs GUP_FLAGS_IGNORE_VMA_PERMISSIONS,
        because the current get_user_pages() can't grab PROT_NONE pages and
        therefore PROT_NONE pages can't be munlocked.
      
      [akpm@linux-foundation.org: fix this for pagemap-pass-mm-into-pagewalkers.patch]
      [akpm@linux-foundation.org: untangle patch interdependencies]
      [akpm@linux-foundation.org: fix things after out-of-order merging]
      [hugh@veritas.com: fix page-flags mess]
      [lee.schermerhorn@hp.com: fix munlock page table walk - now requires 'mm']
      [kosaki.motohiro@jp.fujitsu.com: build fix]
      [kosaki.motohiro@jp.fujitsu.com: fix truncate race and several comments]
      [kosaki.motohiro@jp.fujitsu.com: splitlru: introduce __get_user_pages()]
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b291f000
    • SHM_LOCKED pages are unevictable · 89e004ea
      Authored by Lee Schermerhorn
      Shmem segments locked into memory via shmctl(SHM_LOCKED) should not be
      kept on the normal LRU, since scanning them is a waste of time and might
      throw off kswapd's balancing algorithms.  Place them on the unevictable
      LRU list instead.
      
      Use the AS_UNEVICTABLE flag to mark address_space of SHM_LOCKed shared
      memory regions as unevictable.  Then these pages will be culled off the
      normal LRU lists during vmscan.
      
      Add new wrapper function to clear the mapping's unevictable state when/if
      shared memory segment is munlocked.
      
      Add 'scan_mapping_unevictable_page()' to mm/vmscan.c to scan all pages in
      the shmem segment's mapping [struct address_space] for evictability now
      that they're no longer locked.  Pages found to be evictable are moved to
      the appropriate zone LRU list.
      
      Changes depend on [CONFIG_]UNEVICTABLE_LRU.
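
      A sketch of what the AS_UNEVICTABLE wrappers might look like (assuming
      the flag is one more bit in mapping->flags; the bit value and exact
      helper bodies here are illustrative, not the verbatim kernel code):

          #include <linux/fs.h>
          #include <linux/bitops.h>

          #define AS_UNEVICTABLE  3       /* illustrative bit number */

          static inline void mapping_set_unevictable(struct address_space *mapping)
          {
                  set_bit(AS_UNEVICTABLE, &mapping->flags);       /* SHM_LOCK */
          }

          static inline void mapping_clear_unevictable(struct address_space *mapping)
          {
                  clear_bit(AS_UNEVICTABLE, &mapping->flags);     /* SHM_UNLOCK */
          }

          static inline int mapping_unevictable(struct address_space *mapping)
          {
                  return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
          }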
      
      [kosaki.motohiro@jp.fujitsu.com: revert shm change]
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      89e004ea
    • Ramfs and Ram Disk pages are unevictable · ba9ddf49
      Authored by Lee Schermerhorn
      Christoph Lameter pointed out that ram disk pages also clutter the LRU
      lists.  When vmscan finds them dirty and tries to clean them, the ram disk
      writeback function just redirties the page so that it goes back onto the
      active list.  Round and round she goes...
      
      With the ram disk driver [rd.c] replaced by the newer 'brd.c', this is no
      longer the case, as ram disk pages are no longer maintained on the lru.
      [This makes them unmigratable for defrag or memory hot remove, but that
      can be addressed by a separate patch series.] However, the ramfs pages
      behave like ram disk pages used to, so:
      
      Define new address_space flag [shares address_space flags member with
      mapping's gfp mask] to indicate that the address space contains all
      unevictable pages.  This will provide for efficient testing of ramfs pages
      in page_evictable().
      
      Also provide wrapper functions to set/test the unevictable state to
      minimize #ifdefs in ramfs driver and any other users of this facility.
      
      Set the unevictable state on address_space structures for new ramfs
      inodes.  Test the unevictable state in page_evictable() to cull
      unevictable pages.
      
      These changes depend on [CONFIG_]UNEVICTABLE_LRU.
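
      For illustration, marking a new ramfs inode then reduces to a single
      wrapper call in ramfs_get_inode() (sketch; the surrounding inode setup
      is elided):

          struct inode *inode = new_inode(sb);

          if (inode) {
                  /* ramfs pages can never be written back, so keep them off
                   * the normal LRU lists entirely. */
                  mapping_set_unevictable(inode->i_mapping);
          }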
      
      [riel@redhat.com: undo the brd.c part]
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Debugged-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba9ddf49
    • unevictable lru: add event counting with statistics · bbfd28ee
      Authored by Lee Schermerhorn
      Fix to unevictable-lru-page-statistics.patch
      
      Add unevictable lru infrastructure vm events to the statistics patch.
      Rename the "NORECL_" and "noreclaim_" symbols and text strings to
      "UNEVICTABLE_" and "unevictable_", respectively.
      
      Currently, both the infrastructure and the mlocked pages event are
      added by a single patch later in the series.  This makes it difficult
      to add or rework the incremental patches.  The events actually "belong"
      with the stats, so pull them up to here.
      
      Also, restore the event counting to putback_lru_page().  This was removed
      from a previous patch in the series, where it was "misplaced": the actual
      events weren't defined that early.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bbfd28ee
    • Unevictable LRU Infrastructure · 894bc310
      Authored by Lee Schermerhorn
      When the system contains lots of mlocked or otherwise unevictable pages,
      the pageout code (kswapd) can spend lots of time scanning over these
      pages.  Worse still, the presence of lots of unevictable pages can confuse
      kswapd into thinking that more aggressive pageout modes are required,
      resulting in all kinds of bad behaviour.
      
      This patch adds infrastructure to manage pages excluded from reclaim--
      i.e., hidden from vmscan.  It is based on a patch by Larry Woodman of
      Red Hat, reworked to maintain "unevictable" pages on a separate per-zone
      LRU list, to "hide" them from vmscan.
      
      Kosaki Motohiro added the support for the memory controller unevictable
      lru list.
      
      Pages on the unevictable list have both PG_unevictable and PG_lru set.
      Thus, PG_unevictable is analogous to and mutually exclusive with
      PG_active--it specifies which LRU list the page is on.
      
      The unevictable infrastructure is enabled by a new mm Kconfig option
      [CONFIG_]UNEVICTABLE_LRU.
      
      A new function 'page_evictable(page, vma)' in vmscan.c tests whether or
      not a page may be evictable.  Subsequent patches will add the various
      !evictable tests.  We'll want to keep these tests light-weight for use in
      shrink_active_list() and, possibly, the fault path.
      
      To avoid races between tasks putting pages [back] onto an LRU list and
      tasks that might be moving the page from non-evictable to evictable state,
      the new function 'putback_lru_page()' -- inverse to 'isolate_lru_page()'
      -- tests the "evictability" of a page after placing it on the LRU, before
      dropping the reference.  If the page has become unevictable,
      putback_lru_page() will redo the 'putback', thus moving the page to the
      unevictable list.  This way, we avoid "stranding" evictable pages on the
      unevictable list.
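
      A simplified sketch of the recheck in putback_lru_page() (statistics,
      page-flag bookkeeping and some rare races are omitted; the helper names
      follow the series but the details are illustrative):

          void putback_lru_page_sketch(struct page *page)
          {
          redo:
                  if (page_evictable(page, NULL)) {
                          lru_cache_add_lru(page, page_lru(page)); /* normal LRU */
                          /* Raced with mlock/SHM_LOCK after the add?  Pull the
                           * page back off the LRU and redo the putback. */
                          if (!page_evictable(page, NULL) &&
                              !isolate_lru_page(page)) {
                                  put_page(page); /* ref from re-isolation */
                                  goto redo;
                          }
                  } else {
                          add_page_to_unevictable_list(page); /* hidden from vmscan */
                          /* Raced with munlock/SHM_UNLOCK?  Rescue the page so an
                           * evictable page is never stranded here. */
                          if (page_evictable(page, NULL) &&
                              !isolate_lru_page(page)) {
                                  put_page(page);
                                  goto redo;
                          }
                  }
                  put_page(page); /* drop the reference held across the putback */
          }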
      
      [akpm@linux-foundation.org: fix fallout from out-of-order merge]
      [riel@redhat.com: fix UNEVICTABLE_LRU and !PROC_PAGE_MONITOR build]
      [nishimura@mxp.nes.nec.co.jp: remove redundant mapping check]
      [kosaki.motohiro@jp.fujitsu.com: unevictable-lru-infrastructure: putback_lru_page()/unevictable page handling rework]
      [kosaki.motohiro@jp.fujitsu.com: kill unnecessary lock_page() in vmscan.c]
      [kosaki.motohiro@jp.fujitsu.com: revert migration change of unevictable lru infrastructure]
      [kosaki.motohiro@jp.fujitsu.com: revert to unevictable-lru-infrastructure-kconfig-fix.patch]
      [kosaki.motohiro@jp.fujitsu.com: restore patch failure of vmstat-unevictable-and-mlocked-pages-vm-events.patch]
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Debugged-by: Benjamin Kidwell <benjkidwell@yahoo.com>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      894bc310
    • more aggressively use lumpy reclaim · 33c120ed
      Authored by Rik van Riel
      During an AIM7 run on a 16GB system, fork started failing around 32000
      threads, despite the system having plenty of free swap and 15GB of
      pageable memory.  This was on x86-64, so 8k stacks.
      
      If a higher order allocation fails, we can either:
      - keep evicting pages off the end of the LRUs and hope that
        we eventually create a contiguous region; this is somewhat
        unlikely if the system is under enough stress by new
        allocations
      - after trying normal eviction for a bit, use lumpy reclaim
      
      This patch switches the system to lumpy reclaim if the VM is having
      trouble freeing enough pages, using the same threshold for detection as
      used by pageout congestion wait.
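
      The trigger is roughly the following (a sketch of the decision in
      shrink_inactive_list(), reusing the same priority threshold as the
      pageout congestion wait; the variable names are illustrative):

          int lumpy_reclaim = 0;

          if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
                  lumpy_reclaim = 1;      /* costly orders: always lumpy */
          else if (sc->order && priority < DEF_PRIORITY - 2)
                  lumpy_reclaim = 1;      /* trouble freeing pages: switch over */

          /* The mode is then passed down so isolate_lru_pages() can take whole
           * naturally aligned blocks of pages around each page it picks. */
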
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33c120ed
    • vmscan: fix pagecache reclaim referenced bit check · 7e9cd484
      Authored by Rik van Riel
      Moving referenced pages back to the head of the active list creates a huge
      scalability problem, because by the time a large memory system finally
      runs out of free memory, every single page in the system will have been
      referenced.
      
      Not only do we not have the time to scan every single page on the active
      list, but since they will all have the referenced bit set, that bit
      conveys no useful information.
      
      A more scalable solution is to just move every page that hits the end of
      the active list to the inactive list.
      
      We clear the referenced bit off of mapped pages, which need just one
      reference to be moved back onto the active list.
      
      Unmapped pages will be moved back to the active list after two references
      (see mark_page_accessed).  We preserve the PG_referenced flag on unmapped
      pages to preserve accesses that were made while the page was on the active
      list.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e9cd484
    • vmscan: second chance replacement for anonymous pages · 556adecb
      Authored by Rik van Riel
      We avoid evicting and scanning anonymous pages for the most part, but
      under some workloads we can end up with most of memory filled with
      anonymous pages.  At that point, we suddenly need to clear the referenced
      bits on all of memory, which can take ages on very large memory systems.
      
      We can reduce the maximum number of pages that need to be scanned by not
      taking the referenced state into account when deactivating an anonymous
      page.  After all, every anonymous page starts out referenced, so why
      check?
      
      If an anonymous page gets referenced again before it reaches the end of
      the inactive list, we move it back to the active list.
      
      To keep the maximum amount of necessary work reasonable, we scale the
      active to inactive ratio with the size of memory, using the formula
      active:inactive ratio = sqrt(memory in GB * 10).
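
      As a worked sketch of that formula, computed per zone (the field and
      helper names follow the series; treat the details as illustrative):

          static void setup_inactive_ratio_sketch(struct zone *zone)
          {
                  unsigned int gb, ratio;

                  /* active:inactive anon ratio = sqrt(10 * memory in GB), e.g.
                   * 1GB -> ~3:1, 10GB -> 10:1, 100GB -> ~31:1, 1TB -> 100:1. */
                  gb = zone->present_pages >> (30 - PAGE_SHIFT);
                  if (gb)
                          ratio = int_sqrt(10 * gb);
                  else
                          ratio = 1;      /* small zones: 1:1 */

                  zone->inactive_ratio = ratio;
          }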
      
      Kswapd CPU use now seems to scale by the amount of pageout bandwidth,
      instead of by the amount of memory present in the system.
      
      [kamezawa.hiroyu@jp.fujitsu.com: fix OOM with memcg]
      [kamezawa.hiroyu@jp.fujitsu.com: memcg: lru scan fix]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      556adecb
    • vmscan: split LRU lists into anon & file sets · 4f98a2fe
      Authored by Rik van Riel
      Split the LRU lists in two, one set for pages that are backed by real file
      systems ("file") and one for pages that are backed by memory and swap
      ("anon").  The latter includes tmpfs.
      
      The advantage of doing this is that the VM will not have to scan over lots
      of anonymous pages (which we generally do not want to swap out), just to
      find the page cache pages that it should evict.
      
      This patch has the infrastructure and a basic policy to balance how much
      we scan the anon lists and how much we scan the file lists.  The big
      policy changes are in separate patches.
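
      The split boils down to indexing the per-zone LRU lists by type.  A
      sketch of the resulting enum and helpers (names as in the series, shown
      here for illustration):

          enum lru_list {
                  LRU_INACTIVE_ANON,      /* anonymous and tmpfs/shmem (swap backed) */
                  LRU_ACTIVE_ANON,
                  LRU_INACTIVE_FILE,      /* page cache backed by real filesystems */
                  LRU_ACTIVE_FILE,
                  NR_LRU_LISTS,
          };

          #define for_each_lru(l) for (l = 0; l < NR_LRU_LISTS; l++)

          static inline int is_file_lru(enum lru_list l)
          {
                  return l == LRU_INACTIVE_FILE || l == LRU_ACTIVE_FILE;
          }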
      
      [lee.schermerhorn@hp.com: collect lru meminfo statistics from correct offset]
      [kosaki.motohiro@jp.fujitsu.com: prevent incorrect oom under split_lru]
      [kosaki.motohiro@jp.fujitsu.com: fix pagevec_move_tail() doesn't treat unevictable page]
      [hugh@veritas.com: memcg swapbacked pages active]
      [hugh@veritas.com: splitlru: BDI_CAP_SWAP_BACKED]
      [akpm@linux-foundation.org: fix /proc/vmstat units]
      [nishimura@mxp.nes.nec.co.jp: memcg: fix handling of shmem migration]
      [kosaki.motohiro@jp.fujitsu.com: adjust Quicklists field of /proc/meminfo]
      [kosaki.motohiro@jp.fujitsu.com: fix style issue of get_scan_ratio()]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f98a2fe
    • vmscan: free swap space on swap-in/activation · 68a22394
      Authored by Rik van Riel
      If vm_swap_full() (swap space more than 50% full), the system will free
      swap space at swapin time.  With this patch, the system will also free the
      swap space in the pageout code, when we decide that the page is not a
      candidate for swapout (and just wasting swap space).
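
      The check itself is tiny; roughly (sketch -- the helper that drops an
      unshared page's swap slot is named illustratively here, after the swap
      cache API of that era):

          /* On swap-in activation and in the pageout path: once swap is more
           * than half full, give back the swap slot of a page we have decided
           * to keep in memory anyway. */
          if (PageSwapCache(page) && vm_swap_full())
                  remove_exclusive_swap_page(page);       /* illustrative */
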
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      68a22394
    • vmscan: Use an indexed array for LRU variables · b69408e8
      Authored by Christoph Lameter
      Currently we are defining explicit variables for the inactive and active
      list.  An indexed array can be more generic and avoid repeating similar
      code in several places in the reclaim code.
      
      We are saving a few bytes in terms of code size:
      
      Before:
      
         text    data     bss     dec     hex filename
      4097753  573120 4092484 8763357  85b7dd vmlinux
      
      After:
      
         text    data     bss     dec     hex filename
      4097729  573120 4092484 8763333  85b7c5 vmlinux
      
      Having an easy way to add new lru lists may ease future work on the
      reclaim code.
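
      In effect this is the precursor of the enum shown for the anon/file
      split above: just active and inactive at this point, stored as an array
      in the zone so call sites collapse into loops (sketch, names
      illustrative):

          enum lru_list { LRU_INACTIVE, LRU_ACTIVE, NR_LRU_LISTS };

          /* in struct zone, replacing active_list/inactive_list and their
           * twin nr_scan counters: */
          struct {
                  struct list_head list;
                  unsigned long nr_scan;
          } lru[NR_LRU_LISTS];

          /* typical call site after the conversion: */
          enum lru_list l;
          for (l = 0; l < NR_LRU_LISTS; l++)
                  INIT_LIST_HEAD(&zone->lru[l].list);
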
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b69408e8
    • vmscan: move isolate_lru_page() to vmscan.c · 62695a84
      Authored by Nick Piggin
      On large memory systems, the VM can spend way too much time scanning
      through pages that it cannot (or should not) evict from memory.  Not only
      does it use up CPU time, but it also provokes lock contention and can
      leave large systems under memory pressure in a catatonic state.
      
      This patch series improves VM scalability by:
      
      1) putting filesystem backed, swap backed and unevictable pages
         onto their own LRUs, so the system only scans the pages that it
         can/should evict from memory
      
      2) switching to two handed clock replacement for the anonymous LRUs,
         so the number of pages that need to be scanned when the system
         starts swapping is bound to a reasonable number
      
      3) keeping unevictable pages off the LRU completely, so the
         VM does not waste CPU time scanning them. ramfs, ramdisk,
         SHM_LOCKED shared memory segments and mlock()ed VMA pages
         are kept on the unevictable list.
      
      This patch:
      
      isolate_lru_page logically belongs in vmscan.c rather than migrate.c.
      
      It is tough, because we don't need that function without memory migration
      so there is a valid argument to have it in migrate.c.  However a
      subsequent patch needs to make use of it in the core mm, so we can happily
      move it to vmscan.c.
      
      Also, make the function a little more generic by not requiring that it
      adds an isolated page to a given list.  Callers can do that.
      
      	Note that we now have '__isolate_lru_page()', that does
      	something quite different, visible outside of vmscan.c
      	for use with memory controller.  Methinks we need to
      	rationalize these names/purposes.	--lts
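
      A sketch of the more generic form described above (the caller now
      supplies the destination list, or none at all; locking shown coarsely
      and details illustrative):

          /*
           * Isolate one page from its LRU list.  Returns 0 on success; the
           * caller holds a reference and decides where the page goes next.
           */
          int isolate_lru_page_sketch(struct page *page)
          {
                  int ret = -EBUSY;

                  if (PageLRU(page)) {
                          struct zone *zone = page_zone(page);

                          spin_lock_irq(&zone->lru_lock);
                          if (PageLRU(page) && get_page_unless_zero(page)) {
                                  ret = 0;
                                  ClearPageLRU(page);
                                  del_page_from_lru(zone, page);  /* caller re-adds */
                          }
                          spin_unlock_irq(&zone->lru_lock);
                  }
                  return ret;
          }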
      
      [akpm@linux-foundation.org: fix mm/memory_hotplug.c build]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62695a84
  2. 05 August 2008, 1 commit
  3. 31 July 2008, 2 commits
  4. 27 July 2008, 2 commits
  5. 26 July 2008, 1 commit
    • per-task-delay-accounting: add memory reclaim delay · 873b4771
      Authored by Keika Kobayashi
      Sometimes application response times degrade under heavy memory load,
      because applications spend a fair amount of time reclaiming memory.
      Statistics on how long memory reclaim takes are useful for measuring
      memory usage.

      This patch adds memory reclaim to per-task delay accounting, accounting
      the time spent in do_try_to_free_pages().
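
      The hook is essentially a timing wrapper around the reclaim entry point
      (sketch; the delayacct_freepages_start()/end() helper names follow the
      patch, and the real change places the calls inside
      do_try_to_free_pages() itself):

          unsigned long try_to_free_pages_timed(struct zonelist *zonelist,
                                                struct scan_control *sc)
          {
                  unsigned long nr_reclaimed;

                  delayacct_freepages_start();    /* record start timestamp */
                  nr_reclaimed = do_try_to_free_pages(zonelist, sc);
                  delayacct_freepages_end();      /* fold into RECLAIM delay */

                  return nr_reclaimed;
          }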
      
      For example:

      - When the system is under low memory load,
        memory reclaim may not occur.
      
      $ free
                   total       used       free     shared    buffers     cached
      Mem:       8197800    1577300    6620500          0       4808    1516724
      -/+ buffers/cache:      55768    8142032
      Swap:     16386292          0   16386292
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0      0 5069748  10612 3014060    0    0     0     0    3   26  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    4   22  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    3   18  0  0 100  0
      
      Measure the time of tar command.
      
      $ ls -s test.dat
      1501472 test.dat
      
      $ time tar cvf test.tar test.dat
      real    0m13.388s
      user    0m0.116s
      sys     0m5.304s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                        428     5528345500     5477116080       62749891
      IO              count    delay total
                        338     8078977189
      SWAP            count    delay total
                          0              0
      RECLAIM         count    delay total
                          0              0
      
      - When the system is under heavy memory load,
        memory reclaim may occur.
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0 7159032  49724   1812   3012    0    0     0     0    3   24  0  0 100  0
       0  0 7159032  49724   1812   3012    0    0     0     0    4   24  0  0 100  0
       0  0 7159032  49848   1812   3012    0    0     0     0    3   22  0  0 100  0
      
      In this case, one process uses more than 8GB of memory
      through malloc() and memset().
      
      $ time tar cvf test.tar test.dat
      real    1m38.563s        <-  increased by 85 sec
      user    0m0.140s
      sys     0m7.060s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                       9021     7140446250     7315277975      923201824
      IO              count    delay total
                       8965    90466349669
      SWAP            count    delay total
                          3       21036367
      RECLAIM         count    delay total
                        740    61011951153
      
      In the latter case, the RECLAIM value increases, so taskstats can show
      how much memory reclaim influences turnaround time (TAT).
      Signed-off-by: Keika Kobayashi <kobayashi.kk@ncos.nec.co.jp>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujistu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      873b4771
  6. 13 June 2008, 1 commit
  7. 30 April 2008, 1 commit
  8. 29 April 2008, 1 commit
    • page allocator: smarter retry of costly-order allocations · a41f24ea
      Authored by Nishanth Aravamudan
      Because of page order checks in __alloc_pages(), hugepage (and similarly
      large order) allocations will not retry unless explicitly marked
      __GFP_REPEAT. However, the current retry logic is nearly an infinite
      loop (or until reclaim does no progress whatsoever). For these costly
      allocations, that seems like overkill and could potentially never
      terminate. Mel observed that allowing current __GFP_REPEAT semantics for
      hugepage allocations essentially killed the system. I believe this is
      because we may continue to reclaim small orders of pages all over, but
      never have enough to satisfy the hugepage allocation request. This is
      clearly only a problem for large order allocations, of which hugepages
      are the most obvious (to me).
      
      Modify try_to_free_pages() to indicate how many pages were reclaimed.
      Use that information in __alloc_pages() to eventually fail a large
      __GFP_REPEAT allocation when we've reclaimed an order of pages equal to
      or greater than the allocation's order. This relies on lumpy reclaim
      functioning as advertised. Due to fragmentation, lumpy reclaim may not
      be able to free up the order needed in one invocation, so multiple
      iterations may be required. In other words, the more fragmented memory
      is, the more retry attempts __GFP_REPEAT will make (particularly for
      higher order allocations).
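
      The retry decision then looks roughly like this (a sketch of the
      __alloc_pages() logic; the variable names and the rebalance label are
      illustrative):

          pages_reclaimed += did_some_progress;   /* from try_to_free_pages() */

          do_retry = 0;
          if (!(gfp_mask & __GFP_NORETRY)) {
                  if (order <= PAGE_ALLOC_COSTLY_ORDER) {
                          do_retry = 1;   /* cheap orders: keep trying */
                  } else if (gfp_mask & __GFP_REPEAT) {
                          /* Costly order: give up once we have reclaimed at
                           * least 2^order pages in total. */
                          if (pages_reclaimed < (1 << order))
                                  do_retry = 1;
                  }
          }
          if (do_retry)
                  goto rebalance;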
      
      This changes the semantics of __GFP_REPEAT subtly, but *only* for
      allocations > PAGE_ALLOC_COSTLY_ORDER. With this patch, for those size
      allocations, we will try up to some point (at least 1<<order reclaimed
      pages), rather than forever (which is the case for allocations <=
      PAGE_ALLOC_COSTLY_ORDER).
      
      This change improves the /proc/sys/vm/nr_hugepages interface with a
      follow-on patch that makes pool allocations use __GFP_REPEAT. Rather
      than administrators repeatedly echo'ing a particular value into the
      sysctl, and forcing reclaim into action manually, this change allows for
      the sysctl to attempt a reasonable effort itself. Similarly, dynamic
      pool growth should be more successful under load, as lumpy reclaim can
      try to free up pages, rather than failing right away.
      
      Choosing to reclaim only up to the order of the requested allocation
      strikes a balance between not failing hugepage allocations and returning
      to the caller when it's unlikely to ever succeed. Because of lumpy
      reclaim, if we have freed the order requested, hopefully it has been in
      big chunks and those chunks will allow our allocation to succeed. If
      that isn't the case after freeing up the current order, I don't think it
      is likely to succeed in the future, although it is possible given a
      particular fragmentation pattern.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Tested-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a41f24ea
  9. 28 April 2008, 3 commits
    • mm: have zonelist contains structs with both a zone pointer and zone_idx · dd1a239f
      Authored by Mel Gorman
      Filtering zonelists requires very frequent use of zone_idx().  This is costly
      as it involves a lookup of another structure and a subtraction operation.  As
      the zone_idx is often required, it should be quickly accessible.  The node idx
      could also be stored here if it were found that accessing zone->node is
      significant, which may be the case on workloads where nodemasks are heavily
      used.
      
      This patch introduces a struct zoneref to store a zone pointer and a zone
      index.  The zonelist then consists of an array of these struct zonerefs which
      are looked up as necessary.  Helpers are given for accessing the zone index as
      well as the node index.
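
      The new structure and its accessors are small (sketch following the
      patch's naming; treat the helper bodies as illustrative):

          /* One zonelist entry: the zone plus its cached zone_idx, so
           * filtering never has to dereference the zone just for the index. */
          struct zoneref {
                  struct zone *zone;      /* pointer to the actual zone */
                  int zone_idx;           /* zone_idx(zone), cached at build time */
          };

          static inline struct zone *zonelist_zone(struct zoneref *zref)
          {
                  return zref->zone;
          }

          static inline int zonelist_zone_idx(struct zoneref *zref)
          {
                  return zref->zone_idx;
          }

          static inline int zonelist_node_idx(struct zoneref *zref)
          {
                  return zone_to_nid(zref->zone);
          }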
      
      [kamezawa.hiroyu@jp.fujitsu.com: Suggested struct zoneref instead of embedding information in pointers]
      [hugh@veritas.com: mm-have-zonelist: fix memcg ooms]
      [hugh@veritas.com: just return do_try_to_free_pages]
      [hugh@veritas.com: do_try_to_free_pages gfp_mask redundant]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd1a239f
    • mm: use two zonelist that are filtered by GFP mask · 54a6eb5c
      Authored by Mel Gorman
      Currently a node has two sets of zonelists, one for each zone type in the
      system and a second set for GFP_THISNODE allocations.  Based on the zones
      allowed by a gfp mask, one of these zonelists is selected.  All of these
      zonelists consume memory and occupy cache lines.
      
      This patch replaces the multiple zonelists per-node with two zonelists.  The
      first contains all populated zones in the system, ordered by distance, for
      fallback allocations when the target/preferred node has no free pages.  The
      second contains all populated zones in the node suitable for GFP_THISNODE
      allocations.
      
      An iterator macro called for_each_zone_zonelist() is introduced that iterates
      through each zone allowed by the GFP flags in the selected zonelist.
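
      Callers that used to walk zonelist->zones[] end up looking roughly like
      this (sketch of the iterator usage; the macro arguments follow the
      series, shown for illustration):

          struct zoneref *z;
          struct zone *zone;
          enum zone_type high_zoneidx = gfp_zone(gfp_mask);

          /* Walk only the zones in this zonelist that the gfp mask allows. */
          for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
                  if (!populated_zone(zone))
                          continue;
                  /* ... allocate from or reclaim this zone ... */
          }
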
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54a6eb5c
    • mm: use zonelists instead of zones when direct reclaiming pages · dac1d27b
      Authored by Mel Gorman
      The following patches replace multiple zonelists per node with two zonelists
      that are filtered based on the GFP flags.  The patches as a set fix a bug with
      regard to the use of MPOL_BIND and ZONE_MOVABLE.  With this patchset,
      MPOL_BIND will apply to the two highest zones when the highest zone is
      ZONE_MOVABLE.  This should be considered an alternative fix for the
      MPOL_BIND+ZONE_MOVABLE issue in 2.6.23, replacing the previously discussed
      hack that filters only custom zonelists.
      
      The first patch cleans up an inconsistency where direct reclaim uses
      zonelist->zones where other places use zonelist.
      
      The second patch introduces a helper function node_zonelist() for looking up
      the appropriate zonelist for a GFP mask which simplifies patches later in the
      set.
      
      The third patch defines/remembers the "preferred zone" for numa statistics, as
      it is no longer always the first zone in a zonelist.
      
      The fourth patch replaces multiple zonelists with two zonelists that are
      filtered.  The two zonelists are due to the fact that the memoryless patchset
      introduces a second set of zonelists for __GFP_THISNODE.
      
      The fifth patch introduces helper macros for retrieving the zone and node
      indices of entries in a zonelist.
      
      The final patch introduces filtering of the zonelists based on a nodemask.
      Two zonelists exist per node, one for normal allocations and one for
      __GFP_THISNODE.
      
      Performance results varied depending on the machine configuration.  In real
      workloads the gain/loss will depend on how much the userspace portion of the
      benchmark benefits from having more cache available due to reduced referencing
      of zonelists.
      
      These are the range of performance losses/gains when running against
      2.6.24-rc4-mm1.  The set and these machines are a mix of i386, x86_64 and
      ppc64 both NUMA and non-NUMA.
      			     loss   to  gain
      Total CPU time on Kernbench: -0.86% to  1.13%
      Elapsed   time on Kernbench: -0.79% to  0.76%
      page_test from aim9:         -4.37% to  0.79%
      brk_test  from aim9:         -0.71% to  4.07%
      fork_test from aim9:         -1.84% to  4.60%
      exec_test from aim9:         -0.71% to  1.08%
      
      This patch:
      
      The allocator deals with zonelists which indicate the order in which zones
      should be targeted for an allocation.  Similarly, direct reclaim of pages
      iterates over an array of zones.  For consistency, this patch converts direct
      reclaim to use a zonelist.  No functionality is changed by this patch.  This
      simplifies zonelist iterators in the next patch.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dac1d27b
  10. 20 April 2008, 1 commit
    • nodemask: use new node_to_cpumask_ptr function · c5f59f08
      Authored by Mike Travis
        * Use new node_to_cpumask_ptr.  This creates a pointer to the
          cpumask for a given node.  This definition is in mm patch:
      
      	asm-generic-add-node_to_cpumask_ptr-macro.patch
      
        * Use new set_cpus_allowed_ptr function.
      
      Depends on:
      	[mm-patch]: asm-generic-add-node_to_cpumask_ptr-macro.patch
      	[sched-devel]: sched: add new set_cpus_allowed_ptr function
      	[x86/latest]: x86: add cpus_scnprintf function
      
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Greg Banks <gnb@melbourne.sgi.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c5f59f08
  11. 25 March 2008, 1 commit
  12. 05 March 2008, 2 commits
  13. 08 February 2008, 7 commits
    • per-zone and reclaim enhancements for memory controller: modifies vmscan.c for... · 1cfb419b
      Authored by KAMEZAWA Hiroyuki
      per-zone and reclaim enhancements for memory controller: modifies vmscan.c to isolate global/cgroup lru activity
      
      When the memory controller is in use, there are two levels of memory reclaim:
       1. zone memory reclaim, because of a system/zone memory shortage.
       2. memory cgroup memory reclaim, because a cgroup hits its limit.

      The two can be distinguished by the sc->mem_cgroup parameter
      (the scan_global_lru() macro).

      This patch tries to make the memory cgroup reclaim routine avoid affecting
      system/zone memory reclaim.  It inserts if (scan_global_lru()) checks and
      hooks into the memory cgroup reclaim support functions.

      This helps isolate system LRU activity from group LRU activity and shows
      what additional functions are necessary.
      
       * mem_cgroup_calc_mapped_ratio() ... calculate mapped ratio for cgroup.
       * mem_cgroup_reclaim_imbalance() ... calculate active/inactive balance in
                                              cgroup.
       * mem_cgroup_calc_reclaim_active() ... calculate the number of active pages to
                                      be scanned in this priority in mem_cgroup.
      
       * mem_cgroup_calc_reclaim_inactive() ... calculate the number of inactive pages
                                      to be scanned in this priority in mem_cgroup.
      
       * mem_cgroup_all_unreclaimable() .. checks cgroup's page is all unreclaimable
                                           or not.
       * mem_cgroup_get_reclaim_priority() ...
       * mem_cgroup_note_reclaim_priority() ... record reclaim priority (temporal)
       * mem_cgroup_remember_reclaim_priority()
                                   .... record reclaim priority as
                                        zone->prev_priority.
                                        This value is used for calc reclaim_mapped.
      
      [akpm@linux-foundation.org: fix unused var warning]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Paul Menage <menage@google.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1cfb419b
    • per-zone and reclaim enhancements for memory controller: add scan_global_lru macro · 91a45470
      Authored by KAMEZAWA Hiroyuki
      This is used to detect whether a scan_control scans the global LRU or a
      mem_cgroup LRU.  It is compiled to a constant value (1) when the memory
      controller is not configured.  This should make the meaning obvious.
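
      The macro itself is a one-liner (sketch matching the description above;
      the exact Kconfig symbol of the era is glossed here and should be
      treated as illustrative):

          #ifdef CONFIG_CGROUP_MEM_RES_CTLR      /* memory controller enabled */
          /* Scanning the global LRU iff no memory cgroup was handed to us. */
          #define scan_global_lru(sc)     (!(sc)->mem_cgroup)
          #else
          /* Without the memory controller, every scan is a global-LRU scan. */
          #define scan_global_lru(sc)     (1)
          #endif
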
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Paul Menage <menage@google.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      91a45470
    • memory cgroup enhancements: fix zone handling in try_to_free_mem_cgroup_page · 417eead3
      Authored by KAMEZAWA Hiroyuki
      Because NODE_DATA(node)->node_zonelists[] is guaranteed to contain all
      necessary zones, it is not necessary to use for_each_online_node().

      Moreover, for_each_online_node() makes the reclaim routine always start
      from node 0, which is not good.  This patch makes reclaim start from the
      caller's node and just use the usual (default) zonelist order.
      
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      417eead3
    • kswapd should only wait on IO if there is IO · f1a9ee75
      Authored by Rik van Riel
      The current kswapd (and try_to_free_pages) code has an oddity where the
      code will wait on IO, even if there is no IO in flight.  This problem is
      notable especially when the system scans through many unfreeable pages,
      causing unnecessary stalls in the VM.
      
      Additionally, tasks without __GFP_FS or __GFP_IO in the direct reclaim path
      will sleep if a significant number of pages are encountered that should be
      written out.  This gives kswapd a chance to write out those pages, while
      the direct reclaim task sleeps.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f1a9ee75
    • Memory controller: make charging gfp mask aware · e1a1cd59
      Authored by Balbir Singh
      Nick Piggin pointed out that swap cache and page cache addition routines
      could be called from non GFP_KERNEL contexts.  This patch makes the
      charging routine aware of the gfp context.  Charging might fail if the
      cgroup is over its limit, in which case a suitable error is returned.
      
      This patch was tested on a Powerpc box.  I am still looking at being able
      to test the path, through which allocations happen in non GFP_KERNEL
      contexts.
      
      [kamezawa.hiroyu@jp.fujitsu.com: problem with ZONE_MOVABLE]
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e1a1cd59
    • Memory controller: make page_referenced() cgroup aware · bed7161a
      Authored by Balbir Singh
      Make page_referenced() cgroup aware.  Without this patch, page_referenced()
      can cause a page to be skipped while reclaiming pages.  This patch ensures
      that other cgroups do not hold pages in a particular cgroup hostage.  It
      is required to ensure that shared pages are freed from a cgroup when they
      are not actively referenced from the cgroup that brought them in.
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bed7161a
    • Memory controller: add per cgroup LRU and reclaim · 66e1707b
      Authored by Balbir Singh
      Add the page_cgroup to the per-cgroup LRU.  The reclaim algorithm has
      been modified to make isolate_lru_pages() a pluggable component.  The
      scan_control data structure now accepts the cgroup on behalf of which
      reclaim is carried out.  try_to_free_pages() has been extended to become
      cgroup aware.
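
      In effect, scan_control grows a cgroup pointer and an isolation hook
      (sketch; the field names follow the description and the hook's long
      signature is abbreviated from the patch, so treat it as illustrative):

          struct scan_control_sketch {
                  /* ... existing fields: gfp_mask, may_writepage, ... */

                  /* Which cgroup we reclaim on behalf of; NULL means global. */
                  struct mem_cgroup *mem_cgroup;

                  /* Pluggable isolation: global and per-cgroup reclaim pull
                   * pages off different LRU lists through this hook. */
                  unsigned long (*isolate_pages)(unsigned long nr,
                                  struct list_head *dst, unsigned long *scanned,
                                  int order, int mode, struct zone *zone,
                                  struct mem_cgroup *mem_cont, int active);
          };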
      
      [akpm@linux-foundation.org: fix warning]
      [Lee.Schermerhorn@hp.com: initialize all scan_control's isolate_pages member]
      [bunk@kernel.org: make do_try_to_free_pages() static]
      [hugh@veritas.com: memcgroup: fix try_to_free order]
      [kamezawa.hiroyu@jp.fujitsu.com: this unlock_page_cgroup() is unnecessary]
      Signed-off-by: Pavel Emelianov <xemul@openvz.org>
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Kirill Korotaev <dev@sw.ru>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      66e1707b
  14. 20 October 2007, 1 commit
  15. 19 October 2007, 1 commit
  16. 17 October 2007, 3 commits
    • mm: test and set zone reclaim lock before starting reclaim · d773ed6b
      Authored by David Rientjes
      Introduces new zone flag interface for testing and setting flags:
      
      	int zone_test_and_set_flag(struct zone *zone, zone_flags_t flag)
      
      Instead of setting and clearing ZONE_RECLAIM_LOCKED each time shrink_zone() is
      called, this flag is tested and set before starting zone reclaim.  Zone reclaim
      starts in __alloc_pages() when a zone's watermark fails and the system is in
      zone_reclaim_mode.  If it's already in reclaim, there's no need to start again
      so it is simply considered full for that allocation attempt.
      
      There is a change of behavior with regard to concurrent zone shrinking.  It is
      now possible for try_to_free_pages() or kswapd to already be shrinking a
      particular zone when __alloc_pages() starts zone reclaim.  In this case, it is
      possible for two concurrent threads to invoke shrink_zone() for a single zone.
      
      This change forbids a zone to be in zone reclaim twice, which was always the
      behavior, but allows for concurrent try_to_free_pages() or kswapd shrinking
      when starting zone reclaim.
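
      A sketch of the new interface and its use in zone_reclaim()
      (zone_flags_t and ZONE_RECLAIM_LOCKED come from the companion
      zone-flags patch below; the details are illustrative):

          static inline int zone_test_and_set_flag(struct zone *zone,
                                                   zone_flags_t flag)
          {
                  return test_and_set_bit(flag, &zone->flags);
          }

          int zone_reclaim_sketch(struct zone *zone, gfp_t gfp_mask,
                                  unsigned int order)
          {
                  int ret;

                  /* Already being reclaimed by kswapd or another direct
                   * reclaimer?  Then consider the zone full for this attempt. */
                  if (zone_test_and_set_flag(zone, ZONE_RECLAIM_LOCKED))
                          return 0;

                  ret = __zone_reclaim(zone, gfp_mask, order);
                  zone_clear_flag(zone, ZONE_RECLAIM_LOCKED);
                  return ret;
          }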
      
      Cc: Andrea Arcangeli <andrea@suse.de>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d773ed6b
    • oom: change all_unreclaimable zone member to flags · e815af95
      Authored by David Rientjes
      Convert the int all_unreclaimable member of struct zone to unsigned long
      flags.  This can now be used to specify several different zone flags such as
      all_unreclaimable and reclaim_in_progress, which can now be removed and
      converted to a per-zone flag.
      
      Flags are set and cleared as follows:
      
      	zone_set_flag(struct zone *zone, zone_flags_t flag)
      	zone_clear_flag(struct zone *zone, zone_flags_t flag)
      
      Defines the first zone flags, ZONE_ALL_UNRECLAIMABLE and ZONE_RECLAIM_LOCKED,
      which have the same semantics as the old zone->all_unreclaimable and
      zone->reclaim_in_progress, respectively.  Also converts all current users that
      set or clear either flag to use the new interface.
      
      Helper functions are defined to test the flags:
      
      	int zone_is_all_unreclaimable(const struct zone *zone)
      	int zone_is_reclaim_locked(const struct zone *zone)
      
      All flag operators are of the atomic variety because there are currently
      readers implemented that do not take zone->lock.
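
      The flag infrastructure in sketch form (matching the signatures quoted
      above; the enum values are illustrative):

          typedef enum {
                  ZONE_ALL_UNRECLAIMABLE,         /* was zone->all_unreclaimable */
                  ZONE_RECLAIM_LOCKED,            /* was zone->reclaim_in_progress */
          } zone_flags_t;

          /* Atomic bitops, because some readers do not take zone->lock. */
          static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
          {
                  set_bit(flag, &zone->flags);
          }

          static inline void zone_clear_flag(struct zone *zone, zone_flags_t flag)
          {
                  clear_bit(flag, &zone->flags);
          }

          static inline int zone_is_all_unreclaimable(const struct zone *zone)
          {
                  return test_bit(ZONE_ALL_UNRECLAIMABLE, &zone->flags);
          }

          static inline int zone_is_reclaim_locked(const struct zone *zone)
          {
                  return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
          }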
      
      [akpm@linux-foundation.org: add needed include]
      Cc: Andrea Arcangeli <andrea@suse.de>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e815af95
    • make swappiness safer to use · 4106f83a
      Authored by Andrea Arcangeli
      Swappiness isn't a safe sysctl.  Setting it to 0 for example can hang a
      system.  That's a corner case but even setting it to 10 or lower can waste
      enormous amounts of cpu without making much progress.  We've customers who
      wants to use swappiness but they can't because of the current
      implementation (if you change it so the system stops swapping it really
      stops swapping and nothing works sane anymore if you really had to swap
      something to make progress).
      
      This patch from Kurt Garloff makes swappiness safer to use (no more huge
      cpu usage or hangs with low swappiness values).
      
      I think prev_priority can also be nuked since it wastes 4 bytes per
      zone (that would be an incremental patch, but I'll wait for the
      nr_scan_[in]active counters to be nuked first for similar reasons).
      Clearly somebody at some point noticed how broken that thing was and had
      to add min(priority, prev_priority) to give it some reliability, but they
      didn't go the last mile and nuke prev_priority too.  Calculating distress
      only as a function of the non-racy priority is correct and surely more
      than enough, without having to add randomness into the equation.
      
      Patch is tested on older kernels but it compiles and it's quite simple
      so...
      
      Overall I'm not very satisfied by the swappiness tweak, since it doesn't
      really do anything with the dirty pagecache that may be inactive.  We need
      another kind of tweak that controls the inactive scan and tunes the
      can_writepage feature (not yet in mainline despite having submitted it a
      few times), not only the active one.  That new tweak will tell the kernel
      how hard to scan the inactive list for pure clean pagecache (something the
      mainline kernel isn't capable of yet).  We already have that feature
      working in all our enterprise kernels with the default reasonable tune, or
      they can't even run a readonly backup with tar without triggering huge
      write I/O.  I think it should be available also in mainline later.
      
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Signed-off-by: Andrea Arcangeli <andrea@suse.de>
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4106f83a