1. 24 March 2011, 1 commit
  2. 23 March 2011, 3 commits
  3. 26 February 2011, 1 commit
    • mm: grab rcu read lock in move_pages() · a879bf58
      Authored by Greg Thelen
      The move_pages() usage of find_task_by_vpid() requires rcu_read_lock() to
      prevent free_pid() from reclaiming the pid.
      
      Without this patch, RCU warnings are printed in v2.6.38-rc4 move_pages()
      with:
      
        CONFIG_LOCKUP_DETECTOR=y
        CONFIG_PREEMPT=y
        CONFIG_LOCKDEP=y
        CONFIG_PROVE_LOCKING=y
        CONFIG_PROVE_RCU=y
      
      Previously, migrate_pages() went through a similar transformation
      replacing usage of tasklist_lock with rcu read lock:
      
        commit 55cfaa3c
        Author: Zeng Zhaoming <zengzm.kernel@gmail.com>
        Date:   Thu Dec 2 14:31:13 2010 -0800
      
            mm/mempolicy.c: add rcu read lock to protect pid structure
      
        commit 1e50df39
        Author: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
        Date:   Thu Jan 13 15:46:14 2011 -0800
      
            mempolicy: remove tasklist_lock from migrate_pages
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Zeng Zhaoming <zengzm.kernel@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a879bf58
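      A minimal sketch of the locking pattern described in the commit above (hypothetical helper, not the actual diff; the surrounding move_pages() error handling is elided):

        #include <linux/sched.h>
        #include <linux/rcupdate.h>

        /*
         * Look up the target task under rcu_read_lock() so free_pid() cannot
         * reclaim the pid, and pin the task before leaving the RCU section.
         */
        static struct task_struct *grab_target_task(pid_t pid)
        {
                struct task_struct *task;

                rcu_read_lock();
                task = pid ? find_task_by_vpid(pid) : current;
                if (task)
                        get_task_struct(task);  /* caller does put_task_struct() */
                rcu_read_unlock();

                return task;    /* NULL maps to -ESRCH in the caller */
        }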
  4. 03 February 2011, 2 commits
  5. 26 January 2011, 1 commit
  6. 14 January 2011, 8 commits
    • memcg: fix memory migration of shmem swapcache · 50de1dd9
      Authored by Daisuke Nishimura
      In the current implementation mem_cgroup_end_migration() decides whether
      the page migration has succeeded or not by checking "oldpage->mapping".
      
      But if we are trying to migrate a shmem swapcache, its page->mapping is
      NULL from the beginning, so the check would be invalid.  As a result,
      mem_cgroup_end_migration() assumes the migration has succeeded even if
      it hasn't, so "newpage" would be freed without being uncharged.
      
      This patch fixes it by passing mem_cgroup_end_migration() the result of
      the page migration.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      50de1dd9
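      The shape of the interface change described above, as a rough sketch (parameter names are illustrative):

        /*
         * The caller now reports the migration outcome instead of
         * mem_cgroup_end_migration() guessing it from oldpage->mapping,
         * which is NULL for shmem swapcache pages from the start.
         */
        void mem_cgroup_end_migration(struct mem_cgroup *mem,
                                      struct page *oldpage,
                                      struct page *newpage,
                                      bool migration_ok);

        /* at the end of unmap_and_move(), roughly: */
        /*      mem_cgroup_end_migration(mem, page, newpage, rc == 0);  */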
    • mm: fix hugepage migration · fd4a4663
      Authored by Hugh Dickins
      2.6.37 added an unmap_and_move_huge_page() for memory failure recovery,
      but its anon_vma handling was still based around the 2.6.35 conventions.
      Update it to use page_lock_anon_vma, get_anon_vma, page_unlock_anon_vma,
      drop_anon_vma in the same way as we're now changing unmap_and_move().
      
      I don't particularly like to propose this for stable when I've not seen
      its problems in practice nor tested the solution: but it's clearly out of
      synch at present.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: <stable@kernel.org> [2.6.37, 2.6.36]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd4a4663
    • mm: fix migration hangs on anon_vma lock · 1ce82b69
      Authored by Hugh Dickins
      Increased usage of page migration in mmotm reveals that the anon_vma
      locking in unmap_and_move() has been deficient since 2.6.36 (or even
      earlier).  Review at the time of f1819427
      ("mm: fix hang on anon_vma->root->lock") missed the issue here: the
      anon_vma to which we get a reference may already have been freed back to
      its slab (it is in use when we check page_mapped, but that can change),
      and so its anon_vma->root may be switched at any moment by reuse in
      anon_vma_prepare.
      
      Perhaps we could fix that with a get_anon_vma_unless_zero(), but let's
      not: just rely on page_lock_anon_vma() to do all the hard thinking for us,
      then we don't need any rcu read locking over here.
      
      In removing the rcu_unlock label: since PageAnon is a bit in
      page->mapping, it's impossible for a !page->mapping page to be anon; but
      insert VM_BUG_ON in case the implementation ever changes.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: <stable@kernel.org> [2.6.37, 2.6.36]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1ce82b69
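      A simplified sketch of the scheme the commit describes for unmap_and_move() (error paths and the swap-cache special case elided; treat as illustrative):

        struct anon_vma *anon_vma = NULL;

        if (PageAnon(page)) {
                /*
                 * page_lock_anon_vma() revalidates page->mapping under the
                 * anon_vma lock, so no open-coded rcu_read_lock() is needed.
                 */
                anon_vma = page_lock_anon_vma(page);
                if (anon_vma) {
                        get_anon_vma(anon_vma); /* keep it alive until remap */
                        page_unlock_anon_vma(anon_vma);
                }
        }

        if (!page->mapping) {
                /* PageAnon lives in page->mapping, so this cannot be anon */
                VM_BUG_ON(PageAnon(page));
        }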
    • mm: migration: use rcu_dereference_protected when dereferencing the radix tree... · 29c1f677
      Authored by Mel Gorman
      mm: migration: use rcu_dereference_protected when dereferencing the radix tree slot during file page migration
      
      migrate_pages() -> unmap_and_move() only calls rcu_read_lock() for
      anonymous pages, as introduced by git commit
      989f89c5 ("fix rcu_read_lock() in page
      migraton").  The point of the RCU protection there is part of getting a
      stable reference to anon_vma and is only held for anon pages as file pages
      are locked which is sufficient protection against freeing.
      
      However, while a file page's mapping is being migrated, the radix tree is
      double checked to ensure it is the expected page.  This uses
      radix_tree_deref_slot() -> rcu_dereference() without the RCU lock held,
      triggering the following warning.
      
      [  173.674290] ===================================================
      [  173.676016] [ INFO: suspicious rcu_dereference_check() usage. ]
      [  173.676016] ---------------------------------------------------
      [  173.676016] include/linux/radix-tree.h:145 invoked rcu_dereference_check() without protection!
      [  173.676016]
      [  173.676016] other info that might help us debug this:
      [  173.676016]
      [  173.676016]
      [  173.676016] rcu_scheduler_active = 1, debug_locks = 0
      [  173.676016] 1 lock held by hugeadm/2899:
      [  173.676016]  #0:  (&(&inode->i_data.tree_lock)->rlock){..-.-.}, at: [<c10e3d2b>] migrate_page_move_mapping+0x40/0x1ab
      [  173.676016]
      [  173.676016] stack backtrace:
      [  173.676016] Pid: 2899, comm: hugeadm Not tainted 2.6.37-rc5-autobuild
      [  173.676016] Call Trace:
      [  173.676016]  [<c128cc01>] ? printk+0x14/0x1b
      [  173.676016]  [<c1063502>] lockdep_rcu_dereference+0x7d/0x86
      [  173.676016]  [<c10e3db5>] migrate_page_move_mapping+0xca/0x1ab
      [  173.676016]  [<c10e41ad>] migrate_page+0x23/0x39
      [  173.676016]  [<c10e491b>] buffer_migrate_page+0x22/0x107
      [  173.676016]  [<c10e48f9>] ? buffer_migrate_page+0x0/0x107
      [  173.676016]  [<c10e425d>] move_to_new_page+0x9a/0x1ae
      [  173.676016]  [<c10e47e6>] migrate_pages+0x1e7/0x2fa
      
      This patch introduces radix_tree_deref_slot_protected() which calls
      rcu_dereference_protected().  Users of it must pass in the
      mapping->tree_lock that is protecting this dereference.  Holding the tree
      lock protects against parallel updaters of the radix tree meaning that
      rcu_dereference_protected is allowable.
      
      [akpm@linux-foundation.org: remove unneeded casts]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Milton Miller <miltonm@bga.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@kernel.org>		[2.6.37.early]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      29c1f677
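      Roughly how the double check looks with the new helper (simplified from migrate_page_move_mapping(); holding mapping->tree_lock is what makes rcu_dereference_protected() legitimate):

        void **pslot;

        spin_lock_irq(&mapping->tree_lock);

        pslot = radix_tree_lookup_slot(&mapping->page_tree, page_index(page));

        if (page_count(page) != expected_count ||
            radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
                spin_unlock_irq(&mapping->tree_lock);
                return -EAGAIN;
        }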
    • thp: pmd_trans_huge migrate bugcheck · 500d65d4
      Authored by Andrea Arcangeli
      No pmd_trans_huge should ever materialize in migration ptes areas, because
      we split the hugepage before migration ptes are instantiated.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      500d65d4
    • mm: migration: cleanup migrate_pages API by matching types for offlining and sync · 7f0f2496
      Authored by Mel Gorman
      With the introduction of the boolean sync parameter, the API looks a
      little inconsistent as offlining is still an int.  Convert offlining to a
      bool for the sake of being tidy.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f0f2496
    • mm: migration: allow migration to operate asynchronously and avoid synchronous... · 77f1fe6b
      Authored by Mel Gorman
      mm: migration: allow migration to operate asynchronously and avoid synchronous compaction in the faster path
      
      Migration synchronously waits for writeback if the initial passes fail.
      Callers of memory compaction do not necessarily want this behaviour if the
      caller is latency sensitive or expects that synchronous migration is not
      going to have a significantly better success rate.
      
      This patch adds a sync parameter to migrate_pages() allowing the caller to
      indicate if wait_on_page_writeback() is allowed within migration or not.
      For reclaim/compaction, try_to_compact_pages() is first called
      asynchronously, direct reclaim runs and then try_to_compact_pages() is
      called synchronously as there is a greater expectation that it'll succeed.
      
      [akpm@linux-foundation.org: build/merge fix]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      77f1fe6b
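      A sketch of where the new flag takes effect inside migration (simplified from unmap_and_move() of that era):

        if (PageWriteback(page)) {
                /*
                 * Only wait for writeback if the caller asked for synchronous
                 * migration; asynchronous callers (e.g. the first compaction
                 * attempt) give up on this page rather than stall.
                 */
                if (!sync)
                        goto uncharge;
                wait_on_page_writeback(page);
        }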
    • mm: vmscan: reclaim order-0 and use compaction instead of lumpy reclaim · 3e7d3449
      Authored by Mel Gorman
      Lumpy reclaim is disruptive.  It reclaims a large number of pages and
      ignores the age of the pages it reclaims.  This can incur significant
      stalls and potentially increase the number of major faults.
      
      Compaction has reached the point where it is considered reasonably stable
      (meaning it has passed a lot of testing) and is a potential candidate for
      displacing lumpy reclaim.  This patch introduces an alternative to lumpy
      reclaim when compaction is available, called reclaim/compaction.  The basic
      operation is very simple - instead of selecting a contiguous range of
      pages to reclaim, a number of order-0 pages are reclaimed and then
      compaction is performed later by either kswapd (compact_zone_order()) or direct
      compaction (__alloc_pages_direct_compact()).
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: use conventional task_struct naming]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3e7d3449
  7. 23 12月, 2010 1 次提交
  8. 27 10月, 2010 3 次提交
    • mm: fix error reporting in move_pages() syscall · 70384dc6
      Authored by Gleb Natapov
      The vma returned by find_vma does not necessarily include the target
      address.  If this happens the code tries to follow a page outside of any
      vma and returns ENOENT instead of EFAULT.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70384dc6
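      A self-contained illustration of the check (hypothetical helper; the actual fix lives in the move_pages() helpers):

        #include <linux/mm.h>

        /*
         * find_vma() returns the first VMA that ends above addr, which may
         * start beyond addr, so vm_start must be checked too before the
         * address can be treated as mapped; otherwise report -EFAULT.
         */
        static bool addr_has_vma(struct mm_struct *mm, unsigned long addr)
        {
                struct vm_area_struct *vma = find_vma(mm, addr);

                return vma && addr >= vma->vm_start;
        }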
    • mm: compaction: fix COMPACTPAGEFAILED counting · cf608ac1
      Authored by Minchan Kim
      Presently update_nr_listpages() doesn't have a role.  That's because the list
      passed in is always empty just after calling migrate_pages(), since
      migrate_pages() cleans up the pages which have failed to migrate before
      returning, as of commit aaa994b3:
      
       [PATCH] page migration: handle freeing of pages in migrate_pages()
      
       Do not leave pages on the lists passed to migrate_pages().  Seems that we will
       not need any postprocessing of pages.  This will simplify the handling of
       pages by the callers of migrate_pages().
      
      At that time, we thought we didn't need any postprocessing of pages.  But
      the situation has changed: compaction needs to know the number of pages that
      failed to migrate for the COMPACTPAGEFAILED stat.
      
      This patch makes a new rule for callers of migrate_pages(): they must call
      putback_lru_pages() themselves.  Callers now clean up the list, which gives
      them a chance to postprocess the pages.  [suggested by Christoph Lameter]
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf608ac1
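      A sketch of the new caller contract (variable names and the exact migrate_pages() argument list are illustrative for the kernel of that era):

        int nr_failed;

        nr_failed = migrate_pages(&pagelist, new_page, private, 0);
        if (nr_failed > 0)
                count_vm_events(COMPACTPAGEFAILED, nr_failed);

        /* migrate_pages() no longer empties the list; the caller does it */
        putback_lru_pages(&pagelist);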
    • writeback: remove nonblocking/encountered_congestion references · 1b430bee
      Authored by Wu Fengguang
      This removes more dead code that was somehow missed by commit 0d99519e
      (writeback: remove unused nonblocking and congestion checks).  There is
      no behavior change except for the removal of two entries from one of the
      ext4 tracing interfaces.
      
      The nonblocking checks in ->writepages are no longer used because the
      flusher now prefers to block on get_request_wait() rather than skip inodes on
      IO congestion.  The latter would lead to more seeky IO.
      
      The nonblocking checks in ->writepage are no longer used because it's
      redundant with the WB_SYNC_NONE check.
      
      We no longer set ->nonblocking in VM page out and page migration, because
      a) it's effectively redundant with WB_SYNC_NONE in the current code;
      b) its old semantic of "don't get stuck on request queues" is a misbehaviour:
         it would skip some dirty inodes on congestion and page out others, which
         is unfair in terms of LRU age.
      
      Inspired by Christoph Hellwig. Thanks!
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Sage Weil <sage@newdream.net>
      Cc: Steve French <sfrench@samba.org>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1b430bee
  9. 11 October 2010, 1 commit
    • Fix migration.c compilation on s390 · 3ef8fd7f
      Authored by Andi Kleen
      31-bit s390 doesn't have huge pages and failed with:
      
      > mm/migrate.c: In function 'remove_migration_pte':
      > mm/migrate.c:143:3: error: implicit declaration of function 'pte_mkhuge'
      > mm/migrate.c:143:7: error: incompatible types when assigning to type 'pte_t' from type 'int'
      
      Put that code into an ifdef.
      
      Reported by Heiko Carstens
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      3ef8fd7f
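      The shape of the fix, roughly, inside remove_migration_pte() (on configurations without hugetlb support the pte_mkhuge() call is simply compiled out):

        #ifdef CONFIG_HUGETLB_PAGE
                if (PageHuge(new))
                        pte = pte_mkhuge(pte);
        #endif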
  10. 08 October 2010, 1 commit
    • hugetlb: hugepage migration core · 290408d4
      Authored by Naoya Horiguchi
      This patch extends page migration code to support hugepage migration.
      One of the potential users of this feature is soft offlining, which
      is triggered by corrected memory errors (added by the next patch).
      
      Todo:
      - there are other users of page migration such as memory policy,
        memory hotplug and memory compaction.
        They are not ready for hugepage support for now.
      
      ChangeLog since v4:
      - define migrate_huge_pages()
      - remove changes on isolation/putback_lru_page()
      
      ChangeLog since v2:
      - refactor isolate/putback_lru_page() to handle hugepage
      - add comment about race on unmap_and_move_huge_page()
      
      ChangeLog since v1:
      - divide migration code path for hugepage
      - define routine checking migration swap entry for hugetlb
      - replace "goto" with "if/else" in remove_migration_pte()
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      290408d4
  11. 10 August 2010, 3 commits
  12. 28 May 2010, 1 commit
    • memcg: fix mis-accounting of file mapped racy with migration · ac39cf8c
      Authored by akpm@linux-foundation.org
      FILE_MAPPED per memcg of migrated file cache is not properly updated,
      because our hook in page_add_file_rmap() can't know to which memcg
      FILE_MAPPED should be counted.
      
      Basically, this patch is for fixing the bug but includes some big changes
      to fix up other messes.
      
      Now, when migrating a mapped file, events happen in the following sequence.
      
       1. allocate a new page.
       2. get memcg of an old page.
       3. charge against a new page before migration. But at this point,
          no changes to new page's page_cgroup, no commit for the charge.
          (IOW, PCG_USED bit is not set.)
       4. page migration replaces radix-tree, old-page and new-page.
       5. page migration remaps the new page if the old page was mapped.
       6. Here, the new page is unlocked.
       7. memcg commits the charge for newpage, Mark the new page's page_cgroup
          as PCG_USED.
      
      Because "commit" happens after page-remap, we can count FILE_MAPPED
      at "5", because we should avoid to trust page_cgroup->mem_cgroup.
      if PCG_USED bit is unset.
      (Note: memcg's LRU removal code does that but LRU-isolation logic is used
       for helping it. When we overwrite page_cgroup->mem_cgroup, page_cgroup is
       not on LRU or page_cgroup->mem_cgroup is NULL.)
      
      We can lose file_mapped accounting information at 5 because FILE_MAPPED
      is updated only when mapcount changes 0->1. So we should catch it.
      
      BTW, historically, the above implementation comes from handling migration
      failure of anonymous pages. Because we charge both the old page and the new
      page with mapcount=0, we can't catch
        - the page being really freed before remap.
        - migration failing but the page being freed before remap
      or other corner cases.
      
      New migration sequence with memcg is:
      
       1. allocate a new page.
       2. mark PageCgroupMigration to the old page.
       3. charge against a new page onto the old page's memcg. (here, new page's pc
          is marked as PageCgroupUsed.)
       4. page migration replaces radix-tree, page table, etc...
       5. At remapping, the new page's page_cgroup is now marked as "USED"
          We can catch the 0->1 event and FILE_MAPPED will be properly updated.

          And after this page is unlocked, a SWAPOUT event or the page being
          freed by unmap() can be caught.
      
       7. Clear PageCgroupMigration of the old page.
      
      So, FILE_MAPPED will be correctly updated.
      
      Then, what is the MIGRATION flag for?
        Without it, at migration failure, we may have to charge the old page again
        because it may be fully unmapped. "charge" means that we have to dive into
        memory reclaim or something complicated. So, it's better to avoid
        charging it again. Before this patch, __commit_charge() was working for
        both of the old/new pages and fixed everything up. But this technique had some
        race conditions around FILE_MAPPED and SWAPOUT etc...
        Now, the kernel uses the MIGRATION flag and doesn't uncharge the old page until
        the end of migration.
      
      I hope this change will make memcg's page migration much simpler.  This
      page migration has caused several troubles.  It is worth adding a flag for
      the sake of simplification.
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Reported-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac39cf8c
  13. 25 May 2010, 6 commits
    • mm: compaction: memory compaction core · 748446bb
      Authored by Mel Gorman
      This patch is the core of a mechanism which compacts memory in a zone by
      relocating movable pages towards the end of the zone.
      
      A single compaction run involves a migration scanner and a free scanner.
      Both scanners operate on pageblock-sized areas in the zone.  The migration
      scanner starts at the bottom of the zone and searches for all movable
      pages within each area, isolating them onto a private list called
      migratelist.  The free scanner starts at the top of the zone and searches
      for suitable areas, consuming the free pages within them and making them
      available for the migration scanner.  The pages isolated for migration are
      then migrated to the newly isolated free pages.
      
      [aarcange@redhat.com: Fix unsafe optimisation]
      [mel@csn.ul.ie: do not schedule work on other CPUs for compaction]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      748446bb
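      A heavily simplified sketch of a single compaction run as described above (the real compact_zone() keeps scanner state in struct compact_control and has watermark and termination checks):

        while (compact_finished(zone, cc) == COMPACT_CONTINUE) {
                /* migration scanner: isolate movable pages from the low pfns */
                if (!isolate_migratepages(zone, cc))
                        continue;

                /*
                 * The free scanner supplies destination pages from the high
                 * end of the zone through the compaction_alloc() callback.
                 */
                migrate_pages(&cc->migratepages, compaction_alloc,
                              (unsigned long)cc, 0);
                cc->nr_migratepages = 0;
        }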
    • mm: migration: allow the migration of PageSwapCache pages · 3fe2011f
      Authored by Mel Gorman
      PageAnon pages that are unmapped may or may not have an anon_vma so are
      not currently migrated.  However, a swap cache page can be migrated and
      fits this description.  This patch identifies swap cache pages and allows
      them to be migrated, but ensures that no attempt is made to remap pages in a
      way that would potentially try to access an already freed anon_vma.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3fe2011f
    • mm: migration: do not try to migrate unmapped anonymous pages · 67b9509b
      Authored by Mel Gorman
      rmap_walk_anon() was triggering errors in memory compaction that look like
      use-after-free errors.  The problem is that between the page being
      isolated from the LRU and rcu_read_lock() being taken, the mapcount of the
      page dropped to 0 and the anon_vma gets freed.  This can happen during
      memory compaction if pages being migrated belong to a process that exits
      before migration completes.  Hence, the use-after-free race looks like
      
       1. Page isolated for migration
       2. Process exits
       3. page_mapcount(page) drops to zero so anon_vma was no longer reliable
       4. unmap_and_move() takes the rcu_lock but the anon_vma is already garbage
       5. call try_to_unmap, which looks up the anon_vma and "locks" it, but the lock
          is garbage.
      
      This patch checks the mapcount after the rcu lock is taken.  If the
      mapcount is zero, the anon_vma is assumed to be freed and no further
      action is taken.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      67b9509b
    • mm: migration: share the anon_vma ref counts between KSM and page migration · 7f60c214
      Authored by Mel Gorman
      For clarity of review, KSM and page migration have separate refcounts on
      the anon_vma.  While clear, this is a waste of memory.  This patch gets
      KSM and page migration to share their toys in a spirit of harmony.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f60c214
    • mm: migration: take a reference to the anon_vma before migrating · 3f6c8272
      Authored by Mel Gorman
      This patchset is a memory compaction mechanism that reduces external
      fragmentation by moving GFP_MOVABLE pages to a smaller number of
      pageblocks.  The term "compaction" was chosen as there are a number of
      mechanisms that are not mutually exclusive that can be used to defragment
      memory.  For example, lumpy reclaim is a form of defragmentation as was
      slub "defragmentation" (really a form of targeted reclaim).  Hence, this
      is called "compaction" to distinguish it from other forms of
      defragmentation.
      
      In this implementation, a full compaction run involves two scanners
      operating within a zone - a migration and a free scanner.  The migration
      scanner starts at the beginning of a zone and finds all movable pages
      within one pageblock_nr_pages-sized area and isolates them on a
      migratepages list.  The free scanner begins at the end of the zone and
      searches on a per-area basis for enough free pages to migrate all the
      pages on the migratepages list.  As each area is respectively migrated or
      exhausted of free pages, the scanners are advanced one area.  A compaction
      run completes within a zone when the two scanners meet.
      
      This method is a bit primitive but is easy to understand and greater
      sophistication would require maintenance of counters on a per-pageblock
      basis.  This would have a big impact on allocator fast-paths to improve
      compaction which is a poor trade-off.
      
      It also does not try to relocate virtually contiguous pages to be physically
      contiguous.  However, assuming transparent hugepages were in use, a
      hypothetical khugepaged might reuse compaction code to isolate free pages,
      split them and relocate userspace pages for promotion.
      
      Memory compaction can be triggered in one of three ways.  It may be
      triggered explicitly by writing any value to /proc/sys/vm/compact_memory
      and compacting all of memory.  It can be triggered on a per-node basis by
      writing any value to /sys/devices/system/node/nodeN/compact where N is the
      node ID to be compacted.  When a process fails to allocate a high-order
      page, it may compact memory in an attempt to satisfy the allocation
      instead of entering direct reclaim.  Explicit compaction does not finish
      until the two scanners meet and direct compaction ends if a suitable page
      becomes available that would meet watermarks.
      
      The series is in 14 patches.  The first three are not "core" to the series
      but are important pre-requisites.
      
      Patch 1 reference counts anon_vma for rmap_walk_anon(). Without this
      	patch, it's possible to use anon_vma after free if the caller is
      	not holding a VMA or mmap_sem for the pages in question. While
      	there should be no existing user that causes this problem,
      	it's a requirement for memory compaction to be stable. The patch
      	is at the start of the series for bisection reasons.
      Patch 2 merges the KSM and migrate counts. It could be merged with patch 1
      	but would be slightly harder to review.
      Patch 3 skips over unmapped anon pages during migration as there are no
      	guarantees about the anon_vma existing. There is a window between
      	when a page was isolated and migration started during which anon_vma
      	could disappear.
      Patch 4 notes that PageSwapCache pages can still be migrated even if they
      	are unmapped.
      Patch 5 allows CONFIG_MIGRATION to be set without CONFIG_NUMA
      Patch 6 exports an "unusable free space index" via debugfs. It's
      	a measure of external fragmentation that takes the size of the
      	allocation request into account. It can also be calculated from
      	userspace so can be dropped if requested
      Patch 7 exports a "fragmentation index" which only has meaning when an
      	allocation request fails. It determines if an allocation failure
      	would be due to a lack of memory or external fragmentation.
      Patch 8 moves the definition for LRU isolation modes for use by compaction
      Patch 9 is the compaction mechanism although it's unreachable at this point
      Patch 10 adds a means of compacting all of memory with a proc trigger
      Patch 11 adds a means of compacting a specific node with a sysfs trigger
      Patch 12 adds "direct compaction" before "direct reclaim" if it is
      	determined there is a good chance of success.
      Patch 13 adds a sysctl that allows tuning of the threshold at which the
      	kernel will compact or direct reclaim
      Patch 14 temporarily disables compaction if an allocation failure occurs
      	after compaction.
      
      Testing of compaction was in three stages.  For the test, debugging,
      preempt, the sleep watchdog and lockdep were all enabled but nothing nasty
      popped out.  min_free_kbytes was tuned as recommended by hugeadm to help
      fragmentation avoidance and high-order allocations.  It was tested on X86,
      X86-64 and PPC64.
      
      The first test represents one of the easiest cases that can be faced for
      lumpy reclaim or memory compaction.
      
      1. Machine freshly booted and configured for hugepage usage with
      	a) hugeadm --create-global-mounts
      	b) hugeadm --pool-pages-max DEFAULT:8G
      	c) hugeadm --set-recommended-min_free_kbytes
      	d) hugeadm --set-recommended-shmmax
      
      	The min_free_kbytes here is important. Anti-fragmentation works best
      	when pageblocks don't mix. hugeadm knows how to calculate a value that
      	will significantly reduce the worst of external-fragmentation-related
      	events as reported by the mm_page_alloc_extfrag tracepoint.
      
      2. Load up memory
      	a) Start updatedb
      	b) Create, in parallel, X files of pagesize*128 in size. Wait
      	   until files are created. By parallel, I mean that 4096 instances
      	   of dd were launched, one after the other using &. The crude
      	   objective being to mix filesystem metadata allocations with
      	   the buffer cache.
      	c) Delete every second file so that pageblocks are likely to
      	   have holes
      	d) kill updatedb if it's still running
      
      	At this point, the system is quiet, memory is full but it's full with
      	clean filesystem metadata and clean buffer cache that is unmapped.
      	This is readily migrated or discarded so you'd expect lumpy reclaim
      	to have no significant advantage over compaction but this is at
      	the POC stage.
      
      3. In increments, attempt to allocate 5% of memory as hugepages.
      	   Measure how long it took, how successful it was, how many
      	   direct reclaims took place and how many compactions. Note
      	   the compaction figures might not fully add up as compactions
      	   can take place for orders other than the hugepage size
      
      X86				vanilla		compaction
      Final page count                    913                916 (attempted 1002)
      pages reclaimed                   68296               9791
      
      X86-64				vanilla		compaction
      Final page count:                   901                902 (attempted 1002)
      Total pages reclaimed:           112599              53234
      
      PPC64				vanilla		compaction
      Final page count:                    93                 94 (attempted 110)
      Total pages reclaimed:           103216              61838
      
      There was not a dramatic improvement in success rates but it wouldn't be
      expected in this case either.  What was important is that fewer pages were
      reclaimed in all cases reducing the amount of IO required to satisfy a
      huge page allocation.
      
      The second tests were all performance related - kernbench, netperf, iozone
      and sysbench.  None showed anything too remarkable.
      
      The last test was a high-order allocation stress test.  Many kernel
      compiles are started to fill memory with a pressured mix of unmovable and
      movable allocations.  During this, an attempt is made to allocate 90% of
      memory as huge pages - one at a time with small delays between attempts to
      avoid flooding the IO queue.
      
                                                   vanilla   compaction
      Percentage of request allocated X86               98           99
      Percentage of request allocated X86-64            95           98
      Percentage of request allocated PPC64             55           70
      
      This patch:
      
      rmap_walk_anon() does not use page_lock_anon_vma() for looking up and
      locking an anon_vma and it does not appear to have sufficient locking to
      ensure the anon_vma does not disappear from under it.
      
      This patch copies an approach used by KSM to take a reference on the
      anon_vma while pages are being migrated.  This should prevent rmap_walk()
      running into nasty surprises later because anon_vma has been freed.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3f6c8272
    • mm: remove return value of putback_lru_pages() · e13861d8
      Authored by Minchan Kim
      putback_lru_page() can never fail, so the count of "the number of pages
      put back" doesn't matter.

      In addition, users of this function don't use the return value.
      
      Let's remove unnecessary code.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e13861d8
  14. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Authored by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surrounding.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build test were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
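      A trivial example of the kind of edit the sweep produces (hypothetical file): code that uses the gfp/slab facilities now names the headers it needs instead of relying on them leaking in via percpu.h:

        #include <linux/gfp.h>  /* GFP_KERNEL */
        #include <linux/slab.h> /* kmalloc(), kfree() */

        static int *alloc_one_counter(void)
        {
                return kmalloc(sizeof(int), GFP_KERNEL);
        }

        static void free_one_counter(int *counter)
        {
                kfree(counter);
        }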
  15. 07 March 2010, 1 commit
  16. 22 February 2010, 1 commit
    • mm: Make copy_from_user() in migrate.c statically predictable · 87b8d1ad
      Authored by H. Peter Anvin
      x86-32 has had a static test for copy_from_user() overflow for a while.
      This test currently fails in mm/migrate.c resulting in an
      allyesconfig/allmodconfig build failure on x86-32:
      
      In function ‘copy_from_user’,
          inlined from ‘do_pages_stat’ at
          /home/hpa/kernel/git/mm/migrate.c:1012:
      /home/hpa/kernel/git/arch/x86/include/asm/uaccess_32.h:212: error:
          call to ‘copy_from_user_overflow’ declared
      
      Make the logic more explicit and therefore easier for gcc to
      understand.
      
      v2: rewrite the loop entirely using a more normal structure for a
          chunked-data loop (Linus Torvalds)
      Reported-by: Len Brown <lenb@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Reviewed-and-Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Arjan van de Ven <arjan@linux.kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      87b8d1ad
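      A simplified sketch of the v2 chunked loop (constant and helper names illustrative): copying a fixed, compile-time-bounded amount per iteration is what makes the copy_from_user() size statically predictable for the x86-32 overflow check:

        #define PAGES_STAT_CHUNK 16

        while (nr_pages) {
                const void __user *chunk_pages[PAGES_STAT_CHUNK];
                int chunk_status[PAGES_STAT_CHUNK];
                unsigned long chunk_nr = min_t(unsigned long, nr_pages,
                                               PAGES_STAT_CHUNK);

                /* bounded copy with a compile-time-known maximum size */
                if (copy_from_user(chunk_pages, pages,
                                   chunk_nr * sizeof(*chunk_pages)))
                        break;

                do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status);

                if (copy_to_user(status, chunk_status,
                                 chunk_nr * sizeof(*status)))
                        break;

                pages += chunk_nr;
                status += chunk_nr;
                nr_pages -= chunk_nr;
        }
        return nr_pages ? -EFAULT : 0;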
  17. 21 February 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Authored by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
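      The shape of the API change (illustrative prototypes; every architecture's implementation and all call sites were converted the same way):

        /* before: only the PTE value is available to the implementation */
        void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
                              pte_t pte);

        /* after: the mapped page-table pointer is passed instead, so an
         * implementation can reach the entry itself */
        void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
                              pte_t *ptep);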
  18. 07 February 2010, 1 commit
  19. 16 December 2009, 3 commits
    • mm: remove unevictable_migrate_page function · 418b27ef
      Authored by Lee Schermerhorn
      unevictable_migrate_page() in mm/internal.h is a relic of the since
      removed UNEVICTABLE_LRU Kconfig option.  This patch removes the function
      and open codes the test in migrate_page_copy().
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      418b27ef
    • ksm: memory hotremove migration only · 62b61f61
      Authored by Hugh Dickins
      The previous patch enables page migration of ksm pages, but that soon gets
      into trouble: not surprising, since we're using the ksm page lock to lock
      operations on its stable_node, but page migration switches the page whose
      lock is to be used for that.  Another layer of locking would fix it, but
      do we need that yet?
      
      Do we actually need page migration of ksm pages?  Yes, memory hotremove
      needs to offline sections of memory: and since we stopped allocating ksm
      pages with GFP_HIGHUSER, they will tend to be GFP_HIGHUSER_MOVABLE
      candidates for migration.
      
      But KSM is currently unconscious of NUMA issues, happily merging pages
      from different NUMA nodes: at present the rule must be, not to use
      MADV_MERGEABLE where you care about NUMA.  So no, NUMA page migration of
      ksm pages does not make sense yet.
      
      So, to complete support for ksm swapping we need to make hotremove safe.
      ksm_memory_callback() takes ksm_thread_mutex when MEM_GOING_OFFLINE and
      releases it when MEM_OFFLINE or MEM_CANCEL_OFFLINE.  But if mapped pages
      are freed before migration reaches them, stable_nodes may be left still
      pointing to struct pages which have been removed from the system: the
      stable_node needs to identify a page by pfn rather than page pointer, then
      it can safely prune them when MEM_OFFLINE.
      
      And make NUMA migration skip PageKsm pages where it skips PageReserved.
      But it's only when we reach unmap_and_move() that the page lock is taken
      and we can be sure that raised pagecount has prevented a PageAnon from
      being upgraded: so add offlining arg to migrate_pages(), to migrate ksm
      page when offlining (has sufficient locking) but reject it otherwise.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62b61f61
    • ksm: rmap_walk to remove_migation_ptes · e9995ef9
      Authored by Hugh Dickins
      A side-effect of making ksm pages swappable is that they have to be placed
      on the LRUs: which then exposes them to isolate_lru_page() and hence to
      page migration.
      
      Add rmap_walk() for remove_migration_ptes() to use: rmap_walk_anon() and
      rmap_walk_file() in rmap.c, but rmap_walk_ksm() in ksm.c.  Perhaps some
      consolidation with existing code is possible, but don't attempt that yet
      (try_to_unmap needs to handle nonlinears, but migration pte removal does
      not).
      
      rmap_walk() is sadly less general than it appears: rmap_walk_anon(), like
      remove_anon_migration_ptes() which it replaces, avoids calling
      page_lock_anon_vma(), because that includes a page_mapped() test which
      fails when all migration ptes are in place.  That was valid when NUMA page
      migration was introduced (holding mmap_sem provided the missing guarantee
      that anon_vma's slab had not already been destroyed), but I believe not
      valid in the memory hotremove case added since.
      
      For now do the same as before, and consider the best way to fix that
      unlikely race later on.  When fixed, we can probably use rmap_walk() on
      hwpoisoned ksm pages too: for now, they remain among hwpoison's various
      exceptions (its PageKsm test comes before the page is locked, but its
      page_lock_anon_vma fails safely if an anon gets upgraded).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9995ef9
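      A sketch of the dispatch described above (simplified; the rmap_one callback signature follows the conventions of that era):

        int rmap_walk(struct page *page,
                      int (*rmap_one)(struct page *, struct vm_area_struct *,
                                      unsigned long, void *),
                      void *arg)
        {
                VM_BUG_ON(!PageLocked(page));

                if (unlikely(PageKsm(page)))
                        return rmap_walk_ksm(page, rmap_one, arg);
                else if (PageAnon(page))
                        return rmap_walk_anon(page, rmap_one, arg);
                else
                        return rmap_walk_file(page, rmap_one, arg);
        }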