1. 16 December 2009, 40 commits
    • memcg: add accessor to mem_cgroup.css · d324236b
      Authored by Wu Fengguang
      So that an outside user can free the reference count grabbed by
      try_get_mem_cgroup_from_page().
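      
      A minimal usage sketch (kernel C; assumes the accessor added here is
      named mem_cgroup_css(), per the patch title):
      
              struct mem_cgroup *memcg = try_get_mem_cgroup_from_page(page);
              if (memcg) {
                      /* ... inspect the cgroup ... */
                      css_put(mem_cgroup_css(memcg));  /* drop the grabbed ref */
              }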
      
      CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      CC: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      CC: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      CC: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      d324236b
    • memcg: rename and export try_get_mem_cgroup_from_page() · e42d9d5d
      Authored by Wu Fengguang
      So that the hwpoison injector can get the mem_cgroup for an arbitrary
      page and thus know whether it is owned by some mem_cgroup task(s).
      
      [AK: Merged with latest git tree]
      
      CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      CC: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      CC: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      CC: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      e42d9d5d
    • HWPOISON: add page flags filter · 478c5ffc
      Authored by Wu Fengguang
      When specified, only poison pages if ((page_flags & mask) == value).
      
      -       corrupt-filter-flags-mask
      -       corrupt-filter-flags-value
      
      This allows stress testing of many kinds of pages.
      
      Strictly speaking, poisoning buddy pages requires taking the zone lock,
      to avoid setting PG_hwpoison on a "was buddy but now allocated to
      someone" page.  However, we can get away with doing nothing, because we
      set PG_locked at the beginning, which prevents the page allocator from
      allocating the page to anyone.  (It will BUG() on the unexpected
      PG_locked, which is fine for hwpoison testing.)
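      
      A sketch of the resulting check (illustrative kernel C; the helper and
      variable names, including stable_page_flags(), are assumptions standing
      in for whatever backs the corrupt-filter-flags-* files):
      
              static int hwpoison_filter_flags(struct page *p)
              {
                      if (!hwpoison_filter_flags_mask)
                              return 0;               /* filter disabled */
                      if ((stable_page_flags(p) & hwpoison_filter_flags_mask) ==
                                      hwpoison_filter_flags_value)
                              return 0;               /* matches: poison it */
                      return -EINVAL;                 /* skip this page */
              }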
      
      [AK: Add select PROC_PAGE_MONITOR to satisfy dependency]
      
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      478c5ffc
    • HWPOISON: limit hwpoison injector to known page types · 31d3d348
      Authored by Wu Fengguang
      __memory_failure()'s workflow is
      
      	set PG_hwpoison
      	//...
      	unset PG_hwpoison if didn't pass hwpoison filter
      
      That could kill an unrelated process if it happens to page fault on the
      page while the (temporary) PG_hwpoison is set.  The race window should
      be large enough to show up in stress tests.
      
      Fix it by grabbing the page and checking the filter at inject time.
      This also avoids the very noisy "Injecting memory failure..." messages.
      
      - we don't touch madvise() based injection, because the filters are
        generally not necessary for it.
      - if we want to apply the filters to h/w aided injection, we'd better
        rearrange the logic in __memory_failure() rather than do it in this
        patch.
      
      AK: fix documentation, use drain all, cleanups
      
      CC: Haicheng Li <haicheng.li@intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      31d3d348
    • HWPOISON: add fs/device filters · 7c116f2b
      Authored by Wu Fengguang
      Filesystem data/metadata pages are the trickiest to isolate; getting
      them right requires careful code review and stress testing.
      
      The fs/device filter helps target the stress tests at specific
      filesystem pages.  The filter condition is the block device's
      major/minor numbers:
              - corrupt-filter-dev-major
              - corrupt-filter-dev-minor
      When specified (non -1), only page cache pages that belong to that
      device will be poisoned.
      
      The filters are checked reliably on the locked and refcounted page.
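      
      A sketch of how such a check can look (illustrative only; the helper
      and parameter names are assumptions):
      
              static int hwpoison_filter_dev(struct page *p)
              {
                      struct address_space *mapping;
                      dev_t dev;
      
                      if (hwpoison_filter_dev_major == ~0U &&
                          hwpoison_filter_dev_minor == ~0U)
                              return 0;               /* filter disabled */
                      mapping = page_mapping(p);
                      if (!mapping || !mapping->host || !mapping->host->i_sb)
                              return -EINVAL;         /* not a page cache page */
                      dev = mapping->host->i_sb->s_dev;
                      if (hwpoison_filter_dev_major != ~0U &&
                          hwpoison_filter_dev_major != MAJOR(dev))
                              return -EINVAL;
                      if (hwpoison_filter_dev_minor != ~0U &&
                          hwpoison_filter_dev_minor != MINOR(dev))
                              return -EINVAL;
                      return 0;
              }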
      
      Haicheng: clear PG_hwpoison and drop bad page count if filter not OK
      AK: Add documentation
      
      CC: Haicheng Li <haicheng.li@intel.com>
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      7c116f2b
    • HWPOISON: return 0 to indicate success reliably · 138ce286
      Authored by Wu Fengguang
      Return 0 to indicate success, when
      - action result is RECOVERED or DELAYED
      - no extra page reference
      
      Note that dirty swapcache pages are kept in the swapcache, so they can
      have one extra reference.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      138ce286
    • HWPOISON: make semantics of IGNORED/DELAYED clear · d95ea51e
      Authored by Wu Fengguang
      Change semantics for
      - IGNORED: not handled; it may well be _unsafe_
      - DELAYED: to be handled later; it is _safe_
      
      With this change,
      - IGNORED/FAILED mean (maybe) Error
      - DELAYED/RECOVERED mean Success
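      
      In code, the success test from the previous patch then reduces to
      something like (illustrative):
      
              /* DELAYED and RECOVERED count as success; IGNORED and FAILED
               * count as (possible) error. */
              return (res == RECOVERED || res == DELAYED) ? 0 : -EBUSY;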
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      d95ea51e
    • HWPOISON: Add unpoisoning support · 847ce401
      Authored by Wu Fengguang
      The unpoisoning interface is useful for stress testing tools to reclaim
      poisoned pages (to prevent OOM).
      
      There is no hardware-level unpoisoning, so this cannot be used for real
      memory errors, only for software-injected errors.
      
      Note that it may leak pages silently - those that had been removed from
      the LRU cache but not isolated from the page cache/swap cache at
      hwpoison time.  In particular, a stress test on dirty swap cache pages
      should reboot the system before memory is exhausted.
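      
      The core of unpoisoning amounts to something like this sketch (not the
      exact patch; locking and corner cases elided):
      
              if (TestClearPageHWPoison(page)) {
                      atomic_long_dec(&mce_bad_pages);
                      /* give back the reference memory_failure() held */
                      put_page(page);
              }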
      
      AK: Fix comments, add documentation, add printks, rename symbol
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      847ce401
    • HWPOISON: detect free buddy pages explicitly · 8d22ba1b
      Authored by Wu Fengguang
      Most free pages in the buddy system have no PG_buddy set: only the
      first page of each free block carries it.  Introduce
      is_free_buddy_page() to detect free pages reliably.
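      
      A sketch of the approach (close to, but not necessarily identical with,
      the final code):
      
              bool is_free_buddy_page(struct page *page)
              {
                      struct zone *zone = page_zone(page);
                      unsigned long pfn = page_to_pfn(page);
                      unsigned long flags;
                      int order;
      
                      spin_lock_irqsave(&zone->lock, flags);
                      for (order = 0; order < MAX_ORDER; order++) {
                              /* head of the buddy block at this order */
                              struct page *head = page - (pfn & ((1 << order) - 1));
      
                              if (PageBuddy(head) && page_order(head) >= order)
                                      break;
                      }
                      spin_unlock_irqrestore(&zone->lock, flags);
                      return order < MAX_ORDER;
              }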
      
      CC: Nick Piggin <npiggin@suse.de>
      CC: Mel Gorman <mel@linux.vnet.ibm.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      8d22ba1b
    • HWPOISON: remove the free buddy page handler · 95d01fc6
      Authored by Wu Fengguang
      Buddy pages are already handled at the very beginning, so remove the
      redundant code.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      95d01fc6
    • HWPOISON: introduce delete_from_lru_cache() · dc2a1cbf
      Authored by Wu Fengguang
      Introduce delete_from_lru_cache() to
      - clear PG_active and PG_unevictable, to avoid complaints at unpoison
        time
      - move the isolate_lru_page() call back into the handlers instead of
        the entrance of __memory_failure(); this is more hwpoison-filter
        friendly
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      dc2a1cbf
    • HWPOISON: comment dirty swapcache pages · 71f72525
      Authored by Wu Fengguang
      AK: Improve comment
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      71f72525
    • db0480b3
    • HWPOISON: abort on failed unmap · 1668bfd5
      Authored by Wu Fengguang
      Don't try to isolate a still mapped page. Otherwise we will hit the
      BUG_ON(page_mapped(page)) in __remove_from_page_cache().
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      1668bfd5
    • HWPOISON: Turn ref argument into flags argument · 82ba011b
      Authored by Andi Kleen
      Now that "ref" is just a boolean turn it into
      a flags argument. First step is only a single flag
      that makes the code's intention more clear, but more
      may follow.
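      
      Schematically (a sketch; MF_COUNT_INCREASED is the single flag, the
      call site is illustrative):
      
              #define MF_COUNT_INCREASED      (1 << 0)
      
              /* before: __memory_failure(pfn, trapno, 1);  -- what is "1"? */
              __memory_failure(pfn, trapno, MF_COUNT_INCREASED);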
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      82ba011b
    • HWPOISON: avoid grabbing the page count multiple times during madvise injection · bd1ce5f9
      Authored by Wu Fengguang
      If the page is referenced twice, in madvise_hwpoison() and in
      __memory_failure(), remove_mapping() will fail because it expects
      page_count == 2.  Fix it by not grabbing an extra page count in
      __memory_failure().
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      bd1ce5f9
    • HWPOISON: return ENXIO on invalid page number · a7560fc8
      Authored by Wu Fengguang
      Use a different errno than the usual EIO for invalid page numbers.
      This is mainly for better reporting for the injector.
      
      This also avoids calling action_result() with an invalid pfn.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      a7560fc8
    • HWPOISON: remove the anonymous entry · 9b9a29ec
      Authored by Wu Fengguang
      (PG_swapbacked && !PG_lru) pages should not happen.
      Better to treat them as unknown pages.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      9b9a29ec
    • HWPOISON: Be more aggressive at freeing non LRU caches · 588f9ce6
      Authored by Andi Kleen
      shake_page handles more types of page caches than lru_drain_all()
      
      - per cpu page allocator pages
      - per CPU LRU
      
      It stops early once the page becomes free.
      
      Used in follow-on patches.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      588f9ce6
    • nommu: fix malloc performance by adding uninitialized flag · ea637639
      Authored by Jie Zhang
      The NOMMU code currently clears all anonymous mmapped memory.  While this
      is what we want in the default case, all memory allocation from userspace
      under NOMMU has to go through this interface, including malloc() which is
      allowed to return uninitialized memory.  This can easily be a significant
      performance penalty.  So for constrained embedded systems where
      security is irrelevant, allow people to avoid clearing memory
      unnecessarily.
      
      This also alters the ELF-FDPIC binfmt such that it obtains uninitialised
      memory for the brk and stack region.
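      
      From userspace the opt-in looks roughly like this (a sketch;
      MAP_UNINITIALIZED is honoured only on kernels built to allow it, and is
      ignored - i.e. memory is still cleared - everywhere else):
      
              #include <stddef.h>
              #include <sys/mman.h>
      
              #ifndef MAP_UNINITIALIZED
              #define MAP_UNINITIALIZED 0x4000000   /* from the uapi headers */
              #endif
      
              /* Anonymous memory without the zeroing pass (NOMMU only). */
              static void *alloc_uninitialized(size_t len)
              {
                      return mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALIZED,
                                  -1, 0);
              }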
      Signed-off-by: Jie Zhang <jie.zhang@analog.com>
      Signed-off-by: Robin Getz <rgetz@blackfin.uclinux.org>
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Greg Ungerer <gerg@snapgear.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea637639
    • mm hugetlb: add hugepage support to pagemap · 5dc37642
      Authored by Naoya Horiguchi
      This patch enables extraction of the pfn of a hugepage from
      /proc/pid/pagemap in an architecture-independent manner.
      
      Details
      -------
      My test program (leak_pagemap) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
       - read()/write() something on it,
       - call page-types with option -p,
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patches
      ------------------
      $ ./leak_pagemap
                   flags page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          1        0  __________________________________
      0x0000000000000804          1        0  __R________M______________________ referenced,mmap
      0x000000000000086c         81        0  __RU_lA____M______________________ referenced,uptodate,lru,active,mmap
      0x0000000000005808          5        0  ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked
      0x0000000000005868         12        0  ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c          1        0  __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total        101        0
      
      The output of page-types doesn't show any hugepages.
      
      With my patches
      ---------------
      $ ./leak_pagemap
                   flags page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          1        0  __________________________________
      0x0000000000030000      51100      199  ________________TG________________ compound_tail,huge
      0x0000000000028018        100        0  ___UD__________H_G________________ uptodate,dirty,compound_head,huge
      0x0000000000000804          1        0  __R________M______________________ referenced,mmap
      0x000000000000080c          1        0  __RU_______M______________________ referenced,uptodate,mmap
      0x000000000000086c         80        0  __RU_lA____M______________________ referenced,uptodate,lru,active,mmap
      0x0000000000005808          4        0  ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked
      0x0000000000005868         12        0  ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c          1        0  __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total      51300      200
      
      The output of page-types shows 51200 pages contributing to hugepages,
      containing 100 head pages and 51100 tail pages as expected.
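      
      For reference, a user-space sketch of how a pfn is read out of
      /proc/pid/pagemap (64-bit entries; bit 63 = present, low 55 bits = pfn;
      error handling elided):
      
              #include <stdio.h>
              #include <stdint.h>
              #include <unistd.h>
      
              /* Illustrative: print the pfn backing one virtual address of
               * the calling process. */
              static void print_pfn(unsigned long vaddr)
              {
                      uint64_t entry;
                      FILE *f = fopen("/proc/self/pagemap", "rb");
      
                      fseek(f, (vaddr / sysconf(_SC_PAGESIZE)) * sizeof(entry),
                            SEEK_SET);
                      fread(&entry, sizeof(entry), 1, f);
                      if (entry & (1ULL << 63))       /* page present */
                              printf("pfn: 0x%llx\n", (unsigned long long)
                                     (entry & ((1ULL << 55) - 1)));
                      fclose(f);
              }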
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5dc37642
    • mm: hugetlb: fix hugepage memory leak in walk_page_range() · d33b9f45
      Authored by Naoya Horiguchi
      Most callers of pmd_none_or_clear_bad() check whether the target page
      is a hugepage or not, but walk_page_range() does not.  So if we read
      /proc/pid/pagemap for a hugepage on an x86 machine, the hugepage memory
      is leaked as shown below.  This patch fixes it.
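      
      The shape of the fix, as an illustrative sketch (not the exact hunk):
      recognize huge entries before pmd_none_or_clear_bad() can treat them as
      "bad" and clear them.
      
              /* Illustrative: in the pmd loop of the page table walker. */
              if (pmd_huge(*pmd)) {
                      if (walk->pmd_entry)
                              walk->pmd_entry(pmd, addr, next, walk);
                      continue;       /* a huge pmd is not a "bad" pmd */
              }
              if (pmd_none_or_clear_bad(pmd))
                      continue;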
      
      Details
      =======
      My test program (leak_pagemap) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
       - read()/write() something on it,
       - call page-types with option -p (walk around the page tables),
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patches
      ------------------
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_pagemap
      [snip output]
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:      900
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      100 hugepages are accounted as used while there is no file on hugetlbfs.
      
      With my patches
      ---------------
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_pagemap
      [snip output]
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs
      $
      
      No memory leaks.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d33b9f45
    • mm: hugetlb: fix hugepage memory leak in mincore() · 4f16fc10
      Authored by Naoya Horiguchi
      Most callers of pmd_none_or_clear_bad() check whether the target page
      is a hugepage or not, but mincore() and walk_page_range() do not.  So
      if we use mincore() on a hugepage on an x86 machine, the hugepage
      memory is leaked as shown below.  This patch fixes it by extending the
      mincore() system call to support hugepages.
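      
      For context, a sketch of how such a test probes residency with the
      user-visible call being fixed (names are illustrative):
      
              #include <stdio.h>
              #include <sys/mman.h>
              #include <unistd.h>
      
              /* Illustrative: report whether the first ten pages of a
               * mapping are resident, as the leak_mincore test below does. */
              static void probe_residency(void *addr)
              {
                      unsigned char vec[10];
      
                      if (mincore(addr, 10 * sysconf(_SC_PAGESIZE), vec) == 0)
                              for (int i = 0; i < 10; i++)
                                      printf("vec[%d] %d\n", i, vec[i]);
              }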
      
      Details
      =======
      My test program (leak_mincore) works as follows:
       - creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
       - read()/write() something on it,
       - call mincore() for first ten pages and printf() the values of *vec
       - munmap() and unlink() the file on hugetlbfs
      
      Without my patch
      ----------------
      $ cat /proc/meminfo| grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_mincore
      vec[0] 0
      vec[1] 0
      vec[2] 0
      vec[3] 0
      vec[4] 0
      vec[5] 0
      vec[6] 0
      vec[7] 0
      vec[8] 0
      vec[9] 0
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:      999
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      The return values in *vec from mincore() are 0, even though the
      hugepage is in memory, and 1 hugepage is still accounted as used
      although there is no file left on hugetlbfs.
      
      With my patch
      -------------
      $ cat /proc/meminfo| grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ./leak_mincore
      vec[0] 1
      vec[1] 1
      vec[2] 1
      vec[3] 1
      vec[4] 1
      vec[5] 1
      vec[6] 1
      vec[7] 1
      vec[8] 1
      vec[9] 1
      $ cat /proc/meminfo |grep "HugePage"
      HugePages_Total:    1000
      HugePages_Free:     1000
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      $ ls /hugetlbfs/
      $
      
      The return values in *vec are set to 1, and there are no memory leaks.
      
      [akpm@linux-foundation.org: cleanup]
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f16fc10
    • hugetlb: abort a hugepage pool resize if a signal is pending · 536240f2
      Authored by Mel Gorman
      If a user asks for a hugepage pool resize but specifies a large number,
      the machine can begin thrashing.  In response, they might hit ctrl-c,
      but signals are ignored and the pool resize continues until it fails an
      allocation.  This can take a considerable amount of time, so this patch
      aborts a pool resize if a signal is pending.
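      
      The fix boils down to a pending-signal check in the allocation loop,
      roughly (a sketch; the helper names approximate the hugetlb code of
      this era):
      
              while (count > persistent_huge_pages(h)) {
                      if (signal_pending(current))
                              break;  /* user interrupted: stop resizing */
                      if (!alloc_fresh_huge_page(h, nodes_allowed))
                              break;  /* allocation failed, as before */
              }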
      
      Suggested by Dave Hansen.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      536240f2
    • mlock: replace stale comments in munlock_vma_page() · 6927c1dd
      Authored by Lee Schermerhorn
      Clean up the stale comments in munlock_vma_page().
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6927c1dd
    • mm: remove unevictable_migrate_page function · 418b27ef
      Authored by Lee Schermerhorn
      unevictable_migrate_page() in mm/internal.h is a relic of the since
      removed UNEVICTABLE_LRU Kconfig option.  This patch removes the function
      and open codes the test in migrate_page_copy().
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      418b27ef
    • hugetlb: acquire the i_mmap_lock before walking the prio_tree to unmap a page · 4eb2b1dc
      Authored by Mel Gorman
      When the owner of a mapping fails COW because a child process is
      holding a reference, the child VMAs are walked and the page is
      unmapped.  The
      i_mmap_lock is taken for the unmapping of the page but not the walking of
      the prio_tree.  In theory, that tree could be changing if the lock is not
      held.  This patch takes the i_mmap_lock properly for the duration of the
      prio_tree walk.
      
      [hugh.dickins@tiscali.co.uk: Spotted the problem in the first place]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4eb2b1dc
    • mm: uncached vma support with writenotify · c9d0bf24
      Authored by Magnus Damm
      Modify the generic mmap() code to keep the cache attribute in
      vma->vm_page_prot regardless of whether writenotify is enabled.
      Without this patch the cache configuration selected by f_op->mmap() is
      overwritten if writenotify is enabled, making it impossible to keep the
      vma uncached.
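      
      Conceptually: when write notification is wanted, recompute the
      protection from the vm_flags but re-apply the memory-type bits the
      driver chose.  A rough sketch, not the exact code (_PAGE_CACHE_MASK is
      the x86 name for those bits):
      
              pgprot_t driver_prot = vma->vm_page_prot;   /* may be uncached */
      
              if (vma_wants_writenotify(vma))
                      vma->vm_page_prot = __pgprot(
                              pgprot_val(vm_get_page_prot(vm_flags & ~VM_SHARED)) |
                              (pgprot_val(driver_prot) & _PAGE_CACHE_MASK));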
      
      Needed by drivers such as drivers/video/sh_mobile_lcdcfb.c, which uses
      deferred io together with uncached memory.
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jaya Kumar <jayakumar.lkml@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c9d0bf24
    • vmscan: simplify code · 62c0c2f1
      Authored by Huang Shijie
      Simplify the code for shrink_inactive_list().
      Signed-off-by: Huang Shijie <shijie8@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62c0c2f1
    • vmscan: do not evict inactive pages when skipping an active list scan · b39415b2
      Authored by Rik van Riel
      In AIM7 runs, recent kernels start swapping out anonymous pages well
      before they should.  This is due to shrink_list falling through to
      shrink_inactive_list if !inactive_anon_is_low(zone, sc), when all we
      really wanted to do is pre-age some anonymous pages to give them extra
      time to be referenced while on the inactive list.
      
      The obvious fix is to make sure that shrink_list does not fall through to
      scanning/reclaiming inactive pages when we called it to scan one of the
      active lists.
      
      This change should be safe because the loop in shrink_zone ensures that we
      will still shrink the anon and file inactive lists whenever we should.
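      
      The fix, schematically (a sketch of the resulting shrink_list(); helper
      names approximate the vmscan code of this era):
      
              int file = is_file_lru(lru);
      
              if (is_active_lru(lru)) {
                      if (inactive_list_is_low(zone, sc, file))
                              shrink_active_list(nr_to_scan, zone, sc,
                                                 priority, file);
                      return 0;       /* pre-age only; reclaim nothing here */
              }
              return shrink_inactive_list(nr_to_scan, zone, sc, priority, file);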
      
      [kosaki.motohiro@jp.fujitsu.com: inactive_file_is_low() should be inactive_anon_is_low()]
      Reported-by: Larry Woodman <lwoodman@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tomasz Chmielewski <mangoo@wpkg.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b39415b2
    • mm/bootmem.c: properly __init-annotate helper functions · 8aa043d7
      Authored by Jan Beulich
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8aa043d7
    • mm: simplify try_to_unmap_one() · caed0f48
      Authored by KOSAKI Motohiro
      SWAP_MLOCK means "we marked the page as PG_MLOCK, please move it to the
      unevictable lru".  So the following code is easy to misread.
      
              if (vma->vm_flags & VM_LOCKED) {
                      ret = SWAP_MLOCK;
                      goto out_unmap;
              }
      
      Moreover, if the VMA doesn't have VM_LOCKED, we don't need to consider
      calling mlock_vma_page() at all.
      
      Also, add some commentary to try_to_unmap_one().
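      
      After the cleanup the VM_LOCKED handling reads roughly like this sketch
      (illustrative, condensed):
      
              /* Only a VM_LOCKED vma can yield SWAP_MLOCK; take the page to
               * the unevictable lru while we are at it. */
              if (vma->vm_flags & VM_LOCKED) {
                      mlock_vma_page(page);
                      ret = SWAP_MLOCK;
                      goto out_unmap;
              }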
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      caed0f48
    • mm: fix section mismatch in memory_hotplug.c · 23ce932a
      Authored by Rakib Mullick
      __free_pages_bootmem() is a __meminit function, but it is called from
      put_page_bootmem(), which causes a section mismatch warning:
      
        LD      mm/built-in.o
      WARNING: mm/built-in.o(.text+0x26b22): Section mismatch in reference
      from the function put_page_bootmem() to the function
      .meminit.text:__free_pages_bootmem()
      The function put_page_bootmem() references
      the function __meminit __free_pages_bootmem().
      This is often because put_page_bootmem lacks a __meminit
      annotation or the annotation of __free_pages_bootmem is wrong.
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      23ce932a
    • hugetlb: prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails · b76c8cfb
      Authored by Larry Woodman
      hugetlb_fault() takes the mm->page_table_lock spinlock then calls
      hugetlb_cow().  If the alloc_huge_page() in hugetlb_cow() fails due to an
      insufficient huge page pool it calls unmap_ref_private() with the
      mm->page_table_lock held.  unmap_ref_private() then calls
      unmap_hugepage_range() which tries to acquire the mm->page_table_lock.
      
      [<ffffffff810928c3>] print_circular_bug_tail+0x80/0x9f
       [<ffffffff8109280b>] ? check_noncircular+0xb0/0xe8
       [<ffffffff810935e0>] __lock_acquire+0x956/0xc0e
       [<ffffffff81093986>] lock_acquire+0xee/0x12e
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff814c348d>] _spin_lock+0x40/0x89
       [<ffffffff8111a7a6>] ? unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111afee>] ? alloc_huge_page+0x218/0x318
       [<ffffffff8111a7a6>] unmap_hugepage_range+0x3e/0x84
       [<ffffffff8111b2d0>] hugetlb_cow+0x1e2/0x3f4
       [<ffffffff8111b935>] ? hugetlb_fault+0x453/0x4f6
       [<ffffffff8111b962>] hugetlb_fault+0x480/0x4f6
       [<ffffffff8111baee>] follow_hugetlb_page+0x116/0x2d9
       [<ffffffff814c31a7>] ? _spin_unlock_irq+0x3a/0x5c
       [<ffffffff81107b4d>] __get_user_pages+0x2a3/0x427
       [<ffffffff81107d0f>] get_user_pages+0x3e/0x54
       [<ffffffff81040b8b>] get_user_pages_fast+0x170/0x1b5
       [<ffffffff81160352>] dio_get_page+0x64/0x14a
       [<ffffffff8116112a>] __blockdev_direct_IO+0x4b7/0xb31
       [<ffffffff8115ef91>] blkdev_direct_IO+0x58/0x6e
       [<ffffffff8115e0a4>] ? blkdev_get_blocks+0x0/0xb8
       [<ffffffff810ed2c5>] generic_file_aio_read+0xdd/0x528
       [<ffffffff81219da3>] ? avc_has_perm+0x66/0x8c
       [<ffffffff81132842>] do_sync_read+0xf5/0x146
       [<ffffffff8107da00>] ? autoremove_wake_function+0x0/0x5a
       [<ffffffff81211857>] ? security_file_permission+0x24/0x3a
       [<ffffffff81132fd8>] vfs_read+0xb5/0x126
       [<ffffffff81133f6b>] ? fget_light+0x5e/0xf8
       [<ffffffff81133131>] sys_read+0x54/0x8c
       [<ffffffff81011e42>] system_call_fastpath+0x16/0x1b
      
      This can be fixed by dropping the mm->page_table_lock around the call
      to unmap_ref_private() if alloc_huge_page() fails; it's dropped right
      below in the normal path anyway.  However, earlier in that function,
      it's also possible to call into the page allocator with the same
      spinlock held.
      
      What this patch does is drop the spinlock before the page allocator is
      potentially entered.  The check for page allocation failure can be made
      without the page_table_lock held, as can the copy of the huge page.
      Even if the PTE changed while the spinlock was dropped, the consequence
      is only that a huge page is copied unnecessarily.  This resolves both
      the double taking of the lock and sleeping with the spinlock held.
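      
      Schematically (a sketch; the real patch also handles the
      allocation-failure path):
      
              spin_unlock(&mm->page_table_lock);
              new_page = alloc_huge_page(vma, address, outside_reserve);
              /* ... failure handling and the copy run unlocked ... */
              spin_lock(&mm->page_table_lock);
              ptep = huge_pte_offset(mm, address & huge_page_mask(h));
              if (likely(pte_same(huge_ptep_get(ptep), pte))) {
                      /* pte unchanged: safe to install the new page */
              }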
      
      [mel@csn.ul.ie: Cover also the case where process can sleep with spinlock]
      Signed-off-by: Larry Woodman <lwooman@redhat.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b76c8cfb
    • mm: memory_hotplug: make offline_pages() static · b4e655a4
      Authored by Andrew Morton
      It has no references outside memory_hotplug.c.
      
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4e655a4
    • ksm: remove unswappable max_kernel_pages · d0f209f6
      Authored by Hugh Dickins
      Now that ksm pages are swappable, and the known holes plugged, remove
      mention of unswappable kernel pages from KSM documentation and comments.
      
      Remove the totalram_pages/4 initialization of max_kernel_pages.  In fact,
      remove max_kernel_pages altogether - we can reinstate it if removal turns
      out to break someone's script; but if we later want to limit KSM's memory
      usage, limiting the stable nodes would not be an effective approach.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0f209f6
    • ksm: memory hotremove migration only · 62b61f61
      Authored by Hugh Dickins
      The previous patch enables page migration of ksm pages, but that soon gets
      into trouble: not surprising, since we're using the ksm page lock to lock
      operations on its stable_node, but page migration switches the page whose
      lock is to be used for that.  Another layer of locking would fix it, but
      do we need that yet?
      
      Do we actually need page migration of ksm pages?  Yes, memory hotremove
      needs to offline sections of memory: and since we stopped allocating ksm
      pages with GFP_HIGHUSER, they will tend to be GFP_HIGHUSER_MOVABLE
      candidates for migration.
      
      But KSM is currently unconscious of NUMA issues, happily merging pages
      from different NUMA nodes: at present the rule must be, not to use
      MADV_MERGEABLE where you care about NUMA.  So no, NUMA page migration of
      ksm pages does not make sense yet.
      
      So, to complete support for ksm swapping we need to make hotremove
      safe.  ksm_memory_callback() takes ksm_thread_mutex on MEM_GOING_OFFLINE
      and releases it on MEM_OFFLINE or MEM_CANCEL_OFFLINE.  But if mapped pages
      are freed before migration reaches them, stable_nodes may be left still
      pointing to struct pages which have been removed from the system: the
      stable_node needs to identify a page by pfn rather than page pointer, then
      it can safely prune them when MEM_OFFLINE.
      
      And make NUMA migration skip PageKsm pages where it skips PageReserved.
      But it's only when we reach unmap_and_move() that the page lock is taken
      and we can be sure that raised pagecount has prevented a PageAnon from
      being upgraded: so add offlining arg to migrate_pages(), to migrate ksm
      page when offlining (has sufficient locking) but reject it otherwise.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62b61f61
    • ksm: rmap_walk to remove_migation_ptes · e9995ef9
      Authored by Hugh Dickins
      A side-effect of making ksm pages swappable is that they have to be placed
      on the LRUs: which then exposes them to isolate_lru_page() and hence to
      page migration.
      
      Add rmap_walk() for remove_migration_ptes() to use: rmap_walk_anon() and
      rmap_walk_file() in rmap.c, but rmap_walk_ksm() in ksm.c.  Perhaps some
      consolidation with existing code is possible, but don't attempt that yet
      (try_to_unmap needs to handle nonlinears, but migration pte removal does
      not).
      
      rmap_walk() is sadly less general than it appears: rmap_walk_anon(), like
      remove_anon_migration_ptes() which it replaces, avoids calling
      page_lock_anon_vma(), because that includes a page_mapped() test which
      fails when all migration ptes are in place.  That was valid when NUMA page
      migration was introduced (holding mmap_sem provided the missing guarantee
      that anon_vma's slab had not already been destroyed), but I believe not
      valid in the memory hotremove case added since.
      
      For now do the same as before, and consider the best way to fix that
      unlikely race later on.  When fixed, we can probably use rmap_walk() on
      hwpoisoned ksm pages too: for now, they remain among hwpoison's various
      exceptions (its PageKsm test comes before the page is locked, but its
      page_lock_anon_vma fails safely if an anon gets upgraded).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9995ef9
    • ksm: mem cgroup charge swapin copy · 407f9c8b
      Authored by Hugh Dickins
      But ksm swapping does require one small change in mem cgroup handling.
      When do_swap_page()'s call to ksm_might_need_to_copy() does indeed
      substitute a duplicate page to accommodate a different anon_vma (or a
      different index), that page escaped mem cgroup accounting, because of
      the !PageSwapCache check in mem_cgroup_try_charge_swapin().
      
      That was returning success without charging, on the assumption that
      pte_same() would fail after, which is not the case here.  Originally I
      proposed that success, so that an unshrinkable mem cgroup at its limit
      would not fail unnecessarily; but that's a minor point, and there are
      plenty of other places where we may fail an overallocation which might
      later prove unnecessary.  So just go ahead and do what all the other
      exceptions do: proceed to charge current mm.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      407f9c8b
    • ksm: share anon page without allocating · 80e14822
      Authored by Hugh Dickins
      When ksm pages were unswappable, it made no sense to include them in mem
      cgroup accounting; but now that they are swappable (although I see no
      strict logical connection) the principle of least surprise implies that
      they should be accounted (with the usual dissatisfaction, that a shared
      page is accounted to only one of the cgroups using it).
      
      This patch was intended to add mem cgroup accounting where necessary; but
      turned inside out, it now avoids allocating a ksm page, instead upgrading
      an anon page to ksm - which brings its existing mem cgroup accounting with
      it.  Thus mem cgroups don't appear in the patch at all.
      
      This upgrade from PageAnon to PageKsm takes place under page lock (via a
      somewhat hacky NULL kpage interface), and audit showed only one place
      which needed to cope with the race - page_referenced() is sometimes used
      without page lock, so page_lock_anon_vma() needs an ACCESS_ONCE() to be
      sure of getting anon_vma and flags together (no problem if the page goes
      ksm an instant after, the integrity of that anon_vma list is unaffected).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80e14822