1. 25 May 2010 (4 commits)
    • mm: migration: do not try to migrate unmapped anonymous pages · 67b9509b
      Mel Gorman authored
      rmap_walk_anon() was triggering errors in memory compaction that looked
      like use-after-free errors.  The problem is that, between the page being
      isolated from the LRU and rcu_read_lock() being taken, the mapcount of
      the page can drop to 0 and the anon_vma can be freed.  This can happen
      during memory compaction if the pages being migrated belong to a process
      that exits before migration completes.  Hence, the use-after-free race
      looks like:
      
       1. Page isolated for migration
       2. Process exits
       3. page_mapcount(page) drops to zero, so the anon_vma is no longer reliable
       4. unmap_and_move() takes the rcu_lock but the anon_vma is already garbage
       5. try_to_unmap() is called, looks up the anon_vma and "locks" it, but the
          lock is garbage.
      
      This patch checks the mapcount after the rcu lock is taken.  If the
      mapcount is zero, the anon_vma is assumed to be freed and no further
      action is taken.
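      A minimal sketch of the resulting pattern (hypothetical helper, not the
      exact upstream diff):

        #include <linux/mm.h>
        #include <linux/rmap.h>
        #include <linux/rcupdate.h>

        /* Walk the anon rmap of an isolated page only if it is still mapped. */
        static int walk_anon_page_sketch(struct page *page)
        {
        	struct anon_vma *anon_vma;

        	rcu_read_lock();
        	if (!page_mapped(page)) {
        		/* mapcount fell to 0: the anon_vma may already be freed */
        		rcu_read_unlock();
        		return 0;
        	}
        	anon_vma = page_anon_vma(page);
        	/* ... walk anon_vma's VMA list here, under the RCU read lock ... */
        	rcu_read_unlock();
        	return anon_vma != NULL;
        }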
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      67b9509b
    • mm: migration: share the anon_vma ref counts between KSM and page migration · 7f60c214
      Mel Gorman authored
      For clarity of review, KSM and page migration have separate refcounts on
      the anon_vma.  While clear, this is a waste of memory.  This patch gets
      KSM and page migration to share their toys in a spirit of harmony.
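      A rough sketch of the shape of the change (field and helper names are
      illustrative, not necessarily the upstream ones): both subsystems bump
      one shared counter on the anon_vma instead of keeping separate ones.

        #include <linux/atomic.h>
        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* Abbreviated, illustrative version of the structure; upstream guards
         * the counter with something like
         * #if defined(CONFIG_KSM) || defined(CONFIG_MIGRATION). */
        struct anon_vma_sketch {
        	spinlock_t lock;
        	atomic_t external_refcount;	/* shared by KSM and page migration */
        	struct list_head head;
        };

        static inline void get_anon_vma_ref(struct anon_vma_sketch *anon_vma)
        {
        	atomic_inc(&anon_vma->external_refcount);
        }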
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f60c214
    • mm: migration: take a reference to the anon_vma before migrating · 3f6c8272
      Mel Gorman authored
      This patchset is a memory compaction mechanism that reduces external
      memory fragmentation by moving GFP_MOVABLE pages to a smaller number of
      pageblocks.  The term "compaction" was chosen because there are a number
      of mechanisms, not mutually exclusive, that can be used to defragment
      memory.  For example, lumpy reclaim is a form of defragmentation, as was
      slub "defragmentation" (really a form of targeted reclaim).  Hence, this
      is called "compaction" to distinguish it from other forms of
      defragmentation.
      
      In this implementation, a full compaction run involves two scanners
      operating within a zone - a migration and a free scanner.  The migration
      scanner starts at the beginning of a zone and finds all movable pages
      within one pageblock_nr_pages-sized area and isolates them on a
      migratepages list.  The free scanner begins at the end of the zone and
      searches on a per-area basis for enough free pages to migrate all the
      pages on the migratepages list.  As each area is respectively migrated or
      exhausted of free pages, the scanners are advanced one area.  A compaction
      run completes within a zone when the two scanners meet.
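      The loop structure, reduced to its bare bones (illustrative only; names
      and the advancement policy are simplified):

        #define AREA_PAGES 512	/* stand-in for pageblock_nr_pages */

        static void compact_zone_sketch(unsigned long zone_start_pfn,
        				unsigned long zone_end_pfn)
        {
        	unsigned long migrate_pfn = zone_start_pfn;	/* scans forward */
        	unsigned long free_pfn = zone_end_pfn;		/* scans backward */

        	while (migrate_pfn < free_pfn) {
        		/* 1. isolate movable pages in [migrate_pfn, migrate_pfn + AREA_PAGES) */
        		/* 2. isolate enough free pages scanning back from free_pfn */
        		/* 3. migrate the isolated pages into the isolated free pages */
        		migrate_pfn += AREA_PAGES;	/* advance once the area is migrated */
        		free_pfn -= AREA_PAGES;		/* advance once its free pages are used up */
        	}
        }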
      
      This method is a bit primitive, but it is easy to understand; greater
      sophistication would require maintaining counters on a per-pageblock
      basis, which would burden the allocator fast paths just to improve
      compaction - a poor trade-off.
      
      It also does not try to relocate virtually contiguous pages to be physically
      contiguous.  However, assuming transparent hugepages were in use, a
      hypothetical khugepaged might reuse compaction code to isolate free pages,
      split them and relocate userspace pages for promotion.
      
      Memory compaction can be triggered in one of three ways.  It may be
      triggered explicitly by writing any value to /proc/sys/vm/compact_memory,
      which compacts all of memory.  It can be triggered on a per-node basis by
      writing any value to /sys/devices/system/node/nodeN/compact where N is the
      node ID to be compacted.  When a process fails to allocate a high-order
      page, it may compact memory in an attempt to satisfy the allocation
      instead of entering direct reclaim.  Explicit compaction does not finish
      until the two scanners meet, while direct compaction ends as soon as a
      suitable page that meets the watermarks becomes available.
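      For example, the global trigger can be driven from a small userspace
      program (writing the value from a shell works just as well):

        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
        	/* Writing any value requests a full compaction run. */
        	int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

        	if (fd < 0)
        		return 1;
        	if (write(fd, "1", 1) != 1) {
        		close(fd);
        		return 1;
        	}
        	close(fd);
        	return 0;
        }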
      
      The series is in 14 patches.  The first three are not "core" to the series
      but are important prerequisites.
      
      Patch 1 reference counts anon_vma for rmap_walk_anon(). Without this
      	patch, it's possible to use anon_vma after free if the caller is
      	not holding a VMA or mmap_sem for the pages in question. While
      	there should be no existing user that causes this problem,
      	it's a requirement for memory compaction to be stable. The patch
      	is at the start of the series for bisection reasons.
      Patch 2 merges the KSM and migrate counts. It could be merged with patch 1
      	but would be slightly harder to review.
      Patch 3 skips over unmapped anon pages during migration as there are no
      	guarantees about the anon_vma existing. There is a window between
      	when a page was isolated and migration started during which anon_vma
      	could disappear.
      Patch 4 notes that PageSwapCache pages can still be migrated even if they
      	are unmapped.
      Patch 5 allows CONFIG_MIGRATION to be set without CONFIG_NUMA
      Patch 6 exports an "unusable free space index" via debugfs. It's
      	a measure of external fragmentation that takes the size of the
      	allocation request into account. It can also be calculated from
      	userspace so can be dropped if requested
      Patch 7 exports a "fragmentation index" which only has meaning when an
      	allocation request fails. It determines if an allocation failure
      	would be due to a lack of memory or external fragmentation.
      Patch 8 moves the definition for LRU isolation modes for use by compaction
      Patch 9 is the compaction mechanism although it's unreachable at this point
      Patch 10 adds a means of compacting all of memory with a proc trigger
      Patch 11 adds a means of compacting a specific node with a sysfs trigger
      Patch 12 adds "direct compaction" before "direct reclaim" if it is
      	determined there is a good chance of success.
      Patch 13 adds a sysctl that allows tuning of the threshold at which the
      	kernel will compact or direct reclaim
      Patch 14 temporarily disables compaction if an allocation failure occurs
      	after compaction.
      
      Testing of compaction was in three stages.  For the test, debugging,
      preempt, the sleep watchdog and lockdep were all enabled but nothing nasty
      popped out.  min_free_kbytes was tuned as recommended by hugeadm to help
      fragmentation avoidance and high-order allocations.  It was tested on X86,
      X86-64 and PPC64.
      
      The first test represents one of the easiest cases that lumpy reclaim or
      memory compaction can face.
      
      1. Machine freshly booted and configured for hugepage usage with
      	a) hugeadm --create-global-mounts
      	b) hugeadm --pool-pages-max DEFAULT:8G
      	c) hugeadm --set-recommended-min_free_kbytes
      	d) hugeadm --set-recommended-shmmax
      
      	The min_free_kbytes here is important. Anti-fragmentation works best
      	when pageblocks don't mix. hugeadm knows how to calculate a value that
      	will significantly reduce the worst of external-fragmentation-related
      	events as reported by the mm_page_alloc_extfrag tracepoint.
      
      2. Load up memory
      	a) Start updatedb
      	b) Create X files of pagesize*128 in size in parallel. Wait
      	   until the files are created. By parallel, I mean that 4096
      	   instances of dd were launched, one after the other using &. The
      	   crude objective is to mix filesystem metadata allocations with
      	   the buffer cache.
      	c) Delete every second file so that pageblocks are likely to
      	   have holes
      	d) kill updatedb if it's still running
      
      	At this point, the system is quiet, memory is full but it's full with
      	clean filesystem metadata and clean buffer cache that is unmapped.
      	This is readily migrated or discarded so you'd expect lumpy reclaim
      	to have no significant advantage over compaction but this is at
      	the POC stage.
      
      3. In increments, attempt to allocate 5% of memory as hugepages.
      	   Measure how long it took, how successful it was, how many
      	   direct reclaims took place and how many compactions. Note
      	   the compaction figures might not fully add up as compactions
      	   can take place for orders other than the hugepage size
      
      X86                             vanilla         compaction
      Final page count:                   913                916 (attempted 1002)
      Total pages reclaimed:            68296               9791
      
      X86-64                          vanilla         compaction
      Final page count:                   901                902 (attempted 1002)
      Total pages reclaimed:           112599              53234
      
      PPC64                           vanilla         compaction
      Final page count:                    93                 94 (attempted 110)
      Total pages reclaimed:           103216              61838
      
      There was not a dramatic improvement in success rates, but one wouldn't be
      expected in this case either.  What is important is that fewer pages were
      reclaimed in all cases, reducing the amount of IO required to satisfy a
      huge page allocation.
      
      The second tests were all performance related - kernbench, netperf, iozone
      and sysbench.  None showed anything too remarkable.
      
      The last test was a high-order allocation stress test.  Many kernel
      compiles are started to fill memory with a pressured mix of unmovable and
      movable allocations.  During this, an attempt is made to allocate 90% of
      memory as huge pages - one at a time with small delays between attempts to
      avoid flooding the IO queue.
      
                                                   vanilla   compaction
      Percentage of request allocated X86               98           99
      Percentage of request allocated X86-64            95           98
      Percentage of request allocated PPC64             55           70
      
      This patch:
      
      rmap_walk_anon() does not use page_lock_anon_vma() for looking up and
      locking an anon_vma and it does not appear to have sufficient locking to
      ensure the anon_vma does not disappear from under it.
      
      This patch copies an approach used by KSM to take a reference on the
      anon_vma while pages are being migrated.  This should prevent rmap_walk()
      running into nasty surprises later because anon_vma has been freed.
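      A minimal sketch of the idea (hypothetical helper; the reference-count
      field name is illustrative and follows the KSM convention):

        #include <linux/atomic.h>
        #include <linux/mm.h>
        #include <linux/rmap.h>
        #include <linux/rcupdate.h>

        /* Pin the anon_vma so it outlives the migration of this page. */
        static struct anon_vma *migrate_pin_anon_vma(struct page *page)
        {
        	struct anon_vma *anon_vma = NULL;

        	rcu_read_lock();
        	if (PageAnon(page) && page_mapped(page)) {
        		anon_vma = page_anon_vma(page);
        		/* illustrative field name for the external reference count */
        		if (anon_vma)
        			atomic_inc(&anon_vma->external_refcount);
        	}
        	rcu_read_unlock();
        	return anon_vma;	/* reference is dropped once migration finishes */
        }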
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3f6c8272
    • mm: remove return value of putback_lru_pages() · e13861d8
      Minchan Kim authored
      putback_lru_page() can never fail, so counting "the number of pages put
      back" serves no purpose.
      
      In addition, callers of this function do not use the return value.
      
      Let's remove the unnecessary code.
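      The simplified function ends up with roughly this shape (sketch; close
      to, but not necessarily identical to, the final code):

        #include <linux/mm.h>
        #include <linux/mm_inline.h>
        #include <linux/swap.h>
        #include <linux/vmstat.h>

        /* Put previously isolated pages back on the LRU; nothing to report. */
        void putback_lru_pages(struct list_head *l)
        {
        	struct page *page;
        	struct page *page2;

        	list_for_each_entry_safe(page, page2, l, lru) {
        		list_del(&page->lru);
        		dec_zone_page_state(page, NR_ISOLATED_ANON +
        				    page_is_file_cache(page));
        		putback_lru_page(page);
        	}
        }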
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e13861d8
  2. 30 March 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script was
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, and slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the existing
        include blocks and tries to place the new include so that its order
        conforms to its surroundings.  It is put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, reverse-Christmas-tree, or at
        the end if there doesn't seem to be any matching order (see the
        example after this list).
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
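      For example, in a file whose core-kernel include block is already
      alphabetical, the new header is slotted in like this (illustrative
      choice of file and headers):

        #include <linux/migrate.h>
        #include <linux/mm.h>
        #include <linux/slab.h>		/* inserted; keeps the alphabetical order */
        #include <linux/swap.h>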
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  3. 07 March 2010 (1 commit)
  4. 22 February 2010 (1 commit)
    • mm: Make copy_from_user() in migrate.c statically predictable · 87b8d1ad
      H. Peter Anvin authored
      x86-32 has had a static test for copy_from_user() overflow for a while.
      This test currently fails in mm/migrate.c resulting in an
      allyesconfig/allmodconfig build failure on x86-32:
      
      In function ‘copy_from_user’,
          inlined from ‘do_pages_stat’ at
          /home/hpa/kernel/git/mm/migrate.c:1012:
      /home/hpa/kernel/git/arch/x86/include/asm/uaccess_32.h:212: error:
          call to ‘copy_from_user_overflow’ declared
      
      Make the logic more explicit and therefore easier for gcc to
      understand.
      
      v2: rewrite the loop entirely using a more normal structure for a
          chunked-data loop (Linus Torvalds)
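      A sketch of the resulting chunked loop (close in spirit to the final
      code; the helper that fills in the per-page status is elided):

        #include <linux/errno.h>
        #include <linux/kernel.h>
        #include <linux/mm.h>
        #include <linux/string.h>
        #include <linux/uaccess.h>

        #define STAT_CHUNK_NR 16	/* fixed-size bounce buffers */

        static int do_pages_stat_sketch(unsigned long nr_pages,
        				const void __user * __user *pages,
        				int __user *status)
        {
        	const void __user *chunk_pages[STAT_CHUNK_NR];
        	int chunk_status[STAT_CHUNK_NR];

        	while (nr_pages) {
        		unsigned long chunk_nr = min_t(unsigned long, nr_pages,
        					       STAT_CHUNK_NR);

        		/* gcc can now see that the copy never exceeds the buffer. */
        		if (copy_from_user(chunk_pages, pages,
        				   chunk_nr * sizeof(*chunk_pages)))
        			break;

        		memset(chunk_status, 0, sizeof(chunk_status));
        		/* ... resolve each page's node into chunk_status here ... */

        		if (copy_to_user(status, chunk_status,
        				 chunk_nr * sizeof(*status)))
        			break;

        		pages += chunk_nr;
        		status += chunk_nr;
        		nr_pages -= chunk_nr;
        	}
        	return nr_pages ? -EFAULT : 0;
        }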
      Reported-by: Len Brown <lenb@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Reviewed-and-Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Arjan van de Ven <arjan@linux.kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      87b8d1ad
  5. 21 February 2010 (1 commit)
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Russell King authored
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
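      The change boils down to the function's signature; a sketch of how an
      architecture implementation now looks (arch-specific cache and TLB work
      elided):

        #include <linux/mm.h>

        void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
        		      pte_t *ptep)
        {
        	/* The old PTE value is still reachable as *ptep, and the entry
        	 * can now also be rewritten in place when the arch needs to. */
        }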
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
  6. 07 February 2010 (1 commit)
  7. 16 December 2009 (5 commits)
    • mm: remove unevictable_migrate_page function · 418b27ef
      Lee Schermerhorn authored
      unevictable_migrate_page() in mm/internal.h is a relic of the since-removed
      UNEVICTABLE_LRU Kconfig option.  This patch removes the function and
      open-codes the test in migrate_page_copy().
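      The open-coded test is small enough to show in full (sketch of the
      replacement, using the standard page-flag helpers):

        #include <linux/mm.h>
        #include <linux/page-flags.h>

        /* Carry the Unevictable state from the old page to its replacement. */
        static void copy_unevictable_flag(struct page *newpage, struct page *page)
        {
        	if (TestClearPageUnevictable(page))
        		SetPageUnevictable(newpage);
        }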
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      418b27ef
    • ksm: memory hotremove migration only · 62b61f61
      Hugh Dickins authored
      The previous patch enables page migration of ksm pages, but that soon gets
      into trouble: not surprising, since we're using the ksm page lock to lock
      operations on its stable_node, but page migration switches the page whose
      lock is to be used for that.  Another layer of locking would fix it, but
      do we need that yet?
      
      Do we actually need page migration of ksm pages?  Yes, memory hotremove
      needs to offline sections of memory: and since we stopped allocating ksm
      pages with GFP_HIGHUSER, they will tend to be GFP_HIGHUSER_MOVABLE
      candidates for migration.
      
      But KSM is currently unconscious of NUMA issues, happily merging pages
      from different NUMA nodes: at present the rule must be, not to use
      MADV_MERGEABLE where you care about NUMA.  So no, NUMA page migration of
      ksm pages does not make sense yet.
      
      So, to complete support for ksm swapping we need to make hotremove safe.
      ksm_memory_callback() takes ksm_thread_mutex when MEM_GOING_OFFLINE and
      releases it when MEM_OFFLINE or MEM_CANCEL_OFFLINE.  But if mapped pages
      are freed before migration reaches them, stable_nodes may be left still
      pointing to struct pages which have been removed from the system: the
      stable_node needs to identify a page by pfn rather than page pointer, then
      it can safely prune them when MEM_OFFLINE.
      
      And make NUMA migration skip PageKsm pages where it skips PageReserved.
      But it's only when we reach unmap_and_move() that the page lock is taken
      and we can be sure that the raised pagecount has prevented a PageAnon page
      from being upgraded: so add an offlining arg to migrate_pages(), to migrate
      a ksm page when offlining (which has sufficient locking) but reject it
      otherwise.
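      The widened interface can be summarised by a prototype sketch (the
      typedef and names here are illustrative stand-ins, not the exact header):

        #include <linux/list.h>

        struct page;
        typedef struct page *new_page_fn(struct page *page, unsigned long private);

        /* The extra 'offlining' argument is non-zero only on the memory
         * hotremove path, which holds enough locks to migrate a PageKsm page
         * safely; other callers pass 0 and ksm pages are rejected. */
        int migrate_pages_sketch(struct list_head *from, new_page_fn *get_new_page,
        			 unsigned long private, int offlining);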
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62b61f61
    • ksm: rmap_walk to remove_migation_ptes · e9995ef9
      Hugh Dickins authored
      A side-effect of making ksm pages swappable is that they have to be placed
      on the LRUs: which then exposes them to isolate_lru_page() and hence to
      page migration.
      
      Add rmap_walk() for remove_migration_ptes() to use: rmap_walk_anon() and
      rmap_walk_file() in rmap.c, but rmap_walk_ksm() in ksm.c.  Perhaps some
      consolidation with existing code is possible, but don't attempt that yet
      (try_to_unmap needs to handle nonlinears, but migration pte removal does
      not).
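      The dispatch itself is tiny; roughly (sketch, with stand-in names for
      the three walkers and the callback type):

        #include <linux/kernel.h>
        #include <linux/ksm.h>
        #include <linux/mm.h>

        typedef int rmap_one_t(struct page *, struct vm_area_struct *,
        		       unsigned long, void *);

        /* Stand-ins for rmap_walk_anon()/rmap_walk_file() in rmap.c and
         * rmap_walk_ksm() in ksm.c. */
        int walk_anon(struct page *page, rmap_one_t *rmap_one, void *arg);
        int walk_ksm(struct page *page, rmap_one_t *rmap_one, void *arg);
        int walk_file(struct page *page, rmap_one_t *rmap_one, void *arg);

        int rmap_walk_sketch(struct page *page, rmap_one_t *rmap_one, void *arg)
        {
        	if (unlikely(PageKsm(page)))
        		return walk_ksm(page, rmap_one, arg);
        	else if (PageAnon(page))
        		return walk_anon(page, rmap_one, arg);
        	return walk_file(page, rmap_one, arg);
        }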
      
      rmap_walk() is sadly less general than it appears: rmap_walk_anon(), like
      remove_anon_migration_ptes() which it replaces, avoids calling
      page_lock_anon_vma(), because that includes a page_mapped() test which
      fails when all migration ptes are in place.  That was valid when NUMA page
      migration was introduced (holding mmap_sem provided the missing guarantee
      that anon_vma's slab had not already been destroyed), but I believe not
      valid in the memory hotremove case added since.
      
      For now do the same as before, and consider the best way to fix that
      unlikely race later on.  When fixed, we can probably use rmap_walk() on
      hwpoisoned ksm pages too: for now, they remain among hwpoison's various
      exceptions (its PageKsm test comes before the page is locked, but its
      page_lock_anon_vma fails safely if an anon gets upgraded).
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9995ef9
    • mm: define PAGE_MAPPING_FLAGS · 3ca7b3c5
      Hugh Dickins authored
      At present we define PageAnon(page) by the low PAGE_MAPPING_ANON bit set
      in page->mapping, with the higher bits a pointer to the anon_vma; and have
      defined PageKsm(page) as that with NULL anon_vma.
      
      But KSM swapping will need to store a pointer there: so in preparation for
      that, now define PAGE_MAPPING_FLAGS as the low two bits, including
      PAGE_MAPPING_KSM (always set along with PAGE_MAPPING_ANON, until some
      other use for the bit emerges).
      
      Declare page_rmapping(page) to return the pointer part of page->mapping,
      and page_anon_vma(page) to return the anon_vma pointer when that's what it
      is.  Use these in a few appropriate places: notably, unuse_vma() has been
      testing page->mapping, but it is better to test page_anon_vma() (cases
      may be added in which flag bits are set without any pointer).
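      The bit layout and the two helpers look roughly like this (sketch that
      follows the description above; later kernels differ in detail):

        #include <linux/mm_types.h>

        struct anon_vma;

        /* Low bits of page->mapping encode what the rest of the word means. */
        #define PAGE_MAPPING_ANON	1
        #define PAGE_MAPPING_KSM	2
        #define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)

        static inline void *page_rmapping(struct page *page)
        {
        	return (void *)((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
        }

        static inline struct anon_vma *page_anon_vma(struct page *page)
        {
        	if (((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) !=
        	    PAGE_MAPPING_ANON)
        		return NULL;	/* file-backed, or a KSM page */
        	return page_rmapping(page);
        }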
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Izik Eidus <ieidus@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3ca7b3c5
    • mm: move inc_zone_page_state(NR_ISOLATED) to just isolated place · 6d9c285a
      KOSAKI Motohiro authored
      Christoph pointed out that inc_zone_page_state(NR_ISOLATED) should be
      placed right after isolate_page().
      
      This patch does it.
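      Sketch of the resulting pattern (illustrative wrapper around the real
      calls):

        #include <linux/errno.h>
        #include <linux/mm.h>
        #include <linux/mm_inline.h>
        #include <linux/swap.h>
        #include <linux/vmstat.h>

        /* Isolate one page and account for it in the same place. */
        static int isolate_and_count(struct page *page, struct list_head *pagelist)
        {
        	if (isolate_lru_page(page))
        		return -EBUSY;

        	list_add_tail(&page->lru, pagelist);
        	inc_zone_page_state(page, NR_ISOLATED_ANON +
        			    page_is_file_cache(page));
        	return 0;
        }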
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d9c285a
  8. 12 December 2009 (1 commit)
    • mm: Adjust do_pages_stat() so gcc can see copy_from_user() is safe · b9255850
      H. Peter Anvin authored
      Slightly adjust the logic for determining the size of the
      copy_from_user() in do_pages_stat(); with this change, gcc can see
      that the copying is safe.
      
      Without this, we get a build error for i386 allyesconfig:
      
      /home/hpa/kernel/linux-2.6-tip.urgent/arch/x86/include/asm/uaccess_32.h:213:
      error: call to ‘copy_from_user_overflow’ declared with attribute
      error: copy_from_user() buffer size is not provably correct
      
      Unlike an earlier patch from Arjan, this doesn't introduce new
      variables; it merely reshuffles the comparison so that gcc can see that
      an overflow cannot happen.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Brice Goglin <Brice.Goglin@inria.fr>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      LKML-Reference: <20090926205406.30d55b08@infradead.org>
      b9255850
  9. 12 November 2009 (1 commit)
  10. 22 September 2009 (5 commits)
  11. 16 September 2009 (1 commit)
    • HWPOISON: Use bitmask/action code for try_to_unmap behaviour · 14fa31b8
      Andi Kleen authored
      try_to_unmap currently has multiple modes (migration, munlock, normal
      unmap) which are selected by magic flag variables.  The logic is not very
      straightforward, because each of these flags changes multiple behaviours
      (e.g. migration turns off aging, not only sets up migration ptes, etc.).
      Also, the different flags interact in magic ways.
      
      A later patch in this series adds another mode to try_to_unmap, so
      this becomes quickly unmanageable.
      
      Replace the different flags with an action code (migration, munlock,
      munmap) and some additional flags as modifiers (ignore mlock, ignore
      aging).  This makes the logic more straightforward and allows easier
      extension to new behaviours.  Change all the callers to declare what
      they want to do.
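      The resulting interface looks roughly like this (sketch; treat the exact
      names and values as illustrative):

        enum ttu_flags_sketch {
        	TTU_UNMAP	= 0,		/* unmap mode */
        	TTU_MIGRATION	= 1,		/* migration mode */
        	TTU_MUNLOCK	= 2,		/* munlock mode */
        	TTU_ACTION_MASK	= 0xff,

        	TTU_IGNORE_MLOCK  = (1 << 8),	/* modifier: ignore mlock */
        	TTU_IGNORE_ACCESS = (1 << 9),	/* modifier: don't age (skip young bits) */
        };

        #define TTU_ACTION(x)	((x) & TTU_ACTION_MASK)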
      
      This patch is supposed to be a nop in behaviour.  If anyone can prove
      it is not, that would be a bug.
      
      Cc: Lee.Schermerhorn@hp.com
      Cc: npiggin@suse.de
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      14fa31b8
  12. 17 June 2009 (2 commits)
  13. 03 April 2009 (1 commit)
  14. 12 February 2009 (1 commit)
  15. 14 January 2009 (1 commit)
  16. 09 January 2009 (2 commits)
    • memcg: simple migration handling · 01b1ae63
      KAMEZAWA Hiroyuki authored
      Currently, management of the "charge" under page migration is handled in
      the following manner (assume page contents are migrated from oldpage to
      newpage):
      
       before
        - "newpage" is charged before migration.
       at success
        - "oldpage" is uncharged somewhere (unmap, radix-tree-replace).
       at failure
        - "newpage" is uncharged.
        - "oldpage" is charged if necessary (*1)
      
      But (*1) is not reliable, because of GFP_ATOMIC.
      
      This patch changes the behaviour as follows, using charge/commit/cancel
      ops (a code sketch follows the list):
      
       before
        - charge PAGE_SIZE (no target page)
       success
        - commit the charge against "newpage".
       failure
        - commit the charge against "oldpage".
          (the PCG_USED bit works effectively to avoid double-counting)
        - if "oldpage" is obsolete, cancel the charge of PAGE_SIZE.
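      The before/success/failure flow above maps onto code roughly as follows
      (hypothetical helper names standing in for the real memcg entry points):

        #include <linux/errno.h>
        #include <linux/mm.h>

        struct mem_cgroup;

        /* Hypothetical stand-ins for the real charge/commit/cancel hooks. */
        int  memcg_charge_no_target(struct mem_cgroup **memcgp);
        void memcg_commit_charge(struct mem_cgroup *memcg, struct page *page);
        void memcg_cancel_charge(struct mem_cgroup *memcg);

        static int migrate_with_accounting(struct page *oldpage, struct page *newpage,
        				   int (*do_migrate)(struct page *, struct page *))
        {
        	struct mem_cgroup *memcg;
        	int rc;

        	if (memcg_charge_no_target(&memcg))	/* before: charge, no target page */
        		return -ENOMEM;

        	rc = do_migrate(oldpage, newpage);
        	if (rc == 0)
        		memcg_commit_charge(memcg, newpage);	/* success */
        	else
        		memcg_commit_charge(memcg, oldpage);	/* failure; if oldpage is
        							 * obsolete, the real code
        							 * cancels the charge instead */
        	return rc;
        }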
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      01b1ae63
    • memcg: introduce charge-commit-cancel style of functions · 7a81b88c
      KAMEZAWA Hiroyuki authored
      There is a small race in do_swap_page().  When the swapped-in page is
      charged, the mapcount can be greater than 0.  But, at the same time, some
      process (sharing it) may call unmap, taking the mapcount from 1 to 0, and
      the page is uncharged.
      
             CPUA                          CPUB
              mapcount == 1.
         (1) charge if mapcount==0         zap_pte_range()
                                       (2) mapcount 1 => 0.
                                       (3) uncharge(). (success)
         (4) set page's rmap()
             mapcount 0 => 1
      
      Then, this swap page's account is leaked.
      
      To fix this, I added a new interface.
        - charge
          account to the res_counter by PAGE_SIZE and try to free pages if necessary.
        - commit
          register the page_cgroup and add it to the LRU if necessary.
        - cancel
          uncharge PAGE_SIZE because of do_swap_page() failure.
      
           CPUA
        (1) charge (always)
        (2) set page's rmap (mapcount > 0)
        (3) after set_pte(), commit the charge (checking whether it was actually needed).
      
      This protocol uses the PCG_USED bit on page_cgroup to avoid over-accounting.
      The usual mem_cgroup_charge_common() does charge -> commit in one go.
      
      And this patch also adds the following functions to clarify all charges.
      
        - mem_cgroup_newpage_charge() ....replacement for mem_cgroup_charge()
      	called against newly allocated anon pages.
      
        - mem_cgroup_charge_migrate_fixup()
              called only from remove_migration_ptes().
      	we'll have to rewrite this later (this patch just keeps the old behavior).
      	This function will be removed by an additional patch to make migration
      	clearer.
      
      Good for clarifying "what we do"
      
      Then, we have the following 4 charge points:
        - newpage
        - swap-in
        - add-to-cache
        - migration
      
      [akpm@linux-foundation.org: add missing inline directives to stubs]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7a81b88c
  17. 07 January 2009 (3 commits)
  18. 17 December 2008 (1 commit)
  19. 11 December 2008 (1 commit)
  20. 20 November 2008 (1 commit)
  21. 14 November 2008 (3 commits)
  22. 07 November 2008 (1 commit)
    • mm: move migrate_prep out from under mmap_sem · 0aedadf9
      Christoph Lameter authored
      Move migrate_prep() outside the mmap_sem for the following system calls:
      
      1. sys_move_pages
      2. sys_migrate_pages
      3. sys_mbind()
      
      It really does not matter when we flush the lru.  The system is free to
      add pages onto the lru even during migration which will make the page
      migration either skip the page (mbind, migrate_pages) or return a busy
      state (move_pages).
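      The reordering itself is simple (sketch of the call-site pattern, using
      the field names of that era; the VMA walk is elided):

        #include <linux/migrate.h>
        #include <linux/mm.h>
        #include <linux/mm_types.h>
        #include <linux/rwsem.h>

        static void collect_pages_sketch(struct mm_struct *mm)
        {
        	migrate_prep();		/* lru_add_drain_all() before taking mmap_sem */

        	down_read(&mm->mmap_sem);
        	/* ... walk the VMAs and isolate the pages to be migrated ... */
        	up_read(&mm->mmap_sem);
        }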
      
      Fixes this lockdep warning (and potential deadlock):
      
      Some VM place has
            mmap_sem -> kevent_wq via lru_add_drain_all()
      
      net/core/dev.c::dev_ioctl()  has
           rtnl_lock  ->  mmap_sem        (*) the ioctl has copy_from_user() and it can do page fault.
      
      linkwatch_event has
           kevent_wq -> rtnl_lock
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0aedadf9
  23. 20 October 2008 (1 commit)