1. 03 Feb 2011 (2 commits)
  2. 14 Jan 2011 (4 commits)
    • thp: compound_trans_order · 37c2ac78
      Committed by Andrea Arcangeli
      Make reads of compound_trans_order safe. No-op for CONFIG_TRANSPARENT_HUGEPAGE=n.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: fix memory-failure hugetlbfs vs THP collision · 91600e9e
      Committed by Andrea Arcangeli
      hugetlbfs was changed to allow memory failure to migrate hugetlbfs
      pages, and that broke THP, because split_huge_page was then called on
      hugetlbfs pages too.
      
      compound_head/order was also run unsafely on THP pages, which can be
      split at any time.
      
      All compound_head() invocations in memory-failure.c that are run on pages
      that aren't pinned and that can be freed and reused from under us (while
      compound_head is running) are buggy because compound_head can return a
      dangling pointer, but I'm not fixing this as this is a generic
      memory-failure bug not specific to THP but it applies to hugetlbfs too, so
      I can fix it later after THP is merged upstream.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: split_huge_page paging · 3f04f62f
      Committed by Andrea Arcangeli
      Add paging logic that splits a huge page before it is unmapped and
      added to swap, to ensure backwards compatibility with the legacy swap
      code.  Eventually swap should page out hugepages natively to increase
      performance and decrease seeking and fragmentation of swap space.
      swapoff can just skip over huge pmds, as they cannot be part of swap
      yet.  In add_to_swap, be careful to split the page only after getting
      a valid swap entry, so we don't split hugepages when swap is full.
      
      In theory we could split pages before isolating them during the lru
      scan, but for khugepaged to be safe I'm relying on either mmap_sem
      write mode or PG_lock being taken, so split_huge_page has to run with
      either mmap_sem held (read or write mode) or PG_lock taken.  Calling
      it from isolate_lru_page would complicate the locking; moreover,
      split_huge_page would deadlock if called by __isolate_lru_page,
      because it has to take the lru lock to add the tail pages.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: migration: allow migration to operate asynchronously and avoid synchronous compaction in the faster path · 77f1fe6b
      Committed by Mel Gorman
      
      Migration synchronously waits for writeback if the initial pass fails.
      Callers of memory compaction do not necessarily want this behaviour if the
      caller is latency sensitive or expects that synchronous migration is not
      going to have a significantly better success rate.
      
      This patch adds a sync parameter to migrate_pages() allowing the caller to
      indicate if wait_on_page_writeback() is allowed within migration or not.
      For reclaim/compaction, try_to_compact_pages() is first called
      asynchronously, direct reclaim runs and then try_to_compact_pages() is
      called synchronously as there is a greater expectation that it'll succeed.
      
      [akpm@linux-foundation.org: build/merge fix]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 03 Dec 2010 (1 commit)
  4. 27 Oct 2010 (1 commit)
    • mm: compaction: fix COMPACTPAGEFAILED counting · cf608ac1
      Committed by Minchan Kim
      Presently update_nr_listpages() has no role: the list passed in is
      always empty just after calling migrate_pages(), because since commit
      aaa994b3 migrate_pages() cleans up pages that failed to migrate
      before returning:
      
       [PATCH] page migration: handle freeing of pages in migrate_pages()
      
       Do not leave pages on the lists passed to migrate_pages().  Seems that we will
       not need any postprocessing of pages.  This will simplify the handling of
       pages by the callers of migrate_pages().
      
      At that time we thought we wouldn't need any postprocessing of pages,
      but the situation has changed: compaction needs to know how many
      pages failed to migrate, for the COMPACTPAGEFAILED stat.
      
      This patch makes a new rule for callers of migrate_pages(): they must
      call putback_lru_pages() themselves.  The caller thus cleans up the
      list and gets a chance to postprocess the pages.  [suggested by
      Christoph Lameter]
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 08 Oct 2010 (9 commits)
  6. 07 Oct 2010 (2 commits)
  7. 11 Aug 2010 (4 commits)
  8. 01 Aug 2010 (2 commits)
  9. 30 Mar 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion;
         some needed manual addition, while for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles), and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests on
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  10. 07 Mar 2010 (1 commit)
    • mm: change anon_vma linking to fix multi-process server scalability issue · 5beb4930
      Committed by Rik van Riel
      The old anon_vma code can lead to scalability issues with heavily forking
      workloads.  Specifically, each anon_vma will be shared between the parent
      process and all its child processes.
      
      In a workload with 1000 child processes and a VMA with 1000 anonymous
      pages per process that get COWed, this leads to a system with a million
      anonymous pages in the same anon_vma, each of which is mapped in just one
      of the 1000 processes.  However, the current rmap code needs to walk them
      all, leading to O(N) scanning complexity for each page.
      
      This can result in systems where one CPU is walking the page tables of
      1000 processes in page_referenced_one, while all other CPUs are stuck on
      the anon_vma lock.  This leads to catastrophic failure for a benchmark
      like AIM7, where the total number of processes can reach in the tens of
      thousands.  Real workloads are still a factor 10 less process intensive
      than AIM7, but they are catching up.
      
      This patch changes the way anon_vmas and VMAs are linked, which allows us
      to associate multiple anon_vmas with a VMA.  At fork time, each child
      process gets its own anon_vmas, in which its COWed pages will be
      instantiated.  The parents' anon_vma is also linked to the VMA, because
      non-COWed pages could be present in any of the children.
      
      This reduces rmap scanning complexity to O(1) for the pages of the
      1000 child processes, with O(N) complexity for at most 1/N pages in
      the system.  This reduces the average scanning cost in heavily
      forking workloads from O(N) to 2.
      
      The only real complexity in this patch stems from the fact that linking a
      VMA to anon_vmas now involves memory allocations.  This means vma_adjust
      can fail, if it needs to attach a VMA to anon_vma structures.  This in
      turn means error handling needs to be added to the calling functions.
      
      A second source of complexity is that, because there can be multiple
      anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
      "the" anon_vma lock.  To prevent the rmap code from walking up an
      incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag.  This bit
      flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
      to make sure it is impossible to compile a kernel that needs both symbolic
      values for the same bitflag.
      
      Some test results:
      
      Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
      box with 16GB RAM and not quite enough IO), the system ends up running
      >99% in system time, with every CPU on the same anon_vma lock in the
      pageout code.
      
      With these changes, AIM7 hits the cross-over point around 29.7k users.
      This happens with ~99% IO wait time, there never seems to be any spike in
      system time.  The anon_vma lock contention appears to be resolved.
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 22 Dec 2009 (1 commit)
  12. 16 Dec 2009 (12 commits)
    • HWPOISON: Remove stray phrase in a comment · f2c03deb
      Committed by Andi Kleen
      Better to have complete sentences.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: Add soft page offline support · facb6011
      Committed by Andi Kleen
      This is a simpler, gentler variant of memory_failure() for soft page
      offlining, controlled from user space.  It doesn't kill anything; it
      just tries to invalidate the page and, if that doesn't work, migrates
      it away.
      
      This is useful for predictive failure analysis, where a page has
      a high rate of corrected errors, but hasn't gone bad yet. Instead
      it can be offlined early and avoided.
      
      The offlining is controlled from sysfs, including a new generic
      entry point for hard page offlining for symmetry too.
      
      We use the page isolation facility to prevent re-allocation races.
      Normally this is only used by memory hotplug.  To avoid races with
      memory allocation I am using lock_system_sleep().  This avoids the
      situation where memory hotplug is about to isolate a page range and
      then hwpoison undoes that work.  This is a big hammer, but the
      simplest solution for now.
      
      When the page is not free or on the LRU, we try to free pages from
      slab and other caches.  The slab freeing is currently quite dumb and
      does not try to focus on the specific slab cache which might own the
      page.  This could potentially be improved later.
      
      Thanks to Fengguang Wu and Haicheng Li for some fixes.
      
      [Added fix from Andrew Morton to adapt to new migrate_pages prototype]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: Use new shake_page in memory_failure · 0474a60e
      Committed by Andi Kleen
      shake_page handles more types of page caches than
      the much simpler lru_add_drain_all:
      
      - slab (quite inefficiently for now)
      - any other caches with a shrinker callback
      - per cpu page allocator pages
      - per CPU LRU
      
      Use this call to try to turn pages into free or LRU pages.  Then
      handle the case of the page becoming free after draining everything.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: add an interface to switch off/on all the page filters · 1bfe5feb
      Committed by Haicheng Li
      In some use cases the user doesn't need extra filtering.  E.g. a user
      program can inject errors through the madvise syscall into its own
      pages, but it might not know exactly what state a page is in or which
      inode it belongs to.
      
      So introduce an on/off interface, "corrupt-filter-enable".
      
      Echo 0 to switch off page filters, and echo 1 to switch on the filters.
      [AK: changed default to 0]
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: add memory cgroup filter · 4fd466eb
      Committed by Andi Kleen
      The hwpoison test suite needs to inject hwpoison into a collection of
      selected task pages, and must not touch pages not owned by those
      tasks, which would kill important system processes such as init.
      (But it's OK to mis-hwpoison free/unowned pages as well as shared
      clean pages.  Mis-hwpoison of shared dirty pages will kill all tasks,
      so the test suite will target all or none of such tasks in the first
      place.)
      
      The memory cgroup serves this purpose well. We can put the target
      processes under the control of a memory cgroup, and tell the hwpoison
      injection code to only kill pages associated with some active memory
      cgroup.
      
      The prerequisite for doing hwpoison stress tests with mem_cgroup is
      that the mem_cgroup code tracks task pages _accurately_ (unless the
      page is locked), which we believe is/should be true.
      
      The benefits are simplification of hwpoison injector code. Also the
      mem_cgroup code will automatically be tested by hwpoison test cases.
      
      The alternative interfaces pin-pfn/unpin-pfn could also delegate the
      (process and page flags) filtering functions reliably to user space.
      However, a prototype implementation showed that this scheme adds more
      complexity than we wanted.
      
      Example test case:
      
      	mkdir /cgroup/hwpoison
      
      	usemem -m 100 -s 1000 &
      	echo `jobs -p` > /cgroup/hwpoison/tasks
      
      	memcg_ino=$(ls -id /cgroup/hwpoison | cut -f1 -d' ')
      	echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg
      
      	page-types -p `pidof init`   --hwpoison  # shall do nothing
      	page-types -p `pidof usemem` --hwpoison  # poison its pages
      
      [AK: Fix documentation]
      [Add fix for problem noticed by Li Zefan <lizf@cn.fujitsu.com>;
      dentry in the css could be NULL]
      
      CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      CC: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      CC: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      CC: Balbir Singh <balbir@linux.vnet.ibm.com>
      CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      CC: Li Zefan <lizf@cn.fujitsu.com>
      CC: Paul Menage <menage@google.com>
      CC: Nick Piggin <npiggin@suse.de>
      CC: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: add page flags filter · 478c5ffc
      Committed by Wu Fengguang
      When specified, only poison pages if ((page_flags & mask) == value).
      
      -       corrupt-filter-flags-mask
      -       corrupt-filter-flags-value
      
      This allows stress testing of many kinds of pages.
      
      Strictly speaking, buddy pages require taking the zone lock, to avoid
      setting PG_hwpoison on a "was buddy but now allocated to someone"
      page.  However we can just do nothing, because we set PG_locked at
      the beginning; this prevents the page allocator from allocating the
      page to someone.  (It will BUG() on the unexpected PG_locked, which
      is fine for hwpoison testing.)
      
      [AK: Add select PROC_PAGE_MONITOR to satisfy dependency]
      
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: add fs/device filters · 7c116f2b
      Committed by Wu Fengguang
      Filesystem data/metadata present the most tricky-to-isolate pages.
      Getting them right requires careful code review and stress testing.
      
      The fs/device filter helps to target the stress tests to some specific
      filesystem pages. The filter condition is block device's major/minor
      numbers:
              - corrupt-filter-dev-major
              - corrupt-filter-dev-minor
      When specified (non -1), only page cache pages that belong to that
      device will be poisoned.
      
      The filters are checked reliably on the locked and refcounted page.
      
      Haicheng: clear PG_hwpoison and drop bad page count if filter not OK
      AK: Add documentation
      
      CC: Haicheng Li <haicheng.li@intel.com>
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: return 0 to indicate success reliably · 138ce286
      Committed by Wu Fengguang
      Return 0 to indicate success when
      - the action result is RECOVERED or DELAYED
      - there is no extra page reference
      
      Note that dirty swapcache pages are kept in swapcache, so they can
      have one extra reference count.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: make semantics of IGNORED/DELAYED clear · d95ea51e
      Committed by Wu Fengguang
      Change semantics for
      - IGNORED: not handled; it may well be _unsafe_
      - DELAYED: to be handled later; it is _safe_
      
      With this change,
      - IGNORED/FAILED mean (maybe) Error
      - DELAYED/RECOVERED mean Success
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
    • HWPOISON: Add unpoisoning support · 847ce401
      Committed by Wu Fengguang
      The unpoisoning interface is useful for stress testing tools to
      reclaim poisoned pages (to prevent OOM).
      
      There is no hardware-level unpoisoning, so this cannot be used for
      real memory errors, only for software-injected errors.
      
      Note that it may leak pages silently: those which have been removed
      from the LRU cache, but not isolated from the page cache/swap cache
      at hwpoison time.  In particular, stress tests of dirty swap cache
      pages should reboot the system before exhausting memory.
      
      AK: Fix comments, add documentation, add printks, rename symbol
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>