1. 13 Nov 2008, 1 commit
    • cpusets: update mems allowed in page allocator · e33c3b5e
      Authored by David Rientjes
      If all allowable memory is unreclaimable, it is possible to loop forever
      in the page allocator for allocations that do not set __GFP_NORETRY.
      
      During this time, it is also possible for a task's cpuset to expand its
      set of allowable nodes so that it now includes free memory.  The cached
      copy of this set, current->mems_allowed, is stale, however, since there
      has not been a subsequent call to cpuset_update_task_memory_state().
      
      The cached copy of the set of allowable nodes is now updated in the page
      allocator's slow path so the additional memory is available to
      get_page_from_freelist().
      
      [akpm@linux-foundation.org: add comment]
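      
      In the slow path this amounts to refreshing the cached nodemask just
      before entering synchronous reclaim, roughly as sketched below (a
      simplified sketch of the 2.6.28-era slow path, not the exact hunk):
      
          /* We are about to enter synchronous reclaim for this task. */
          cpuset_memory_pressure_bump();
      
          /*
           * The task's cpuset may have expanded its set of allowable nodes
           * since current->mems_allowed was last cached; refresh the cached
           * copy so any newly allowed nodes are visible to
           * get_page_from_freelist() when the allocation is retried.
           */
          cpuset_update_task_memory_state();
      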
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 07 Nov 2008, 1 commit
  3. 20 Oct 2008, 10 commits
  4. 17 Oct 2008, 1 commit
  5. 03 Oct 2008, 1 commit
    • mm: handle initialising compound pages at orders greater than MAX_ORDER · 6babc32c
      Authored by Andy Whitcroft
      When we initialise a compound page we initialise the page flags and head
      page pointer for all base pages spanned by that page.  When we initialise
      a gigantic page (a page of order greater than or equal to MAX_ORDER) we
      have to initialise more than MAX_ORDER_NR_PAGES pages.  Currently we
      assume that all elements of the mem_map in this page are contiguous in
      memory.  However this is only guaranteed out to MAX_ORDER_NR_PAGES pages,
      and with SPARSEMEM enabled they will not be contiguous.  This leads us to
      walk off the end of the first section and scribble on everything which
      follows, which is bad.
      
      When we reach a MAX_ORDER_NR_PAGES boundary we must locate the next
      section of the mem_map.  As gigantic pages can only be maximally aligned
      we know this will occur at an exact multiple of MAX_ORDER_NR_PAGES pages
      from the start of the page.
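      
      The shape of the fix is roughly as below: a simplified sketch of
      prep_compound_page() with the boundary re-lookup added (from memory,
      not necessarily the exact patch):
      
          static void prep_compound_page(struct page *page, unsigned long order)
          {
                  int i;
                  int nr_pages = 1 << order;
                  struct page *p = page + 1;
      
                  set_compound_page_dtor(page, free_compound_page);
                  set_compound_order(page, order);
                  __SetPageHead(page);
                  for (i = 1; i < nr_pages; i++, p++) {
                          /*
                           * The mem_map is only guaranteed to be contiguous
                           * within a MAX_ORDER_NR_PAGES block, so re-locate
                           * the struct page via its pfn at each boundary
                           * instead of walking off the end of the section.
                           */
                          if (unlikely((i & (MAX_ORDER_NR_PAGES - 1)) == 0))
                                  p = pfn_to_page(page_to_pfn(page) + i);
                          __SetPageTail(p);
                          p->first_page = page;
                  }
          }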
      
      This is a bug fix for the gigantic page support in hugetlbfs.
      
      Credit to Mel Gorman for spotting the issue.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 03 Sep 2008, 2 commits
  7. 13 Aug 2008, 1 commit
  8. 31 Jul 2008, 1 commit
  9. 28 Jul 2008, 1 commit
  10. 25 Jul 2008, 12 commits
  11. 08 Jul 2008, 6 commits
  12. 04 Jul 2008, 1 commit
  13. 26 Jun 2008, 1 commit
  14. 10 Jun 2008, 1 commit
    • mm, x86: shrink_active_range() should check all · cc1a9d86
      Authored by Yinghai Lu
      Now we are using register_e820_active_regions() instead of calling
      add_active_range() directly, so the end_pfn recorded in early_node_map[]
      can differ from node_end_pfn.
      
      We therefore need to make shrink_active_range() smarter.
      
      shrink_active_range() is a generic MM function in mm/page_alloc.c but
      it is only used on 32-bit x86. Should we move it back to some file in
      arch/x86?
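      
      "Check all" here means walking every early_node_map[] entry registered
      for the node and trimming whatever extends past the new end, rather than
      assuming only the last registered range needs it.  A simplified sketch of
      that idea (a sketch only; the actual implementation may also remove the
      emptied entries from the map):
      
          void __init shrink_active_range(unsigned int nid, unsigned long new_end_pfn)
          {
                  int i;
      
                  for_each_active_range_index_in_nid(i, nid) {
                          if (early_node_map[i].start_pfn >= new_end_pfn) {
                                  /* Entire range lies beyond the new end: empty it. */
                                  early_node_map[i].start_pfn = 0;
                                  early_node_map[i].end_pfn = 0;
                          } else if (early_node_map[i].end_pfn > new_end_pfn) {
                                  /* Range straddles the new end: truncate it. */
                                  early_node_map[i].end_pfn = new_end_pfn;
                          }
                  }
          }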
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>