1. 13 Aug 2008 (1 commit)
  2. 31 Jul 2008 (1 commit)
  3. 28 Jul 2008 (1 commit)
  4. 25 Jul 2008 (12 commits)
  5. 08 Jul 2008 (6 commits)
  6. 04 Jul 2008 (1 commit)
  7. 26 Jun 2008 (1 commit)
  8. 10 Jun 2008 (2 commits)
  9. 03 Jun 2008 (1 commit)
  10. 25 May 2008 (3 commits)
    • memory hotplug: fix early allocation handling · cd94b9db
      Committed by Heiko Carstens
      Trying to add memory via add_memory() from within an initcall function
      results in
      
      bootmem alloc of 163840 bytes failed!
      Kernel panic - not syncing: Out of memory
      
      This is caused by zone_wait_table_init(), which uses system_state to
      decide whether it should use the bootmem allocator.
      
      When initcalls are handled, system_state is still SYSTEM_BOOTING, but
      the bootmem allocator doesn't work anymore, so the allocation will fail.
      
      To fix this, use slab_is_available() as the indicator instead, as we do
      everywhere else.
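      
      Roughly, the fixed check in zone_wait_table_init() looks like this
      (a paraphrase, not the verbatim kernel source):
      
      	if (!slab_is_available()) {
      		/* still early in boot: bootmem is the only allocator */
      		zone->wait_table = (wait_queue_head_t *)
      			alloc_bootmem_node(zone->zone_pgdat, alloc_size);
      	} else {
      		/* slab is up: covers initcalls and memory hotplug */
      		zone->wait_table = (wait_queue_head_t *)vmalloc(alloc_size);
      	}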
      
      [akpm@linux-foundation.org: coding-style fix]
      Reviewed-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd94b9db
    • zonelists: handle a node zonelist with no applicable entries · 7eb54824
      Committed by Andy Whitcroft
      When booting 2.6.26-rc3 on a multi-node x86_32 NUMA system we are seeing
      panics when trying node-local allocations:
      
       BUG: unable to handle kernel NULL pointer dereference at 0000034c
       IP: [<c1042507>] get_page_from_freelist+0x4a/0x18e
       *pdpt = 00000000013a7001 *pde = 0000000000000000
       Oops: 0000 [#1] SMP
       Modules linked in:
      
       Pid: 0, comm: swapper Not tainted (2.6.26-rc3-00003-g5abc28d #82)
       EIP: 0060:[<c1042507>] EFLAGS: 00010282 CPU: 0
       EIP is at get_page_from_freelist+0x4a/0x18e
       EAX: c1371ed8 EBX: 00000000 ECX: 00000000 EDX: 00000000
       ESI: f7801180 EDI: 00000000 EBP: 00000000 ESP: c1371ec0
        DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
       Process swapper (pid: 0, ti=c1370000 task=c12f5b40 task.ti=c1370000)
       Stack: 00000000 00000000 00000000 00000000 000612d0 000412d0 00000000 000412d0
              f7801180 f7c0101c f7c01018 c10426e4 f7c01018 00000001 00000044 00000000
              00000001 c12f5b40 00000001 00000010 00000000 000412d0 00000286 000412d0
       Call Trace:
        [<c10426e4>] __alloc_pages_internal+0x99/0x378
        [<c10429ca>] __alloc_pages+0x7/0x9
        [<c105e0e8>] kmem_getpages+0x66/0xef
        [<c105ec55>] cache_grow+0x8f/0x123
        [<c105f117>] ____cache_alloc_node+0xb9/0xe4
        [<c105f427>] kmem_cache_alloc_node+0x92/0xd2
        [<c122118c>] setup_cpu_cache+0xaf/0x177
        [<c105e6ca>] kmem_cache_create+0x2c8/0x353
        [<c13853af>] kmem_cache_init+0x1ce/0x3ad
        [<c13755c5>] start_kernel+0x178/0x1ee
      
      This occurs when we are scanning the zonelists looking for a ZONE_NORMAL
       page.  In this system there is only ZONE_DMA and ZONE_NORMAL memory on
       node 0; all other nodes are mapped above 4GB physical.  Here is a dump
      of the zonelists from this system:
      
          zonelists pgdat=c1400000
           0: c14006c0:2 f7c006c0:2 f7e006c0:2 c1400360:1 c1400000:0
           1: c14006c0:2 c1400360:1 c1400000:0
          zonelists pgdat=f7c00000
           0: f7c006c0:2 f7e006c0:2 c14006c0:2 c1400360:1 c1400000:0
           1: f7c006c0:2
          zonelists pgdat=f7e00000
           0: f7e006c0:2 c14006c0:2 f7c006c0:2 c1400360:1 c1400000:0
           1: f7e006c0:2
      
      When performing a node-local allocation we call get_page_from_freelist()
      looking for a page.  It in turn calls first_zones_zonelist(), which
      returns a preferred_zone.  Where there are no applicable zones this will
      be NULL.  However, we use it unconditionally, leading to this panic.
      
      Where there are no applicable zones there is no possibility of a successful
      allocation, so simply fail the allocation.
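      
      A sketch of the guard this adds in get_page_from_freelist() (paraphrased;
      exact signatures vary between kernel versions):
      
      	/*
      	 * Find the first zone at or below high_zoneidx that the
      	 * allocation may use; with no applicable entries in the
      	 * zonelist, preferred_zone comes back NULL.
      	 */
      	(void)first_zones_zonelist(zonelist, high_zoneidx, nodemask,
      				   &preferred_zone);
      	if (!preferred_zone)
      		return NULL;	/* no usable zone: fail the allocation */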
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7eb54824
    • mm: don't drop a partial page in a zone's memory map size · f7232154
      Committed by Johannes Weiner
      In a zone's count of present pages, account for all pages occupied by
      the memory map, including a trailing partial page.
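      
      A sketch of the adjusted accounting (assuming it sits in
      free_area_init_core(); paraphrased):
      
      	/*
      	 * PAGE_ALIGN() rounds up, so a partially used trailing page
      	 * of the memory map is counted instead of silently dropped.
      	 */
      	memmap_pages =
      		PAGE_ALIGN(size * sizeof(struct page)) >> PAGE_SHIFT;
      	if (realsize >= memmap_pages)
      		realsize -= memmap_pages;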
      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f7232154
  11. 15 May 2008 (1 commit)
    • memory_hotplug: always initialize pageblock bitmap · 76cdd58e
      Committed by Heiko Carstens
      Trying to online a new memory section that was added via memory hotplug
      sometimes results in crashes when the new pages are added via
      __free_page.  The reason for that is that the pageblock bitmap isn't
      initialized and hence contains random stuff.  That means that
      get_pageblock_migratetype() returns random stuff as well, and therefore
      
      	list_add(&page->lru,
      		&zone->free_area[order].free_list[migratetype]);
      
      in __free_one_page() tries to do a list_add to something that isn't even
      necessarily a list.
      
      This happens since 86051ca5 ("mm: fix usemap initialization"), which
      makes sure that the pageblock bitmap gets initialized only for pages
      present in a zone.  Unfortunately, for hot-added memory the zones "grow"
      after the memmap and the pageblock bitmap have been initialized.  This
      means that the new pages have an uninitialized bitmap.  To solve this,
      the calls to grow_zone_span() and grow_pgdat_span() are moved into
      __add_zone(), just before the initialization happens.
      
      The patch also moves the two functions since __add_zone() is the only
      caller and I didn't want to add a forward declaration.
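      
      The resulting ordering in __add_zone() is roughly (paraphrased):
      
      	/*
      	 * Grow the zone/node spans first, so the new pfns are part
      	 * of the zone by the time the memmap and pageblock bitmap
      	 * are initialized below.
      	 */
      	pgdat_resize_lock(zone->zone_pgdat, &flags);
      	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
      	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
      			phys_start_pfn + nr_pages);
      	pgdat_resize_unlock(zone->zone_pgdat, &flags);
      
      	memmap_init_zone(nr_pages, nid, zone_type, phys_start_pfn,
      			 MEMMAP_HOTPLUG);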
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      76cdd58e
  12. 30 Apr 2008 (1 commit)
    • infrastructure to debug (dynamic) objects · 3ac7fe5a
      Committed by Thomas Gleixner
      We can see an ever-repeating problem pattern with objects of any kind in
      the kernel:
      
      1) freeing of active objects
      2) reinitialization of active objects
      
      Both problems can be hard to debug because the crash happens at a point
      where we have no chance to decode the root cause anymore.  One problem
      spot is kernel timers, where the detection of the problem often happens
      in interrupt context and usually causes the machine to panic.
      
      While working on a timer-related bug report I had to hack specialized
      code into the timer subsystem to get a reasonable hint for the root
      cause.  This debug hack was fine for temporary use, but far from a
      mergeable solution due to its intrusiveness into the timer code.
      
      The code further lacked the ability to detect and report the root cause
      instantly and keep the system operational.
      
      Keeping the system operational is important to get hold of the debug
      information without special debugging aids like serial consoles and special
      knowledge of the bug reporter.
      
      The problems described above are not restricted to timers, but timers
      tend to expose them, usually in a full system crash.  Other objects are
      less explosive, but the symptoms caused by such mistakes can be even
      harder to debug.
      
      Instead of creating specialized debugging code for the timer subsystem,
      a generic infrastructure is created which allows developers to verify
      their code and provides an easy-to-enable debug facility for users in
      case of trouble.
      
      The debugobjects core code keeps track of operations on static and
      dynamic objects by inserting them into a hashed list and sanity-checking
      them on object operations, and it provides additional checks whenever
      kernel memory is freed.
      
      The tracked object operations are:
      - initializing an object
      - adding an object to a subsystem list
      - deleting an object from a subsystem list
      
      Each operation is sanity-checked before it is executed, and the
      subsystem-specific code can provide a fixup function which makes it
      possible to prevent damage from the operation.  When a sanity check
      triggers, a warning message and a stack trace are printed.
      
      The list of operations can be extended if the need arises.  For now it's
      limited to the requirements of the first user (timers).
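      
      For illustration, a minimal sketch of how a subsystem could hook into
      this API ("my_obj" is a hypothetical object type, and the callback
      signatures have changed over kernel versions):
      
      	#include <linux/debugobjects.h>
      
      	struct my_obj {			/* hypothetical example object */
      		struct list_head list;
      	};
      
      	static struct debug_obj_descr my_obj_descr = {
      		.name = "my_obj",
      		/* optional .fixup_* callbacks would go here */
      	};
      
      	static void my_obj_init(struct my_obj *obj)
      	{
      		debug_object_init(obj, &my_obj_descr);	/* tracked: init */
      		INIT_LIST_HEAD(&obj->list);
      	}
      
      	static void my_obj_add(struct my_obj *obj)
      	{
      		debug_object_activate(obj, &my_obj_descr); /* now active */
      		/* ... add obj to the subsystem list ... */
      	}
      
      	static void my_obj_del(struct my_obj *obj)
      	{
      		debug_object_deactivate(obj, &my_obj_descr);
      		/* ... remove obj from the subsystem list ... */
      	}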
      
      The core code enqueues the objects into hash buckets.  The hash index is
      generated from the address of the object to simplify the lookup for the check
      on kfree/vfree.  Each bucket has its own spinlock to avoid contention on a
      global lock.
      
      The debug code can be compiled in without being active.  The runtime overhead
      is minimal and could be optimized by asm alternatives.  A kernel command line
      option enables the debugging code.
      
      Thanks to Ingo Molnar for review, suggestions and cleanup patches.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Greg KH <greg@kroah.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3ac7fe5a
  13. 29 Apr 2008 (3 commits)
    • page allocator: smarter retry of costly-order allocations · a41f24ea
      Committed by Nishanth Aravamudan
      Because of page order checks in __alloc_pages(), hugepage (and similarly
      large order) allocations will not retry unless explicitly marked
      __GFP_REPEAT. However, the current retry logic is nearly an infinite
      loop (or runs until reclaim makes no progress whatsoever). For these costly
      allocations, that seems like overkill and could potentially never
      terminate. Mel observed that allowing current __GFP_REPEAT semantics for
      hugepage allocations essentially killed the system. I believe this is
      because we may continue to reclaim small orders of pages all over, but
      never have enough to satisfy the hugepage allocation request. This is
      clearly only a problem for large order allocations, of which hugepages
      are the most obvious (to me).
      
      Modify try_to_free_pages() to indicate how many pages were reclaimed.
      Use that information in __alloc_pages() to eventually fail a large
      __GFP_REPEAT allocation when we've reclaimed an order of pages equal to
      or greater than the allocation's order. This relies on lumpy reclaim
      functioning as advertised. Due to fragmentation, lumpy reclaim may not
      be able to free up the order needed in one invocation, so multiple
      iterations may be required. In other words, the more fragmented memory
      is, the more retry attempts __GFP_REPEAT will make (particularly for
      higher order allocations).
      
      This changes the semantics of __GFP_REPEAT subtly, but *only* for
      allocations > PAGE_ALLOC_COSTLY_ORDER. With this patch, for those size
      allocations, we will try up to some point (at least 1<<order reclaimed
      pages), rather than forever (which is the case for allocations <=
      PAGE_ALLOC_COSTLY_ORDER).
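      
      Condensed, the retry decision in __alloc_pages() then looks roughly
      like this (paraphrased):
      
      	pages_reclaimed += did_some_progress;
      	do_retry = 0;
      	if (!(gfp_mask & __GFP_NORETRY)) {
      		if (order <= PAGE_ALLOC_COSTLY_ORDER) {
      			do_retry = 1;	/* small orders: retry as before */
      		} else {
      			/*
      			 * Costly orders retry only with __GFP_REPEAT, and
      			 * only until about 1 << order pages have been
      			 * reclaimed on this allocation's behalf.
      			 */
      			if ((gfp_mask & __GFP_REPEAT) &&
      			    pages_reclaimed < (1 << order))
      				do_retry = 1;
      		}
      		if (gfp_mask & __GFP_NOFAIL)
      			do_retry = 1;
      	}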
      
      This change improves the /proc/sys/vm/nr_hugepages interface with a
      follow-on patch that makes pool allocations use __GFP_REPEAT. Rather
      than administrators repeatedly echo'ing a particular value into the
      sysctl, and forcing reclaim into action manually, this change allows for
      the sysctl to attempt a reasonable effort itself. Similarly, dynamic
      pool growth should be more successful under load, as lumpy reclaim can
      try to free up pages, rather than failing right away.
      
      Choosing to reclaim only up to the order of the requested allocation
      strikes a balance between not failing hugepage allocations and returning
      to the caller when it's unlikely to ever succeed. Because of lumpy
      reclaim, if we have freed the order requested, hopefully it has been in
      big chunks and those chunks will allow our allocation to succeed. If
      that isn't the case after freeing up the current order, I don't think it
      is likely to succeed in the future, although it is possible given a
      particular fragmentation pattern.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Tested-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a41f24ea
    • mm: fix misleading __GFP_REPEAT related comments · ab857d09
      Committed by Nishanth Aravamudan
      The definition and use of __GFP_REPEAT, __GFP_NOFAIL and __GFP_NORETRY in the
      core VM have somewhat differing comments as to their actual semantics.
      Annoyingly, the flags definition has inline and header comments, which might
      be interpreted as not being equivalent.  Just add references to the header
      comments in the inline ones so they don't go out of sync in the future.  In
      their use in __alloc_pages() clarify that the current implementation treats
      low-order allocations and __GFP_REPEAT allocations as distinct cases.
      
      To clarify, the flags' semantics are:
      
      __GFP_NORETRY means try no harder than one run through __alloc_pages
      
      __GFP_REPEAT means __GFP_NOFAIL
      
      __GFP_NOFAIL means repeat forever
      
      order <= PAGE_ALLOC_COSTLY_ORDER means __GFP_NOFAIL
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ab857d09
    • mm: fix usemap initialization · 86051ca5
      Committed by KAMEZAWA Hiroyuki
      The usemap must be initialized only when the pfn is within the zone;
      otherwise it corrupts memory.
      
      This patch also reduces the number of calls to
      set_pageblock_migratetype() by changing the guarding condition from
      
      	(pfn & (pageblock_nr_pages - 1))
      
      to
      
      	!(pfn & (pageblock_nr_pages - 1))
      
      since it should only be called once per pageblock.
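      
      A sketch of the fixed check in the zone's memmap initialization loop
      (paraphrased):
      
      	/*
      	 * Touch the usemap only for pfns inside the zone, and only
      	 * once per pageblock, at the block-aligned pfn.
      	 */
      	if ((zone->zone_start_pfn <= pfn) &&
      	    (pfn < zone->zone_start_pfn + zone->spanned_pages) &&
      	    !(pfn & (pageblock_nr_pages - 1)))
      		set_pageblock_migratetype(page, MIGRATE_MOVABLE);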
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Shi Weihua <shiwh@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      86051ca5
  14. 28 Apr 2008 (6 commits)