1. 23 Jun 2006, 30 commits
  2. 13 Jun 2006, 2 commits
  3. 12 Jun 2006, 1 commit
  4. 03 Jun 2006, 1 commit
    • [PATCH] slab.c: fix offslab_limit bug · b1ab41c4
      Committed by Ingo Molnar
      mm/slab.c's offslab_limit logic is totally broken.
      
      Firstly, "offslab_limit" is a global variable, while it should either be
      calculated in situ or be passed in as a parameter.
      
      Secondly, the more serious problem with it is that the condition for
      calculating it:
      
                     if (!(OFF_SLAB(sizes->cs_cachep))) {
                             offslab_limit = sizes->cs_size - sizeof(struct slab);
                             offslab_limit /= sizeof(kmem_bufctl_t);
      
      is in total disconnect with the condition that makes use of it:
      
                     /* More than offslab_limit objects will cause problems */
                     if ((flags & CFLGS_OFF_SLAB) && num > offslab_limit)
                             break;
      
      but because offslab_limit is a global variable, this breakage was
      hidden.
      
      It stayed hidden until lockdep came along and perturbed the slab sizes
      enough that the first off-slab cache saw a (never calculated) zero
      value for offslab_limit and panicked with:
      
        kmem_cache_create: couldn't create cache size-512.
      
        Call Trace:
         [<ffffffff8020a5b9>] show_trace+0x96/0x1c8
         [<ffffffff8020a8f0>] dump_stack+0x13/0x15
         [<ffffffff8022994f>] panic+0x39/0x21a
         [<ffffffff80270814>] kmem_cache_create+0x5a0/0x5d0
         [<ffffffff80aced62>] kmem_cache_init+0x193/0x379
         [<ffffffff80abf779>] start_kernel+0x17f/0x218
         [<ffffffff80abf263>] _sinittext+0x263/0x26a
      
        Kernel panic - not syncing: kmem_cache_create(): failed to create slab `size-512'
      
      Paolo Ornati's config on x86_64 managed to trigger it.
      
      The fix is to move the calculation to the place that makes use of it.
      This also makes slab.o 54 bytes smaller.
      
      Btw., the check itself is quite silly.  Its intention is to test whether
      the number of objects per slab would be higher than the number of slab
      control pointers that can possibly fit.  In theory it could be triggered:
      if someone tried to create a cache of 4-byte objects and explicitly
      requested CFLGS_OFF_SLAB.  So I kept the check.
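      
      The gist of the fix is to pair the calculation with the check that uses
      it (a sketch only, reusing the names from the fragments quoted above;
      here "size" stands for the size of the chunk that will hold the
      off-slab management, which is an assumption for illustration):
      
               if (flags & CFLGS_OFF_SLAB) {
                       /* compute the limit right where it is needed:
                        * how many kmem_bufctl_t entries fit next to
                        * the struct slab in the off-slab management */
                       offslab_limit = size - sizeof(struct slab);
                       offslab_limit /= sizeof(kmem_bufctl_t);
      
                       if (num > offslab_limit)
                               break;
               }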
      
      Out of historic interest I checked how old this bug was, and it is
      ancient: 10 years old!  It is the oldest hidden-and-then-truly-triggering
      bug I have ever seen fixed in the kernel!
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 01 Jun 2006, 1 commit
  6. 22 May 2006, 3 commits
    • [PATCH] Align the node_mem_map endpoints to a MAX_ORDER boundary · e984bb43
      Committed by Bob Picco
      Andy added code to the buddy allocator which does not require the zone's
      endpoints to be aligned to MAX_ORDER.  The issue is that the buddy
      allocator still requires the node_mem_map's endpoints to be MAX_ORDER
      aligned; otherwise __page_find_buddy could compute a buddy not in
      node_mem_map for partial MAX_ORDER regions at the zone's endpoints.
      page_is_buddy will detect that these pages at the endpoints are not
      PG_buddy (they were zeroed out by the bootmem allocator and are not part
      of the zone).  The negative here is that we could waste a little memory,
      but the positive is eliminating all the old checks for zone boundary
      conditions.
      
      SPARSEMEM won't encounter this issue because of MAX_ORDER size constraint
      when SPARSEMEM is configured.  ia64 VIRTUAL_MEM_MAP doesn't need the logic
      either because the holes and endpoints are handled differently.  This
      leaves checking alloc_remap and other arches which privately allocate for
      node_mem_map.
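      
      A minimal sketch of the rounding this implies when sizing node_mem_map
      (identifier names such as MAX_ORDER_NR_PAGES and the pgdat fields are
      assumptions for illustration, not quoted from the patch):
      
               /* round the node's pfn range out to MAX_ORDER-sized chunks
                * before sizing node_mem_map, so buddy lookups near the
                * endpoints always land inside the map */
               start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
               end = pgdat->node_start_pfn + pgdat->node_spanned_pages;
               end = ALIGN(end, MAX_ORDER_NR_PAGES);
               size = (end - start) * sizeof(struct page);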
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Cpuset: might sleep checking zones allowed fix · bdd804f4
      Committed by Paul Jackson
      Fix a couple of infrequently encountered 'sleeping function called from
      invalid context' warnings in the cpuset hooks in __alloc_pages(), which
      could sleep while interrupts are disabled.
      
      The routine cpuset_zone_allowed() is called by __alloc_pages() in
      mm/page_alloc.c to determine if a zone is allowed in the current task's
      cpuset.  This routine can sleep, for certain GFP_KERNEL allocations, if
      the zone is on a memory node not allowed in the current cpuset but
      possibly allowed in a parent cpuset.
      
      But we can't sleep in __alloc_pages() if in interrupt, nor if called for a
      GFP_ATOMIC request (__GFP_WAIT not set in gfp_flags).
      
      The rule was intended to be:
        Don't call cpuset_zone_allowed() if you can't sleep, unless you
        pass in the __GFP_HARDWALL flag set in gfp_flags, which disables
        the code that might scan up ancestor cpusets and sleep.
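      
      In code, the rule amounts to something like the following in the
      allocator's zone loop (a sketch of the intended pattern, not the literal
      patch; the surrounding variable names are assumed):
      
               /* atomic / interrupt path: force __GFP_HARDWALL so that
                * cpuset_zone_allowed() only checks the current cpuset and
                * never walks (and sleeps on) ancestor cpusets */
               if (in_interrupt() || !(gfp_mask & __GFP_WAIT))
                       allowed = cpuset_zone_allowed(zone, gfp_mask | __GFP_HARDWALL);
               else
                       allowed = cpuset_zone_allowed(zone, gfp_mask);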
      
      This rule was being violated in a couple of places, due to a bogus change
      made (by myself, pj) to __alloc_pages() as part of the November 2005 effort
      to cleanup its logic, and also due to a later fix to constrain which swap
      daemons were awoken.
      
      The bogus change can be seen at:
        http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-11/4691.html
        [PATCH 01/05] mm fix __alloc_pages cpuset ALLOC_* flags
      
      This was first noticed on a tight memory system, in code that was disabling
      interrupts and doing allocation requests with __GFP_WAIT not set, which
      resulted in __might_sleep() writing complaints to the log "Debug: sleeping
      function called ...", when the code in cpuset_zone_allowed() tried to take
      the callback_sem cpuset semaphore.
      
      We haven't seen a system hang on this 'might_sleep' yet, but we are at
      decent risk of seeing it fairly soon, especially since the additional
      cpuset_zone_allowed() check was added, conditioning wakeup_kswapd(), in
      March 2006.
      
      Special thanks to Dave Chinner, for figuring this out, and a tip of the hat
      to Nick Piggin who warned me of this back in Nov 2005, before I was ready
      to listen.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] SPARSEMEM incorrectly calculates section number · 12783b00
      Committed by Mike Kravetz
      A bad calculation/loop in __section_nr() could result in incorrect section
      information being put into sysfs memory entries.  This primarily impacts
      memory add operations as the sysfs information is used while onlining new
      memory.
      
      Fix suggested by Dave Hansen.
      
      Note that the bug may not be obvious from the patch.  It actually occurs in
      the function's return statement:
      
      	return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
      
      In the existing code, root_nr has already been multiplied by
      SECTIONS_PER_ROOT.
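      
      One consistent way to pair the loop with that return statement is to
      step the loop by root index and scale only once, in the return (a
      sketch; the loop bounds and the __nr_to_section() helper used here are
      assumptions, not quoted from the patch):
      
               for (root_nr = 0; root_nr < NR_SECTION_ROOTS; root_nr++) {
                       root = __nr_to_section(root_nr * SECTIONS_PER_ROOT);
                       if (!root)
                               continue;
                       if ((ms >= root) && (ms < (root + SECTIONS_PER_ROOT)))
                               break;
               }
               /* root_nr counts roots here, so the scaling by
                * SECTIONS_PER_ROOT happens exactly once */
               return (root_nr * SECTIONS_PER_ROOT) + (ms - root);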
      Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 16 May 2006, 2 commits
    • [PATCH] slab: Fix kmem_cache_destroy() on NUMA · a4523a8b
      Committed by Roland Dreier
      With CONFIG_NUMA set, kmem_cache_destroy() may fail and say "Can't
      free all objects."  The problem is caused by sequences such as the
      following (suppose we are on a NUMA machine with two nodes, 0 and 1):
      
       * Allocate an object from cache on node 0.
       * Free the object on node 1.  The object is put into node 1's alien
         array_cache for node 0.
       * Call kmem_cache_destroy(), which ultimately ends up in __cache_shrink().
       * __cache_shrink() does drain_cpu_caches(), which loops through all nodes.
         For each node it drains the shared array_cache and then handles the
         alien array_cache for the other node.
      
      However this means that node 0's shared array_cache will be drained,
      and then node 1 will move the contents of its alien[0] array_cache
      into that same shared array_cache.  node 0's shared array_cache is
      never looked at again, so the objects left there will appear to be in
      use when __cache_shrink() calls __node_shrink() for node 0.  So
      __node_shrink() will return 1 and kmem_cache_destroy() will fail.
      
      This patch fixes this by having drain_cpu_caches() do
      drain_alien_cache() on every node before it does drain_array() on the
      nodes' shared array_caches.
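      
      A sketch of the reordered drain, split into two passes over the online
      nodes (helper names and signatures here follow the description above
      and are not guaranteed to match the tree this patch applied to):
      
               /* pass 1: flush every node's alien caches back to their
                * home nodes first ... */
               for_each_online_node(node) {
                       l3 = cachep->nodelists[node];
                       if (l3)
                               drain_alien_cache(cachep, l3->alien);
               }
               /* ... pass 2: only then drain each node's shared
                * array_cache, so nothing gets parked there afterwards */
               for_each_online_node(node) {
                       l3 = cachep->nodelists[node];
                       if (l3)
                               drain_array(cachep, l3, l3->shared, 1, node);
               }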
      
      The problem was originally reported by Or Gerlitz <ogerlitz@voltaire.com>.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] add slab_is_available() routine for boot code · 39d24e64
      Committed by Mike Kravetz
      slab_is_available() indicates slab based allocators are available for use.
      SPARSEMEM code needs to know this as it can be called at various times
      during the boot process.
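      
      Typical intended use in boot-path code is a guard like the following
      (using alloc_bootmem() as the early-boot fallback is an illustrative
      assumption, not quoted from the patch):
      
               if (slab_is_available())
                       ptr = kmalloc(size, GFP_KERNEL);
               else
                       ptr = alloc_bootmem(size);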
      Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>