1. 23 Jun 2006, 22 commits
  2. 13 Jun 2006, 2 commits
  3. 12 Jun 2006, 1 commit
  4. 03 Jun 2006, 1 commit
    •
      [PATCH] slab.c: fix offslab_limit bug · b1ab41c4
      Committed by Ingo Molnar
      mm/slab.c's offslab_limit logic is totally broken.
      
      Firstly, "offslab_limit" is a global variable, while it should either be
      calculated in situ or passed in as a parameter.
      
      Secondly, the more serious problem with it is that the condition for
      calculating it:
      
                     if (!(OFF_SLAB(sizes->cs_cachep))) {
                             offslab_limit = sizes->cs_size - sizeof(struct slab);
                             offslab_limit /= sizeof(kmem_bufctl_t);
      
      is in total disconnect with the condition that makes use of it:
      
                     /* More than offslab_limit objects will cause problems */
                     if ((flags & CFLGS_OFF_SLAB) && num > offslab_limit)
                             break;
      
      but due to offslab_limit being a global variable this breakage was
      hidden.
      
      That is, until lockdep came along and perturbed the slab sizes
      sufficiently, so that the first off-slab cache saw a (non-calculated)
      zero value for offslab_limit and panicked with:
      
        kmem_cache_create: couldn't create cache size-512.
      
        Call Trace:
         [<ffffffff8020a5b9>] show_trace+0x96/0x1c8
         [<ffffffff8020a8f0>] dump_stack+0x13/0x15
         [<ffffffff8022994f>] panic+0x39/0x21a
         [<ffffffff80270814>] kmem_cache_create+0x5a0/0x5d0
         [<ffffffff80aced62>] kmem_cache_init+0x193/0x379
         [<ffffffff80abf779>] start_kernel+0x17f/0x218
         [<ffffffff80abf263>] _sinittext+0x263/0x26a
      
        Kernel panic - not syncing: kmem_cache_create(): failed to create slab `size-512'
      
      Paolo Ornati's config on x86_64 managed to trigger it.
      
      The fix is to move the calculation to the place that makes use of it.
      This also makes slab.o 54 bytes smaller.
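
      A minimal sketch of the fix, assuming the calculation lands in the
      slab-order estimation loop right next to the existing check (variable
      names follow the snippets quoted above; a sketch, not the verbatim
      patch):

		if (flags & CFLGS_OFF_SLAB) {
			/* Compute the limit in situ, right where it is
			 * used, instead of reading a stale global. */
			offslab_limit = size - sizeof(struct slab);
			offslab_limit /= sizeof(kmem_bufctl_t);

			/* More than offslab_limit objects will cause problems */
			if (num > offslab_limit)
				break;
		}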
      
      Btw., the check itself is quite silly.  Its intention is to test whether
      the number of objects per slab would be higher than the number of slab
      control pointers possible.  In theory it could be triggered: if someone
      tried to create a cache of 4-byte objects and explicitly requested it
      with CFLGS_OFF_SLAB.  So I kept the check.
      
      Out of historic interest I checked how old this bug was, and it's
      ancient: 10 years old!  It is the oldest hidden, and then truly
      triggering, bug I ever saw being fixed in the kernel!
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 01 Jun 2006, 1 commit
  6. 22 May 2006, 3 commits
    •
      [PATCH] Align the node_mem_map endpoints to a MAX_ORDER boundary · e984bb43
      Committed by Bob Picco
      Andy added code to the buddy allocator which does not require the zone's
      endpoints to be aligned to MAX_ORDER.  An issue is that the buddy
      allocator requires the node_mem_map's endpoints to be MAX_ORDER aligned.
      Otherwise __page_find_buddy could compute a buddy not in node_mem_map
      for partial MAX_ORDER regions at the zone's endpoints.  page_is_buddy
      will detect that these pages at the endpoints are not PG_buddy (they
      were zeroed out by the bootmem allocator and are not part of the zone).
      Of course, the negative here is that we could waste a little memory, but
      the positive is eliminating all the old checks for zone boundary
      conditions.
      
      SPARSEMEM won't encounter this issue because of MAX_ORDER size constraint
      when SPARSEMEM is configured.  ia64 VIRTUAL_MEM_MAP doesn't need the logic
      either because the holes and endpoints are handled differently.  This
      leaves checking alloc_remap and other arches which privately allocate for
      node_mem_map.
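
      A minimal sketch of the alignment where node_mem_map is sized, with
      MAX_ORDER_NR_PAGES standing for the number of pages in a MAX_ORDER
      block (a sketch of the idea, not the verbatim patch):

		/* Round start down and end up to a MAX_ORDER boundary, so
		 * __page_find_buddy() never computes a buddy outside the
		 * map. */
		start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
		end = pgdat->node_start_pfn + pgdat->node_spanned_pages;
		end = ALIGN(end, MAX_ORDER_NR_PAGES);
		size = (end - start) * sizeof(struct page);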
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    •
      [PATCH] Cpuset: might sleep checking zones allowed fix · bdd804f4
      Committed by Paul Jackson
      Fix a couple of infrequently encountered 'sleeping function called from
      invalid context' warnings in the cpuset hooks in __alloc_pages(), which
      could sleep while interrupts were disabled.
      
      The routine cpuset_zone_allowed() is called from __alloc_pages() in
      mm/page_alloc.c to determine if a zone is allowed in the current task's
      cpuset.  This routine can sleep, for certain GFP_KERNEL allocations, if the
      zone is on a memory node not allowed in the current cpuset, but might be
      allowed in a parent cpuset.
      
      But we can't sleep in __alloc_pages() if in interrupt, nor if called for a
      GFP_ATOMIC request (__GFP_WAIT not set in gfp_flags).
      
      The rule was intended to be:
        Don't call cpuset_zone_allowed() if you can't sleep, unless you
        pass in the __GFP_HARDWALL flag set in gfp_flag, which disables
        the code that might scan up ancestor cpusets and sleep.
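
      A sketch of a call site honouring that rule; cpuset_zone_allowed() and
      the gfp flags are as described above, while the surrounding locals
      (zone, gfp_mask, allowed) are illustrative:

		if (in_interrupt() || !(gfp_mask & __GFP_WAIT))
			/* Cannot sleep: only the hardwall check, which
			 * never scans ancestor cpusets, is safe here. */
			allowed = cpuset_zone_allowed(zone,
					gfp_mask | __GFP_HARDWALL);
		else
			/* May sleep: the full ancestor-scanning check
			 * is fine. */
			allowed = cpuset_zone_allowed(zone, gfp_mask);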
      
      This rule was being violated in a couple of places, due to a bogus
      change made (by myself, pj) to __alloc_pages() as part of the November
      2005 effort to clean up its logic, and also due to a later fix to
      constrain which swap daemons were awoken.
      
      The bogus change can be seen at:
        http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-11/4691.html
        [PATCH 01/05] mm fix __alloc_pages cpuset ALLOC_* flags
      
      This was first noticed on a tight memory system, in code that was disabling
      interrupts and doing allocation requests with __GFP_WAIT not set, which
      resulted in __might_sleep() writing complaints to the log "Debug: sleeping
      function called ...", when the code in cpuset_zone_allowed() tried to take
      the callback_sem cpuset semaphore.
      
      We haven't seen a system hang on this 'might_sleep' yet, but we are at
      decent risk of seeing it fairly soon, especially since the additional
      cpuset_zone_allowed() check conditioning wakeup_kswapd() was added in
      March 2006.
      
      Special thanks to Dave Chinner, for figuring this out, and a tip of the hat
      to Nick Piggin who warned me of this back in Nov 2005, before I was ready
      to listen.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    •
      [PATCH] SPARSEMEM incorrectly calculates section number · 12783b00
      Committed by Mike Kravetz
      A bad calculation/loop in __section_nr() could result in incorrect section
      information being put into sysfs memory entries.  This primarily impacts
      memory add operations as the sysfs information is used while onlining new
      memory.
      
      Fix suggested by Dave Hansen.
      
      Note that the bug may not be obvious from the patch.  It actually occurs in
      the function's return statement:
      
      	return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
      
      In the existing code, root_nr has already been multiplied by
      SECTIONS_PER_ROOT.
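
      With root_nr stepping one root at a time, the multiplication in the
      return statement becomes valid; a sketch consistent with the
      description (NR_SECTION_ROOTS and __nr_to_section() are assumed from
      the sparsemem code, and this is not necessarily the verbatim fix):

		for (root_nr = 0; root_nr < NR_SECTION_ROOTS; root_nr++) {
			root = __nr_to_section(root_nr * SECTIONS_PER_ROOT);
			if (!root)
				continue;
			if ((ms >= root) && (ms < (root + SECTIONS_PER_ROOT)))
				break;
		}
		/* root_nr now counts roots, not sections: */
		return (root_nr * SECTIONS_PER_ROOT) + (ms - root);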
      Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 16 May 2006, 3 commits
  8. 02 May 2006, 3 commits
    •
      [PATCH] spufs: fix for CONFIG_NUMA · bed120c6
      Committed by Joel H Schopp
      Based on an older patch from Mike Kravetz <kravetz@us.ibm.com>
      
      We need to have a mem_map for high addresses in order to make fops->no_page
      work on spufs mem and register files.  So far, we have used the
      memory_present() function during early bootup, but that did not work when
      CONFIG_NUMA was enabled.
      
      We now use the __add_pages() function to add the mem_map when loading the
      spufs module, which is a lot nicer.
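
      A minimal sketch of that call at module load time; the zone, start_pfn
      and nr_pages locals are illustrative assumptions, not spufs code:

		/* Create a mem_map for the spufs mem/register file area so
		 * that no_page can hand out struct pages for it. */
		ret = __add_pages(zone, start_pfn, nr_pages);
		if (ret)
			return ret;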
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    •
      [PATCH] sparsemem interaction with memory add bug fixes · 46a66eec
      Committed by Mike Kravetz
      This patch fixes two bugs with the way sparsemem interacts with memory add.
      They are:
      
      - memory leak if memmap for section already exists
      
      - calling alloc_bootmem_node() after boot
      
      These bugs were discovered, and a first cut at the fixes was provided,
      by Arnd Bergmann <arnd@arndb.de> and Joel Schopp <jschopp@us.ibm.com>.
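
      A sketch of the allocator half of the fix, assuming slab_is_available()
      is the boot-versus-runtime test; memmap_size and pgdat are illustrative
      locals, and the leak half (freeing a duplicate memmap instead of
      dropping it) is not shown:

		/* alloc_bootmem_node() is only valid during early boot;
		 * afterwards, fall back to the slab allocator. */
		if (slab_is_available())
			memmap = kmalloc(memmap_size, GFP_KERNEL);
		else
			memmap = alloc_bootmem_node(pgdat, memmap_size);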
      Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
      Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    •
      [PATCH] page migration: Fix fallback behavior for dirty pages · 4c28f811
      Committed by Christoph Lameter
      Currently we check PageDirty() in order to make the decision to swap out
      the page.  However, the dirty information may only be contained in the
      ptes pointing to the page.  We need to first unmap the ptes before
      checking for PageDirty().  If the unmap is successful, the page count of
      the page will also be decreased so that pageout() works properly.
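
      A sketch of the resulting order in the fallback path, assuming the
      2.6.17-era try_to_unmap(page, migration) signature; the unlock label
      and error handling are illustrative:

		/* Unmap first: try_to_unmap() transfers the pte dirty bits
		 * into the struct page, so PageDirty() is meaningful
		 * afterwards. */
		if (try_to_unmap(page, 1) != SWAP_SUCCESS)
			goto unlock;	/* could not detach all ptes */
		if (PageDirty(page))
			return pageout(page);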
      
      This is a fix necessary for 2.6.17.  Without this fix we may migrate dirty
      pages for filesystems without migration functions.  Filesystems may keep
      pointers to dirty pages.  Migration of dirty pages can result in the
      filesystem keeping pointers to freed pages.
      
      Unmapping is currently not separated out from removing all the
      references to a page and moving the mapping.  Therefore try_to_unmap()
      will be called again in migrate_page() if the writeout is successful.
      However, it won't do anything since the ptes are already removed.
      
      The coming updates to the page migration code will restructure the code
      so that this is no longer necessary.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  9. 29 Apr 2006, 1 commit
  10. 27 Apr 2006, 1 commit
  11. 26 Apr 2006, 1 commit
  12. 23 Apr 2006, 1 commit
    •
      [PATCH] add migratepage address space op to shmem · 304dbdb7
      Committed by Lee Schermerhorn
      Basic problem: pages of a shared memory segment can only be migrated once.
      
      In 2.6.16 through 2.6.17-rc1, shared memory mappings do not have a
      migratepage address space op.  Therefore, migrate_pages() falls back to
      default processing.  In this path, it will try to pageout() dirty pages.
      Once a shared memory page has been migrated it becomes dirty, so
      migrate_pages() will try to page it out.  However, since the page count
      is 3 [cache + current + pte], pageout() will return PAGE_KEEP, because
      is_page_cache_freeable() returns false.  This aborts all subsequent
      migrations.
      
      This patch adds a migratepage address space op to shared memory segments to
      avoid taking the default path.  We use the "migrate_page()" function
      because it knows how to migrate dirty pages.  This allows shared memory
      segment pages to migrate, subject to other conditions such as # pte's
      referencing the page [page_mapcount(page)], when requested.
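
      The wiring itself is a one-line addition to shmem's
      address_space_operations; a sketch with the other ops elided
      (shmem_writepage shown only for context):

		static struct address_space_operations shmem_aops = {
			.writepage	= shmem_writepage,
			/* ... */
			.migratepage	= migrate_page,	/* handles dirty pages */
		};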
      
      I think this is safe.  If we're migrating a shared memory page, then we
      found the page via a page table, so it must be in memory.
      
      Can be verified with memtoy and the shmem-mbind-test script, both
      available at: http://free.linux.hp.com/~lts/Tools/
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>