1. 27 April 2008 (11 commits)
  2. 24 April 2008 (1 commit)
  3. 14 April 2008 (6 commits)
  4. 02 April 2008 (1 commit)
  5. 28 March 2008 (1 commit)
  6. 27 March 2008 (1 commit)
  7. 18 March 2008 (1 commit)
    • slub page alloc fallback: Enable interrupts for GFP_WAIT. · caeab084
      Committed by Christoph Lameter
      The fallback path needs to enable interrupts, as is done for the
      other page allocator calls. This was not necessary with the
      alternate fast path, since we handled irq enable/disable in the
      slow path. The regular fastpath handles irq enable/disable around
      calls to the slow path, so we need to restore the proper interrupt
      status before calling the page allocator from the slowpath.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
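      A minimal sketch of the fix described above, assuming the slow
      path calls new_slab() with interrupts disabled; the surrounding
      context is illustrative, not the verbatim diff:

      	if (gfpflags & __GFP_WAIT)
      		local_irq_enable();	/* the page allocator may sleep */

      	page = new_slab(s, gfpflags, node);

      	if (gfpflags & __GFP_WAIT)
      		local_irq_disable();	/* restore slow-path irq state */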
  8. 07 March 2008 (2 commits)
  9. 04 March 2008 (11 commits)
  10. 20 February 2008 (1 commit)
    • Revert "SLUB: Alternate fast paths using cmpxchg_local" · 00e962c5
      Committed by Linus Torvalds
      This reverts commit 1f84260c, which is
      suspected to be the reason for some very occasional and hard-to-trigger
      crashes that usually look related to memory allocation (mostly reported
      in networking, but since that's generally the most common source of
      shortlived allocations - and allocations in interrupt contexts - that in
      itself is not a big clue).
      
      See for example
      	http://bugzilla.kernel.org/show_bug.cgi?id=9973
      	http://lkml.org/lkml/2008/2/19/278
      etc.
      
      One promising suspicion about the root cause of the bug (which
      would also explain why it is so hard to trigger in practice) came
      from Eric Dumazet:
      
         "I wonder how SLUB_FASTPATH is supposed to work, since it is affected
          by a classical ABA problem of lockless algo.
      
          cmpxchg_local(&c->freelist, object, object[c->offset]) can succeed,
          while an interrupt came (on this cpu), and several allocations were
          done, and one free was performed at the end of this interruption, so
          'object' was recycled.
      
          c->freelist can then contain the previous value (object), but
          object[c->offset] was changed by IRQ.
      
          We then put back in freelist an already allocated object."
      
      but another reason for the revert is simply that everybody agrees
      this code was the main suspect, just by virtue of the pattern of
      oopses.
      
      Cc: Torsten Kaiser <just.for.lkml@googlemail.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
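      To make the race Eric describes concrete, here is a hedged,
      self-contained userspace sketch of the same lockless-LIFO pop
      pattern; all names are illustrative and this is not the slub.c
      code itself:

      	#include <stdatomic.h>
      	#include <stddef.h>

      	struct object {
      		struct object *next;
      	};

      	static _Atomic(struct object *) freelist;

      	static struct object *pop(void)
      	{
      		struct object *old = atomic_load(&freelist);
      		struct object *next;

      		if (!old)
      			return NULL;
      		next = old->next;	/* read the successor of head A */
      		/*
      		 * If an interrupt (or another thread) pops A here, pops
      		 * or pushes other objects, and finally frees A back onto
      		 * the list, the head is A again: the CAS below succeeds
      		 * even though 'next' is stale, and an object that is
      		 * still allocated ends up back on the freelist.
      		 */
      		if (atomic_compare_exchange_strong(&freelist, &old, next))
      			return old;
      		return NULL;		/* lost a race; real code retries */
      	}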
  11. 15 February 2008 (4 commits)
    • slub: Support 4k kmallocs again to compensate for page allocator slowness · 331dc558
      Committed by Christoph Lameter
      Currently we hand off PAGE_SIZE-sized kmallocs to the page
      allocator in the mistaken belief that it can handle these
      allocations effectively. However, measurements indicate a slowdown
      by a factor of at least 8 (and that is only SMP; NUMA is much
      worse) versus the slub fastpath, which causes regressions in
      tbench.

      Increase the number of kmalloc caches by one so that we again
      handle 4k kmallocs directly from slub. 4k page buffering for the
      page allocator will be performed by slub, as slab already does it.
      
      At some point the page allocator fastpath should be fixed. Much of
      the kernel would benefit from faster single-page allocation. Once
      that is done, the 4k allocs may again be forwarded to the page
      allocator and this patch can be reverted.
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
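      A hedged sketch of the resulting cutoff (the helper kmalloc_slab()
      and the exact inline shape are assumptions for illustration, not
      the verbatim diff): the handoff to the page allocator now happens
      only for sizes strictly greater than PAGE_SIZE:

      	static __always_inline void *kmalloc(size_t size, gfp_t flags)
      	{
      		if (size > PAGE_SIZE)	/* was: size >= PAGE_SIZE */
      			return kmalloc_large(size, flags);
      		/* the extra cache lets a 4k request stay inside slub */
      		return kmem_cache_alloc(kmalloc_slab(size), flags);
      	}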
    • slub: Fallback to kmalloc_large for failing higher order allocs · 71c7a06f
      Committed by Christoph Lameter
      Slub already has two ways of allocating an object. One is via its
      own logic; the other hands object allocation off to the page
      allocator via a call to kmalloc_large, which is typically used for
      objects >= PAGE_SIZE.

      We can use that handoff to avoid failing when a higher-order
      kmalloc slab allocation cannot be satisfied by the page allocator.
      If we reach the out-of-memory path, we simply try kmalloc_large().
      kfree() can already handle the case of an object that was
      allocated via the page allocator, so this works just fine (apart
      from object accounting...).
      
      For any kmalloc slab that already requires higher-order allocs
      (which makes the page allocator fastpath unusable anyway!) we just
      use PAGE_ALLOC_COSTLY_ORDER to get the largest possible number of
      objects in one go from the page allocator slowpath.
      
      On a 4k platform this patch will lead to the following use of higher
      order pages for the following kmalloc slabs:
      
      8 .. 1024	order 0
      2048 .. 4096	order 3 (4k slab only after the next patch)
      
      We may waste some space if fallback occurs on a 2k slab, but we
      are always able to fall back to an order-0 alloc.
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
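      A hedged sketch of the fallback idea (the size parameter and the
      surrounding context are illustrative, not the literal diff): when
      the higher-order slab page cannot be had, return a page-allocator
      object instead; kfree() already detects and frees such objects:

      	page = new_slab(s, gfpflags, node);
      	if (!page) {
      		/* out of memory at the slab's order: punt to the
      		 * page allocator, which can satisfy order 0 */
      		return kmalloc_large(size, gfpflags);
      	}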
    • slub: Determine gfpflags once and not every time a slab is allocated · b7a49f0d
      Committed by Christoph Lameter
      Currently we determine the gfp flags to pass to the page allocator
      each time a slab is allocated.

      Instead, determine the bits to be set once, at the time the cache
      is created. Store them in a new allocflags field and add the flags
      in allocate_slab().
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
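      A hedged sketch of the split (the field name allocflags comes from
      the commit text; the specific bits shown are assumptions):

      	/* at cache creation: compute the page-allocator bits once */
      	s->allocflags = 0;
      	if (order)			/* compound page needed */
      		s->allocflags |= __GFP_COMP;
      	if (s->flags & SLAB_CACHE_DMA)
      		s->allocflags |= GFP_DMA;

      	/* in allocate_slab(): no per-allocation recomputation */
      	flags |= s->allocflags;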
    • make slub.c:slab_address() static · dada123d
      Committed by Adrian Bunk
      slab_address() can become static.
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
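      The change is a one-keyword linkage fix; a generic illustration
      with a placeholder body (not the real implementation):

      	/* file scope only: no other translation unit calls it */
      	static void *slab_address(struct page *page)
      	{
      		return page_address(page);	/* placeholder body */
      	}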