1. 04 Jul 2007, 1 commit
  2. 24 Jun 2007, 1 commit
  3. 17 Jun 2007, 2 commits
  4. 09 Jun 2007, 1 commit
  5. 01 Jun 2007, 1 commit
  6. 31 May 2007, 1 commit
    • SLUB: Fix NUMA / SYSFS bootstrap issue · 8ffa6875
      Committed by Christoph Lameter
      We need this patch ASAP.  The patch fixes the mysterious hang that remained
      on some particular configurations with lockdep enabled, after the first fix
      that moved the #ifdef CONFIG_SLUB_DEBUG to the right location.  See
      http://marc.info/?t=117963072300001&r=1&w=2
      
      The kmem_cache_node cache is very special because it is needed for NUMA
      bootstrap.  Under certain conditions (for example, if lockdep is enabled
      and significantly increases the size of spinlock_t) the structure may
      become exactly the same size as one of the larger caches in the kmalloc
      array.
      
      This early during bootstrap we cannot perform merging properly.  The
      unique id for the kmem_cache_node cache will then match that of one of the
      kmalloc caches, and sysfs will complain about a duplicate directory entry.
      All of this occurs while the console is not yet fully operational, so the
      boot may appear to fail silently.
      
      The kmem_cache_node cache is very special.  During early bootstrap the
      main allocation function is not operational yet, so we have to run our own
      small special alloc function during early boot.  It is also special in
      that it is never freed.
      
      We really do not want any merging on that cache.  Set its refcount to -1
      and forbid merging of slabs that have a negative refcount (a standalone
      sketch of this guard follows this commit entry).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
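      The sketch below is a minimal, standalone illustration of the merge guard
      described in the commit above, assuming nothing beyond standard C.  The
      struct, its fields, and the find_mergeable helper are simplified stand-ins
      invented for this example; they are not the kernel's kmem_cache structures
      or SLUB's actual merge code.

      #include <stddef.h>
      #include <stdio.h>

      /* Simplified cache descriptor; a negative refcount marks a cache that
       * must never be merged, mirroring the idea in the commit above. */
      struct cache_model {
              const char *name;
              int refcount;
              size_t object_size;
      };

      /* A cache is only a merge candidate if its refcount is non-negative. */
      static int unmergeable(const struct cache_model *s)
      {
              return s->refcount < 0;
      }

      static const struct cache_model *
      find_mergeable(const struct cache_model *caches, int n, size_t size)
      {
              for (int i = 0; i < n; i++) {
                      if (unmergeable(&caches[i]))
                              continue;
                      if (caches[i].object_size == size)
                              return &caches[i];
              }
              return NULL;
      }

      int main(void)
      {
              struct cache_model caches[] = {
                      /* Bootstrap cache: refcount -1, so it is never merged. */
                      { "kmem_cache_node", -1, 128 },
                      { "kmalloc-128",      1, 128 },
              };

              /* A new 128-byte cache merges with kmalloc-128, never with
               * kmem_cache_node, even though the object sizes collide. */
              const struct cache_model *m = find_mergeable(caches, 2, 128);
              printf("merged into: %s\n", m ? m->name : "(none)");
              return 0;
      }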
  7. 24 May 2007, 2 commits
  8. 17 May 2007, 6 commits
  9. 11 May 2007, 2 commits
    • SLUB: remove nr_cpu_ids hack · bcf889f9
      Committed by Christoph Lameter
      This was in SLUB in order to head off trouble while the nr_cpu_ids
      functionality was not yet merged.  It is merged now, so there is no need
      to keep this.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: support concurrent local and remote frees and allocs on a slab · 894b8788
      Committed by Christoph Lameter
      Avoid atomic overhead in slab_alloc and slab_free
      
      SLUB needs to use the slab_lock for the per-cpu slabs to synchronize with
      potential kfree operations.  This patch avoids that need by moving all
      free objects onto a lockless_freelist.  The regular freelist continues to
      exist and will be used to free objects.  So while we consume the
      lockless_freelist, the regular freelist may build up objects.
      
      If we are out of objects on the lockless_freelist then we check the
      regular freelist.  If it has objects, we move them over to the
      lockless_freelist and start again.  This yields significant savings in
      terms of the atomic operations that have to be performed.
      
      We can even free directly to the lockless_freelist if we know that we are
      running on the same processor.  This speeds up short-lived objects: they
      may be allocated and freed without taking the slab_lock, which is
      particularly good for netperf.  A simplified standalone sketch of the
      two-freelist scheme follows this commit entry.
      
      In order to maximize the effect of the new, faster hotpath, we extract the
      hottest performance pieces into inlined functions.  These are then inlined
      into kmem_cache_alloc and kmem_cache_free, so hotpath allocation and
      freeing no longer require a subroutine call within SLUB.
      
      [I am not sure that it is worth doing this because it changes the
      easy-to-read structure of SLUB just to reduce atomic ops.  However, there
      is someone out there with a benchmark on 4-way and 8-way processor systems
      that seems to show a 5% regression vs. SLAB, apparently due to SLUB's
      increased use of atomic operations compared to SLAB.  I wonder if this is
      applicable or discernible at all in a real workload?]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
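      The sketch below is a simplified, standalone model of the two-freelist
      scheme described in the commit above.  The names lockless_freelist,
      freelist, and slab_lock mirror the patch, but the structures and helper
      functions are invented for this illustration: a pthread mutex stands in
      for the slab_lock, and single-threaded calls stand in for "same CPU"
      versus "remote CPU" frees.

      #include <pthread.h>
      #include <stdio.h>

      struct object {
              struct object *next;
      };

      struct slab_model {
              struct object *lockless_freelist; /* consumed by the owning CPU, no lock */
              struct object *freelist;          /* filled by remote frees under slab_lock */
              pthread_mutex_t slab_lock;
      };

      /* Fast path: pop from the lockless freelist without taking slab_lock.
       * If it is empty, take over the whole regular freelist under the lock. */
      static struct object *slab_alloc_fast(struct slab_model *s)
      {
              struct object *obj = s->lockless_freelist;

              if (!obj) {
                      pthread_mutex_lock(&s->slab_lock);
                      s->lockless_freelist = s->freelist;
                      s->freelist = NULL;
                      pthread_mutex_unlock(&s->slab_lock);
                      obj = s->lockless_freelist;
                      if (!obj)
                              return NULL;    /* slab exhausted */
              }
              s->lockless_freelist = obj->next;
              return obj;
      }

      /* Local free (same CPU): push onto the lockless freelist, no locking. */
      static void slab_free_local(struct slab_model *s, struct object *obj)
      {
              obj->next = s->lockless_freelist;
              s->lockless_freelist = obj;
      }

      /* Remote free (another CPU): push onto the regular freelist under the lock. */
      static void slab_free_remote(struct slab_model *s, struct object *obj)
      {
              pthread_mutex_lock(&s->slab_lock);
              obj->next = s->freelist;
              s->freelist = obj;
              pthread_mutex_unlock(&s->slab_lock);
      }

      int main(void)
      {
              struct object pool[4];
              struct slab_model s = { .lockless_freelist = NULL, .freelist = NULL };

              pthread_mutex_init(&s.slab_lock, NULL);

              /* Seed the regular freelist as if remote CPUs had freed objects. */
              for (int i = 0; i < 4; i++)
                      slab_free_remote(&s, &pool[i]);

              /* The first fast-path alloc refills from the regular freelist;
               * the following ones never touch slab_lock. */
              for (int i = 0; i < 4; i++)
                      printf("alloc %d -> %p\n", i, (void *)slab_alloc_fast(&s));

              /* A short-lived object: freed locally and reused with no lock taken. */
              slab_free_local(&s, &pool[0]);
              (void)slab_alloc_fast(&s);
              return 0;
      }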
  10. 10 May 2007, 18 commits
  11. 08 May 2007, 5 commits