1. 18 Jul 2007, 1 commit
    • SLUB: change error reporting format to follow lockdep loosely · 24922684
      Committed by Christoph Lameter
      Changes the error reporting format to loosely follow lockdep.
      
      If data corruption is detected then we generate the following lines:
      
      ============================================
      BUG <slab-cache>: <problem>
      --------------------------------------------
      
      INFO: <more information> [possibly multiple times]
      
      <object dump>
      
      FIX <slab-cache>: <remedial action>
      
      This also adds some more intelligence to the data corruption detection.  It
      is now capable of determining the start and end of the corrupted area (see
      the sketch after this entry).
      
      Add a comment on how to configure SLUB so that a production system may
      continue to operate even though occasional slab corruption occurs through
      a misbehaving kernel component.  See "Emergency operations" in
      Documentation/vm/slub.txt.
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
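
      A minimal sketch of helpers that would emit the layout above, assuming
      kernel-style printk reporting; the helper names (slab_bug, slab_fix) and
      the exact format strings are illustrative and need not match the code
      this commit actually adds to mm/slub.c:

      /*
       * Illustrative sketch only: names and format strings are assumptions,
       * not necessarily the ones added to mm/slub.c by this commit.
       */
      static void slab_bug(struct kmem_cache *s, const char *problem)
      {
              printk(KERN_ERR
                     "============================================\n");
              printk(KERN_ERR "BUG %s: %s\n", s->name, problem);
              printk(KERN_ERR
                     "--------------------------------------------\n");
      }

      static void slab_fix(struct kmem_cache *s, const char *action)
      {
              printk(KERN_ERR "FIX %s: %s\n", s->name, action);
      }

      For the production-system case, the "Emergency operations" section added
      to Documentation/vm/slub.txt describes which slub_debug boot options to
      use; the exact flags are documented there rather than repeated here.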
  2. 17 Jul 2007, 1 commit
  3. 07 Jul 2007, 1 commit
  4. 04 Jul 2007, 1 commit
  5. 24 Jun 2007, 1 commit
  6. 17 Jun 2007, 2 commits
  7. 09 Jun 2007, 1 commit
  8. 01 Jun 2007, 1 commit
  9. 31 May 2007, 1 commit
    • SLUB: Fix NUMA / SYSFS bootstrap issue · 8ffa6875
      Committed by Christoph Lameter
      We need this patch ASAP.  It fixes the mysterious hang that remained on
      some particular configurations with lockdep on, even after the first fix
      that moved the #ifdef CONFIG_SLUB_DEBUG to the right location.  See
      http://marc.info/?t=117963072300001&r=1&w=2
      
      The kmem_cache_node cache is very special because it is needed for NUMA
      bootstrap.  Under certain conditions (for example if lockdep is enabled
      and significantly increases the size of spinlock_t) the structure may
      become exactly the same size as one of the larger caches in the kmalloc
      array.
      
      That early during bootstrap we cannot perform merging properly.  The
      unique id for the kmem_cache_node cache will then match that of one of
      the caches in the kmalloc array, and sysfs will complain about a
      duplicate directory entry.  All of this occurs while the console is not
      yet fully operational, so the boot may appear to fail silently.
      
      The kmem_cache_node cache is very special.  During early bootstrap the
      main allocation function is not operational yet, so we have to run our
      own small special alloc function during early boot.  It is also special
      in that it is never freed.
      
      We really do not want any merging on that cache.  Set the refcount to -1
      and forbid merging of slab caches that have a negative refcount (see the
      sketch after this entry).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
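
      A minimal sketch of the merge guard described above, assuming a
      slab_unmergeable()-style helper and a refcount field in struct
      kmem_cache; the names are illustrative and may not match mm/slub.c
      exactly:

      /*
       * Illustrative sketch: a cache whose refcount was set to -1 during
       * bootstrap (kmem_cache_node) must never be merged with another cache.
       */
      static int slab_unmergeable(struct kmem_cache *s)
      {
              if (s->refcount < 0)
                      return 1;       /* bootstrap cache: never merge */

              /*
               * The other existing merge restrictions (debug flags,
               * constructors, ...) would still be checked here.
               */
              return 0;
      }

      With such a check in place, a find_mergeable()-style lookup skips
      kmem_cache_node, so its unique id can no longer collide with one of the
      kmalloc caches in sysfs.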
  10. 24 May 2007, 2 commits
  11. 17 May 2007, 6 commits
  12. 11 May 2007, 2 commits
    • SLUB: remove nr_cpu_ids hack · bcf889f9
      Committed by Christoph Lameter
      This was in SLUB in order to head off trouble while the nr_cpu_ids
      functionality was not yet merged.  It is merged now, so there is no need
      to keep this.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: support concurrent local and remote frees and allocs on a slab · 894b8788
      Committed by Christoph Lameter
      Avoid atomic overhead in slab_alloc and slab_free
      
      SLUB needs to use the slab_lock for the per-cpu slabs to synchronize with
      potential kfree operations.  This patch avoids that need by moving all
      free objects onto a lockless_freelist.  The regular freelist continues to
      exist and will be used to free objects.  So while we consume the
      lockless_freelist, the regular freelist may build up objects.
      
      If we run out of objects on the lockless_freelist, we check the regular
      freelist.  If it has objects, we move those over to the lockless_freelist
      and start again.  This yields significant savings in the number of atomic
      operations that have to be performed.
      
      We can even free directly to the lockless_freelist if we know that we are
      running on the same processor.  This speeds up short-lived objects: they
      may be allocated and freed without taking the slab_lock (see the sketch
      after this entry).  This is particularly good for netperf.
      
      In order to maximize the effect of the new faster hotpath, we extract the
      hottest performance pieces into inlined functions.  These are then inlined
      into kmem_cache_alloc and kmem_cache_free, so hotpath allocation and
      freeing no longer require a subroutine call within SLUB.
      
      [I am not sure that it is worth doing this, because it changes the
      easy-to-read structure of SLUB just to reduce atomic ops.  However, there
      is someone out there with a benchmark on 4-way and 8-way processor systems
      that seems to show a 5% regression vs. SLAB, and the regression seems to
      be due to SLUB's increased use of atomic operations compared to SLAB.  I
      wonder if this is applicable or discernible at all in a real workload?]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
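
      A simplified sketch of the two fast paths described above.  The field and
      helper names (lockless_freelist, page->offset, s->cpu_slab, and the
      slow-path helpers) follow the commit message or are assumptions for
      illustration; debugging support, NUMA constraints and error handling are
      omitted:

      /* Assumed slow paths: they take the slab_lock, refill the
       * lockless_freelist from the regular freelist (or a new slab),
       * or handle a remote free. */
      static void *__slab_alloc_slow(struct kmem_cache *s, gfp_t gfpflags,
                                     struct page *page);
      static void __slab_free_slow(struct kmem_cache *s, struct page *page,
                                   void *x);

      static __always_inline void *slab_alloc_sketch(struct kmem_cache *s,
                                                     gfp_t gfpflags)
      {
              struct page *page;
              void **object;
              unsigned long flags;

              local_irq_save(flags);
              page = s->cpu_slab[smp_processor_id()];
              if (likely(page && page->lockless_freelist)) {
                      /* Hot path: pop the first object off the per-cpu
                       * lockless freelist.  No slab_lock, no atomics. */
                      object = page->lockless_freelist;
                      page->lockless_freelist = object[page->offset];
              } else {
                      /* Slow path: under slab_lock, move the regular
                       * freelist over to the lockless_freelist or get a
                       * fresh slab. */
                      object = __slab_alloc_slow(s, gfpflags, page);
              }
              local_irq_restore(flags);
              return object;
      }

      static __always_inline void slab_free_sketch(struct kmem_cache *s,
                                                   struct page *page, void *x)
      {
              void **object = x;
              unsigned long flags;

              local_irq_save(flags);
              if (likely(page == s->cpu_slab[smp_processor_id()])) {
                      /* Local free to the slab we are currently allocating
                       * from on this CPU: push onto the lockless freelist,
                       * again without slab_lock or atomics. */
                      object[page->offset] = page->lockless_freelist;
                      page->lockless_freelist = object;
              } else {
                      /* Remote free (or free to some other slab): go through
                       * the regular freelist under slab_lock. */
                      __slab_free_slow(s, page, x);
              }
              local_irq_restore(flags);
      }

      Because interrupts are disabled across the fast paths and the
      lockless_freelist is only ever touched by the owning CPU, the hot path
      needs neither the slab_lock nor atomic operations; remote frees still go
      through the regular freelist under slab_lock, which is what allows local
      and remote frees to proceed concurrently on the same slab.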
  13. 10 May 2007, 18 commits
  14. 08 May 2007, 2 commits