1. 07 Mar, 2008 1 commit
  2. 15 Feb, 2008 1 commit
  3. 12 Feb, 2008 1 commit
  4. 26 Jan, 2008 2 commits
  5. 25 Jan, 2008 1 commit
  6. 03 Jan, 2008 1 commit
  7. 06 Dec, 2007 1 commit
  8. 01 Dec, 2007 1 commit
  9. 15 Nov, 2007 1 commit
  10. 20 Oct, 2007 1 commit
  11. 19 Oct, 2007 2 commits
  12. 17 Oct, 2007 6 commits
  13. 23 Aug, 2007 1 commit
  14. 25 Jul, 2007 1 commit
  15. 20 Jul, 2007 3 commits
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt committed
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
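      A minimal sketch of the call-site change described above; "my_cache",
      "my_obj", "my_ctor" and "my_dtor" are hypothetical names, and the exact
      constructor prototype of that era is recalled from memory rather than
      taken from the patch:

          /* Before: the destructor pointer was the final argument, and
           * almost every caller passed NULL for it. (Hypothetical site.) */
          cachep = kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                                     SLAB_HWCACHE_ALIGN, my_ctor, my_dtor);

          /* After 20c2df83: the dtor argument is gone, so callers simply
           * drop the trailing NULL (or their now-unsupported destructor). */
          cachep = kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                                     SLAB_HWCACHE_ALIGN, my_ctor);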
    • Fix up non-NUMA SLAB configuration for zero-sized allocations · a5c96d8a
      Linus Torvalds committed
      I suspect Christoph tested his code only in the NUMA configuration; for
      the combination of SLAB+non-NUMA, zero-sized kmallocs would not work.
      
      Of course, this would only trigger in configurations where those zero-
      sized allocations happen (not very common), so that may explain why it
      wasn't more widely noticed.
      
      Seen by Andi Kleen under qemu, and there seems to be a report by
      Michael Tsirkin on it too.
      
      Cc: Andi Kleen <ak@suse.de>
      Cc: Roland Dreier <rdreier@cisco.com>
      Cc: Michael S. Tsirkin <mst@dev.mellanox.co.il>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
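      A hedged sketch of the kind of guard the !CONFIG_NUMA SLAB kmalloc path
      needed, reconstructed from memory rather than copied from the patch
      (helper names such as __find_general_cachep, ZERO_OR_NULL_PTR and
      __cache_alloc are assumed to be the ones of that era):

          static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
                                                    void *caller)
          {
                  struct kmem_cache *cachep;

                  cachep = __find_general_cachep(size, flags);
                  /* A zero-sized request maps to no real cache; bail out
                   * before dereferencing it so kmalloc(0) stays usable. */
                  if (unlikely(ZERO_OR_NULL_PTR(cachep)))
                          return cachep;
                  return __cache_alloc(cachep, flags, caller);
          }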
    • FRV: work around a possible compiler bug · ea02e3dd
      David Howells committed
      Work around a possible bug in the FRV compiler.
      
      What appears to be happening is that gcc resolves the
      __builtin_constant_p() in kmalloc() to true, but then fails to reduce
      the if-statement conditions it guards (which are therefore constant)
      to constant results.
      
      When compiling with -O2 or -Os, one single spurious error crops up in
      cpuup_callback() in mm/slab.c.  This can be avoided by making the memsize
      variable const.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
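      A minimal sketch of the workaround, not the verbatim mm/slab.c diff
      (the struct name and the exact line inside cpuup_callback() are
      recalled from memory and should be treated as assumptions):

          /* Keeping the allocation size in a const local lets the FRV gcc
           * fold the __builtin_constant_p()-guarded branches inside
           * kmalloc()/kmalloc_node() down to constants as intended. */
          const int memsize = sizeof(struct kmem_list3);  /* was: int memsize = ... */
          struct kmem_list3 *l3 = kmalloc_node(memsize, GFP_KERNEL, node);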
  16. 18 Jul, 2007 5 commits
  17. 17 Jul, 2007 2 commits
  18. 06 Jul, 2007 1 commit
    • Fix slab redzone alignment · 87a927c7
      David Woodhouse committed
      Commit b46b8f19 fixed a couple of bugs
      by switching the redzone to 64 bits. Unfortunately, it neglected to
      ensure that the _second_ redzone, after the slab object, is aligned
      correctly. This caused illegal instruction faults on sparc32, which for
      some reason not entirely clear to me are not trapped and fixed up.
      
      Two things need to be done to fix this:
        - increase the object size, rounding up to alignof(long long) so
          that the second redzone can be aligned correctly.
        - If SLAB_STORE_USER is set but alignof(long long)==8, allow a
          full 64 bits of space for the user word at the end of the buffer,
          even though we may not _use_ the whole 64 bits.
      
      This patch should be a no-op on any 64-bit architecture or any 32-bit
      architecture where alignof(long long) == 4. Of the others, it's tested
      on ppc32 by myself and a very similar patch was tested on sparc32 by
      Mark Fortescue, who reported the new problem.
      
      Also, fix the conditions for FORCED_DEBUG, which hadn't been adjusted to
      the new sizes. Again noticed by Mark.
      Signed-off-by: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
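      A sketch of the two steps listed above (REDZONE_ALIGN is the helper I
      believe the patch introduced; treat the exact expressions as
      assumptions rather than the verbatim mm/slab.c code):

          #define REDZONE_ALIGN max(BYTES_PER_WORD, __alignof__(unsigned long long))

          if (flags & SLAB_RED_ZONE) {
                  /* Round the object size up so the redzone word placed
                   * after the object is itself 64-bit aligned. */
                  size += REDZONE_ALIGN - 1;
                  size &= ~(REDZONE_ALIGN - 1);
          }

          if (flags & SLAB_STORE_USER) {
                  /* The user word sits behind the second redzone; with
                   * redzoning on, reserve a full 64 bits even if only 32
                   * of them are actually used. */
                  size += (flags & SLAB_RED_ZONE) ? REDZONE_ALIGN : BYTES_PER_WORD;
          }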
  19. 02 Jul, 2007 1 commit
  20. 09 Jun, 2007 1 commit
  21. 19 May, 2007 1 commit
  22. 17 May, 2007 4 commits
  23. 10 May, 2007 1 commit
    • Move remote node draining out of slab allocators · 4037d452
      Christoph Lameter committed
      Currently the slab allocators contain callbacks into the page allocator to
      perform the draining of pagesets on remote nodes.  This requires SLUB to have
      a whole subsystem in order to be compatible with SLAB.  Moving node draining
      out of the slab allocators avoids a section of code in SLUB.
      
      Move the node draining so that it is done when the vm statistics are updated.
      At that point we are already touching all the cachelines with the pagesets of
      a processor.
      
      Add an expire counter there.  If we have to update per-zone or global vm
      statistics, then assume that the pageset will require subsequent draining.
      
      The expire counter will be decremented on each vm stats update pass until it
      reaches zero.  Then we will drain one batch from the pageset.  The draining
      will cause vm counter updates which will then cause another expiration until
      the pcp is empty.  So we will drain a batch every 3 seconds.
      
      Note that remote node draining is a somewhat esoteric feature that is required
      on large NUMA systems because otherwise significant portions of system memory
      can become trapped in pcp queues.  The number of pcps is determined by the
      number of processors and nodes in a system.  A system with 4 processors and
      2 nodes has 8 pcps, which is okay.  But a system with 1024 processors and
      512 nodes has 512k pcps, with a high potential for a large amount of memory
      being caught in them.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
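      A sketch of the expire-counter idea described above, as it might sit in
      the periodic vm statistics refresh (the field and helper names here are
      assumptions, not the verbatim mm/vmstat.c code):

          /* Runs roughly once per second per CPU from the stats refresh. */
          if (stats_were_updated) {
                  /* The pageset was just active; postpone draining. */
                  p->expire = 3;
          } else if (p->expire && p->pcp[0].count) {
                  p->expire--;
                  if (!p->expire)
                          /* Idle for ~3 passes: drain one batch back to
                           * the buddy lists of the remote zone. */
                          drain_zone_pages(zone, &p->pcp[0]);
          }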