1. 06 Feb, 2008: 2 commits
  2. 10 Dec, 2007: 1 commit
  3. 06 Dec, 2007: 1 commit
  4. 16 Nov, 2007: 1 commit
  5. 17 Oct, 2007: 3 commits
  6. 22 Jul, 2007: 1 commit
    • slob: reduce list scanning · d6269543
      Matt Mackall authored
      The version of SLOB in -mm always scans its free list from the beginning,
      which results in small allocations and free segments clustering at the
      beginning of the list over time.  This causes the average search to scan
      over a large stretch at the beginning on each allocation.
      
      By starting each page search where the last one left off, we evenly
      distribute the allocations and greatly shorten the average search.
      
      Without this patch, kernel compiles on a 1.5G machine spend a large amount
      of system time on list scanning.  With this patch, compile times are within
      a few seconds of those on a SLAB kernel, with no notable change in system
      time.
      Signed-off-by: Matt Mackall <mpm@selenic.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
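
      The idea is next-fit rather than first-fit scanning of the page list: keep a
      cursor where the previous search ended and resume there.  A minimal sketch of
      that technique (illustrative structures and names, not the kernel's code):

          /* Next-fit scan over a singly linked, logically circular list of
           * partial pages.  'cursor' remembers where the previous search
           * stopped, so successive allocations spread across the list instead
           * of piling up at the head. */
          #include <stddef.h>

          struct page_node {
                  struct page_node *next;
                  size_t free_units;
          };

          static struct page_node *cursor;        /* resume point for next search */

          struct page_node *find_page(struct page_node *head, size_t need)
          {
                  struct page_node *start, *p;

                  if (!head)
                          return NULL;
                  start = cursor ? cursor : head;
                  p = start;

                  do {
                          if (p->free_units >= need) {
                                  /* next search starts just past this page */
                                  cursor = p->next ? p->next : head;
                                  return p;
                          }
                          p = p->next ? p->next : head;   /* wrap at the tail */
                  } while (p != start);

                  return NULL;    /* no partial page large enough */
          }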
  7. 20 Jul, 2007: 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt authored
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
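
      At the interface level the change is simply the removal of the final
      argument.  Roughly, with the prototypes as they stood in that era (the exact
      ctor/dtor argument lists are historical detail and differ from later
      kernels):

          /* Before: a destructor pointer was still accepted, although by this
           * point slab and slub would BUG() on a non-NULL dtor and slob had
           * never used it. */
          struct kmem_cache *kmem_cache_create(const char *name, size_t size,
                          size_t align, unsigned long flags,
                          void (*ctor)(void *, struct kmem_cache *, unsigned long),
                          void (*dtor)(void *, struct kmem_cache *, unsigned long));

          /* After: the dtor argument is gone and every caller passes one
           * argument fewer. */
          struct kmem_cache *kmem_cache_create(const char *name, size_t size,
                          size_t align, unsigned long flags,
                          void (*ctor)(void *, struct kmem_cache *, unsigned long));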
  8. 18 Jul, 2007: 4 commits
  9. 17 Jul, 2007: 5 commits
    • slob: sparsemem support · 84a01c2f
      Paul Mundt authored
      Currently slob is disabled if we're using sparsemem, due to an earlier
      patch from Goto-san.  Slob and static sparsemem work without any trouble as
      it is, and the only hiccup is a missing slab_is_available() in the case of
      sparsemem extreme.  With this, we're rid of the last set of restrictions
      for slob usage.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
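
      The fix follows the usual early-boot pattern: memory needed before the slab
      allocator is initialised comes from bootmem, and slab_is_available() decides
      which path to take.  A hedged sketch of that pattern (the function below is
      illustrative, not the exact hunk from this patch):

          #include <linux/slab.h>
          #include <linux/bootmem.h>
          #include <linux/mmzone.h>

          /* Allocate 'size' bytes on 'node', working both before and after the
           * slab/slob allocator has been brought up. */
          static void *early_node_alloc(size_t size, int node)
          {
                  if (slab_is_available())
                          return kmalloc_node(size, GFP_KERNEL, node);
                  return alloc_bootmem_node(NODE_DATA(node), size);
          }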
    • slob: initial NUMA support · 6193a2ff
      Paul Mundt authored
      This adds preliminary NUMA support to SLOB, primarily aimed at systems with
      small nodes (tested all the way down to a 128kB SRAM block), whether
      asymmetric or otherwise.
      
      We follow the same conventions as SLAB/SLUB, preferring current node
      placement for new pages, or with explicit placement, if a node has been
      specified.  Presently on UP NUMA this has the side-effect of preferring
      node#0 allocations (since numa_node_id() == 0, though this could be
      reworked if we could hand off a pfn to determine node placement), so
      single-CPU NUMA systems will want to place smaller nodes further out in
      terms of node id.  Once a page has been bound to a node (via explicit node
      id typing), we only do block allocations from partial free pages that have
      a matching node id in the page flags.
      
      The current implementation does have some scalability problems, in that all
      partial free pages are tracked in the global freelist (with contention due
      to the single spinlock).  However, these are things that are being reworked
      for SMP scalability first, while things like per-node freelists can easily
      be built on top of this sort of functionality once it's been added.
      
      More background can be found in:
      
      	http://marc.info/?l=linux-mm&m=118117916022379&w=2
      	http://marc.info/?l=linux-mm&m=118170446306199&w=2
      	http://marc.info/?l=linux-mm&m=118187859420048&w=2
      
      and subsequent threads.
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
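
      The "matching node id in the page flags" rule reduces to a check like the
      following when the partial-page list is walked (a sketch; the helper name is
      made up, and the real code inspects the slob page directly):

          #include <linux/mm.h>

          #ifdef CONFIG_NUMA
          /* Only allocate from a partial page that lives on the requested node;
           * node == -1 means "no preference", as with SLAB/SLUB. */
          static int slob_page_on_node(struct page *page, int node)
          {
                  return node == -1 || page_to_nid(page) == node;
          }
          #endif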
    • slob: improved alignment handling · 55394849
      Nick Piggin authored
      Remove the core slob allocator's minimum alignment restrictions, and instead
      introduce the alignment restrictions at the slab API layer.  This lets us heed
      the ARCH_KMALLOC/SLAB_MINALIGN directives, and also use __alignof__ (unsigned
      long) for the default alignment (which should allow relaxed alignment
      architectures to take better advantage of SLOB's small minimum alignment).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
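
      Concretely, the slab-layer default becomes the natural alignment of an
      unsigned long unless the architecture asks for more.  A sketch of how the
      kmalloc path applies it (names follow the commit message; the in-tree
      spellings and the header layout added by the patch may differ slightly):

          #include <linux/gfp.h>

          /* Default minimum alignment for kmalloc() returns on SLOB. */
          #ifndef ARCH_KMALLOC_MINALIGN
          #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long)
          #endif

          void *__kmalloc(size_t size, gfp_t gfp)
          {
                  /* Minimum alignment is applied here, at the slab API layer,
                   * rather than inside the core slob allocator.  slob_alloc()
                   * stands in for the internal allocator entry point. */
                  int align = ARCH_KMALLOC_MINALIGN;

                  return slob_alloc(size, gfp, align);
          }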
    • slob: remove bigblock tracking · d87a133f
      Nick Piggin authored
      Remove the bigblock lists in favour of using compound pages and going directly
      to the page allocator.  Allocation size is stored in page->private, which also
      makes ksize more accurate than it previously was.
      
      Saves ~.5K of code, and 12-24 bytes overhead per >= PAGE_SIZE allocation.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
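
      After this change the large-object path is just "go straight to the page
      allocator and remember the size in the struct page", along these lines (a
      trimmed sketch, without the accounting and error paths):

          #include <linux/gfp.h>
          #include <linux/mm.h>

          /* kmalloc for requests of PAGE_SIZE and up: use a compound page and
           * stash the requested size in page->private so ksize() is exact. */
          static void *slob_alloc_large(size_t size, gfp_t gfp)
          {
                  unsigned int order = get_order(size);
                  struct page *page = alloc_pages(gfp | __GFP_COMP, order);

                  if (!page)
                          return NULL;
                  page->private = size;           /* read back by ksize() */
                  return page_address(page);
          }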
    • slob: rework freelist handling · 95b35127
      Nick Piggin authored
      Improve slob by turning the freelist into a list of pages using struct page
      fields; each page then has a singly linked freelist of slob blocks, reached
      via a pointer in its struct page.
      
      - The first benefit is that the slob freelists can be indexed by a smaller
        type (2 bytes, if the PAGE_SIZE is reasonable).
      
      - Next is that freeing is much quicker because it does not have to traverse
        the entire freelist. Allocation can be slightly faster too, because we can
        skip almost-full freelist pages completely.
      
      - Slob pages are then freed immediately when they become empty, rather than
        having a periodic timer try to free them. This gives efficiency and memory
        consumption improvement.
      
      Then, rather than encoding separate size and next fields into each slob
      block, we use the sign bit to distinguish between "size" and "next": one-unit
      blocks contain only a "next" offset, while larger blocks carry the "size" in
      the first unit and the "next" offset in the second unit.
      
      - This allows minimum slob allocation alignment to go from 8 bytes to 2
        bytes on 32-bit and 12 bytes to 2 bytes on 64-bit. In practice, it is
        best to align them to word size, however some architectures (eg. cris)
        could gain space savings from turning off this extra alignment.
      
      Then, make kmalloc use its own slob_block at the front of the allocation
      in order to encode allocation size, rather than rely on not overwriting
      slob's existing header block.
      
      - This reduces kmalloc allocation overhead similarly to alignment reductions.
      
      - Decouples kmalloc layer from the slob allocator.
      
      Then, add a page flag specific to slob pages.
      
      - This means kfree of a page aligned slob block doesn't have to traverse
        the bigblock list.
      
      I would get benchmarks, but my test box's network doesn't come up with
      slob before this patch. I think something is timing out. Anyway, things
      are faster after the patch.
      
      Code size goes up by about 1K; however, dynamic memory usage _should_ be
      lower even on relatively small memory systems.
      
      Future todo item is to restore the cyclic free list search, rather than
      to always begin at the start.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
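
      The sign-bit encoding described above can be captured in a few lines.  A
      sketch of the free-block layout and its setter (simplified from the scheme in
      the commit message; the real code derives the base pointer from the page):

          /* Free blocks are measured in small fixed-size units and indexed by a
           * narrow signed type, so a 2-byte index covers a page:
           *   units[0] > 0 : units[0] = size, units[1] = offset of next free block
           *   units[0] < 0 : one-unit block, -units[0] = offset of next free block
           */
          typedef short slobidx_t;

          typedef struct slob_block {
                  slobidx_t units;
          } slob_t;

          static void set_slob(slob_t *s, slob_t *base, slobidx_t size, slob_t *next)
          {
                  slobidx_t offset = next - base; /* where the next free block sits */

                  if (size > 1) {
                          s[0].units = size;
                          s[1].units = offset;
                  } else {
                          s[0].units = -offset;
                  }
          }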
  10. 17 May, 2007: 3 commits
  11. 08 May, 2007: 4 commits
  12. 31 Dec, 2006: 1 commit
  13. 14 Dec, 2006: 2 commits
    • [PATCH] More slab.h cleanups · 55935a34
      Christoph Lameter authored
      More cleanups for slab.h
      
      1. Remove tabs from weird locations as suggested by Pekka
      
      2. Drop the check for NUMA and SLAB_DEBUG from the fallback section
         as suggested by Pekka.
      
      3. Uses static inline for the fallback defs, as also suggested by Pekka
         (see the sketch after this entry).
      
      4. Make kmem_ptr_valid take a const * argument.
      
      5. Separate the NUMA fallback definitions from the kmalloc_track fallback
         definitions.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
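
      Point 3 above is the usual header idiom: when an option is compiled out,
      provide static inline stubs rather than macros so callers still get type
      checking.  A generic illustration of the pattern (a header-fragment example,
      not a quote of slab.h):

          #ifdef CONFIG_NUMA
          extern void *kmem_cache_alloc_node(struct kmem_cache *cachep,
                                             gfp_t flags, int node);
          #else
          /* NUMA fallback: ignore the node hint, keep the typed interface. */
          static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
                                                    gfp_t flags, int node)
          {
                  return kmem_cache_alloc(cachep, flags);
          }
          #endif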
    • [PATCH] Cleanup slab headers / API to allow easy addition of new slab allocators · 2e892f43
      Christoph Lameter authored
      This is a response to an earlier discussion on linux-mm about splitting
      slab.h components per allocator.  Patch is against 2.6.19-git11.  See
      http://marc.theaimsgroup.com/?l=linux-mm&m=116469577431008&w=2
      
      This patch cleans up the slab header definitions.  We define the common
      functions of slob and slab in slab.h and put the extra definitions needed
      for slab's kmalloc implementations in <linux/slab_def.h>.  In order to get
      a greater set of common functions we add several empty functions to slob.c
      and also rename slob's kmalloc to __kmalloc.
      
      Slob does not need any special definitions since we introduce a fallback
      case.  If there is no need for a slab implementation to provide its own
      kmalloc mess^H^H^Hacros then we simply fall back to __kmalloc functions.
      That is sufficient for SLOB.
      
      Sort the functions in slab.h according to their functionality: first the
      functions operating on struct kmem_cache *, then the kmalloc-related
      functions, followed by the special debug and fallback definitions.
      
      Also redo a lot of comments.
      
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
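
      The fallback case means an allocator that only exports __kmalloc() gets its
      kmalloc() as a trivial inline wrapper in the common header, roughly (a sketch
      of the pattern as it would sit in <linux/slab.h>; the real header
      distinguishes more cases):

          /* Used when the active allocator (SLOB here) provides no specialised
           * kmalloc() of its own: just forward to the generic entry point. */
          static inline void *kmalloc(size_t size, gfp_t flags)
          {
                  return __kmalloc(size, flags);
          }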
  14. 27 Sep, 2006: 1 commit
  15. 26 Sep, 2006: 2 commits
  16. 01 Jul, 2006: 1 commit
  17. 20 Apr, 2006: 1 commit
  18. 26 Mar, 2006: 1 commit
  19. 08 Feb, 2006: 1 commit
  20. 09 Jan, 2006: 1 commit
    • [PATCH] slob: introduce the SLOB allocator · 10cef602
      Matt Mackall authored
      A configurable replacement for the slab allocator.
      
      This adds a CONFIG_SLAB option under CONFIG_EMBEDDED.  When CONFIG_SLAB is
      disabled, the kernel falls back to using the 'SLOB' allocator.
      
      SLOB is a traditional K&R/UNIX allocator with a SLAB emulation layer,
      similar to the original Linux kmalloc allocator that SLAB replaced.  Its
      code is significantly smaller and it is more memory efficient.  But like all
      similar allocators, it scales poorly and suffers from fragmentation more
      than SLAB, so it's only appropriate for small systems.
      
      It's been tested extensively in the Linux-tiny tree.  I've also
      stress-tested it with make -j 8 compiles on a 3G SMP+PREEMPT box (not
      recommended).
      
      Here's a comparison for otherwise identical builds, showing SLOB saving
      nearly half a megabyte of RAM:
      
      $ size vmlinux*
         text    data     bss     dec     hex filename
      3336372  529360  190812 4056544  3de5e0 vmlinux-slab
      3323208  527948  190684 4041840  3dac70 vmlinux-slob
      
      $ size mm/{slab,slob}.o
         text    data     bss     dec     hex filename
        13221     752      48   14021    36c5 mm/slab.o
         1896      52       8    1956     7a4 mm/slob.o
      
      /proc/meminfo:
                        SLAB          SLOB      delta
      MemTotal:        27964 kB      27980 kB     +16 kB
      MemFree:         24596 kB      25092 kB    +496 kB
      Buffers:            36 kB         36 kB       0 kB
      Cached:           1188 kB       1188 kB       0 kB
      SwapCached:          0 kB          0 kB       0 kB
      Active:            608 kB        600 kB      -8 kB
      Inactive:          808 kB        812 kB      +4 kB
      HighTotal:           0 kB          0 kB       0 kB
      HighFree:            0 kB          0 kB       0 kB
      LowTotal:        27964 kB      27980 kB     +16 kB
      LowFree:         24596 kB      25092 kB    +496 kB
      SwapTotal:           0 kB          0 kB       0 kB
      SwapFree:            0 kB          0 kB       0 kB
      Dirty:               4 kB         12 kB      +8 kB
      Writeback:           0 kB          0 kB       0 kB
      Mapped:            560 kB        556 kB      -4 kB
      Slab:             1756 kB          0 kB   -1756 kB
      CommitLimit:     13980 kB      13988 kB      +8 kB
      Committed_AS:     4208 kB       4208 kB       0 kB
      PageTables:         28 kB         28 kB       0 kB
      VmallocTotal:  1007312 kB    1007312 kB       0 kB
      VmallocUsed:        48 kB         48 kB       0 kB
      VmallocChunk:  1007264 kB    1007264 kB       0 kB
      
      (this work has been sponsored in part by CELF)
      
      From: Ingo Molnar <mingo@elte.hu>
      
         Fix 32-bitness bugs in mm/slob.c.
      Signed-off-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
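
      For readers who have not met the reference, a "traditional K&R/UNIX
      allocator" keeps one free list of variable-sized blocks, each preceded by a
      small size header, and allocates first-fit, splitting oversized blocks.  A
      purely didactic userspace toy of that scheme (this is not SLOB's code):

          #include <stddef.h>

          struct blk {                    /* header in front of every free block */
                  struct blk *next;
                  size_t size;            /* usable bytes after the header */
          };

          static struct blk *freelist;

          static void *toy_alloc(size_t size)
          {
                  struct blk **prev = &freelist, *b;

                  for (b = freelist; b; prev = &b->next, b = b->next) {
                          if (b->size < size)
                                  continue;       /* too small: first fit, keep going */
                          if (b->size >= size + sizeof(struct blk) + sizeof(long)) {
                                  /* split: carve the tail off as a new free block */
                                  struct blk *rest =
                                          (struct blk *)((char *)(b + 1) + size);
                                  rest->size = b->size - size - sizeof(struct blk);
                                  rest->next = b->next;
                                  b->size = size;
                                  *prev = rest;
                          } else {
                                  *prev = b->next;        /* hand out the whole block */
                          }
                          return b + 1;           /* payload follows the header */
                  }
                  return NULL;    /* a real allocator would grab a fresh page here */
          }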