1. 08 May 2007, 1 commit
  2. 14 Dec 2006, 2 commits
    • [PATCH] More slab.h cleanups · 55935a34
      Authored by Christoph Lameter
      More cleanups for slab.h
      
      1. Remove tabs from weird locations as suggested by Pekka
      
      2. Drop the check for NUMA and SLAB_DEBUG from the fallback section
         as suggested by Pekka.
      
      3. Use static inline for the fallback defs, as also suggested by Pekka (see
         the sketch after this entry).
      
      4. Make kmem_ptr_valid take a const * argument.
      
      5. Separate the NUMA fallback definitions from the kmalloc_track fallback
         definitions.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      55935a34
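
      A minimal sketch of the fallbacks the items above describe, assuming a
      !CONFIG_NUMA build; the bodies are simplified, and the const change of
      point 4 is shown only as a prototype of the in-tree kmem_ptr_validate():

      #ifndef CONFIG_NUMA
      /* NUMA fallbacks: without NUMA support the node hint is simply ignored. */
      static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
                                                gfp_t flags, int node)
      {
              return kmem_cache_alloc(cachep, flags);
      }

      static inline void *kmalloc_node(size_t size, gfp_t flags, int node)
      {
              return kmalloc(size, flags);
      }
      #endif /* !CONFIG_NUMA */

      /* Point 4: the pointer is only inspected, never written through. */
      int kmem_ptr_validate(struct kmem_cache *cachep, const void *ptr);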
    • [PATCH] Cleanup slab headers / API to allow easy addition of new slab allocators · 2e892f43
      Authored by Christoph Lameter
      This is a response to an earlier discussion on linux-mm about splitting
      slab.h components per allocator.  Patch is against 2.6.19-git11.  See
      http://marc.theaimsgroup.com/?l=linux-mm&m=116469577431008&w=2
      
      This patch cleans up the slab header definitions.  We define the common
      functions of slob and slab in slab.h and put the extra definitions needed
      for slab's kmalloc implementations in <linux/slab_def.h>.  In order to get
      a greater set of common functions we add several empty functions to slob.c
      and also rename slob's kmalloc to __kmalloc.
      
      Slob does not need any special definitions since we introduce a fallback
      case.  If there is no need for a slab implementation to provide its own
      kmalloc mess^H^H^Hacros then we simply fall back to __kmalloc functions.
      That is sufficient for SLOB.
      
      Sort the functions in slab.h according to their functionality.  First the
      functions operating on struct kmem_cache *, then the kmalloc-related
      functions, followed by special debug and fallback definitions.
      
      Also redo a lot of comments.
      
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      2e892f43
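
      A hedged sketch of the fallback arrangement described above (simplified;
      the real <linux/slab.h> also covers kmalloc_node and the tracking
      variants).  SLAB supplies <linux/slab_def.h> with its own constant-size
      kmalloc(); SLOB supplies nothing extra, so the generic path is used:

      #ifdef CONFIG_SLAB
      #include <linux/slab_def.h>     /* allocator-specific kmalloc() */
      #else
      /* Fallback for allocators, such as SLOB, that only export __kmalloc(). */
      static inline void *kmalloc(size_t size, gfp_t flags)
      {
              return __kmalloc(size, flags);
      }
      #endif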
  3. 08 Dec 2006, 18 commits
  4. 04 Oct 2006, 2 commits
  5. 27 Sep 2006, 1 commit
  6. 26 Sep 2006, 3 commits
  7. 23 Jun 2006, 1 commit
  8. 16 May 2006, 1 commit
  9. 26 Apr 2006, 1 commit
  10. 29 Mar 2006, 1 commit
  11. 26 Mar 2006, 3 commits
    • [PATCH] slab: optimize constant-size kzalloc calls · 40c07ae8
      Authored by Pekka Enberg
      As suggested by Eric Dumazet, optimize kzalloc() calls that pass a
      compile-time constant size.  Please note that the patch increases kernel
      text slightly (~200 bytes for defconfig on x86).
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      40c07ae8
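
      A sketch of the optimization; find_sized_cache() is a hypothetical helper
      standing in for the unrolled size-to-cache selection the real header
      performs.  The point is that a compile-time-constant size lets the cache
      be picked at build time instead of going through the generic path:

      static inline void *kzalloc(size_t size, gfp_t flags)
      {
              if (__builtin_constant_p(size)) {
                      /* hypothetical helper in place of the real
                       * compile-time cache lookup */
                      struct kmem_cache *cachep = find_sized_cache(size);
                      return kmem_cache_zalloc(cachep, flags);
              }
              return __kzalloc(size, flags);  /* generic out-of-line version */
      }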
    • [PATCH] slab: introduce kmem_cache_zalloc allocator · a8c0f9a4
      Authored by Pekka Enberg
      Introduce a memory-zeroing variant of kmem_cache_alloc.  The allocator
      already exists in XFS and there are potential users for it, so this patch
      makes the allocator available to the general public.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a8c0f9a4
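
      A usage sketch; the structure, cache and foo_new() are illustrative and
      not from the patch.  The object comes back zeroed, so the usual manual
      memset() after kmem_cache_alloc() is not needed:

      struct foo {
              int id;
              struct list_head list;
      };

      static struct kmem_cache *foo_cachep;   /* created elsewhere with kmem_cache_create() */

      static struct foo *foo_new(gfp_t flags)
      {
              struct foo *f = kmem_cache_zalloc(foo_cachep, flags);

              if (!f)
                      return NULL;
              /* all fields already zeroed */
              INIT_LIST_HEAD(&f->list);
              return f;
      }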
    • [PATCH] slab: implement /proc/slab_allocators · 871751e2
      Authored by Al Viro
      Implement /proc/slab_allocators.   It produces output like:
      
      idr_layer_cache: 80 idr_pre_get+0x33/0x4e
      buffer_head: 2555 alloc_buffer_head+0x20/0x75
      mm_struct: 9 mm_alloc+0x1e/0x42
      mm_struct: 20 dup_mm+0x36/0x370
      vm_area_struct: 384 dup_mm+0x18f/0x370
      vm_area_struct: 151 do_mmap_pgoff+0x2e0/0x7c3
      vm_area_struct: 1 split_vma+0x5a/0x10e
      vm_area_struct: 11 do_brk+0x206/0x2e2
      vm_area_struct: 2 copy_vma+0xda/0x142
      vm_area_struct: 9 setup_arg_pages+0x99/0x214
      fs_cache: 8 copy_fs_struct+0x21/0x133
      fs_cache: 29 copy_process+0xf38/0x10e3
      files_cache: 30 alloc_files+0x1b/0xcf
      signal_cache: 81 copy_process+0xbaa/0x10e3
      sighand_cache: 77 copy_process+0xe65/0x10e3
      sighand_cache: 1 de_thread+0x4d/0x5f8
      anon_vma: 241 anon_vma_prepare+0xd9/0xf3
      size-2048: 1 add_sect_attrs+0x5f/0x145
      size-2048: 2 journal_init_revoke+0x99/0x302
      size-2048: 2 journal_init_revoke+0x137/0x302
      size-2048: 2 journal_init_inode+0xf9/0x1c4
      
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Alexander Nyberg <alexn@telia.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      DESC
      slab-leaks3-locking-fix
      EDESC
      From: Andrew Morton <akpm@osdl.org>
      
      Update for slab-remove-cachep-spinlock.patch
      
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Alexander Nyberg <alexn@telia.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      871751e2
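
      A small userspace sketch (not kernel code) that consumes the format shown
      above, where each line reads "<cache>: <count> <call site>"; field widths
      and error handling are kept minimal for illustration:

      #include <stdio.h>

      int main(void)
      {
              FILE *f = fopen("/proc/slab_allocators", "r");
              char cache[64], site[128];
              unsigned long count, total = 0;

              if (!f)
                      return 1;
              while (fscanf(f, " %63[^:]: %lu %127s", cache, &count, site) == 3)
                      total += count;         /* objects currently charged to this site */
              fclose(f);
              printf("total tracked objects: %lu\n", total);
              return 0;
      }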
  12. 24 Mar 2006, 1 commit
    • [PATCH] cpuset memory spread slab cache implementation · 101a5001
      Authored by Paul Jackson
      Provide the slab cache infrastructure to support cpuset memory spreading.
      
      See the previous patches, cpuset_mem_spread, for an explanation of cpuset
      memory spreading.
      
      This patch provides a slab cache SLAB_MEM_SPREAD flag.  If set in the
      kmem_cache_create() call defining a slab cache, then any task marked with the
      process state flag PF_SPREAD_SLAB will spread memory page allocations for that
      cache over all the allowed nodes, instead of preferring the local (faulting)
      node.
      
      On systems not configured with CONFIG_NUMA, this results in no change to the
      page allocation code path for slab caches.
      
      On systems with cpusets configured in the kernel, but the "memory_spread"
      cpuset option not enabled for the current task's cpuset, this adds a call to a
      cpuset routine and a failed bit test of the process state flag PF_SPREAD_SLAB.
      
      For tasks so marked, a second inline test is done for the slab cache flag
      SLAB_MEM_SPREAD, and if that is set and if the allocation is not
      in_interrupt(), this adds a call to a cpuset routine that computes which of
      the task's mems_allowed nodes should be preferred for this allocation.
      
      ==> This patch adds another hook into the performance-critical
          code path for allocating objects from the slab cache, in the
          ____cache_alloc() chunk, below.  The next patch optimizes this
          hook, reducing the impact of the combined mempolicy plus memory
          spreading hooks on this critical code path to a single check
          against the task's task_struct flags word.
      
      This patch provides the generic slab flags and logic needed to apply memory
      spreading to a particular slab.
      
      A subsequent patch will mark a few specific slab caches for this placement
      policy.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      101a5001
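
      A sketch of the two halves described above.  The cache name and structure
      are illustrative, kmem_cache_create() is shown in its six-argument form of
      this era, and the allocation-path test is the conceptual one that the
      follow-up patch collapses into a single flags-word check:

      /* 1) Creating a cache that opts in to memory spreading: */
      foo_cachep = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
                                     SLAB_MEM_SPREAD, NULL, NULL);

      /* 2) Conceptual test on the allocation path (____cache_alloc): */
      if ((current->flags & PF_SPREAD_SLAB) &&
          (cachep->flags & SLAB_MEM_SPREAD) && !in_interrupt())
              node = cpuset_mem_spread_node();  /* rotate over mems_allowed */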
  13. 22 Mar 2006, 2 commits
    • [PATCH] slab: Remove SLAB_NO_REAP option · ac2b898c
      Authored by Christoph Lameter
      SLAB_NO_REAP is documented as an option that will cause this slab not to be
      reaped under memory pressure.  However, that is not what happens.  The only
      thing that SLAB_NO_REAP controls at the moment is the reclaim of the unused
      slab elements that were allocated in batch in cache_reap().  Cache_reap()
      is run every few seconds independently of memory pressure.
      
      Could we remove the whole thing?  It's only used by three slabs anyway, and
      I cannot find a reason for having this option.
      
      There is an additional problem with SLAB_NO_REAP.  If set then the recovery
      of objects from alien caches is switched off.  Objects not freed on the
      same node where they were initially allocated will only be reused if a
      certain amount of objects accumulates from one alien node (not very likely)
      or if the cache is explicitly shrunk.  (Strangely, __cache_shrink() does not
      check for SLAB_NO_REAP.)
      
      Getting rid of SLAB_NO_REAP fixes the problems with alien cache freeing.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ac2b898c
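
      For reference, a simplified sketch of the behaviour being removed: the
      periodic cache_reap() pass used to skip flagged caches outright, which is
      also what kept alien-cache objects from being drained (variable names are
      approximate):

      /* inside the periodic cache_reap() walk over all caches */
      list_for_each_entry(searchp, &cache_chain, next) {
              if (searchp->flags & SLAB_NO_REAP)
                      continue;       /* never trim this cache's unused slabs */
              /* ... drain per-CPU/alien arrays and free unused slabs ... */
      }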
    • [PATCH] kcalloc(): INT_MAX -> ULONG_MAX · b50ec7d8
      Authored by Adrian Bunk
      Since size_t has the same size as a long on all architectures, it's enough
      for overflow checks to check against ULONG_MAX.
      
      This change could allow a compiler better optimization (especially in the
      n=1 case).
      
      The practical effect seems to be positive, but quite small:
      
          text    data     bss      dec     hex filename
      21762380 5859870 1848928 29471178 1c1b1ca vmlinux-old
      21762211 5859870 1848928 29471009 1c1b121 vmlinux-patched
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b50ec7d8
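
      A sketch close to the inline kcalloc() of this era: since size_t is as
      wide as unsigned long, a single ULONG_MAX division check covers the
      overflow case, and for a constant n == 1 the compiler can drop the test
      entirely:

      static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
      {
              if (n != 0 && size > ULONG_MAX / n)
                      return NULL;            /* n * size would overflow */
              return kzalloc(n * size, flags);
      }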
  14. 02 Feb 2006, 1 commit
  15. 09 Jan 2006, 1 commit
    • [PATCH] slob: introduce the SLOB allocator · 10cef602
      Authored by Matt Mackall
      A configurable replacement for the slab allocator.
      
      This adds a CONFIG_SLAB option under CONFIG_EMBEDDED.  When CONFIG_SLAB is
      disabled, the kernel falls back to using the 'SLOB' allocator.
      
      SLOB is a traditional K&R/UNIX allocator with a SLAB emulation layer,
      similar to the original Linux kmalloc allocator that SLAB replaced.  It's
      significantly smaller code and is more memory efficient.  But like all
      similar allocators, it scales poorly and suffers from fragmentation more
      than SLAB, so it's only appropriate for small systems.
      
      It's been tested extensively in the Linux-tiny tree.  I've also
      stress-tested it with make -j 8 compiles on a 3G SMP+PREEMPT box (not
      recommended).
      
      Here's a comparison for otherwise identical builds, showing SLOB saving
      nearly half a megabyte of RAM:
      
      $ size vmlinux*
         text    data     bss     dec     hex filename
      3336372  529360  190812 4056544  3de5e0 vmlinux-slab
      3323208  527948  190684 4041840  3dac70 vmlinux-slob
      
      $ size mm/{slab,slob}.o
         text    data     bss     dec     hex filename
        13221     752      48   14021    36c5 mm/slab.o
         1896      52       8    1956     7a4 mm/slob.o
      
      /proc/meminfo:
                        SLAB          SLOB      delta
      MemTotal:        27964 kB      27980 kB     +16 kB
      MemFree:         24596 kB      25092 kB    +496 kB
      Buffers:            36 kB         36 kB       0 kB
      Cached:           1188 kB       1188 kB       0 kB
      SwapCached:          0 kB          0 kB       0 kB
      Active:            608 kB        600 kB      -8 kB
      Inactive:          808 kB        812 kB      +4 kB
      HighTotal:           0 kB          0 kB       0 kB
      HighFree:            0 kB          0 kB       0 kB
      LowTotal:        27964 kB      27980 kB     +16 kB
      LowFree:         24596 kB      25092 kB    +496 kB
      SwapTotal:           0 kB          0 kB       0 kB
      SwapFree:            0 kB          0 kB       0 kB
      Dirty:               4 kB         12 kB      +8 kB
      Writeback:           0 kB          0 kB       0 kB
      Mapped:            560 kB        556 kB      -4 kB
      Slab:             1756 kB          0 kB   -1756 kB
      CommitLimit:     13980 kB      13988 kB      +8 kB
      Committed_AS:     4208 kB       4208 kB       0 kB
      PageTables:         28 kB         28 kB       0 kB
      VmallocTotal:  1007312 kB    1007312 kB       0 kB
      VmallocUsed:        48 kB         48 kB       0 kB
      VmallocChunk:  1007264 kB    1007264 kB       0 kB
      
      (this work has been sponsored in part by CELF)
      
      From: Ingo Molnar <mingo@elte.hu>
      
         Fix 32-bitness bugs in mm/slob.c.
      Signed-off-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      10cef602
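
      Not SLOB's actual code, but a tiny userspace illustration of the
      "traditional K&R/UNIX" approach referred to above: a single free list
      walked first-fit, splitting blocks on allocation.  SLOB layers a
      SLAB-compatible API (kmem_cache_create() and friends) over essentially
      this idea:

      #include <stddef.h>

      struct block {
              size_t units;           /* size of this free block, in header-sized units */
              struct block *next;     /* next block on the free list */
      };

      static struct block *freelist;  /* assumed to be seeded with the heap */

      static void *kr_alloc(size_t bytes)
      {
              size_t units = (bytes + sizeof(struct block) - 1) / sizeof(struct block) + 1;
              struct block **prev = &freelist, *b;

              for (b = freelist; b; prev = &b->next, b = b->next) {
                      if (b->units < units)
                              continue;               /* too small, keep walking */
                      if (b->units == units) {        /* exact fit: unlink the block */
                              *prev = b->next;
                      } else {                        /* split: hand back the tail */
                              b->units -= units;
                              b += b->units;
                              b->units = units;
                      }
                      return (void *)(b + 1);         /* payload follows the header */
              }
              return NULL;    /* no fit; a real allocator would grow the heap here */
      }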
  16. 07 Nov 2005, 1 commit