1. 04 Dec 2010, 1 commit
    • slub: Fix a crash during slabinfo -v · 37d57443
      Authored by Tero Roponen
      Commit f7cb1933 ("SLUB: Pass active
      and inactive redzone flags instead of boolean to debug functions")
      missed two instances of check_object(). This caused a lot of warnings
      during 'slabinfo -v', eventually leading to a crash:
      
        BUG ext4_xattr: Freepointer corrupt
        ...
        BUG buffer_head: Freepointer corrupt
        ...
        BUG ext4_alloc_context: Freepointer corrupt
        ...
        ...
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
        IP: [<ffffffff810a291f>] file_sb_list_del+0x1c/0x35
        PGD 79d78067 PUD 79e67067 PMD 0
        Oops: 0002 [#1] SMP
        last sysfs file: /sys/kernel/slab/:t-0000192/validate
      
      This patch fixes the problem by converting the two missed instances.
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tero Roponen <tero.roponen@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
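      For reference, a sketch of the two converted call sites (the
      SLUB_RED_INACTIVE and SLUB_RED_ACTIVE flags are the ones introduced by
      f7cb1933; the surrounding validate_slab() context is simplified here,
      so treat this as an illustration rather than the literal hunks):

        /* Free objects must carry the inactive redzone pattern ... */
        if (!check_object(s, page, p, SLUB_RED_INACTIVE))  /* was: ..., 0 */
                return 0;

        /* ... and allocated objects the active one. */
        if (!check_object(s, page, p, SLUB_RED_ACTIVE))    /* was: ..., 1 */
                return 0;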
  2. 14 Nov 2010, 1 commit
  3. 06 Nov 2010, 2 commits
    • slub: Fix slub_lock down/up imbalance · 98072e4d
      Authored by Pavel Emelyanov
      There are two places that do not release the slub_lock.
      
      The respective bugs were introduced by the sysfs changes ab4d5ed5 (slub:
      Enable sysfs support for !CONFIG_SLUB_DEBUG) and 2bce6485 (slub: Allow
      removal of slab caches during boot).
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
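      The shape of the fix is an up_write() added on the early-return paths.
      A minimal sketch (slub_lock is a rw_semaphore guarding the global list
      of slab caches in mm/slub.c; the exit condition below is illustrative,
      not the exact code):

        down_write(&slub_lock);
        if (early_exit_condition) {        /* illustrative condition */
                up_write(&slub_lock);      /* previously missing */
                return;
        }
        /* ... normal path ... */
        up_write(&slub_lock);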
    • slub tracing: move trace calls out of always inlined functions to reduce kernel code size · 4a92379b
      Authored by Richard Kennedy
      Having the trace calls defined in the always-inlined kmalloc functions
      in include/linux/slub_def.h causes a lot of code duplication, as the
      trace functions get instantiated for each kmalloc call site. This
      duplication can simply be removed by pushing the trace calls down into
      the functions in slub.c.
      
      On my x86_64 build this patch shrinks the code size of the kernel by
      approximately 36K and also shrinks the code size of many modules -- too
      many to list here ;)
      
      size vmlinux (2.6.36) reports
             text     data     bss      dec     hex  filename
          5410611   743172  828928  6982711  6a8c37  vmlinux
          5373738   744244  828928  6946910  6a005e  vmlinux + patch
      
      The resulting kernel has had some testing, and kmalloc tracing still
      seems to work.
      
      This patch
      - moves trace_kmalloc out of the inlined kmalloc() and pushes it down
        into kmem_cache_alloc_trace(), so that it only gets instantiated once.
      
      - renames kmem_cache_alloc_notrace() to kmem_cache_alloc_trace() to
        indicate that it now does have tracing (maybe this would be better
        called something like kmalloc_kmem_cache?).
      
      - adds a new function, kmalloc_order(), to handle allocation and tracing
        of large allocations of page order.
      
      - removes tracing from the inlined kmalloc_large(), replacing it with a
        call to kmalloc_order().
      
      - moves tracing out of the inlined kmalloc_node() and pushes it down
        into kmem_cache_alloc_node_trace().
      
      - renames kmem_cache_alloc_node_notrace() to
        kmem_cache_alloc_node_trace().
      
      - removes the include of trace/events/kmem.h from slub_def.h.
      
      v2
      - keep kmalloc_order_trace inline when !CONFIG_TRACE
      Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
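      A simplified sketch of the resulting split (modelled on
      include/linux/slub_def.h and mm/slub.c of that era; kmalloc_slab() and
      the constant-size fast path are condensed, so the bodies are
      illustrative rather than the exact kernel code):

        /* slub_def.h: the inline wrapper carries no tracepoint code. */
        static __always_inline void *kmalloc(size_t size, gfp_t flags)
        {
                if (__builtin_constant_p(size)) {
                        struct kmem_cache *s = kmalloc_slab(size); /* illustrative lookup */

                        /* Tracing now lives in the out-of-line helper. */
                        return kmem_cache_alloc_trace(s, flags, size);
                }
                return __kmalloc(size, flags);
        }

        /* slub.c: instantiated once, so the trace call is no longer
         * duplicated at every kmalloc() call site. */
        void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
        {
                void *ret = slab_alloc(s, gfpflags, NUMA_NO_NODE, _RET_IP_);

                trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
                return ret;
        }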
  4. 07 Oct 2010, 1 commit
  5. 06 Oct 2010, 3 commits
  6. 02 Oct 2010, 18 commits
  7. 03 Aug 2010, 2 commits
  8. 29 Jul 2010, 1 commit
    • slub numa: Fix rare allocation from unexpected node · bc6488e9
      Authored by Christoph Lameter
      The network developers have seen sporadic allocations resulting in objects
      coming from unexpected NUMA nodes despite asking for objects from a
      specific node.
      
      This is due to get_partial() calling get_any_partial() when the partial
      slabs for a node are exhausted, even if a node was specified and one
      would therefore expect allocations only from that node.
      
      get_any_partial() may sporadically return a slab from a foreign
      node to gradually reduce the size of partial lists on remote nodes
      and thereby reduce total memory use for a slab cache.
      
      The behavior is controlled by the remote_defrag_ratio of each cache.
      
      Strictly speaking this is permitted behavior, since __GFP_THISNODE was
      not specified for the allocation, but it is certainly surprising.
      
      This patch makes sure that the remote defrag behavior only occurs
      if no node was specified.
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
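      The shape of the fix, as a rough sketch (based on mm/slub.c of that
      era, where -1 meant "no node specified"; the helper signatures are
      simplified):

        static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
        {
                struct page *page;
                int searchnode = (node == -1) ? numa_node_id() : node;

                page = get_partial_node(get_node(s, searchnode));
                /* Fall back to remote (defrag) scanning only when the
                 * caller did not ask for a specific node. */
                if (page || node != -1)
                        return page;

                return get_any_partial(s, flags);
        }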
  9. 16 Jul 2010, 5 commits
  10. 09 Jun 2010, 1 commit
  11. 25 May 2010, 2 commits
    • cpuset,mm: fix no node to alloc memory when changing cpuset's mems · c0ff7453
      Authored by Miao Xie
      Before applying this patch, cpuset updates task->mems_allowed and
      mempolicy by setting all new bits in the nodemask first, and clearing
      all old disallowed bits later. But along the way, the allocator may
      find that there is no node from which to allocate memory.
      
      The reason is that when cpuset rebinds the task's mempolicy, it clears
      the nodes on which the allocator can allocate pages, for example:
      
      (mpol: mempolicy)
          task1                   task1's mpol    task2
          alloc page              1
            alloc on node0? NO    1
                                  1               change mems from 1 to 0
                                  1               rebind task1's mpol
                                  0-1               set new bits
                                  0                 clear disallowed bits
            alloc on node1? NO    0
            ...
          can't alloc page
            goto oom
      
      This patch fixes this problem by expanding the nodes range first(set newly
      allowed bits) and shrink it lazily(clear newly disallowed bits).  So we
      use a variable to tell the write-side task that read-side task is reading
      nodemask, and the write-side task clears newly disallowed nodes after
      read-side task ends the current memory allocation.
      
      [akpm@linux-foundation.org: fix spello]
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Paul Menage <menage@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
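      The core idea in nodemask terms, as a hedged sketch (the real rebind
      happens via cpuset's mempolicy and mems_allowed update paths; the guard
      below is condensed to one illustrative check, where the patch actually
      uses a per-task synchronization variable):

        /* Rebind a task's mems_allowed without ever exposing an empty
         * mask to a concurrent allocator. */
        static void change_task_nodemask(struct task_struct *tsk,
                                         const nodemask_t *newmems)
        {
                /* Step 1: grow to old | new, so any node the allocator is
                 * currently scanning stays allowed. */
                nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);

                /* Step 2 (lazy): once no read-side allocation is in flight,
                 * shrink to exactly the new mask, clearing the newly
                 * disallowed bits. */
                if (!task_currently_allocating(tsk))    /* illustrative guard */
                        tsk->mems_allowed = *newmems;
        }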
    • slub: move kmem_cache_node into it's own cacheline · 73367bd8
      Authored by Alexander Duyck
      This patch is meant to improve the performance of SLUB by moving the
      local kmem_cache_node lock into its own cacheline, separate from
      kmem_cache. This is accomplished by simply removing the local_node
      field when NUMA is enabled.
      
      On my system with 2 nodes I saw around a 5% performance increase, with
      hackbench times dropping from 6.2 seconds to 5.9 seconds on average. I
      suspect the performance gain would increase as the number of nodes
      increases, but I do not currently have the data to back that up.
      
      Bugzilla-Reference: http://bugzilla.kernel.org/show_bug.cgi?id=15713
      Cc: <stable@kernel.org>
      Reported-by: Alex Shi <alex.shi@intel.com>
      Tested-by: Alex Shi <alex.shi@intel.com>
      Acked-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
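      Structurally, the change amounts to the following (a simplified sketch
      of struct kmem_cache from that era's include/linux/slub_def.h, with
      unrelated fields elided):

        struct kmem_cache {
                /* ... hot allocation-path fields ... */
        #ifdef CONFIG_NUMA
                /* NUMA: per-node state is reached only through pointers, so
                 * each kmem_cache_node (and its list_lock) sits in its own
                 * cachelines, away from the hot fields above. */
                struct kmem_cache_node *node[MAX_NUMNODES];
        #else
                /* !NUMA: keep the single node structure embedded. */
                struct kmem_cache_node local_node;
        #endif
        };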
  12. 22 May 2010, 2 commits