1. 11 Jan, 2011 (1 commit)
  2. 04 Dec, 2010 (1 commit)
    • slub: Fix a crash during slabinfo -v · 37d57443
      Tero Roponen authored
      Commit f7cb1933 ("SLUB: Pass active
      and inactive redzone flags instead of boolean to debug functions")
      missed two instances of check_object(). This caused a lot of warnings
      during 'slabinfo -v', eventually leading to a crash:
      
        BUG ext4_xattr: Freepointer corrupt
        ...
        BUG buffer_head: Freepointer corrupt
        ...
        BUG ext4_alloc_context: Freepointer corrupt
        ...
        ...
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
        IP: [<ffffffff810a291f>] file_sb_list_del+0x1c/0x35
        PGD 79d78067 PUD 79e67067 PMD 0
        Oops: 0002 [#1] SMP
        last sysfs file: /sys/kernel/slab/:t-0000192/validate
      
      This patch fixes the problem by converting the two missed instances; a
      sketch of the conversion follows this entry.
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tero Roponen <tero.roponen@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      37d57443
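      The conversion itself is mechanical: the two remaining call sites stop
      passing a raw 0/1 and pass the redzone flag constants instead. A minimal
      sketch of the pattern, loosely following the validate_slab() call sites
      (surrounding code abridged and simplified):

        /* objects still on the freelist are checked as inactive */
        for_each_free_object(p, s, page->freelist) {
                set_bit(slab_index(p, s, addr), map);
                if (!check_object(s, page, p, SLUB_RED_INACTIVE))       /* was: 0 */
                        return 0;
        }

        /* every remaining object is checked as active */
        for_each_object(p, s, page, page->objects)
                if (!test_bit(slab_index(p, s, addr), map))
                        if (!check_object(s, page, p, SLUB_RED_ACTIVE)) /* was: 1 */
                                return 0;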
  3. 14 Nov, 2010 (1 commit)
  4. 07 Oct, 2010 (1 commit)
  5. 06 Oct, 2010 (3 commits)
  6. 02 Oct, 2010 (18 commits)
  7. 03 Aug, 2010 (2 commits)
  8. 29 Jul, 2010 (1 commit)
    • slub numa: Fix rare allocation from unexpected node · bc6488e9
      Christoph Lameter authored
      The network developers have seen sporadic allocations resulting in objects
      coming from unexpected NUMA nodes despite asking for objects from a
      specific node.
      
      This is due to get_partial() calling get_any_partial() when the partial
      slabs of a node are exhausted, even though a node was specified and one
      would therefore expect allocations only from that node.
      
      get_any_partial() sporadically may return a slab from a foreign
      node to gradually reduce the size of partial lists on remote nodes
      and thereby reduce total memory use for a slab cache.
      
      The behavior is controlled by the remote_node_defrag_ratio of each cache.
      
      Strictly speaking this is permitted behavior since __GFP_THISNODE was
      not specified for the allocation, but it is certainly surprising.
      
      This patch makes sure that the remote defrag behavior only occurs if no
      node was specified; a sketch of the resulting logic follows this entry.
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      bc6488e9
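      The shape of the resulting logic, as a simplified sketch of get_partial()
      (identifiers follow mm/slub.c of that era; surrounding code omitted):

        static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
        {
                struct page *page;
                /* node == -1 means "no node requested"; search the local node */
                int searchnode = (node == -1) ? numa_node_id() : node;

                page = get_partial_node(get_node(s, searchnode));
                if (page || node != -1)
                        return page;    /* a specific node was asked for: no remote defrag */

                return get_any_partial(s, flags);
        }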
  9. 16 Jul, 2010 (5 commits)
  10. 09 Jun, 2010 (1 commit)
  11. 25 May, 2010 (2 commits)
    • cpuset,mm: fix no node to alloc memory when changing cpuset's mems · c0ff7453
      Miao Xie authored
      Before applying this patch, cpuset updates task->mems_allowed and the
      mempolicy by setting all new bits in the nodemask first, and clearing all
      old, no-longer-allowed bits later.  But along the way, the allocator may
      find that there is no node from which it can allocate memory.
      
      The reason is that when cpuset rebinds the task's mempolicy, it clears
      nodes that the allocator can allocate pages from. For example:
      
      (mpol: mempolicy)
      	task1			task1's mpol	task2
      	alloc page		1
      	  alloc on node0? NO	1
      				1		change mems from 1 to 0
      				1		rebind task1's mpol
      				0-1		  set new bits
      				0	  	  clear disallowed bits
      	  alloc on node1? NO	0
      	  ...
      	can't alloc page
      	  goto oom
      
      This patch fixes the problem by expanding the node range first (setting the
      newly allowed bits) and shrinking it lazily (clearing the newly disallowed
      bits).  A variable tells the write-side task that a read-side task is
      reading the nodemask, and the write-side task clears the newly disallowed
      nodes only after the read-side task finishes its current memory allocation;
      a sketch of this protocol follows this entry.
      
      [akpm@linux-foundation.org: fix spello]
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Paul Menage <menage@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c0ff7453
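      A rough sketch of the read/write protocol, using the helpers this patch
      introduces (get_mems_allowed(), put_mems_allowed() and the per-task
      mems_allowed_change_disable counter); the write side is heavily simplified
      here and omits the locking, retries and memory barriers of the real code:

        /* read side: an allocating task pins the nodemask for the duration */
        get_mems_allowed();             /* current->mems_allowed_change_disable++ */
        /* ... allocate pages against current->mems_allowed / the mempolicy ... */
        put_mems_allowed();             /* current->mems_allowed_change_disable-- */

        /* write side (cpuset nodemask change), simplified: expand first ... */
        nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
        /* ... wait until the task has no allocation in flight ... */
        while (tsk->mems_allowed_change_disable)
                cpu_relax();
        /* ... then shrink, dropping the newly disallowed bits */
        tsk->mems_allowed = *newmems;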
    • slub: move kmem_cache_node into it's own cacheline · 73367bd8
      Alexander Duyck authored
      This patch is meant to improve the performance of SLUB by moving the local
      kmem_cache_node lock into its own cacheline, separate from kmem_cache.
      This is accomplished by simply removing local_node when NUMA is enabled;
      a simplified sketch of the structure change follows this entry.
      
      On my system with 2 nodes I saw around a 5% performance increase, with
      hackbench times dropping from 6.2 seconds to 5.9 seconds on average.  I
      suspect the performance gain would increase as the number of nodes
      increases, but I do not currently have the data to back that up.
      
      Bugzilla-Reference: http://bugzilla.kernel.org/show_bug.cgi?id=15713
      Cc: <stable@kernel.org>
      Reported-by: Alex Shi <alex.shi@intel.com>
      Tested-by: Alex Shi <alex.shi@intel.com>
      Acked-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      73367bd8
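      A simplified sketch of the structure change (field order and most members
      of the real struct kmem_cache are omitted):

        /* before: the local node's bookkeeping is embedded in kmem_cache and
         * shares cachelines with the hot per-cache fields */
        struct kmem_cache {
                /* ... hot allocation fields ... */
                struct kmem_cache_node local_node;
        #ifdef CONFIG_NUMA
                struct kmem_cache_node *node[MAX_NUMNODES];
        #endif
        };

        /* after: with NUMA, every kmem_cache_node (including the local one) is
         * allocated separately, so its list_lock sits in its own cacheline */
        struct kmem_cache {
                /* ... hot allocation fields ... */
        #ifdef CONFIG_NUMA
                struct kmem_cache_node *node[MAX_NUMNODES];
        #else
                struct kmem_cache_node local_node;      /* UP: keep it embedded */
        #endif
        };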
  12. 22 May, 2010 (3 commits)
  13. 20 May, 2010 (1 commit)