1. 28 May 2010 (2 commits)
  2. 25 May 2010 (1 commit)
    • cpuset,mm: fix no node to alloc memory when changing cpuset's mems · c0ff7453
      Authored by Miao Xie
      Before applying this patch, cpuset updated task->mems_allowed and
      mempolicy by setting all new bits in the nodemask first, and clearing
      all old disallowed bits later.  But during that window, the allocator
      may find no node on which to allocate memory.

      The reason is that when cpuset rebinds the task's mempolicy, it
      clears the nodes on which the allocator can allocate pages, for example:
      
      (mpol: mempolicy)
      	task1			task1's mpol	task2
      	alloc page		1
      	  alloc on node0? NO	1
      				1		change mems from 1 to 0
      				1		rebind task1's mpol
      				0-1		  set new bits
      				0	  	  clear disallowed bits
      	  alloc on node1? NO	0
      	  ...
      	can't alloc page
      	  goto oom
      
      This patch fixes the problem by expanding the node range first (setting
      the newly allowed bits) and shrinking it lazily (clearing the newly
      disallowed bits).  A variable tells the write-side task that a
      read-side task is reading the nodemask, and the write-side task clears
      the newly disallowed nodes only after the read-side task finishes its
      current memory allocation.
      
      [akpm@linux-foundation.org: fix spello]
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Paul Menage <menage@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 20 May 2010 (1 commit)
  4. 15 Apr 2010 (1 commit)
    • slab: Fix missing DEBUG_SLAB last user · 5c5e3b33
      Authored by Shiyong Li
      Even with SLAB_RED_ZONE and SLAB_STORE_USER enabled, the kernel would
      NOT store redzone and last-user data around allocated memory if the
      arch cache line size exceeded sizeof(unsigned long long).  As a
      result, last-user information was unexpectedly missing when dumping
      the slab corruption log.

      This fix makes sure that the redzone and last-user tags get stored
      unless the required alignment would overwrite them.
      Signed-off-by: Shiyong Li <shi-yong.li@motorola.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  5. 10 Apr 2010 (1 commit)
    • slab: Generify kernel pointer validation · fc1c1833
      Authored by Pekka Enberg
      As suggested by Linus, introduce a kern_ptr_validate() helper that does
      some sanity checks to make sure a pointer is a valid kernel pointer.
      This is a preparatory step for fixing SLUB's kmem_ptr_validate().
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 08 Apr 2010 (1 commit)
    • slab: add memory hotplug support · 8f9f8d9e
      Authored by David Rientjes
      Slab lacks any memory hotplug support for nodes that are hotplugged
      without cpus being hotplugged.  This is possible at least on x86
      CONFIG_MEMORY_HOTPLUG_SPARSE kernels where SRAT entries are marked
      ACPI_SRAT_MEM_HOT_PLUGGABLE and the regions of RAM represent a separate
      node.  It can also be done manually by writing the start address to
      /sys/devices/system/memory/probe for kernels that have
      CONFIG_ARCH_MEMORY_PROBE set, which is how this patch was tested, and
      then onlining the new memory region.
      
      When a node is hotadded, a nodelist for that node is allocated and
      initialized for each slab cache.  If this isn't completed due to a lack
      of memory, the hotadd is aborted: we have a reasonable expectation that
      kmalloc_node(nid) will work for all caches if nid is online and memory is
      available.
      
      Since nodelists must be allocated and initialized prior to the new node's
      memory actually being online, the struct kmem_list3 is allocated off-node
      due to kmalloc_node()'s fallback.
      
      When an entire node would be offlined, its nodelists are subsequently
      drained.  If slab objects still exist and cannot be freed, the offline
      is aborted.  Objects may still be allocated between this drain and
      page isolation, however, so the offline can still fail.
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  7. 29 Mar 2010 (1 commit)
  8. 27 Feb 2010 (1 commit)
  9. 30 Jan 2010 (1 commit)
    • slab: fix regression in touched logic · 44b57f1c
      Authored by Nick Piggin
      When factoring common code into transfer_objects in commit 3ded175a ("slab: add
      transfer_objects() function"), the 'touched' logic got a bit broken. When
      refilling from the shared array (taking objects from the shared array), we are
      making use of the shared array so it should be marked as touched.
      
      Subsequently pulling an element from the cpu array and allocating it should
      also touch the cpu array, but that is taken care of after the alloc_done label.
      (So yes, the cpu array was getting touched = 1 twice).
      
      So revert this logic to how it worked in earlier kernels.
      
      This also affects the behaviour in __drain_alien_cache, which would previously
      'touch' the shared array and now does not. I think it is more logical not to
      touch there, because we are pushing objects into the shared array rather than
      pulling them off. So there is no good reason to postpone reaping them -- if the
      shared array is getting utilized, then it will get 'touched' in the alloc path
      (where this patch now restores the touch).
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  10. 12 Jan 2010 (1 commit)
  11. 29 Dec 2009 (1 commit)
  12. 17 Dec 2009 (1 commit)
  13. 11 Dec 2009 (2 commits)
    • tracing, slab: Fix no callsite ifndef CONFIG_KMEMTRACE · 0bb38a5c
      Authored by Li Zefan
      For slab, if CONFIG_KMEMTRACE and CONFIG_DEBUG_SLAB are not set,
      __do_kmalloc() will not track callers:
      
       # ./perf record -f -a -R -e kmem:kmalloc
       ^C
       # ./perf trace
       ...
                perf-2204  [000]   147.376774: kmalloc: call_site=c0529d2d ...
                perf-2204  [000]   147.400997: kmalloc: call_site=c0529d2d ...
                Xorg-1461  [001]   147.405413: kmalloc: call_site=0 ...
                Xorg-1461  [001]   147.405609: kmalloc: call_site=0 ...
             konsole-1776  [001]   147.405786: kmalloc: call_site=0 ...
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: linux-mm@kvack.org <linux-mm@kvack.org>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      LKML-Reference: <4B21F8AE.6020804@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing, slab: Define kmem_cache_alloc_notrace ifdef CONFIG_TRACING · 0f24f128
      Authored by Li Zefan
      Define kmem_trace_alloc_{,node}_notrace() if CONFIG_TRACING is
      enabled; otherwise perf-kmem will show wrong stats when
      CONFIG_KMEMTRACE is not set, because a kmalloc() memory allocation
      may be traced by both trace_kmalloc() and trace_kmem_cache_alloc().
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: linux-mm@kvack.org <linux-mm@kvack.org>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      LKML-Reference: <4B21F89A.7000801@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 06 Dec 2009 (3 commits)
  15. 01 Dec 2009 (1 commit)
    • SLAB: Fix lockdep annotations for CPU hotplug · ce79ddc8
      Authored by Pekka Enberg
      As reported by Paul McKenney:
      
        I am seeing some lockdep complaints in rcutorture runs that include
        frequent CPU-hotplug operations.  The tests are otherwise successful.
        My first thought was to send a patch that gave each array_cache
        structure's ->lock field its own struct lock_class_key, but you already
        have a init_lock_keys() that seems to be intended to deal with this.
      
        ------------------------------------------------------------------------
      
        =============================================
        [ INFO: possible recursive locking detected ]
        2.6.32-rc4-autokern1 #1
        ---------------------------------------------
        syslogd/2908 is trying to acquire lock:
         (&nc->lock){..-...}, at: [<c0000000001407f4>] .kmem_cache_free+0x118/0x2d4
      
        but task is already holding lock:
         (&nc->lock){..-...}, at: [<c0000000001411bc>] .kfree+0x1f0/0x324
      
        other info that might help us debug this:
        3 locks held by syslogd/2908:
         #0:  (&u->readlock){+.+.+.}, at: [<c0000000004556f8>] .unix_dgram_recvmsg+0x70/0x338
         #1:  (&nc->lock){..-...}, at: [<c0000000001411bc>] .kfree+0x1f0/0x324
         #2:  (&parent->list_lock){-.-...}, at: [<c000000000140f64>] .__drain_alien_cache+0x50/0xb8
      
        stack backtrace:
        Call Trace:
        [c0000000e8ccafc0] [c0000000000101e4] .show_stack+0x70/0x184 (unreliable)
        [c0000000e8ccb070] [c0000000000afebc] .validate_chain+0x6ec/0xf58
        [c0000000e8ccb180] [c0000000000b0ff0] .__lock_acquire+0x8c8/0x974
        [c0000000e8ccb280] [c0000000000b2290] .lock_acquire+0x140/0x18c
        [c0000000e8ccb350] [c000000000468df0] ._spin_lock+0x48/0x70
        [c0000000e8ccb3e0] [c0000000001407f4] .kmem_cache_free+0x118/0x2d4
        [c0000000e8ccb4a0] [c000000000140b90] .free_block+0x130/0x1a8
        [c0000000e8ccb540] [c000000000140f94] .__drain_alien_cache+0x80/0xb8
        [c0000000e8ccb5e0] [c0000000001411e0] .kfree+0x214/0x324
        [c0000000e8ccb6a0] [c0000000003ca860] .skb_release_data+0xe8/0x104
        [c0000000e8ccb730] [c0000000003ca2ec] .__kfree_skb+0x20/0xd4
        [c0000000e8ccb7b0] [c0000000003cf2c8] .skb_free_datagram+0x1c/0x5c
        [c0000000e8ccb830] [c00000000045597c] .unix_dgram_recvmsg+0x2f4/0x338
        [c0000000e8ccb920] [c0000000003c0f14] .sock_recvmsg+0xf4/0x13c
        [c0000000e8ccbb30] [c0000000003c28ec] .SyS_recvfrom+0xb4/0x130
        [c0000000e8ccbcb0] [c0000000003bfb78] .sys_recv+0x18/0x2c
        [c0000000e8ccbd20] [c0000000003ed388] .compat_sys_recv+0x14/0x28
        [c0000000e8ccbd90] [c0000000003ee1bc] .compat_sys_socketcall+0x178/0x220
        [c0000000e8ccbe30] [c0000000000085d4] syscall_exit+0x0/0x40
      
      This patch fixes the issue by setting up lockdep annotations during CPU
      hotplug.
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  16. 29 Oct 2009 (1 commit)
    • percpu: make percpu symbols under kernel/ and mm/ unique · 1871e52c
      Authored by Tejun Heo
      This patch updates percpu related symbols under kernel/ and mm/ such
      that percpu symbols are unique and don't clash with local symbols.
      This serves two purposes of decreasing the possibility of global
      percpu symbol collision and allowing dropping per_cpu__ prefix from
      percpu symbols.
      
      * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/
      
      * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/	(any better idea?)
        		  s/sched_group_cpus/sched_groups/
      
      * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/
      
      * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
        		       s/watchdog_task/softlockup_watchdog/
      		       s/timestamp/ts/ for local variables
      
      * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/
      
      * mm/slab.c: s/reap_work/slab_reap_work/
        	     s/reap_node/slab_reap_node/
      
      * mm/vmstat.c: local variable changed to avoid collision with vmstat_work
      
      Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
      which cause name clashes" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Christoph Lameter <cl@linux-foundation.org> (slab/vmstat)
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
  17. 28 Oct 2009 (2 commits)
  18. 22 Sep 2009 (1 commit)
  19. 29 Jun 2009 (1 commit)
    • SLAB: Fix lockdep annotations · ec5a36f9
      Authored by Pekka Enberg
      Commit 8429db5c... ("slab: setup cpu caches later on when interrupts are
      enabled") broke mm/slab.c lockdep annotations:
      
        [   11.554715] =============================================
        [   11.555249] [ INFO: possible recursive locking detected ]
        [   11.555560] 2.6.31-rc1 #896
        [   11.555861] ---------------------------------------------
        [   11.556127] udevd/1899 is trying to acquire lock:
        [   11.556436]  (&nc->lock){-.-...}, at: [<ffffffff810c337f>] kmem_cache_free+0xcd/0x25b
        [   11.557101]
        [   11.557102] but task is already holding lock:
        [   11.557706]  (&nc->lock){-.-...}, at: [<ffffffff810c3cd0>] kfree+0x137/0x292
        [   11.558109]
        [   11.558109] other info that might help us debug this:
        [   11.558720] 2 locks held by udevd/1899:
        [   11.558983]  #0:  (&nc->lock){-.-...}, at: [<ffffffff810c3cd0>] kfree+0x137/0x292
        [   11.559734]  #1:  (&parent->list_lock){-.-...}, at: [<ffffffff810c36c7>] __drain_alien_cache+0x3b/0xbd
        [   11.560442]
        [   11.560443] stack backtrace:
        [   11.561009] Pid: 1899, comm: udevd Not tainted 2.6.31-rc1 #896
        [   11.561276] Call Trace:
        [   11.561632]  [<ffffffff81065ed6>] __lock_acquire+0x15ec/0x168f
        [   11.561901]  [<ffffffff81065f60>] ? __lock_acquire+0x1676/0x168f
        [   11.562171]  [<ffffffff81063c52>] ? trace_hardirqs_on_caller+0x113/0x13e
        [   11.562490]  [<ffffffff8150c337>] ? trace_hardirqs_on_thunk+0x3a/0x3f
        [   11.562807]  [<ffffffff8106603a>] lock_acquire+0xc1/0xe5
        [   11.563073]  [<ffffffff810c337f>] ? kmem_cache_free+0xcd/0x25b
        [   11.563385]  [<ffffffff8150c8fc>] _spin_lock+0x31/0x66
        [   11.563696]  [<ffffffff810c337f>] ? kmem_cache_free+0xcd/0x25b
        [   11.563964]  [<ffffffff810c337f>] kmem_cache_free+0xcd/0x25b
        [   11.564235]  [<ffffffff8109bf8c>] ? __free_pages+0x1b/0x24
        [   11.564551]  [<ffffffff810c3564>] slab_destroy+0x57/0x5c
        [   11.564860]  [<ffffffff810c3641>] free_block+0xd8/0x123
        [   11.565126]  [<ffffffff810c372e>] __drain_alien_cache+0xa2/0xbd
        [   11.565441]  [<ffffffff810c3ce5>] kfree+0x14c/0x292
        [   11.565752]  [<ffffffff8144a007>] skb_release_data+0xc6/0xcb
        [   11.566020]  [<ffffffff81449cf0>] __kfree_skb+0x19/0x86
        [   11.566286]  [<ffffffff81449d88>] consume_skb+0x2b/0x2d
        [   11.566631]  [<ffffffff8144cbe0>] skb_free_datagram+0x14/0x3a
        [   11.566901]  [<ffffffff81462eef>] netlink_recvmsg+0x164/0x258
        [   11.567170]  [<ffffffff81443461>] sock_recvmsg+0xe5/0xfe
        [   11.567486]  [<ffffffff810ab063>] ? might_fault+0xaf/0xb1
        [   11.567802]  [<ffffffff81053a78>] ? autoremove_wake_function+0x0/0x38
        [   11.568073]  [<ffffffff810d84ca>] ? core_sys_select+0x3d/0x2b4
        [   11.568378]  [<ffffffff81065f60>] ? __lock_acquire+0x1676/0x168f
        [   11.568693]  [<ffffffff81442dc1>] ? sockfd_lookup_light+0x1b/0x54
        [   11.568961]  [<ffffffff81444416>] sys_recvfrom+0xa3/0xf8
        [   11.569228]  [<ffffffff81063c8a>] ? trace_hardirqs_on+0xd/0xf
        [   11.569546]  [<ffffffff8100af2b>] system_call_fastpath+0x16/0x1b
      
      Fix that up.
      
      Closes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=13654
      Tested-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  20. 26 Jun 2009 (1 commit)
  21. 19 Jun 2009 (1 commit)
  22. 17 Jun 2009 (2 commits)
  23. 15 Jun 2009 (2 commits)
  24. 13 Jun 2009 (1 commit)
  25. 12 Jun 2009 (5 commits)
    • slab: setup cpu caches later on when interrupts are enabled · 8429db5c
      Authored by Pekka Enberg
      Fixes the following boot-time warning:
      
        [    0.000000] ------------[ cut here ]------------
        [    0.000000] WARNING: at kernel/smp.c:369 smp_call_function_many+0x56/0x1bc()
        [    0.000000] Hardware name:
        [    0.000000] Modules linked in:
        [    0.000000] Pid: 0, comm: swapper Not tainted 2.6.30 #492
        [    0.000000] Call Trace:
        [    0.000000]  [<ffffffff8149e021>] ? _spin_unlock+0x4f/0x5c
        [    0.000000]  [<ffffffff8108f11b>] ? smp_call_function_many+0x56/0x1bc
        [    0.000000]  [<ffffffff81061764>] warn_slowpath_common+0x7c/0xa9
        [    0.000000]  [<ffffffff810617a5>] warn_slowpath_null+0x14/0x16
        [    0.000000]  [<ffffffff8108f11b>] smp_call_function_many+0x56/0x1bc
        [    0.000000]  [<ffffffff810f3e00>] ? do_ccupdate_local+0x0/0x54
        [    0.000000]  [<ffffffff810f3e00>] ? do_ccupdate_local+0x0/0x54
        [    0.000000]  [<ffffffff8108f2be>] smp_call_function+0x3d/0x68
        [    0.000000]  [<ffffffff810f3e00>] ? do_ccupdate_local+0x0/0x54
        [    0.000000]  [<ffffffff81066fd8>] on_each_cpu+0x31/0x7c
        [    0.000000]  [<ffffffff810f64f5>] do_tune_cpucache+0x119/0x454
        [    0.000000]  [<ffffffff81087080>] ? lockdep_init_map+0x94/0x10b
        [    0.000000]  [<ffffffff818133b0>] ? kmem_cache_init+0x421/0x593
        [    0.000000]  [<ffffffff810f69cf>] enable_cpucache+0x68/0xad
        [    0.000000]  [<ffffffff818133c3>] kmem_cache_init+0x434/0x593
        [    0.000000]  [<ffffffff8180987c>] ? mem_init+0x156/0x161
        [    0.000000]  [<ffffffff817f8aae>] start_kernel+0x1cc/0x3b9
        [    0.000000]  [<ffffffff817f829a>] x86_64_start_reservations+0xaa/0xae
        [    0.000000]  [<ffffffff817f837f>] x86_64_start_kernel+0xe1/0xe8
        [    0.000000] ---[ end trace 4eaa2a86a8e2da22 ]---
      
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    • slab,slub: don't enable interrupts during early boot · 7e85ee0c
      Authored by Pekka Enberg
      As explained by Benjamin Herrenschmidt:
      
        Oh and btw, your patch alone doesn't fix powerpc, because it's missing
        a whole bunch of GFP_KERNEL's in the arch code... You would have to
        grep the entire kernel for things that check slab_is_available() and
        even then you'll be missing some.
      
        For example, slab_is_available() didn't always exist, and so in the
        early days on powerpc, we used a mem_init_done global that is set from
        mem_init() (not perfect but works in practice). And we still have code
        using that to do the test.
      
      Therefore, mask out __GFP_WAIT, __GFP_IO, and __GFP_FS in the slab allocators
      in early boot code to avoid enabling interrupts.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    • slab: fix gfp flag in setup_cpu_cache() · eb91f1d0
      Authored by Pekka Enberg
      Fixes the following warning during bootup when compiling with CONFIG_SLAB:
      
        [    0.000000] ------------[ cut here ]------------
        [    0.000000] WARNING: at kernel/lockdep.c:2282 lockdep_trace_alloc+0x91/0xb9()
        [    0.000000] Hardware name:
        [    0.000000] Modules linked in:
        [    0.000000] Pid: 0, comm: swapper Not tainted 2.6.30 #491
        [    0.000000] Call Trace:
        [    0.000000]  [<ffffffff81087d84>] ? lockdep_trace_alloc+0x91/0xb9
        [    0.000000]  [<ffffffff81061764>] warn_slowpath_common+0x7c/0xa9
        [    0.000000]  [<ffffffff810617a5>] warn_slowpath_null+0x14/0x16
        [    0.000000]  [<ffffffff81087d84>] lockdep_trace_alloc+0x91/0xb9
        [    0.000000]  [<ffffffff810f5b03>] kmem_cache_alloc_node_notrace+0x26/0xdf
        [    0.000000]  [<ffffffff81487f4e>] ? setup_cpu_cache+0x7e/0x210
        [    0.000000]  [<ffffffff81487fe3>] setup_cpu_cache+0x113/0x210
        [    0.000000]  [<ffffffff810f73ff>] kmem_cache_create+0x409/0x486
        [    0.000000]  [<ffffffff818131c1>] kmem_cache_init+0x232/0x593
        [    0.000000]  [<ffffffff8180987c>] ? mem_init+0x156/0x161
        [    0.000000]  [<ffffffff817f8aae>] start_kernel+0x1cc/0x3b9
        [    0.000000]  [<ffffffff817f829a>] x86_64_start_reservations+0xaa/0xae
        [    0.000000]  [<ffffffff817f837f>] x86_64_start_kernel+0xe1/0xe8
        [    0.000000] ---[ end trace 4eaa2a86a8e2da22 ]---
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    • slab: setup allocators earlier in the boot sequence · 83b519e8
      Authored by Pekka Enberg
      This patch makes kmalloc() available earlier in the boot sequence so we
      can get rid of some bootmem allocations. The bulk of the changes are
      due to kmem_cache_init() being called with interrupts disabled, which
      requires some changes to the allocator bootstrap code.

      Note: 32-bit x86 does a WP-protect test in mem_init(), so we must set
      up traps before we call mem_init() during boot, as reported by Ingo
      Molnar:
      
        We have a hard crash in the WP-protect code:
      
        [    0.000000] Checking if this processor honours the WP bit even in supervisor mode...BUG: Int 14: CR2 ffcff000
        [    0.000000]      EDI 00000188  ESI 00000ac7  EBP c17eaf9c  ESP c17eaf8c
        [    0.000000]      EBX 000014e0  EDX 0000000e  ECX 01856067  EAX 00000001
        [    0.000000]      err 00000003  EIP c10135b1   CS 00000060  flg 00010002
        [    0.000000] Stack: c17eafa8 c17fd410 c16747bc c17eafc4 c17fd7e5 000011fd f8616000 c18237cc
        [    0.000000]        00099800 c17bb000 c17eafec c17f1668 000001c5 c17f1322 c166e039 c1822bf0
        [    0.000000]        c166e033 c153a014 c18237cc 00020800 c17eaff8 c17f106a 00020800 01ba5003
        [    0.000000] Pid: 0, comm: swapper Not tainted 2.6.30-tip-02161-g7a74539-dirty #52203
        [    0.000000] Call Trace:
        [    0.000000]  [<c15357c2>] ? printk+0x14/0x16
        [    0.000000]  [<c10135b1>] ? do_test_wp_bit+0x19/0x23
        [    0.000000]  [<c17fd410>] ? test_wp_bit+0x26/0x64
        [    0.000000]  [<c17fd7e5>] ? mem_init+0x1ba/0x1d8
        [    0.000000]  [<c17f1668>] ? start_kernel+0x164/0x2f7
        [    0.000000]  [<c17f1322>] ? unknown_bootoption+0x0/0x19c
        [    0.000000]  [<c17f106a>] ? __init_begin+0x6a/0x6f
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    • kmemleak: Add the slab memory allocation/freeing hooks · d5cff635
      Authored by Catalin Marinas
      This patch adds the callbacks to kmemleak_(alloc|free) functions from
      the slab allocator. The patch also adds the SLAB_NOLEAKTRACE flag to
      avoid recursive calls to kmemleak when it allocates its own data
      structures.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
  26. 22 May 2009 (1 commit)
    • slab: fix generic PAGE_POISONING conflict with SLAB_RED_ZONE · 67461365
      Authored by Ron Lee
      A generic page poisoning mechanism was added with commit 6a11f75b,
      which destructively poisons full pages with a bit pattern.
      
      On arches where PAGE_POISONING is used, this conflicts with the slab
      redzone checking enabled by DEBUG_SLAB, scribbling bits all over its
      magic words and making it complain about that quite emphatically.
      
      On x86 (and I presume at present all the other arches which set
      ARCH_SUPPORTS_DEBUG_PAGEALLOC too), the kernel_map_pages() operation
      is non destructive so it can coexist with the other DEBUG_SLAB
      mechanisms just fine.
      
      This patch favours the expensive full-page destruction test for cases
      where there is a collision and it is explicitly selected.
      Signed-off-by: Ron Lee <ron@debian.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  27. 12 Apr 2009 (1 commit)
  28. 03 Apr 2009 (2 commits)