1. 01 Dec 2009 (1 commit)
    • SLAB: Fix lockdep annotations for CPU hotplug · ce79ddc8
      Authored by Pekka Enberg
      As reported by Paul McKenney:
      
        I am seeing some lockdep complaints in rcutorture runs that include
        frequent CPU-hotplug operations.  The tests are otherwise successful.
        My first thought was to send a patch that gave each array_cache
        structure's ->lock field its own struct lock_class_key, but you already
        have a init_lock_keys() that seems to be intended to deal with this.
      
        ------------------------------------------------------------------------
      
        =============================================
        [ INFO: possible recursive locking detected ]
        2.6.32-rc4-autokern1 #1
        ---------------------------------------------
        syslogd/2908 is trying to acquire lock:
         (&nc->lock){..-...}, at: [<c0000000001407f4>] .kmem_cache_free+0x118/0x2d4
      
        but task is already holding lock:
         (&nc->lock){..-...}, at: [<c0000000001411bc>] .kfree+0x1f0/0x324
      
        other info that might help us debug this:
        3 locks held by syslogd/2908:
         #0:  (&u->readlock){+.+.+.}, at: [<c0000000004556f8>] .unix_dgram_recvmsg+0x70/0x338
         #1:  (&nc->lock){..-...}, at: [<c0000000001411bc>] .kfree+0x1f0/0x324
         #2:  (&parent->list_lock){-.-...}, at: [<c000000000140f64>] .__drain_alien_cache+0x50/0xb8
      
        stack backtrace:
        Call Trace:
        [c0000000e8ccafc0] [c0000000000101e4] .show_stack+0x70/0x184 (unreliable)
        [c0000000e8ccb070] [c0000000000afebc] .validate_chain+0x6ec/0xf58
        [c0000000e8ccb180] [c0000000000b0ff0] .__lock_acquire+0x8c8/0x974
        [c0000000e8ccb280] [c0000000000b2290] .lock_acquire+0x140/0x18c
        [c0000000e8ccb350] [c000000000468df0] ._spin_lock+0x48/0x70
        [c0000000e8ccb3e0] [c0000000001407f4] .kmem_cache_free+0x118/0x2d4
        [c0000000e8ccb4a0] [c000000000140b90] .free_block+0x130/0x1a8
        [c0000000e8ccb540] [c000000000140f94] .__drain_alien_cache+0x80/0xb8
        [c0000000e8ccb5e0] [c0000000001411e0] .kfree+0x214/0x324
        [c0000000e8ccb6a0] [c0000000003ca860] .skb_release_data+0xe8/0x104
        [c0000000e8ccb730] [c0000000003ca2ec] .__kfree_skb+0x20/0xd4
        [c0000000e8ccb7b0] [c0000000003cf2c8] .skb_free_datagram+0x1c/0x5c
        [c0000000e8ccb830] [c00000000045597c] .unix_dgram_recvmsg+0x2f4/0x338
        [c0000000e8ccb920] [c0000000003c0f14] .sock_recvmsg+0xf4/0x13c
        [c0000000e8ccbb30] [c0000000003c28ec] .SyS_recvfrom+0xb4/0x130
        [c0000000e8ccbcb0] [c0000000003bfb78] .sys_recv+0x18/0x2c
        [c0000000e8ccbd20] [c0000000003ed388] .compat_sys_recv+0x14/0x28
        [c0000000e8ccbd90] [c0000000003ee1bc] .compat_sys_socketcall+0x178/0x220
        [c0000000e8ccbe30] [c0000000000085d4] syscall_exit+0x0/0x40
      
      This patch fixes the issue by setting up lockdep annotations during CPU
      hotplug.
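      For context, here is a minimal kernel-style sketch of the general idea: give the alien array_cache locks their own lockdep class when a CPU is brought up, so that nesting them under the per-node list lock is no longer reported as recursive locking. The names example_array_cache, example_setup_alien_cache and example_alien_key are hypothetical and this is not the actual mm/slab.c change; only struct lock_class_key, lockdep_set_class() and spin_lock_init() are real kernel APIs.

        #include <linux/lockdep.h>
        #include <linux/spinlock.h>

        /* A distinct lockdep class for the "alien" caches (hypothetical name). */
        static struct lock_class_key example_alien_key;

        struct example_array_cache {
                spinlock_t lock;
                /* ... cached objects ... */
        };

        /* Would run from the CPU-hotplug (CPU_UP_PREPARE) path for each new cache. */
        static void example_setup_alien_cache(struct example_array_cache *ac)
        {
                spin_lock_init(&ac->lock);
                /*
                 * Reclassify the lock so lockdep no longer treats it as the same
                 * class as the lock already held in __drain_alien_cache().
                 */
                lockdep_set_class(&ac->lock, &example_alien_key);
        }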
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
  2. 18 Nov 2009 (2 commits)
  3. 12 Nov 2009 (4 commits)
  4. 10 Nov 2009 (3 commits)
  5. 04 Nov 2009 (1 commit)
  6. 03 Nov 2009 (1 commit)
  7. 01 Nov 2009 (1 commit)
  8. 29 Oct 2009 (10 commits)
  9. 28 Oct 2009 (1 commit)
    • percpu: allow pcpu_alloc() to be called with IRQs off · 403a91b1
      Authored by Jiri Kosina
      pcpu_alloc() and pcpu_extend_area_map() perform a series of
      spin_lock_irq()/spin_unlock_irq() calls, which makes them unsafe to
      call from contexts where IRQs are already off.
      
      This patch converts the code to save and restore IRQ flags instead,
      allowing pcpu_alloc() (and thus __alloc_percpu()) to be called from
      the early kernel startup stage, where IRQs are off.
      
      This is needed for proper initialization of per-cpu rq_weight data from
      sched_init().
      
      tj: added comment explaining why irqsave/restore is used in alloc path.
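      As an illustration only (not the actual mm/percpu.c diff), the conversion pattern looks roughly like the following. example_lock and example_reserve are hypothetical names; spin_lock_irqsave()/spin_unlock_irqrestore() are the real kernel primitives that save and restore the caller's IRQ state instead of unconditionally re-enabling IRQs on unlock.

        #include <linux/spinlock.h>
        #include <linux/types.h>

        static DEFINE_SPINLOCK(example_lock);

        static void *example_reserve(size_t size)
        {
                unsigned long flags;
                void *p = NULL;

                /* was: spin_lock_irq(&example_lock); */
                spin_lock_irqsave(&example_lock, flags);
                /* ... area-map extension / allocation work would happen here ... */
                /* was: spin_unlock_irq(&example_lock); */
                spin_unlock_irqrestore(&example_lock, flags);

                return p;
        }

      Because irqsave/irqrestore preserves the pre-existing IRQ-disabled state, the same function stays correct whether it is called with IRQs on or, as in the sched_init() case, with IRQs off.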
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  10. 27 Oct 2009 (1 commit)
  11. 19 Oct 2009 (4 commits)
  12. 12 Oct 2009 (2 commits)
  13. 10 Oct 2009 (2 commits)
  14. 09 Oct 2009 (2 commits)
  15. 08 Oct 2009 (3 commits)
  16. 02 Oct 2009 (2 commits)
    • memcg: reduce check for softlimit excess · ef8745c1
      Authored by KAMEZAWA Hiroyuki
      In the charge/uncharge/reclaim paths, usage_in_excess is calculated
      repeatedly, and each calculation takes the res_counter's spin_lock.
      
      This patch removes the unnecessary calls to res_counter_soft_limit_excess().
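      A hedged sketch of the underlying idea, using hypothetical names rather than the memcg code: compute the soft-limit excess once under the counter lock, then reuse the cached value for later checks instead of re-taking the lock each time.

        #include <linux/spinlock.h>

        struct example_counter {
                spinlock_t lock;
                unsigned long long usage;
                unsigned long long soft_limit;
        };

        /* Take the lock once to compute how far usage exceeds the soft limit. */
        static unsigned long long example_soft_limit_excess(struct example_counter *c)
        {
                unsigned long long excess;
                unsigned long flags;

                spin_lock_irqsave(&c->lock, flags);
                excess = c->usage > c->soft_limit ? c->usage - c->soft_limit : 0;
                spin_unlock_irqrestore(&c->lock, flags);
                return excess;
        }

        /* Callers cache the result and reuse it rather than recomputing under lock. */
        static void example_update_tree(struct example_counter *c)
        {
                unsigned long long excess = example_soft_limit_excess(c);

                if (excess) {
                        /* reinsert into the soft-limit tree using the cached value */
                }
        }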
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: some modification to softlimit under hierarchical memory reclaim. · 4e649152
      Authored by KAMEZAWA Hiroyuki
      This patch contains cleanups and fixes for memcg's uncharge soft-limit path.
      
        Problems:
          Currently, res_counter_charge()/uncharge() handle softlimit information
          at charge/uncharge time, and the softlimit check is done when the
          per-memcg event counter goes over its limit. That event counter is
          updated only when memory usage is over the soft limit. Considering
          hierarchical memcg management, ancestors should be taken care of too.
      
          Currently, ancestors (the hierarchy) are handled in charge() but not in
          uncharge(). This is not good.
      
          Specifically:
          1. memcg's event counter is incremented only when the softlimit is hit.
             That's bad: it makes the event counter hard to reuse for other purposes.
      
          2. At uncharge, only the lowest-level res_counter is handled. This is a
             bug: because the ancestors' event counters are not incremented, the
             children would have to take care of them.
      
          3. res_counter_uncharge()'s 3rd argument is NULL in most cases.
             Operations under res_counter->lock should be small, and avoiding the
             "if" statement there is better.
      
        Fixes:
          * Remove the soft_limit_xx pointer and checks from charge and uncharge.
            The check-only-when-necessary scheme works well enough without them.
      
          * Increment memcg's event counter at every charge/uncharge.
            (The per-cpu area will be accessed soon anyway.)
      
          * Check all ancestors at soft-limit-check time. This is necessary
            because an ancestor's own event counter may never be modified, so
            ancestors should be checked at the same time; see the sketch after
            this list.
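      The following is a hedged sketch of the scheme described above, using hypothetical types and names rather than the real memcg structures: the event counter is bumped on every charge and uncharge, and once it crosses a threshold the group and all of its ancestors are checked, since an ancestor's own counter may never fire on its own.

        struct example_memcg {
                struct example_memcg *parent;
                unsigned long events;              /* per-memcg event counter */
        };

        #define EXAMPLE_SOFTLIMIT_EVENT_THRESH 1024

        static void example_check_soft_limit(struct example_memcg *memcg)
        {
                /* recompute the excess and update the soft-limit tree for this group */
        }

        /* Called from both the charge and the uncharge paths. */
        static void example_count_event(struct example_memcg *memcg)
        {
                if (++memcg->events < EXAMPLE_SOFTLIMIT_EVENT_THRESH)
                        return;
                memcg->events = 0;

                /* Check this group and every ancestor in one pass. */
                for (; memcg; memcg = memcg->parent)
                        example_check_soft_limit(memcg);
        }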
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>