1. 23 Mar 2011 (1 commit)
  2. 03 Feb 2011 (6 commits)
  3. 26 Jan 2011 (4 commits)
  4. 21 Jan 2011 (5 commits)
  5. 14 Jan 2011 (11 commits)
  6. 31 Dec 2010 (1 commit)
  7. 25 Nov 2010 (3 commits)
    • cgroups: make swap accounting default behavior configurable · a42c390c
      Michal Hocko authored
      Swap accounting can be enabled by the CONFIG_CGROUP_MEM_RES_CTLR_SWAP
      configuration option, and the feature is then turned on by default.
      There is a boot option (noswapaccount) which can disable it.

      This makes it hard for distributors to enable the configuration option,
      as the feature increases memory consumption, which is a no-go for a
      general-purpose distribution kernel.  On the other hand, swap accounting
      may be very useful for some workloads.

      This patch adds a new configuration option which controls the default
      behavior (CGROUP_MEM_RES_CTLR_SWAP_ENABLED).  If the option is selected,
      the feature is turned on by default.

      It also adds a new boot parameter, swapaccount[=1|0], which extends the
      original noswapaccount parameter's semantics with enable/disable logic
      (it defaults to 1 when no value is provided, staying consistent with
      noswapaccount).  A sketch of such a parameter hook follows this entry.

      The default behavior is unchanged (if CONFIG_CGROUP_MEM_RES_CTLR_SWAP is
      enabled, then CONFIG_CGROUP_MEM_RES_CTLR_SWAP_ENABLED is enabled as
      well).
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a42c390c
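      A minimal sketch of how such a boot parameter can be wired up with the
      kernel's __setup() hook; the flag name and exact parsing here are
      illustrative, not necessarily the patch's verbatim code.

        /* Sketch: "swapaccount" alone or "swapaccount=1" enables the
         * feature; "swapaccount=0" disables it.  The bare form keeps the
         * semantics consistent with the old noswapaccount parameter. */
        static int really_do_swap_account __initdata = 1;

        static int __init enable_swap_account(char *s)
        {
                /* __setup() passes the remainder, '=' included */
                if (!*s || !strcmp(s, "=1"))
                        really_do_swap_account = 1;
                else if (!strcmp(s, "=0"))
                        really_do_swap_account = 0;
                return 1;
        }
        __setup("swapaccount", enable_swap_account);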
    • memcg: avoid deadlock between move charge and try_charge() · b1dd693e
      Daisuke Nishimura authored
      __mem_cgroup_try_charge() can be called under down_write(&mmap_sem)
      (e.g. mlock does it).  This means it can deadlock if it races with move
      charge:
      
      Ex.1)
                      move charge             |        try charge
        --------------------------------------+------------------------------
          mem_cgroup_can_attach()             |  down_write(&mmap_sem)
            mc.moving_task = current          |    ..
            mem_cgroup_precharge_mc()         |  __mem_cgroup_try_charge()
              mem_cgroup_count_precharge()    |    prepare_to_wait()
                down_read(&mmap_sem)          |    if (mc.moving_task)
                -> cannot acquire the lock    |    -> true
                                              |      schedule()
      
      Ex.2)
                      move charge             |        try charge
        --------------------------------------+------------------------------
          mem_cgroup_can_attach()             |
            mc.moving_task = current          |
            mem_cgroup_precharge_mc()         |
              mem_cgroup_count_precharge()    |
                down_read(&mmap_sem)          |
                ..                            |
                up_read(&mmap_sem)            |
                                              |  down_write(&mmap_sem)
          mem_cgroup_move_task()              |    ..
            mem_cgroup_move_charge()          |  __mem_cgroup_try_charge()
              down_read(&mmap_sem)            |    prepare_to_wait()
              -> cannot acquire the lock      |    if (mc.moving_task)
                                              |    -> true
                                              |      schedule()
      
      To avoid this deadlock, we do all the move charge work (both
      can_attach() and attach()) under one mmap_sem section; a sketch of the
      pattern follows this entry.
      And after this patch, we set/clear mc.moving_task outside mc.lock,
      because we use that lock only to check mc.from/to.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b1dd693e
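      A hedged sketch of the single-section pattern described above; the
      helpers precharge_scan() and move_pages() are hypothetical stand-ins
      for the can_attach()/attach() work, not the kernel's actual code.

        /* Sketch: because mc.moving_task is only set while we hold
         * mmap_sem for read, a charger that holds it for write can
         * never be sleeping on mc.moving_task while we wait for the
         * lock, so the orderings in Ex.1/Ex.2 cannot occur. */
        static void move_charge_one_section(struct mm_struct *mm)
        {
                down_read(&mm->mmap_sem);
                mc.moving_task = current;   /* chargers now back off */
                precharge_scan(mm);         /* hypothetical: can_attach() work */
                move_pages(mm);             /* hypothetical: attach() work */
                mc.moving_task = NULL;
                up_read(&mm->mmap_sem);
        }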
    • memcg: fix false positive VM_BUG on non-SMP · 112bc2e1
      Kirill A. Shutemov authored
      Fix this:
      
        kernel BUG at mm/memcontrol.c:2155!
        invalid opcode: 0000 [#1]
        last sysfs file:
      
        Pid: 18, comm: sh Not tainted 2.6.37-rc3 #3 /Bochs
        EIP: 0060:[<c10731b2>] EFLAGS: 00000246 CPU: 0
        EIP is at mem_cgroup_move_account+0xe2/0xf0
        EAX: 00000004 EBX: c6f931d4 ECX: c681c300 EDX: c681c000
        ESI: c681c300 EDI: ffffffea EBP: c681c000 ESP: c46f3e30
         DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
        Process sh (pid: 18, ti=c46f2000 task=c6826e60 task.ti=c46f2000)
        Stack:
         00000155 c681c000 0805f000 c46ee180 c46f3e5c c7058820 c1074d37 00000000
         08060000 c46db9a0 c46ec080 c7058820 0805f000 08060000 c46f3e98 c1074c50
         c106c75e c46f3e98 c46ec080 08060000 0805ffff c46db9a0 c46f3e98 c46e0340
        Call Trace:
         [<c1074d37>] ? mem_cgroup_move_charge_pte_range+0xe7/0x130
         [<c1074c50>] ? mem_cgroup_move_charge_pte_range+0x0/0x130
         [<c106c75e>] ? walk_page_range+0xee/0x1d0
         [<c10725d6>] ? mem_cgroup_move_task+0x66/0x90
         [<c1074c50>] ? mem_cgroup_move_charge_pte_range+0x0/0x130
         [<c1072570>] ? mem_cgroup_move_task+0x0/0x90
         [<c1042616>] ? cgroup_attach_task+0x136/0x200
         [<c1042878>] ? cgroup_tasks_write+0x48/0xc0
         [<c1041e9e>] ? cgroup_file_write+0xde/0x220
         [<c101398d>] ? do_page_fault+0x17d/0x3f0
         [<c108a79d>] ? alloc_fd+0x2d/0xd0
         [<c1041dc0>] ? cgroup_file_write+0x0/0x220
         [<c1077ba2>] ? vfs_write+0x92/0xc0
         [<c1077c81>] ? sys_write+0x41/0x70
         [<c1140e3d>] ? syscall_call+0x7/0xb
        Code: 03 00 74 09 8b 44 24 04 e8 1c f1 ff ff 89 73 04 8d 86 b0 00 00 00 b9 01 00 00 00 89 da 31 ff e8 65 f5 ff ff e9 4d ff ff ff 0f 0b <0f> 0b 0f 0b 0f 0b 90 8d b4 26 00 00 00 00 83 ec 10 8b 0d f4 e3
        EIP: [<c10731b2>] mem_cgroup_move_account+0xe2/0xf0 SS:ESP 0068:c46f3e30
        ---[ end trace 7daa1582159b6532 ]---
      
      lock_page_cgroup and unlock_page_cgroup are implemented using
      bit_spinlock.  bit_spinlock doesn't touch the bit on a non-SMP machine,
      so we can't use the bit to check whether the lock was taken.

      Let's introduce is_page_cgroup_locked, based on bit_spin_is_locked
      instead of PageCgroupLocked, to fix it (a sketch follows this entry).
      
      [akpm@linux-foundation.org: s/is_page_cgroup_locked/page_is_cgroup_locked/]
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      112bc2e1
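      A hedged sketch of the helper described above (using the final name,
      page_is_cgroup_locked, from the rename note); PCG_LOCK as the bit index
      is assumed from the surrounding page_cgroup flag definitions.

        /* Sketch: ask the bit_spinlock itself whether it is held, which
         * works on !SMP too, instead of testing the flag bit directly. */
        static inline int page_is_cgroup_locked(struct page_cgroup *pc)
        {
                return bit_spin_is_locked(PCG_LOCK, &pc->flags);
        }

        /* so mem_cgroup_move_account() can assert, e.g.:
         *     VM_BUG_ON(!page_is_cgroup_locked(pc));
         * instead of VM_BUG_ON(!PageCgroupLocked(pc)) */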
  8. 12 Nov 2010 (1 commit)
  9. 28 Oct 2010 (6 commits)
    • memcg: generic filestat update interface · 26174efd
      KAMEZAWA Hiroyuki authored
      This patch extracts the core logic from mem_cgroup_update_file_mapped()
      as mem_cgroup_update_file_stat() and adds a wrapper (a sketch follows
      this entry).

      As a planned future update, the memory cgroup has to count dirty pages
      to implement dirty_ratio/limit.  Moreover, the number of dirty pages is
      required to kick the flusher thread to start writeback.  (For now,
      there is no kick.)

      This patch is preparation for that and makes the other statistics
      implementations clearer.  Just a clean-up.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Reviewed-by: Greg Thelen <gthelen@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      26174efd
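      A hedged sketch of the extract-and-wrap shape described above; the stat
      index name and the enum type are assumptions based on memcg's existing
      statistics code.

        /* Sketch: one generic updater carries the shared logic; each
         * concrete stat (file_mapped now, dirty pages later) becomes a
         * thin wrapper around it. */
        static void mem_cgroup_update_file_stat(struct page *page,
                                                enum mem_cgroup_stat_index idx,
                                                int val)
        {
                /* core: find pc->mem_cgroup, adjust the percpu counter,
                 * set/clear the matching flag (locking elided here) */
        }

        void mem_cgroup_update_file_mapped(struct page *page, int val)
        {
                mem_cgroup_update_file_stat(page,
                                MEM_CGROUP_STAT_FILE_MAPPED, val);
        }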
    • memcg: cpu hotplug aware quick account_move detection · 1489ebad
      KAMEZAWA Hiroyuki authored
      An event counter, MEM_CGROUP_ON_MOVE, is used as a quick check of
      whether a file stat update can be done asynchronously or not.  Now, it
      uses a percpu counter and for_each_possible_cpu for updates.

      This patch replaces for_each_possible_cpu with for_each_online_cpu and
      adds the necessary synchronization logic at CPU hotplug (a sketch
      follows this entry).
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1489ebad
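      A hedged sketch of the hotplug side described above; the notifier shape
      is standard for kernels of that era, but the drain helper is a
      hypothetical name for folding a dead CPU's MEM_CGROUP_ON_MOVE count
      into a stable base.

        /* Sketch: once a CPU is gone, its percpu ON_MOVE contribution
         * must be moved somewhere a for_each_online_cpu() sum still
         * sees, or a quick check could miss an in-flight move. */
        static int __cpuinit memcg_cpu_callback(struct notifier_block *nb,
                                                unsigned long action,
                                                void *hcpu)
        {
                int cpu = (long)hcpu;

                if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
                        drain_on_move_counter(cpu);  /* hypothetical helper */
                return NOTIFY_OK;
        }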
    • memcg: cpu hotplug aware percpu count updates · 711d3d2c
      KAMEZAWA Hiroyuki authored
      Now, the memory cgroup's percpu counter uses for_each_possible_cpu() to
      get the value.  It's better to use for_each_online_cpu() and a CPU
      hotplug handler (a sketch follows this entry).

      This patch only handles the statistics counters.  MEM_CGROUP_ON_MOVE
      will be handled in another patch.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      711d3d2c
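      A hedged sketch of the read side described above; nocpu_base, holding
      totals drained from offlined CPUs by the hotplug handler, is an
      assumption for illustration.

        /* Sketch: sum over online CPUs only; counts from CPUs that went
         * offline are assumed to have been folded into nocpu_base. */
        static s64 mem_cgroup_read_stat(struct mem_cgroup *mem,
                                        enum mem_cgroup_stat_index idx)
        {
                s64 val = 0;
                int cpu;

                get_online_cpus();      /* keep the online mask stable */
                for_each_online_cpu(cpu)
                        val += per_cpu(mem->stat->count[idx], cpu);
                val += mem->nocpu_base.count[idx];  /* assumed drained totals */
                put_online_cpus();
                return val;
        }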
    • memcg: use for_each_mem_cgroup · 7d74b06f
      KAMEZAWA Hiroyuki authored
      In memory cgroup management, we sometimes have to walk through a
      subhierarchy of cgroups to gather information, lock something, etc.

      For that, the mem_cgroup_walk_tree() function is provided.  It calls a
      given callback function per cgroup found.  The bad part is that it has
      to take a fixed-style function and a "void *" argument, which adds a
      lot of type casting to memcontrol.c.

      To make the code cleaner, this patch replaces walk_tree() with an
      iterator-style call:

        for_each_mem_cgroup_tree(iter, root)

      The good point is that the iterator call doesn't have to assume what
      kind of function runs under it.  A bad point is that it may leak a
      reference count if a caller uses "break" to leave the loop by mistake
      (see the sketch after this entry).

      I think the benefit is larger.  The modified code seems straightforward
      and easy to read, because we no longer have mysterious callbacks and
      pointer casts.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7d74b06f
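      A hedged usage sketch of the iterator style described above; the
      per-group helper is hypothetical, standing in for what used to be a
      callback body.

        /* Sketch: the iterator replaces the callback-plus-void* walk.
         * Each iteration pins iter with a css reference, so a bare
         * "break" out of the loop would leak that reference. */
        static s64 sum_subtree_usage(struct mem_cgroup *root)
        {
                struct mem_cgroup *iter;
                s64 total = 0;

                for_each_mem_cgroup_tree(iter, root)
                        total += mem_cgroup_local_usage(iter);  /* hypothetical */
                return total;
        }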
    • memcg: avoid lock in updating file_mapped (was: fix race in file_mapped accounting flag management) · 32047e2a
      KAMEZAWA Hiroyuki authored
      When accounting file events per memory cgroup, we need to find the
      memory cgroup via page_cgroup->mem_cgroup.  Now, we use
      lock_page_cgroup() to guarantee that pc->mem_cgroup is not overwritten
      while we make use of it.

      But considering the contexts in which page-cgroups for files are
      accessed, we can use an alternative lightweight mutual exclusion in
      most cases.

      When handling file caches, the only race we have to take care of is
      "moving" an account, IOW, overwriting page_cgroup->mem_cgroup.  (See
      the comment in the patch.)

      Unlike charge/uncharge, "move" happens infrequently: only at rmdir()
      and task moving (with special settings).

      This patch adds a race checker for file-cache-status accounting vs.
      account moving, via a new per-cpu-per-memcg counter,
      MEM_CGROUP_ON_MOVE.  The routine for account move:
        1. Increment it before starting the move.
        2. Call synchronize_rcu().
        3. Decrement it after the move ends.
      With this, the file-status-counting routine can check whether it needs
      to call lock_page_cgroup(); in most cases it doesn't (a sketch follows
      this entry).

      The following is perf data from a process that mmap()s/munmap()s 32MB
      of file cache in a minute.
      
      Before patch:
          28.25%     mmap  mmap               [.] main
          22.64%     mmap  [kernel.kallsyms]  [k] page_fault
           9.96%     mmap  [kernel.kallsyms]  [k] mem_cgroup_update_file_mapped
           3.67%     mmap  [kernel.kallsyms]  [k] filemap_fault
           3.50%     mmap  [kernel.kallsyms]  [k] unmap_vmas
           2.99%     mmap  [kernel.kallsyms]  [k] __do_fault
           2.76%     mmap  [kernel.kallsyms]  [k] find_get_page
      
      After patch:
          30.00%     mmap  mmap               [.] main
          23.78%     mmap  [kernel.kallsyms]  [k] page_fault
           5.52%     mmap  [kernel.kallsyms]  [k] mem_cgroup_update_file_mapped
           3.81%     mmap  [kernel.kallsyms]  [k] unmap_vmas
           3.26%     mmap  [kernel.kallsyms]  [k] find_get_page
           3.18%     mmap  [kernel.kallsyms]  [k] __do_fault
           3.03%     mmap  [kernel.kallsyms]  [k] filemap_fault
           2.40%     mmap  [kernel.kallsyms]  [k] handle_mm_fault
           2.40%     mmap  [kernel.kallsyms]  [k] do_page_fault
      
      This patch reduces memcg's cost to some extent.
      (mem_cgroup_update_file_mapped is called by both map and unmap.)

      Note: It seems some more improvements are required, but it's not yet
            clear what; maybe removing the set/unset flag is needed.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32047e2a
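      A hedged sketch of the reader-side check described above;
      mem_cgroup_on_move() is a hypothetical name for testing the percpu
      MEM_CGROUP_ON_MOVE counter, and the counter/flag update is elided.

        /* Sketch: take lock_page_cgroup() only while a move may be in
         * flight.  The mover raises ON_MOVE on every CPU, calls
         * synchronize_rcu() so all readers see it, moves, then lowers it. */
        void mem_cgroup_update_file_mapped(struct page *page, int val)
        {
                struct page_cgroup *pc = lookup_page_cgroup(page);
                bool need_lock = false;

                rcu_read_lock();
                if (unlikely(mem_cgroup_on_move(pc->mem_cgroup))) {
                        need_lock = true;       /* slow, safe path */
                        lock_page_cgroup(pc);
                }
                /* ... update the counter and the FILE_MAPPED flag ... */
                if (need_lock)
                        unlock_page_cgroup(pc);
                rcu_read_unlock();
        }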
    • memcg: fix race in file_mapped accounting flag management · 0c270f8f
      KAMEZAWA Hiroyuki authored
      Presently, the memory cgroup accounts file-mapped pages with a counter
      and a flag.  The counter works the same way as zone_stat, but the
      FileMapped flag only exists in memcg (to help move_account).

      This flag can be updated wrongly in one case.  Assume CPU0 and CPU1,
      with a thread mapping a page on CPU0 and another thread unmapping it on
      CPU1:
      
          CPU0                   		CPU1
      				rmv rmap (mapcount 1->0)
         add rmap (mapcount 0->1)
         lock_page_cgroup()
         memcg counter+1		(some delay)
         set MAPPED FLAG.
         unlock_page_cgroup()
      				lock_page_cgroup()
      				memcg counter-1
      				clear MAPPED flag
      
      In the above sequence the counter is properly updated, but the FLAG is
      not.  This means that representing a state by a flag that is maintained
      alongside a counter needs some special care.

      To handle this, when clearing the flag, this patch checks mapcount
      directly and clears the flag only when mapcount == 0.  (If mapcount > 0,
      someone will bring it to zero later, and the flag will be cleared then.)
      A sketch follows this entry.

      The reverse case, dec-after-inc, cannot be a problem because
      page_table_lock works well for it.  (IOW, to produce the above sequence,
      two processes have to touch the same page at once with map/unmap.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0c270f8f
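      A hedged sketch of the clear-only-when-unmapped rule described above;
      the flag helpers follow the PageCgroup naming convention and the
      counter update is elided.

        /* Sketch: clear FileMapped only when no mapping remains.  A racing
         * mapper simply sets the flag again; with mapcount still > 0, the
         * last unmapper is the one that clears it. */
        void mem_cgroup_update_file_mapped(struct page *page, int val)
        {
                struct page_cgroup *pc = lookup_page_cgroup(page);

                lock_page_cgroup(pc);
                /* ... adjust the file_mapped counter by val ... */
                if (val > 0)
                        SetPageCgroupFileMapped(pc);
                else if (!page_mapped(page))
                        ClearPageCgroupFileMapped(pc);
                unlock_page_cgroup(pc);
        }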
  10. 08 Oct 2010 (1 commit)
  11. 11 Aug 2010 (1 commit)