1. 28 Oct 2010, 6 commits
    • memcg: generic filestat update interface · 26174efd
      Authored by KAMEZAWA Hiroyuki
      This patch extracts the core logic from mem_cgroup_update_file_mapped() into
      mem_cgroup_update_file_stat() and adds a wrapper.

      As a planned future update, the memory cgroup has to count dirty pages in
      order to implement dirty_ratio/limit.  Moreover, the number of dirty pages is
      needed to kick the flusher thread to start writeback.  (Currently, no kick is done.)

      This patch is preparation for that and makes the implementation of other
      statistics clearer.  Just a cleanup.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Reviewed-by: Greg Thelen <gthelen@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      26174efd
    • memcg: cpu hotplug aware quick account_move detection · 1489ebad
      Authored by KAMEZAWA Hiroyuki
      The event counter MEM_CGROUP_ON_MOVE is used to quickly check whether a file
      stat update can be done asynchronously or not.  Currently, it uses a percpu
      counter and for_each_possible_cpu() for the update.

      This patch replaces for_each_possible_cpu() with for_each_online_cpu() and adds
      the necessary synchronization logic at CPU hotplug (see the sketch after this commit list).
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1489ebad
    • memcg: cpu hotplug aware percpu count updates · 711d3d2c
      Authored by KAMEZAWA Hiroyuki
      Currently, memcg's per-cpu counter uses for_each_possible_cpu() to gather the
      value.  It is better to use for_each_online_cpu() and a CPU hotplug
      handler.

      This patch only handles the statistics counters.  MEM_CGROUP_ON_MOVE will be
      handled in another patch.  (A sketch of the hotplug-aware per-cpu summing pattern appears after this commit list.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      711d3d2c
    • memcg: use for_each_mem_cgroup · 7d74b06f
      Authored by KAMEZAWA Hiroyuki
      In memory cgroup management, we sometimes have to walk through a
      subhierarchy of cgroups to gather information, lock something, etc.

      Currently, the mem_cgroup_walk_tree() function is provided for that.  It calls
      a given callback function for each cgroup found.  The bad thing is that the
      caller has to pass a fixed-style function and a "void *" argument, which adds a
      lot of type casting to memcontrol.c.

      To make the code clean, this patch replaces walk_tree() with

        for_each_mem_cgroup_tree(iter, root)

      an iterator-style call.  The good point is that the iterator call does not have
      to assume what kind of function is called under it.  A bad point is that
      it may cause a reference-count leak if a caller breaks out of the loop by
      mistake.

      I think the benefit is larger.  The modified code seems straightforward and
      easy to read because we don't have mysterious callbacks and pointer casts.
      (A sketch contrasting the two styles appears after this commit list.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7d74b06f
    • memcg: avoid lock in updating file_mapped (was: fix race in file_mapped accounting flag management) · 32047e2a
      Authored by KAMEZAWA Hiroyuki
      When accounting file events per memory cgroup, we need to find the memory cgroup
      via page_cgroup->mem_cgroup.  Currently, we use lock_page_cgroup() to guarantee
      that pc->mem_cgroup is not overwritten while we make use of it.

      But, considering the context in which the page_cgroups of file pages are accessed,
      we can use an alternative light-weight mutual exclusion in most cases.

      When handling file caches, the only race we have to take care of is "moving"
      an account, IOW, overwriting page_cgroup->mem_cgroup.  (See the comment in the
      patch.)

      Unlike charge/uncharge, a "move" does not happen very often.  It happens only
      on rmdir() and on task moving (with special settings).
      This patch adds a race checker for file-cache-status accounting vs. account
      moving.  The new per-cpu, per-memcg counter MEM_CGROUP_ON_MOVE is added.
      The routine for an account move:
        1. Increment it before starting the move.
        2. Call synchronize_rcu().
        3. Decrement it after the end of the move.
      With this, the file-status-counting routine can check whether it needs to call
      lock_page_cgroup().  In most cases it doesn't need to.  (A sketch of this check
      appears after this commit list.)
      
      The following is perf data from a process which mmap()s/munmap()s 32MB of file
      cache in a minute.
      
      Before patch:
          28.25%     mmap  mmap               [.] main
          22.64%     mmap  [kernel.kallsyms]  [k] page_fault
           9.96%     mmap  [kernel.kallsyms]  [k] mem_cgroup_update_file_mapped
           3.67%     mmap  [kernel.kallsyms]  [k] filemap_fault
           3.50%     mmap  [kernel.kallsyms]  [k] unmap_vmas
           2.99%     mmap  [kernel.kallsyms]  [k] __do_fault
           2.76%     mmap  [kernel.kallsyms]  [k] find_get_page
      
      After patch:
          30.00%     mmap  mmap               [.] main
          23.78%     mmap  [kernel.kallsyms]  [k] page_fault
           5.52%     mmap  [kernel.kallsyms]  [k] mem_cgroup_update_file_mapped
           3.81%     mmap  [kernel.kallsyms]  [k] unmap_vmas
           3.26%     mmap  [kernel.kallsyms]  [k] find_get_page
           3.18%     mmap  [kernel.kallsyms]  [k] __do_fault
           3.03%     mmap  [kernel.kallsyms]  [k] filemap_fault
           2.40%     mmap  [kernel.kallsyms]  [k] handle_mm_fault
           2.40%     mmap  [kernel.kallsyms]  [k] do_page_fault
      
      This patch reduces memcg's cost to some extent.
      (mem_cgroup_update_file_mapped() is called by both map and unmap.)

      Note: It seems some more improvement is required, but I have no concrete idea yet.
            Maybe removing the set/unset of the flag is required.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32047e2a
    • memcg: fix race in file_mapped accounting flag management · 0c270f8f
      Authored by KAMEZAWA Hiroyuki
      Presently, memory cgroup accounts file-mapped pages with a counter and a flag.
      The counter works in the same way as zone_stat, but the FileMapped flag only
      exists in memcg (to help move_account).

      This flag can be updated wrongly in one case.  Assume CPU0 and CPU1, with one
      thread mapping a page on CPU0 and another thread unmapping it on CPU1.
      
          CPU0                   		CPU1
      				rmv rmap (mapcount 1->0)
         add rmap (mapcount 0->1)
         lock_page_cgroup()
         memcg counter+1		(some delay)
         set MAPPED FLAG.
         unlock_page_cgroup()
      				lock_page_cgroup()
      				memcg counter-1
      				clear MAPPED flag
      
      In the above sequence the counter is properly updated but the FLAG is not.  This
      means that representing a state by a flag which is maintained by a counter
      needs some special care.

      To handle this, when clearing the flag, this patch checks the mapcount directly
      and clears the flag only when mapcount == 0.  (If mapcount > 0, someone will
      bring it to zero later and the flag will be cleared then.)  A tiny sketch of
      this check appears after this commit list.

      The reverse case, dec-after-inc, cannot be a problem because the page table lock
      works well for it.  (IOW, to produce the above sequence, two processes would have
      to touch the same page at once with map/unmap.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0c270f8f
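
  A minimal kernel-style sketch of the hotplug-aware per-CPU counting pattern described
  in the two CPU-hotplug commits above ("quick account_move detection" and "percpu count
  updates").  The struct layout, field names and helpers below are illustrative
  assumptions, not the actual memcontrol.c code; only the pattern is taken from the
  changelogs: sum online CPUs only, and fold a dying CPU's counts into a stable
  accumulator from a hotplug notifier.

      /* Illustrative sketch only; struct and field names are hypothetical. */
      struct memcg_stat_sketch {
              s64 count;                        /* one counter is enough here */
      };

      struct memcg_pcp_sketch {
              struct memcg_stat_sketch __percpu *stat;
              s64 nocpu_base;                   /* counts of CPUs gone offline */
              spinlock_t pcp_counter_lock;
      };

      static struct memcg_pcp_sketch *sketch_memcg;   /* assumed to exist */

      static s64 sketch_read_stat(struct memcg_pcp_sketch *memcg)
      {
              s64 val = 0;
              int cpu;

              /* sum online CPUs only; offline CPUs were already folded
               * into nocpu_base by the hotplug callback below */
              for_each_online_cpu(cpu)
                      val += per_cpu_ptr(memcg->stat, cpu)->count;

              spin_lock(&memcg->pcp_counter_lock);
              val += memcg->nocpu_base;
              spin_unlock(&memcg->pcp_counter_lock);
              return val;
      }

      /* registered with register_hotcpu_notifier() at init time (not shown) */
      static int sketch_cpu_callback(struct notifier_block *nb,
                                     unsigned long action, void *hcpu)
      {
              int cpu = (unsigned long)hcpu;
              struct memcg_stat_sketch *st;

              if (action != CPU_DEAD && action != CPU_DEAD_FROZEN)
                      return NOTIFY_OK;

              /* fold the dead CPU's counts into a stable accumulator so
               * future sums over online CPUs stay correct */
              st = per_cpu_ptr(sketch_memcg->stat, cpu);
              spin_lock(&sketch_memcg->pcp_counter_lock);
              sketch_memcg->nocpu_base += st->count;
              st->count = 0;
              spin_unlock(&sketch_memcg->pcp_counter_lock);
              return NOTIFY_OK;
      }

  Depending on how much skew is tolerable, the summing loop could additionally be
  bracketed with get_online_cpus()/put_online_cpus(); the sketch above accepts a
  small window instead.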
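
  To illustrate the API change in the "use for_each_mem_cgroup" commit above, here is a
  hedged sketch of the two styles.  some_stat(), the callback signature and the
  walk_tree() argument order are assumptions made for illustration; only the shape of
  the change is taken from the changelog.

      /* Illustrative sketch of the two styles. */
      static long some_stat(struct mem_cgroup *memcg);    /* hypothetical */

      /* Old style: a fixed-signature callback plus an opaque void * argument. */
      static int sum_cb(struct mem_cgroup *memcg, void *data)
      {
              long *total = data;          /* the cast hides behind void * */

              *total += some_stat(memcg);
              return 0;                    /* nonzero would abort the walk */
      }

      static long sum_with_callback(struct mem_cgroup *root)
      {
              long total = 0;

              mem_cgroup_walk_tree(root, &total, sum_cb);
              return total;
      }

      /* New style: the loop body is ordinary code, no casts, no callback. */
      static long sum_with_iterator(struct mem_cgroup *root)
      {
              struct mem_cgroup *iter;
              long total = 0;

              for_each_mem_cgroup_tree(iter, root)
                      total += some_stat(iter);
              /* breaking out of this loop without dropping the reference
               * held by the iterator would leak a refcount -- the pitfall
               * the changelog warns about */
              return total;
      }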
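
  A condensed sketch of the MEM_CGROUP_ON_MOVE protocol from the "avoid lock in
  updating file_mapped" commit above.  The real patch uses a per-cpu, per-memcg
  counter and more careful helpers; the single atomic_t and the function names here
  are simplifications for illustration.

      /* Illustrative sketch; not the actual memcontrol.c implementation. */
      struct memcg_move_sketch {
              atomic_t moving_account;          /* hypothetical field */
      };

      /* Rare path: the account mover announces itself, waits one RCU grace
       * period so every in-flight lockless updater has finished, performs
       * the move, then retracts the announcement. */
      static void sketch_begin_move(struct memcg_move_sketch *memcg)
      {
              atomic_inc(&memcg->moving_account);
              synchronize_rcu();
      }

      static void sketch_end_move(struct memcg_move_sketch *memcg)
      {
              atomic_dec(&memcg->moving_account);
      }

      /* Hot path: take lock_page_cgroup() only when a move might be in
       * progress; otherwise update the file stat locklessly. */
      static void sketch_update_file_stat(struct page_cgroup *pc,
                                          struct memcg_move_sketch *memcg)
      {
              bool locked = false;

              rcu_read_lock();
              if (atomic_read(&memcg->moving_account)) {
                      lock_page_cgroup(pc);     /* slow but fully safe */
                      locked = true;
              }
              /* ... update the FILE_MAPPED / dirty counter here ... */
              if (locked)
                      unlock_page_cgroup(pc);
              rcu_read_unlock();
      }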
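
  And a tiny sketch of the fix in "fix race in file_mapped accounting flag management"
  above: on the unmap side, the flag is only cleared once the page's mapcount has
  actually reached zero, so a racing late "set" cannot be wiped out.
  set_file_mapped_flag()/clear_file_mapped_flag() are placeholders for the real
  page_cgroup flag helpers.

      /* Illustrative sketch; helper names are placeholders. */
      static void sketch_update_file_mapped(struct page *page,
                                            struct page_cgroup *pc, int val)
      {
              if (val > 0) {
                      /* map side: bump the counter, then set the flag */
                      set_file_mapped_flag(pc);
              } else if (!page_mapped(page)) {
                      /*
                       * unmap side: clear the flag only when no mapping is
                       * left.  If the page is still mapped, whoever removes
                       * the last mapping clears it, so a racing "set" from
                       * a concurrent map cannot be wiped out.
                       */
                      clear_file_mapped_flag(pc);
              }
              /* the counter itself is updated alongside (not shown) */
      }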
  2. 08 Oct 2010, 1 commit
  3. 11 Aug 2010, 9 commits
  4. 10 Aug 2010, 3 commits
    • memcg: add mm_vmscan_memcg_isolate tracepoint · cc8e970c
      Authored by KOSAKI Motohiro
      Memcg also needs to trace page-isolation information in the same way as global
      reclaim does.  This patch adds that.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cc8e970c
    • oom: badness heuristic rewrite · a63d83f4
      Authored by David Rientjes
      This is a complete rewrite of the oom killer's badness() heuristic, which is
      used to determine which task to kill in oom conditions.  The goal is to
      make it as simple and predictable as possible so the results are better
      understood and we end up killing the task which will lead to the most
      memory freeing while still respecting the fine-tuning from userspace.
      (A simplified sketch of the resulting scoring model appears after this commit list.)
      
      Instead of basing the heuristic on mm->total_vm for each task, the task's
      rss and swap space is used instead.  This is a better indication of the
      amount of memory that will be freeable if the oom killed task is chosen
      and subsequently exits.  This helps specifically in cases where KDE or
      GNOME is chosen for oom kill on desktop systems instead of a memory
      hogging task.
      
      The baseline for the heuristic is a proportion of memory that each task is
      currently using in memory plus swap compared to the amount of "allowable"
      memory.  "Allowable," in this sense, means the system-wide resources for
      unconstrained oom conditions, the set of mempolicy nodes, the mems
      attached to current's cpuset, or a memory controller's limit.  The
      proportion is given on a scale of 0 (never kill) to 1000 (always kill),
      roughly meaning that if a task has a badness() score of 500 that the task
      consumes approximately 50% of allowable memory resident in RAM or in swap
      space.
      
      The proportion is always relative to the amount of "allowable" memory and
      not the total amount of RAM systemwide so that mempolicies and cpusets may
      operate in isolation; they shall not need to know the true size of the
      machine on which they are running if they are bound to a specific set of
      nodes or mems, respectively.
      
      Root tasks are given 3% extra memory just like __vm_enough_memory()
      provides in LSMs.  In the event of two tasks consuming similar amounts of
      memory, it is generally better to save root's task.
      
      Because of the change in the badness() heuristic's baseline, it is also
      necessary to introduce a new user interface to tune it.  It's not possible
      to redefine the meaning of /proc/pid/oom_adj with a new scale since the
      ABI cannot be changed for backward compatibility.  Instead, a new tunable,
      /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000.  It may
      be used to polarize the heuristic such that certain tasks are never
      considered for oom kill while others may always be considered.  The value
      is added directly into the badness() score so a value of -500, for
      example, means to discount 50% of its memory consumption in comparison to
      other tasks either on the system, bound to the mempolicy, in the cpuset,
      or sharing the same memory controller.
      
      /proc/pid/oom_adj is changed so that its meaning is rescaled into the
      units used by /proc/pid/oom_score_adj, and vice versa.  Changing one of
      these per-task tunables will rescale the value of the other to an
      equivalent meaning.  Although /proc/pid/oom_adj was originally defined as
      a bitshift on the badness score, it now shares the same linear growth as
      /proc/pid/oom_score_adj but with different granularity.  This is required
      so the ABI is not broken with userspace applications and allows oom_adj to
      be deprecated for future removal.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a63d83f4
    • vmscan: kill prev_priority completely · 25edde03
      Authored by KOSAKI Motohiro
      Since 2.6.28, zone->prev_priority has been unused, so it can be removed
      safely.  This reduces stack usage slightly.

      Now I have to say that I'm sorry.  Two years ago I thought prev_priority
      could be integrated again and would be useful, but four (or more) attempts
      have not produced good performance numbers.  Thus I give up on that approach.

      The rest of this changelog is notes on prev_priority: why it existed in
      the first place and why it might not be necessary any more.  This information
      is based heavily on discussions between Andrew Morton, Rik van Riel and
      Kosaki Motohiro, who is quoted heavily below.

      Historically, prev_priority was important because it determined when the VM
      would start unmapping PTE pages; i.e. there were no other balances of note within
      the VM, Anon vs File and Mapped vs Unmapped.  Without prev_priority, there
      is a potential risk of unnecessarily increasing minor faults, as a large
      amount of read activity on use-once pages could push mapped pages to the
      end of the LRU and get them unmapped.

      There is no proof this is still a problem, but currently it is not considered
      to be one.  Active files are not deactivated if the active file list is smaller
      than the inactive list, reducing the likelihood that file-mapped pages are
      being pushed off the LRU, and referenced executable pages are kept on the
      active list to avoid them being pushed out by read activity.

      Even if it were still a problem, prev_priority wouldn't work nowadays.
      First of all, current vmscan still has a lot of UP-centric code, which
      exposes some weakness on machines with dozens of CPUs.  I think we need more
      and more improvement.

      The problem is that current vmscan mixes up per-system pressure, per-zone pressure
      and per-task pressure a bit.  For example, prev_priority tries to boost a task's
      priority toward other concurrent tasks' priorities, but if another task has a
      mempolicy restriction this is unnecessary, and it also causes wrongly large latency
      and excessive reclaim.  Per-task priority plus the prev_priority adjustment emulates
      per-system pressure, but that has two issues: 1) the emulation is too rough and
      brutal, and 2) we need per-zone pressure, not per-system pressure.

      Another example: currently DEF_PRIORITY is 12, which means the LRU is rotated about
      2 cycles (1/4096 + 1/2048 + 1/1024 + .. + 1) before invoking the OOM killer.
      But if 10,000 threads enter DEF_PRIORITY reclaim at the same time, the
      system is under higher memory pressure than priority == 0 (1/4096 * 10,000 > 2).
      prev_priority can't solve such a multithreaded workload issue; in other words,
      the prev_priority concept assumes the system doesn't have lots of threads.
      (The arithmetic is spelled out after this commit list.)
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25edde03
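
  For the badness-heuristic rewrite above, here is a simplified, self-contained sketch
  of the scoring model the changelog describes: proportion of allowable memory on a
  0..1000 scale, roughly 3% grace for root, and oom_score_adj added directly.  This
  follows the changelog's arithmetic only; it is not the kernel's oom_badness() code,
  and the final clamping is a simplification of the sketch.

      #include <stdio.h>

      /*
       * Sketch of the described model; all quantities are in pages.
       * "allowable" is the constrained total (system RAM+swap, cpuset mems,
       * mempolicy nodes, or the memcg limit), as defined in the changelog.
       */
      static long badness_sketch(long rss, long swap, long allowable,
                                 long oom_score_adj, int is_root)
      {
              long points = (rss + swap) * 1000 / allowable;

              if (is_root)                    /* ~3% extra headroom for root */
                      points -= points * 3 / 100;

              points += oom_score_adj;        /* -1000 .. +1000 userspace bias */

              if (points < 0)                 /* clamp: sketch-only simplification */
                      points = 0;
              if (points > 1000)
                      points = 1000;
              return points;
      }

      int main(void)
      {
              /* A task using 2 GB of a 4 GB allowable set scores about 500
               * (4 KB pages: 524288 of 1048576). */
              printf("%ld\n", badness_sketch(524288, 0, 1048576, 0, 0));
              /* The same task with oom_score_adj = -500 scores about 0. */
              printf("%ld\n", badness_sketch(524288, 0, 1048576, -500, 0));
              return 0;
      }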
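
  Spelling out the DEF_PRIORITY arithmetic quoted in the prev_priority changelog above,
  under the assumption that a reclaim pass at priority p scans about 1/2^p of the LRU:

      \sum_{p=0}^{12} \frac{1}{2^{p}} \;=\; 2 - \frac{1}{4096} \;\approx\; 2

      10{,}000 \times \frac{1}{4096} \;\approx\; 2.44 \;>\; 2

  So a single task performs about two full LRU cycles before OOM, while 10,000
  concurrent reclaimers exceed that amount already at the very first pass.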
  5. 30 Jun 2010, 1 commit
  6. 28 May 2010, 10 commits
  7. 12 May 2010, 2 commits
    • memcg: fix css_is_ancestor() RCU locking · 747388d7
      Authored by KAMEZAWA Hiroyuki
      Some callers (in memcontrol.c) call css_is_ancestor() without
      rcu_read_lock.  Because css_is_ancestor() has to access RCU-protected
      data, it should run under rcu_read_lock().

      This makes css_is_ancestor() itself do safe access to the RCU-protected
      area.  (At least, "root" can have refcnt == 0 if it's not an ancestor of
      "child", so we need rcu_read_lock().)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      747388d7
    • memcg: fix css_id() RCU locking for real · 7f0f1546
      Authored by KAMEZAWA Hiroyuki
      Commit ad4ba375 ("memcg: css_id() must be
      called under rcu_read_lock()") modified memcontrol.c to fix an RCU check
      message.  But Andrew Morton pointed out that the fix doesn't seem sane
      and was just for hiding lockdep messages.

      This patch does the proper thing.  Checking again, all the places that commit
      fixed were intentionally accessing without rcu_read_lock: all callers of
      css_id() hold a reference count on the css, so it's not necessary
      to be under rcu_read_lock().

      Considering it again, we can use rcu_dereference_check() for css_id().  We know
      css->id is valid if css->refcnt > 0.  (css->id never changes and is freed
      only after css->refcnt drops to 0.)

      This patch makes use of rcu_dereference_check() in css_id()/css_depth() and removes
      the unnecessary rcu_read_lock added by that commit.  (A sketch of the check
      appears after this commit list.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f0f1546
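
  For the css_id() commit above, a sketch of the rcu_dereference_check() usage it
  describes: the dereference is legal either inside an RCU read-side critical section
  or while the caller holds a reference on the css, and the lockdep condition encodes
  exactly that.  The code is illustrative and may differ in detail from the real
  css_id().

      /* Illustrative sketch, close in spirit to the described css_id(). */
      unsigned short sketch_css_id(struct cgroup_subsys_state *css)
      {
              struct css_id *cssid;

              /*
               * css->id is stable while either (a) we are inside
               * rcu_read_lock(), or (b) the caller holds a reference on
               * the css (refcnt > 0), because the id is only freed after
               * the refcount has dropped to zero.
               */
              cssid = rcu_dereference_check(css->id,
                                            rcu_read_lock_held() ||
                                            atomic_read(&css->refcnt));
              if (cssid)
                      return cssid->id;
              return 0;
      }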
  8. 05 May 2010, 1 commit
  9. 25 Apr 2010, 1 commit
  10. 07 Apr 2010, 1 commit
  11. 25 Mar 2010, 2 commits
  12. 15 Mar 2010, 1 commit
  13. 13 Mar 2010, 2 commits