1. 28 May 2010 (2 commits)
  2. 12 May 2010 (2 commits)
    • memcg: fix css_is_ancestor() RCU locking · 747388d7
      Committed by KAMEZAWA Hiroyuki
      Some callers (in memcontrol.c) call css_is_ancestor() without holding
      rcu_read_lock.  Because css_is_ancestor() has to access RCU-protected data,
      it should run under rcu_read_lock(); the fix is to take the lock inside
      css_is_ancestor() itself, as in the minimal sketch below.
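      A minimal sketch of the idea, not the exact kernel diff; the unlocked
      helper name is hypothetical.

      bool css_is_ancestor(struct cgroup_subsys_state *child,
      		     const struct cgroup_subsys_state *root)
      {
      	bool ret;

      	rcu_read_lock();	/* protects the RCU-managed id data we walk */
      	ret = __css_is_ancestor_unlocked(child, root);	/* hypothetical helper */
      	rcu_read_unlock();
      	return ret;
      }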
      
      This makes css_is_ancestor() itself do the safe access to the RCU-protected
      area.  (At least, "root" can have refcnt==0 when it is not an ancestor of
      "child", so rcu_read_lock() is required.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      747388d7
    • memcg: fix css_id() RCU locking for real · 7f0f1546
      Committed by KAMEZAWA Hiroyuki
      Commit ad4ba375 ("memcg: css_id() must be
      called under rcu_read_lock()") modified memcontrol.c to fix an RCU check
      message.  But Andrew Morton pointed out that the fix did not seem sane and
      merely hid the lockdep messages.

      This patch does the proper thing.  Checking again, all the places that
      commit touched were accessing the data without rcu_read_lock intentionally:
      every caller of css_id() holds a reference count on the css, so it is not
      necessary to be under rcu_read_lock().

      Considering this, we can use rcu_dereference_check() for css_id().  We know
      css->id is valid while css->refcnt > 0 (css->id never changes and is freed
      only after css->refcnt drops to 0).

      This patch uses rcu_dereference_check() in css_id()/css_depth() and removes
      the unnecessary rcu_read_lock added by that commit.
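      A sketch of the rcu_dereference_check() usage described above; close to,
      but not necessarily identical to, the committed code.

      unsigned short css_id(struct cgroup_subsys_state *css)
      {
      	struct css_id *cssid;

      	/* Valid either under rcu_read_lock() or while a reference is held. */
      	cssid = rcu_dereference_check(css->id,
      			rcu_read_lock_held() || atomic_read(&css->refcnt));
      	if (cssid)
      		return cssid->id;
      	return 0;
      }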
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f0f1546
  3. 05 May 2010 (1 commit)
  4. 25 Apr 2010 (1 commit)
  5. 07 Apr 2010 (1 commit)
  6. 25 Mar 2010 (2 commits)
  7. 15 Mar 2010 (1 commit)
  8. 13 Mar 2010 (15 commits)
    • memcg: fix oom kill behavior · 867578cb
      Committed by KAMEZAWA Hiroyuki
      In the current page-fault code,

      	handle_mm_fault()
      		-> ...
      		-> mem_cgroup_charge()
      		-> map page or handle error.
      	-> check return code.

      if the page fault's return code is VM_FAULT_OOM, page_fault_out_of_memory()
      is called.  But if the OOM was caused by memcg, the OOM killer should
      already have been invoked.
      
      I then added a patch, a636b327, which records last_oom_jiffies for memcg's
      sub-hierarchy and prevents page_fault_out_of_memory() from being invoked
      again in the near future.

      But Nishimura-san reported that the jiffies check is not enough when the
      system is under terribly heavy load.

      This patch changes memcg's OOM logic as follows (a minimal sketch of the
      resulting charge-path behavior follows this list):
       * If memcg causes an OOM kill, continue to retry the charge.
       * Remove the jiffies check used until now.
       * Add a memcg-oom-lock which works like the per-zone OOM lock.
       * If current is being killed (as a process), bypass the charge.

      Something more sophisticated can be added later, but this patch does the
      fundamental things.
      TODO:
       - add an OOM notifier
       - add a per-memcg disable-oom-kill flag and a freezer at OOM
       - more chances to wake up OOM waiters (when changing the memory limit, etc.)
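      A minimal sketch of the charge-path behavior listed above; the helper
      names are illustrative, not the exact kernel functions.

      static int mem_cgroup_charge_sketch(struct mem_cgroup *mem, gfp_t gfp_mask)
      {
      	for (;;) {
      		if (res_counter_charge_ok(mem))		/* hypothetical predicate */
      			return 0;			/* charge succeeded */

      		/* A task that is already being killed must not wait for memory. */
      		if (fatal_signal_pending(current))
      			return 0;			/* bypass the charge */

      		if (try_reclaim(mem, gfp_mask))		/* hypothetical reclaim helper */
      			continue;

      		/* Serialize OOM handling per hierarchy, like the per-zone OOM lock. */
      		if (mem_cgroup_oom_trylock(mem)) {	/* hypothetical lock helper */
      			mem_cgroup_out_of_memory(mem, gfp_mask);
      			mem_cgroup_oom_unlock(mem);
      		}
      		/* Keep retrying after the OOM kill instead of giving up. */
      	}
      }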
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      867578cb
    • cgroups: remove events before destroying subsystem state objects · a0a4db54
      Committed by Kirill A. Shutemov
      Events should be removed after rmdir of the cgroup directory, but before the
      subsystem state objects are destroyed.  Let's take a reference to the cgroup
      directory dentry to do that.
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hioryu@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Acked-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0a4db54
    • memcg : share event counter rather than duplicate · d2265e6f
      Committed by KAMEZAWA Hiroyuki
      Memcg has two event counters that count "the same" event; only their uses
      differ.  This patch reduces them to a single counter.

      The new logic uses an "increment only, never reset" counter plus a mask for
      each check.  The soft-limit check used to run roughly every 1000 events, so
      a similar check can be done with !(new_counter & 0x3ff).  The threshold
      check used to run roughly every 100 events, so a similar check can be done
      with !(new_counter & 0x7f).

      All event checks are done right after the per-cpu EVENT counter is updated.
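      A sketch of the mask-based checks described above; the macro and helper
      names are illustrative, not the exact kernel identifiers.

      #define THRESHOLDS_EVENTS_MASK	(0x7f)	/* check thresholds every 128 events */
      #define SOFTLIMIT_EVENTS_MASK	(0x3ff)	/* check the soft limit every 1024 events */

      /* new_counter is the freshly incremented per-cpu event count */
      static void memcg_check_events_sketch(struct mem_cgroup *mem, struct page *page,
      				      unsigned long new_counter)
      {
      	if (unlikely(!(new_counter & THRESHOLDS_EVENTS_MASK))) {
      		mem_cgroup_threshold(mem);			/* illustrative */
      		if (unlikely(!(new_counter & SOFTLIMIT_EVENTS_MASK)))
      			mem_cgroup_update_tree(mem, page);	/* illustrative */
      	}
      }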
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d2265e6f
    • memcg: update threshold and softlimit at commit · 430e4863
      Committed by KAMEZAWA Hiroyuki
      Presently, move_task does a "batched" precharge.  Because res_counter and
      css refcnt operations are not scalable for memcg, try_charge() tends to be
      done in a batched manner when allowed.

      Now, the soft limit and the thresholds check their event counter in
      try_charge, but that charge is not a per-page event, and the event counter
      is not updated at charge().  Moreover, precharge doesn't pass a "page" to
      try_charge(), so the soft-limit tree would never be updated until an
      uncharge() caused an event.

      So the best place to check the event counter is commit_charge(), which is a
      per-page event by its nature.  This patch moves the checks there.
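      A sketch of the placement described above, reusing the event-check helper
      sketched for the previous entry; the function names are illustrative.

      static void __mem_cgroup_commit_charge_sketch(struct mem_cgroup *mem,
      					      struct page *page,
      					      struct page_cgroup *pc)
      {
      	/* ... mark pc as used and link the page to the memcg's LRU ... */

      	/*
      	 * commit_charge() happens once per page, so the soft-limit and
      	 * threshold event checks belong here rather than in try_charge().
      	 */
      	memcg_check_events_sketch(mem, page, new_event_count(mem));	/* illustrative */
      }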
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      430e4863
    • memcg: use generic percpu instead of private implementation · c62b1a3b
      Committed by KAMEZAWA Hiroyuki
      When the per-cpu counter for memcg was implemented, the dynamic percpu
      allocator was not very good.  Now we have a good one and useful macros,
      so this patch replaces memcg's private percpu counter implementation with
      the generic dynamic percpu allocator (see the sketch after this list).

      The benefits are
      	- We can remove the private implementation.
      	- The counters become NUMA-aware. (The current ones are not.)
      	- sizeof(struct mem_cgroup) becomes smaller, so struct mem_cgroup
      	  may fit within a page on small configs.
      	- For basic performance aspects, see below.
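      A sketch of what "use the generic dynamic percpu allocator" means here;
      the struct, field and constant names are illustrative, not the exact
      kernel code.

      struct mem_cgroup_stat_cpu {
      	s64 count[MEM_CGROUP_STAT_NSTATS];	/* illustrative stat array */
      };

      /* allocation, e.g. when the memcg is created */
      mem->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
      if (!mem->stat)
      	return ERR_PTR(-ENOMEM);

      /* update on the local cpu, replacing the old hand-rolled per-cpu array */
      __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_RSS], val);

      /* teardown */
      free_percpu(mem->stat);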
      
       [Before]
       # size mm/memcontrol.o
         text    data     bss     dec     hex filename
        24373    2528    4132   31033    7939 mm/memcontrol.o
      
       [page-fault-throughput test on 8cpu/SMP in root cgroup]
       # /root/bin/perf stat -a -e page-faults,cache-misses --repeat 5 ./multi-fault-fork 8
      
       Performance counter stats for './multi-fault-fork 8' (5 runs):
      
             45878618  page-faults                ( +-   0.110% )
            602635826  cache-misses               ( +-   0.105% )
      
         61.005373262  seconds time elapsed   ( +-   0.004% )
      
       Then cache-miss/page fault = 13.14
      
       [After]
       # size mm/memcontrol.o
         text    data     bss     dec     hex filename
        23913    2528    4132   30573    776d mm/memcontrol.o
       # /root/bin/perf stat -a -e page-faults,cache-misses --repeat 5 ./multi-fault-fork 8
      
       Performance counter stats for './multi-fault-fork 8' (5 runs):
      
             48179400  page-faults                ( +-   0.271% )
            588628407  cache-misses               ( +-   0.136% )
      
         61.004615021  seconds time elapsed   ( +-   0.004% )
      
        Then cache-miss/page fault = 12.22
      
       Text size is reduced.
       This performance improvement is not big and will be invisible in real-world
       applications, but the result shows that this patch has some positive effect
       even on (small) SMP.
      
      Here is a test program I used.
      
       1. fork() a process on each cpu.
       2. do page faults repeatedly in each process.
       3. after 60 seconds, kill all children and exit.

      (Step 3 is necessary for getting stable data; this is an improvement over
      the previous version.)
      
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sched.h>
      #include <sys/mman.h>
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <sys/wait.h>
      #include <fcntl.h>
      #include <signal.h>
      #include <stdlib.h>
      #include <unistd.h>
      
      /*
       * For avoiding contention in page table lock, FAULT area is
       * sparse. If FAULT_LENGTH is too large for your cpus, decrease it.
       */
      #define FAULT_LENGTH	(2 * 1024 * 1024)
      #define PAGE_SIZE	4096
      #define MAXNUM		(128)
      
      void alarm_handler(int sig)
      {
      }
      
      void *worker(int cpu, int ppid)
      {
      	void *start, *end;
      	char *c;
      	cpu_set_t set;
      	int i;
      
      	CPU_ZERO(&set);
      	CPU_SET(cpu, &set);
      	sched_setaffinity(0, sizeof(set), &set);
      
      	start = mmap(NULL, FAULT_LENGTH, PROT_READ|PROT_WRITE,
      			MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
      	if (start == MAP_FAILED) {
      		perror("mmap");
      		exit(1);
      	}
      	end = start + FAULT_LENGTH;
      
      	pause();
      	//fprintf(stderr, "run%d", cpu);
      	while (1) {
      		for (c = (char*)start; (void *)c < end; c += PAGE_SIZE)
      			*c = 0;
      		madvise(start, FAULT_LENGTH, MADV_DONTNEED);
      	}
      	return NULL;
      }
      
      int main(int argc, char *argv[])
      {
      	int num, i, ret, pid, status;
      	int pids[MAXNUM];
      
      	if (argc < 2)
      		return 0;
      
      	setpgid(0, 0);
      	signal(SIGALRM, alarm_handler);
      	num = atoi(argv[1]);
      	pid = getpid();
      
      	for (i = 0; i < num; ++i) {
      		ret = fork();
      		if (!ret) {
      			worker(i, pid);
      			exit(0);
      		}
      		pids[i] = ret;
      	}
      	sleep(1);
      	kill(-pid, SIGALRM);
      	sleep(60);
      	for (i = 0; i < num; i++)
      		kill(pids[i], SIGKILL);
      	for (i = 0; i < num; i++)
      		waitpid(pids[i], &status, 0);
      	return 0;
      }
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c62b1a3b
    • memcg: typo in comment to mem_cgroup_print_oom_info() · 6a6135b6
      Committed by Kirill A. Shutemov
      s/mem_cgroup_print_mem_info/mem_cgroup_print_oom_info/
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6a6135b6
    • memcg: implement memory thresholds · 2e72b634
      Committed by Kirill A. Shutemov
      This allows registering multiple memory and memsw thresholds and getting
      notifications when usage crosses them.

      To register a threshold, an application needs to:
      - create an eventfd;
      - open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
      - write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>"
        to cgroup.event_control.

      The application will be notified through the eventfd when memory usage
      crosses the threshold in either direction.

      This is applicable to the root and to non-root cgroups.

      It uses the stats to track memory usage, similar to soft limits, and checks
      whether an event needs to be sent to userspace every 100 pages in/out.  I
      guess that is a good compromise between performance and the accuracy of
      thresholds.
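      A hedged userspace sketch of the registration steps above; the cgroup
      mount point and group path are illustrative and error handling is minimal.

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/eventfd.h>
      #include <unistd.h>

      int main(void)
      {
      	int efd = eventfd(0, 0);					/* step 1 */
      	int ufd = open("/cgroup/memory/grp/memory.usage_in_bytes", O_RDONLY);	/* step 2 */
      	int cfd = open("/cgroup/memory/grp/cgroup.event_control", O_WRONLY);
      	char buf[64];
      	uint64_t ticks;

      	if (efd < 0 || ufd < 0 || cfd < 0) {
      		perror("setup");
      		return 1;
      	}
      	/* step 3: "<event_fd> <fd of memory.usage_in_bytes> <threshold in bytes>" */
      	snprintf(buf, sizeof(buf), "%d %d %llu", efd, ufd, 64ULL << 20);
      	if (write(cfd, buf, strlen(buf)) < 0) {
      		perror("register threshold");
      		return 1;
      	}
      	/* Blocks until usage crosses the 64MB threshold in either direction. */
      	read(efd, &ticks, sizeof(ticks));
      	printf("memory usage crossed the threshold\n");
      	return 0;
      }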
      
      [akpm@linux-foundation.org: coding-style fixes]
      [nishimura@mxp.nes.nec.co.jp: fix documentation merge issue]
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Vladislav Buzov <vbuzov@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Alexander Shishkin <virtuoso@slind.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2e72b634
    • memcg: rework usage of stats by soft limit · 378ce724
      Committed by Kirill A. Shutemov
      Instead of incrementing a counter on each page in/out and comparing it with
      a constant, we set the counter to the constant, decrement it on each page
      in/out, and compare it with zero.  We want the comparison to be as fast as
      possible: on many RISC systems (and probably not only RISC), comparing with
      zero is cheaper than comparing with a constant, since not every constant can
      be an immediate operand of a compare instruction.

      Also, MEM_CGROUP_STAT_EVENTS is renamed to MEM_CGROUP_STAT_SOFTLIMIT, since
      it is really not a generic counter.
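      An illustration of the decrement-to-zero pattern described above; this is
      not the exact kernel code and SOFTLIMIT_EVENTS_THRESH is an illustrative name.

      #define SOFTLIMIT_EVENTS_THRESH	1000

      	/* set once when (re)arming the check */
      	counter = SOFTLIMIT_EVENTS_THRESH;

      	/* on each page in/out: a decrement plus a compare against zero */
      	if (--counter <= 0) {
      		counter = SOFTLIMIT_EVENTS_THRESH;	/* re-arm */
      		mem_cgroup_update_tree(mem, page);	/* the expensive soft-limit work */
      	}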
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Vladislav Buzov <vbuzov@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Alexander Shishkin <virtuoso@slind.org>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      378ce724
    • memcg: extract mem_group_usage() from mem_cgroup_read() · 104f3928
      Committed by Kirill A. Shutemov
      Helper to get memory or mem+swap usage of the cgroup.
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Vladislav Buzov <vbuzov@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Alexander Shishkin <virtuoso@slind.org>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      104f3928
    • memcg: improve performance in moving swap charge · 483c30b5
      Committed by Daisuke Nishimura
      Try to reduce the overhead of moving swap charges by:

      - Adding a new function (__mem_cgroup_put), which takes a "count" argument
        and decrements mem->refcnt by "count".
      - Removing res_counter_uncharge, css_put, and mem_cgroup_put from the path
        of moving the swap account, and consolidating all of them into
        mem_cgroup_clear_mc.  We cannot do that for mc.to->refcnt.

      These changes reduce the overhead of moving the charges of 1G of anonymous
      memory (including 500MB of swap) from 1.35 sec to 0.9 sec in my test
      environment.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      483c30b5
    • memcg: move charges of anonymous swap · 02491447
      Committed by Daisuke Nishimura
      This patch is another core part of the move-charge-at-task-migration
      feature.  It enables moving the charges of anonymous swap.

      To move the charge of a swap entry, we need to exchange swap_cgroup's record.

      In the current implementation, swap_cgroup's record is protected by:

        - the page lock, if the entry is in the swap cache;
        - swap_lock, if the entry is not in the swap cache.

      This works well for the usual swap-in/out activity.

      But it would force the swap-charge-moving code to check many conditions in
      order to exchange swap_cgroup's record safely.

      So modification of swap_cgroup's record (swap_cgroup_record()) is changed to
      use xchg, and a new function is defined to cmpxchg swap_cgroup's record
      (see the sketch below).

      This patch also enables moving the charge of swap caches that are not
      pte_present but not yet uncharged, which can exist on the swap-out path, by
      getting the target pages via find_get_page() as do_mincore() does.
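      A sketch of the compare-and-exchange semantics wanted for swap_cgroup's
      record; the names are illustrative and the real code must make the update
      atomic (for example via cmpxchg or a lock covering the record).

      static unsigned short swap_cgroup_cmpxchg_sketch(swp_entry_t ent,
      						 unsigned short old,
      						 unsigned short new)
      {
      	struct swap_cgroup *sc = lookup_swap_cgroup_sketch(ent);	/* illustrative */

      	/* Replace the owning memcg id only if it is still 'old'. */
      	if (sc->id == old) {
      		sc->id = new;
      		return old;	/* success: the caller now owns the moved charge */
      	}
      	return 0;		/* lost the race; the caller must not move the charge */
      }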
      
      [kosaki.motohiro@jp.fujitsu.com: fix ia64 build]
      [akpm@linux-foundation.org: fix typos]
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02491447
    • memcg: avoid oom during moving charge · 8033b97c
      Committed by Daisuke Nishimura
      During charge moving, the move-charge-at-task-migration feature holds extra
      charges on "to" (pre-charges) and on "from" (left-over charges).  This means
      an unnecessary OOM can happen.

      This patch tries to avoid such an OOM.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8033b97c
    • memcg: improve performance in moving charge · 854ffa8d
      Committed by Daisuke Nishimura
      Try to reduce the overhead of moving charges by:

      - Instead of calling res_counter_uncharge() against the old cgroup in
        __mem_cgroup_move_account() every time, call res_counter_uncharge() once at
        the end of task migration.
      - Remove css_get(&to->css) from __mem_cgroup_move_account(), because callers
        should already have called css_get(); also remove css_put(&to->css), which
        callers of move_account used to call on success of move_account.
      - Instead of calling __mem_cgroup_try_charge(), i.e. res_counter_charge(),
        repeatedly, call res_counter_charge(PAGE_SIZE * count) in can_attach() when
        possible (see the sketch below).
      - Instead of calling css_get()/css_put() repeatedly, use the coalesced
        __css_get()/__css_put() when possible.

      These changes reduce the overhead of moving the charges of 1G of anonymous
      memory from 1.7 sec to 0.6 sec in my test environment.
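      A small illustration of the batching described above; the variable names
      and the res_counter_charge() arguments shown here are illustrative.

      	/* before: one res_counter call per page while walking the task's pages */
      	for (i = 0; i < count; i++)
      		res_counter_charge(&to->res, PAGE_SIZE, &fail_res);

      	/* after: a single pre-charge for all pages, done once in can_attach() */
      	res_counter_charge(&to->res, PAGE_SIZE * count, &fail_res);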
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      854ffa8d
    • memcg: move charges of anonymous page · 4ffef5fe
      Committed by Daisuke Nishimura
      This patch is the core part of the move-charge-at-task-migration feature.
      It implements functions to move the charges of anonymous pages mapped only
      by the target task.

      Implementation:
      - Define struct move_charge_struct and a variable of it (mc) to remember the
        count of pre-charges and other information.
      - At can_attach(), get anon_rss of the target mm, call __mem_cgroup_try_charge()
        repeatedly, and count up mc.precharge.
      - At attach(), walk the page tables, find each target page to be moved, and
        call mem_cgroup_move_account() on it.
      - Cancel all precharges if mc.precharge > 0 on failure or at the end of the
        task move.
      
      [akpm@linux-foundation.org: a little simplification]
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4ffef5fe
    • memcg: add interface to move charge at task migration · 7dc74be0
      Committed by Daisuke Nishimura
      In the current memcg, charges associated with a task are not moved to the
      new cgroup at task migration.  Some users find this behavior strange.  This
      series adds that feature, i.e., charging to the new cgroup and, of course,
      uncharging from the old cgroup at task migration.

      This patch adds the "memory.move_charge_at_immigrate" file, a flag file that
      determines whether charges should be moved to the new cgroup at task
      migration and what type of charges should be moved.  It also adds read and
      write handlers for the file.

      This patch also adds no-op handlers for this feature; they will be
      implemented in later patches.  For now, you cannot write any value other
      than 0 to move_charge_at_immigrate.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7dc74be0
  9. 07 Mar 2010 (1 commit)
  10. 17 Jan 2010 (1 commit)
    • memcg: ensure list is empty at rmdir · fce66477
      Committed by Daisuke Nishimura
      The current mem_cgroup_force_empty() only ensures mem->res.usage == 0 on
      success.  But this doesn't guarantee that the memcg's LRU is really empty,
      because there are cases in which !PageCgroupUsed pages exist on the memcg's
      LRU.

      For example:
      - Pages can be uncharged by their owner process while they are on the LRU.
      - A race between mem_cgroup_add_lru_list() and __mem_cgroup_uncharge_common().

      So there can be a case in which the usage is zero but some of the LRUs are
      not empty.

      OTOH, mem_cgroup_del_lru_list(), which can be called asynchronously with
      rmdir, accesses the mem_cgroup, so this access can cause a problem if it
      races with rmdir, because the mem_cgroup might have been freed by rmdir.

      Actually, I saw a bug which seems to be caused by this race.
      
      	[1530745.949906] BUG: unable to handle kernel NULL pointer dereference at 0000000000000230
      	[1530745.950651] IP: [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      	[1530745.950651] PGD 3863de067 PUD 3862c7067 PMD 0
      	[1530745.950651] Oops: 0002 [#1] SMP
      	[1530745.950651] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index1/shared_cpu_map
      	[1530745.950651] CPU 3
      	[1530745.950651] Modules linked in: configs ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables bridge stp nfsd nfs_acl auth_rpcgss exportfs autofs4 hidp rfcomm l2cap crc16 bluetooth lockd sunrpc ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp bnx2i cnic uio ipv6 cxgb3i cxgb3 mdio libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror dm_multipath scsi_dh video output sbs sbshc battery ac lp kvm_intel kvm sg ide_cd_mod cdrom serio_raw tpm_tis tpm tpm_bios acpi_memhotplug button parport_pc parport rtc_cmos rtc_core rtc_lib e1000 i2c_i801 i2c_core pcspkr dm_region_hash dm_log dm_mod ata_piix libata shpchp megaraid_mbox sd_mod scsi_mod megaraid_mm ext3 jbd uhci_hcd ohci_hcd ehci_hcd [last unloaded: freq_table]
      	[1530745.950651] Pid: 19653, comm: shmem_test_02 Tainted: G   M       2.6.32-mm1-00701-g2b04386 #3 Express5800/140Rd-4 [N8100-1065]
      	[1530745.950651] RIP: 0010:[<ffffffff810fbc11>]  [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      	[1530745.950651] RSP: 0018:ffff8803863ddcb8  EFLAGS: 00010002
      	[1530745.950651] RAX: 00000000000001e0 RBX: ffff8803abc02238 RCX: 00000000000001e0
      	[1530745.950651] RDX: 0000000000000000 RSI: ffff88038611a000 RDI: ffff8803abc02238
      	[1530745.950651] RBP: ffff8803863ddcc8 R08: 0000000000000002 R09: ffff8803a04c8643
      	[1530745.950651] R10: 0000000000000000 R11: ffffffff810c7333 R12: 0000000000000000
      	[1530745.950651] R13: ffff880000017f00 R14: 0000000000000092 R15: ffff8800179d0310
      	[1530745.950651] FS:  0000000000000000(0000) GS:ffff880017800000(0000) knlGS:0000000000000000
      	[1530745.950651] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      	[1530745.950651] CR2: 0000000000000230 CR3: 0000000379d87000 CR4: 00000000000006e0
      	[1530745.950651] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      	[1530745.950651] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      	[1530745.950651] Process shmem_test_02 (pid: 19653, threadinfo ffff8803863dc000, task ffff88038612a8a0)
      	[1530745.950651] Stack:
      	[1530745.950651]  ffffea00040c2fe8 0000000000000000 ffff8803863ddd98 ffffffff810c739a
      	[1530745.950651] <0> 00000000863ddd18 000000000000000c 0000000000000000 0000000000000000
      	[1530745.950651] <0> 0000000000000002 0000000000000000 ffff8803863ddd68 0000000000000046
      	[1530745.950651] Call Trace:
      	[1530745.950651]  [<ffffffff810c739a>] release_pages+0x142/0x1e7
      	[1530745.950651]  [<ffffffff810c778f>] ? pagevec_move_tail+0x6e/0x112
      	[1530745.950651]  [<ffffffff810c781e>] pagevec_move_tail+0xfd/0x112
      	[1530745.950651]  [<ffffffff810c78a9>] lru_add_drain+0x76/0x94
      	[1530745.950651]  [<ffffffff810dba0c>] exit_mmap+0x6e/0x145
      	[1530745.950651]  [<ffffffff8103f52d>] mmput+0x5e/0xcf
      	[1530745.950651]  [<ffffffff81043ea8>] exit_mm+0x11c/0x129
      	[1530745.950651]  [<ffffffff8108fb29>] ? audit_free+0x196/0x1c9
      	[1530745.950651]  [<ffffffff81045353>] do_exit+0x1f5/0x6b7
      	[1530745.950651]  [<ffffffff8106133f>] ? up_read+0x2b/0x2f
      	[1530745.950651]  [<ffffffff8137d187>] ? lockdep_sys_exit_thunk+0x35/0x67
      	[1530745.950651]  [<ffffffff81045898>] do_group_exit+0x83/0xb0
      	[1530745.950651]  [<ffffffff810458dc>] sys_exit_group+0x17/0x1b
      	[1530745.950651]  [<ffffffff81002c1b>] system_call_fastpath+0x16/0x1b
      	[1530745.950651] Code: 54 53 0f 1f 44 00 00 83 3d cc 29 7c 00 00 41 89 f4 75 63 eb 4e 48 83 7b 08 00 75 04 0f 0b eb fe 48 89 df e8 18 f3 ff ff 44 89 e2 <48> ff 4c d0 50 48 8b 05 2b 2d 7c 00 48 39 43 08 74 39 48 8b 4b
      	[1530745.950651] RIP  [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
      	[1530745.950651]  RSP <ffff8803863ddcb8>
      	[1530745.950651] CR2: 0000000000000230
      	[1530745.950651] ---[ end trace c3419c1bb8acc34f ]---
      	[1530745.950651] Fixing recursive fault but reboot is needed!
      
      The problem here is that pages on the LRU may contain a pointer to a stale
      memcg.  To make res->usage reach 0, all pages in the memcg must be uncharged
      or moved to another (parent) memcg.  A moved page_cgroup has already been
      removed from the original LRU, but an uncharged page_cgroup may still
      contain a pointer to the memcg without the PCG_USED bit.  (This asynchronous
      LRU handling is for improving performance.)  If the PCG_USED bit is not set,
      the page_cgroup will never be added to the memcg's LRU, so pages that are
      not on an LRU never access the stale pointer.  What we have to take care of,
      then, is the page_cgroups that are _on_ an LRU list.  This patch fixes the
      problem by making mem_cgroup_force_empty() visit all LRUs before exiting its
      loop and guarantee there are no pages left on them.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fce66477
  11. 16 Dec 2009 (12 commits)
  12. 04 Dec 2009 (1 commit)