Commit ece35ca8 authored by KAMEZAWA Hiroyuki, committed by Linus Torvalds

memcg: fix LRU accounting with THP

The memory cgroup's LRU statistics should account for the size of each page,
because Transparent Hugepage inserts hugepages into the LRU.  If this count
is wrong, memory reclaim will not work well.

Note: only the head page of a THP is linked into the LRU.
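
A minimal user-space sketch of the idea (not kernel code; lru_stat, add_to_lru
and del_from_lru are made-up names): the LRU counter must move by
1 << compound_order(page) base pages, not by 1, whenever a THP head page is
added to or removed from the LRU.

#include <assert.h>

static long lru_stat;                   /* stands in for MEM_CGROUP_ZSTAT(mz, lru) */

/* order is 0 for a normal page, 9 for a 2MB THP with 4KB base pages */
static void add_to_lru(unsigned int order)
{
        lru_stat += 1L << order;        /* count base pages, not LRU entries */
}

static void del_from_lru(unsigned int order)
{
        lru_stat -= 1L << order;
}

int main(void)
{
        add_to_lru(0);                  /* normal page: +1 */
        add_to_lru(9);                  /* THP head page on the LRU: +512 */
        assert(lru_stat == 513);        /* reclaim sees the real base-page count */

        del_from_lru(9);
        del_from_lru(0);
        assert(lru_stat == 0);
        return 0;
}
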
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent ca3e0214
@@ -814,7 +814,8 @@ void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru)
          * removed from global LRU.
          */
         mz = page_cgroup_zoneinfo(pc);
-        MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+        /* huge page split is done under lru_lock. so, we have no races. */
+        MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
         if (mem_cgroup_is_root(pc->mem_cgroup))
                 return;
         VM_BUG_ON(list_empty(&pc->lru));
@@ -865,7 +866,8 @@ void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
         if (!PageCgroupUsed(pc))
                 return;
         mz = page_cgroup_zoneinfo(pc);
-        MEM_CGROUP_ZSTAT(mz, lru) += 1;
+        /* huge page split is done under lru_lock. so, we have no races. */
+        MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
         SetPageCgroupAcctLRU(pc);
         if (mem_cgroup_is_root(pc->mem_cgroup))
                 return;
@@ -2152,14 +2154,26 @@ void mem_cgroup_split_huge_fixup(struct page *head, struct page *tail)
         unsigned long flags;
 
         /*
-         * We have no races witch charge/uncharge but will have races with
+         * We have no races with charge/uncharge but will have races with
          * page state accounting.
          */
         move_lock_page_cgroup(head_pc, &flags);
 
         tail_pc->mem_cgroup = head_pc->mem_cgroup;
         smp_wmb(); /* see __commit_charge() */
-        /* we don't need to copy all flags...*/
+        if (PageCgroupAcctLRU(head_pc)) {
+                enum lru_list lru;
+                struct mem_cgroup_per_zone *mz;
+
+                /*
+                 * LRU flags cannot be copied because we need to add tail
+                 * page to LRU by generic call and our hook will be called.
+                 * We hold lru_lock, then, reduce counter directly.
+                 */
+                lru = page_lru(head);
+                mz = page_cgroup_zoneinfo(head_pc);
+                MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+        }
         tail_pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
         move_unlock_page_cgroup(head_pc, &flags);
 }
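
To make the split-time bookkeeping concrete, here is a hypothetical user-space
model (not kernel code) of the counter during a 2MB THP split, as I read the
fixup above: the head page was charged as 512 base pages; each of the 511 tail
pages is later added to the LRU by the generic code path (+1 each), so the
fixup subtracts 1 per tail page up front and the total stays at 512.

#include <assert.h>

int main(void)
{
        long zstat = 1L << 9;           /* THP head on LRU: charged as 512 base pages */
        const int tails = 511;          /* 2MB THP = 512 x 4KB pages: head + 511 tails */

        for (int i = 0; i < tails; i++) {
                zstat -= 1;             /* split fixup, applied once per tail page */
                zstat += 1;             /* tail page later added to LRU by generic code */
        }
        assert(zstat == 512);           /* one base page accounted per LRU page */
        return 0;
}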