Commit af34770e authored by Johannes Weiner, committed by Linus Torvalds

mm: reduce rmap overhead for ex-KSM page copies created on swap faults

When ex-KSM pages are faulted from swap cache, the fault handler is not
capable of re-establishing anon_vma-spanning KSM pages.  In this case, a
copy of the page is created instead, just like during a COW break.

These freshly made copies are known to be exclusive to the faulting VMA
and there is no reason to go look for this page in parent and sibling
processes during rmap operations.

Use page_add_new_anon_rmap() for these copies.  This also puts them on
the proper LRU lists and marks them SwapBacked, so we can get rid of
doing this ad-hoc in the KSM copy code.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Simon Jeons <simon.jeons@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Satoru Moriya <satoru.moriya@hds.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 9b4f98cd
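
[Editor's note] The commit relies on page_add_new_anon_rmap() already doing the SwapBacked marking and LRU placement that ksm_does_need_to_copy() used to do by hand. A rough sketch of that helper as it looked in kernels of this era (paraphrased from ~3.x mm/rmap.c; THP accounting omitted, so treat it as illustrative rather than authoritative):

	void page_add_new_anon_rmap(struct page *page,
		struct vm_area_struct *vma, unsigned long address)
	{
		VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end);
		SetPageSwapBacked(page);         /* what ksm.c did ad hoc */
		atomic_set(&page->_mapcount, 0); /* mapcount starts at -1 */
		__inc_zone_page_state(page, NR_ANON_PAGES);
		__page_set_anon_rmap(page, vma, address, 1); /* 1 == exclusive */
		if (!mlocked_vma_newpage(vma, page))         /* and LRU placement */
			lru_cache_add_lru(page, LRU_ACTIVE_ANON);
		else
			add_page_to_unevictable_list(page);
	}

Because it marks the page exclusive to this VMA's anon_vma, later rmap walks on the copy need not search parent and sibling processes, which is the overhead the commit message describes.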
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1590,13 +1590,7 @@ struct page *ksm_does_need_to_copy(struct page *page,
 
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
-		SetPageSwapBacked(new_page);
 		__set_page_locked(new_page);
-
-		if (!mlocked_vma_newpage(vma, new_page))
-			lru_cache_add_lru(new_page, LRU_ACTIVE_ANON);
-		else
-			add_page_to_unevictable_list(new_page);
 	}
 
 	return new_page;
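
[Editor's note] With the hunk above applied, ksm_does_need_to_copy() shrinks to a plain allocate-and-copy helper; the surrounding lines (reconstructed from same-era mm/ksm.c for context, not part of the diff) leave it roughly as:

	struct page *ksm_does_need_to_copy(struct page *page,
			struct vm_area_struct *vma, unsigned long address)
	{
		struct page *new_page;

		new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
		if (new_page) {
			copy_user_highpage(new_page, page, address, vma);

			SetPageDirty(new_page);
			__SetPageUptodate(new_page);
			__set_page_locked(new_page);
		}

		return new_page;
	}

SwapBacked marking, LRU placement, and the rmap itself are now all deferred to the fault handler below.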
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3044,7 +3044,10 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
-	do_page_add_anon_rmap(page, vma, address, exclusive);
+	if (swapcache) /* ksm created a completely new copy */
+		page_add_new_anon_rmap(page, vma, address);
+	else
+		do_page_add_anon_rmap(page, vma, address, exclusive);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
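
[Editor's note] The swapcache test is safe because, earlier in do_swap_page(), the variable is only made non-NULL when KSM had to break the page out into a private copy; roughly (paraphrased from same-era mm/memory.c, error handling omitted):

	swapcache = NULL;
	/* ... swap-in, locking, and swapcache validity checks ... */
	if (ksm_might_need_to_copy(page, vma, address)) {
		swapcache = page;	/* remember the original swapcache page */
		page = ksm_does_need_to_copy(page, vma, address);
		/* on allocation failure: restore page = swapcache and bail out */
	}

So at the hunk above, swapcache != NULL implies that page is a freshly allocated, never-mapped copy exclusive to the faulting VMA, which is exactly the precondition page_add_new_anon_rmap() requires.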