Commit 945754a1 authored by Nick Piggin, committed by Linus Torvalds

mm: fix race in COW logic

There is a race in the COW logic.  It contains a shortcut to avoid the
COW and reuse the page if we hold the sole reference to the page;
however, it is possible to have two racing do_wp_page()ers, with one
causing the other to mistakenly believe it is safe to take the shortcut
when it is not.  This could lead to data corruption.
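
In outline, the shortcut works like this (a simplified sketch of the reuse
decision around this kernel version, using the helpers named in the trace
below; locking details, error handling and the exact pte bit manipulation
are condensed):

  /*
   * Sketch: can the faulting anon page be reused in place
   * instead of being copied?
   */
  reuse = 0;
  if (PageAnon(old_page) && trylock_page(old_page)) {
          reuse = can_share_swap_page(old_page);  /* looks at page_mapcount() */
          unlock_page(old_page);
  }
  if (reuse) {
          /* Shortcut: no copy, just make the existing pte writable. */
          entry = maybe_mkwrite(pte_mkdirty(pte_mkyoung(orig_pte)), vma);
          ptep_set_access_flags(vma, address, page_table, entry, 1);
          ret |= VM_FAULT_WRITE;
          goto unlock;
  }
  /* Otherwise fall through to the copy-on-write path shown in the diff below. */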

Process 1 and process 2 each have a write-protected pte of the same anon
page (i.e. one forked the other).  The page's mapcount is 2.  Then they
both attempt to write to it at around the same time...

  proc1				proc2 thr1			proc2 thr2
  CPU0				CPU1				CPU3
  do_wp_page()			do_wp_page()
				 trylock_page()
				  can_share_swap_page()
				   load page mapcount (==2)
				  reuse = 0
				 pte unlock
				 copy page to new_page
				 pte lock
				 page_remove_rmap(page);
   trylock_page()
    can_share_swap_page()
     load page mapcount (==1)
    reuse = 1
   ptep_set_access_flags (allow W)

  write private key into page
								read from page
				ptep_clear_flush()
				set_pte_at(pte of new_page)

Fix this by moving the page_remove_rmap of the old page after the pte
clear and flush.  Potentially the entire branch could be moved down
here, but in order to stay consistent, I won't (should probably move all
the *_mm_counter stuff with one patch).
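
Distilled, the fix is an ordering requirement: the pte must be switched to
the new page and stale TLB entries flushed before the old page's mapcount
is decremented.  A minimal sketch of the reordered tail of the copy path
(condensed from the diff below; pte entry construction and rss accounting
are omitted):

  ptep_clear_flush(vma, address, page_table);   /* old pte gone, stale TLBs flushed */
  set_pte_at(mm, address, page_table, entry);   /* entry maps new_page, writable */
  update_mmu_cache(vma, address, entry);
  lru_cache_add_active(new_page);
  page_add_new_anon_rmap(new_page, vma, address);

  /*
   * Only now may the old page's mapcount drop: a racing do_wp_page()
   * that sees mapcount == 1 can no longer be undermined by a stale
   * pte or TLB entry still pointing at old_page.
   */
  if (old_page)
          page_remove_rmap(old_page, vma);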
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Hugh Dickins <hugh@veritas.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 672ca28e
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1785,7 +1785,6 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (likely(pte_same(*page_table, orig_pte))) {
 		if (old_page) {
-			page_remove_rmap(old_page, vma);
 			if (!PageAnon(old_page)) {
 				dec_mm_counter(mm, file_rss);
 				inc_mm_counter(mm, anon_rss);
@@ -1807,6 +1806,32 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		lru_cache_add_active(new_page);
 		page_add_new_anon_rmap(new_page, vma, address);
 
+		if (old_page) {
+			/*
+			 * Only after switching the pte to the new page may
+			 * we remove the mapcount here. Otherwise another
+			 * process may come and find the rmap count decremented
+			 * before the pte is switched to the new page, and
+			 * "reuse" the old page writing into it while our pte
+			 * here still points into it and can be read by other
+			 * threads.
+			 *
+			 * The critical issue is to order this
+			 * page_remove_rmap with the ptp_clear_flush above.
+			 * Those stores are ordered by (if nothing else,)
+			 * the barrier present in the atomic_add_negative
+			 * in page_remove_rmap.
+			 *
+			 * Then the TLB flush in ptep_clear_flush ensures that
+			 * no process can access the old page before the
+			 * decremented mapcount is visible. And the old page
+			 * cannot be reused until after the decremented
+			 * mapcount is visible. So transitively, TLBs to
+			 * old page will be flushed before it can be reused.
+			 */
+			page_remove_rmap(old_page, vma);
+		}
+
 		/* Free the old page.. */
 		new_page = old_page;
 		ret |= VM_FAULT_WRITE;