Commit 4fd01770 authored by Mel Gorman

mm: Check if PTE is already allocated during page fault

With transparent hugepage support, handle_mm_fault() has to be careful
that a normal PMD has been established before handling a PTE fault. To
achieve this, it used __pte_alloc() directly instead of pte_alloc_map(),
because pte_alloc_map() is unsafe to run against a huge PMD.
pte_offset_map() is called only once it is known that the PMD is safe.
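
For context, in 2.6.38 the tail of handle_mm_fault() looked roughly
like this (a simplified sketch of the era's mm/memory.c, not a
verbatim quote; the huge-PMD branches earlier in the function are
omitted):

	/*
	 * Use __pte_alloc instead of pte_alloc_map, because we can't
	 * run pte_offset_map on the pmd, if an huge pmd could
	 * materialize from under us from a different thread.
	 */
	if (unlikely(__pte_alloc(mm, vma, pmd, address)))
		return VM_FAULT_OOM;
	/* if an huge pmd materialized from under us just retry later */
	if (unlikely(pmd_trans_huge(*pmd)))
		return 0;
	pte = pte_offset_map(pmd, address);
	return handle_pte_fault(mm, vma, address, pte, pmd, flags);

Note that __pte_alloc() is called unconditionally here, even when a
page table is already in place.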

pte_alloc_map() is smart enough to check if a PTE is already present
before calling __pte_alloc(), but this check was lost. As a consequence,
PTEs may be allocated unnecessarily and the page table lock taken.
This useless PTE does get cleaned up, but it is a performance hit which
is visible in page_test from aim9.
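
The lost check lives in the pte_alloc_map() macro itself. In the
2.6.38-era include/linux/mm.h it reads roughly as follows (reformatted
here for readability, so treat it as a sketch rather than a verbatim
quote):

	#define pte_alloc_map(mm, vma, pmd, address)		\
		((unlikely(pmd_none(*(pmd))) &&			\
		  __pte_alloc(mm, vma, pmd, address)) ?		\
			NULL : pte_offset_map(pmd, address))

__pte_alloc() is only called when pmd_none(*pmd) is true, i.e. when no
page table page is installed yet. That pmd_none() test is exactly the
guard this patch restores in front of the bare __pte_alloc() calls.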

This patch simply re-adds the check normally done by pte_alloc_map()
to check if the PTE needs to be allocated before taking the page table
lock. The effect is noticeable in page_test from aim9.

 AIM9
                 2.6.38-vanilla 2.6.38-checkptenone
 creat-clo      446.10 ( 0.00%)   424.47 (-5.10%)
 page_test       38.10 ( 0.00%)    42.04 ( 9.37%)
 brk_test        52.45 ( 0.00%)    51.57 (-1.71%)
 exec_test      382.00 ( 0.00%)   456.90 (16.39%)
 fork_test       60.11 ( 0.00%)    67.79 (11.34%)
 MMTests Statistics: duration
 Total Elapsed Time (seconds)                611.90    612.22

(While this affects 2.6.38, it is a performance rather than a
functional bug and normally outside the rules for -stable. While the
big performance differences are on a microbenchmark, the difference in
fork and exec performance may be significant enough that -stable wants
to consider the patch.)
Reported-by: Raz Ben Yehuda <raziebe@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rik van Riel <riel@redhat.com>
[ Picked this up from the AutoNUMA tree to help
  it upstream and to allow apples-to-apples
  performance comparisons. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 8d1acce4
mm/huge_memory.c
@@ -710,7 +710,8 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * run pte_offset_map on the pmd, if an huge pmd could
 	 * materialize from under us from a different thread.
 	 */
-	if (unlikely(__pte_alloc(mm, vma, pmd, address)))
+	if (unlikely(pmd_none(*pmd)) &&
+	    unlikely(__pte_alloc(mm, vma, pmd, address)))
 		return VM_FAULT_OOM;
 	/* if an huge pmd materialized from under us just retry later */
 	if (unlikely(pmd_trans_huge(*pmd)))
mm/memory.c
@@ -3560,7 +3560,8 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * run pte_offset_map on the pmd, if an huge pmd could
 	 * materialize from under us from a different thread.
 	 */
-	if (unlikely(pmd_none(*pmd)) && __pte_alloc(mm, vma, pmd, address))
+	if (unlikely(pmd_none(*pmd)) &&
+	    unlikely(__pte_alloc(mm, vma, pmd, address)))
 		return VM_FAULT_OOM;
 	/* if an huge pmd materialized from under us just retry later */
 	if (unlikely(pmd_trans_huge(*pmd)))