Commit d095d43e authored by David Vrabel, committed by Konrad Rzeszutek Wilk

xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable

In xen_set_pte(), if batching is unavailable (because the caller is in
an interrupt context, such as handling a page fault), it would fall back
to native_set_pte(), letting Xen trap and emulate the PTE write.

On 32-bit guests this requires two traps for each PTE write (one for
each dword of the PTE).  Instead, do one mmu_update hypercall
directly.
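
For reference, the reason 32-bit guests pay two traps is that a PAE PTE
is 64 bits wide but is stored 32 bits at a time.  A rough sketch of the
PAE native_set_pte() path (simplified from the kernel's 3-level
pagetable helpers; not part of this patch):

  static inline void native_set_pte(pte_t *ptep, pte_t pte)
  {
          /* Set the high word first so the PTE is never observed as
           * present with a stale high half. */
          ptep->pte_high = pte.pte_high;  /* store #1: traps if the page is pinned (RO) */
          smp_wmb();
          ptep->pte_low = pte.pte_low;    /* store #2: traps again */
  }

With the direct mmu_update hypercall added below, the whole 64-bit PTE
value is handed to Xen in a single call, so only one guest-to-hypervisor
transition is needed.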

During construction of the initial page tables, continue to use
native_set_pte() because most of the PTEs being set are in writable
and unpinned pages (see phys_pmd_init() in arch/x86/mm/init_64.c) and
using a hypercall for this is very expensive.

This significantly improves page fault performance in 32-bit PV
guests.

lmbench3 test  Before    After     Improvement
----------------------------------------------
lat_pagefault  3.18 us   2.32 us   27%
lat_proc fork  356 us    313.3 us  11%
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Parent 37a80bf5
@@ -308,8 +308,20 @@ static bool xen_batched_set_pte(pte_t *ptep, pte_t pteval)
 
 static inline void __xen_set_pte(pte_t *ptep, pte_t pteval)
 {
-	if (!xen_batched_set_pte(ptep, pteval))
-		native_set_pte(ptep, pteval);
+	if (!xen_batched_set_pte(ptep, pteval)) {
+		/*
+		 * Could call native_set_pte() here and trap and
+		 * emulate the PTE write but with 32-bit guests this
+		 * needs two traps (one for each of the two 32-bit
+		 * words in the PTE) so do one hypercall directly
+		 * instead.
+		 */
+		struct mmu_update u;
+
+		u.ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
+		u.val = pte_val_ma(pteval);
+		HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF);
+	}
 }
 
 static void xen_set_pte(pte_t *ptep, pte_t pteval)
@@ -1416,13 +1428,21 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 }
 #endif /* CONFIG_X86_64 */
 
-/* Init-time set_pte while constructing initial pagetables, which
-   doesn't allow RO pagetable pages to be remapped RW */
+/*
+ * Init-time set_pte while constructing initial pagetables, which
+ * doesn't allow RO page table pages to be remapped RW.
+ *
+ * Many of these PTE updates are done on unpinned and writable pages
+ * and doing a hypercall for these is unnecessary and expensive. At
+ * this point it is not possible to tell if a page is pinned or not,
+ * so always write the PTE directly and rely on Xen trapping and
+ * emulating any updates as necessary.
+ */
 static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
 {
 	pte = mask_rw_pte(ptep, pte);
 
-	xen_set_pte(ptep, pte);
+	native_set_pte(ptep, pte);
 }
 
 static void pin_pagetable_pfn(unsigned cmd, unsigned long pfn)