Commit a9ba9a3b authored by Arjan van de Ven, committed by Linus Torvalds

[PATCH] x86_64: prefetch the mmap_sem in the fault path

In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high on the profile. Turns out this lock is
bouncing between cpus quite a bit and thus is cache-cold a lot. This patch
prefetches the lock (for write) as early as possible (and before some other
somewhat expensive operations). With this patch, the down_read_trylock
basically fell out of the top of profile.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Parent 4bc32c4d
@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long flags;
 	siginfo_t info;

+	tsk = current;
+	mm = tsk->mm;
+	prefetchw(&mm->mmap_sem);
+
 	/* get the address */
 	__asm__("movq %%cr2,%0":"=r" (address));

-	tsk = current;
-	mm = tsk->mm;
 	info.si_code = SEGV_MAPERR;