Commit 35000870 authored by Anton Blanchard, committed by Benjamin Herrenschmidt

powerpc: Optimise enable_kernel_altivec

Add two optimisations to enable_kernel_altivec:

- enable_kernel_altivec has already determined if we need to
save the previous task's state but we call giveup_altivec
in both cases, requiring an extra branch in giveup_altivec. Create
giveup_altivec_notask which only turns on the VMX bit in the
MSR.

- We write the VMX MSR bit each time we call enable_kernel_altivec
even if it was already set. Check the bit and branch out if we have
already set it. The classic case for this is vectored IO, where we
have to copy multiple buffers to or from userspace (sketched in the
example after this list).
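
To make the vectored IO case concrete, here is a kernel-style C sketch
(illustrative only; copy_segments is a hypothetical helper, not code
from this patch) of why enable_kernel_altivec() can be entered once per
buffer during a single readv, so every call after the first finds
MSR_VEC already set:

#include <linux/errno.h>
#include <linux/uaccess.h>
#include <linux/uio.h>

/*
 * Copy one iovec segment at a time to userspace.  When each large
 * copy goes through a VMX-accelerated copy_tofrom_user, the kernel
 * enters enable_kernel_altivec() once per segment, so skipping the
 * redundant mtmsr saves work on every segment after the first.
 */
static ssize_t copy_segments(const struct iovec *iov, int nr_segs,
			     const char *src)
{
	ssize_t copied = 0;
	int seg;

	for (seg = 0; seg < nr_segs; seg++) {
		/* each large copy_to_user() re-enables VMX */
		if (copy_to_user(iov[seg].iov_base, src + copied,
				 iov[seg].iov_len))
			return -EFAULT;
		copied += iov[seg].iov_len;
	}
	return copied;
}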

The following testcase was used to confirm this patch improves
performance:

http://ozlabs.org/~anton/junkcode/copy_to_user.c

Since the current breakpoint for using VMX in copy_tofrom_user is
4096 bytes, I'm using buffers of 4096 + 1 cacheline (4224) bytes.
A benchmark of 16-entry readvs (-s 16):

time copy_to_user -l 4224 -s 16 -i 1000000

completes 5.2% faster on a POWER7 PS700.
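
For reference, a minimal userspace sketch in the spirit of that
testcase (the actual copy_to_user.c at the URL above may differ; the
-l/-s/-i values below are hardcoded to mirror the command line used):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4224;	/* -l: bytes per buffer (4096 + 1 cacheline) */
	int segs = 16;		/* -s: iovec entries per readv */
	long iters = 1000000;	/* -i: number of readv calls */
	struct iovec iov[16];
	long n;
	int i, fd;

	fd = open("/dev/zero", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (i = 0; i < segs; i++) {
		iov[i].iov_base = malloc(len);
		iov[i].iov_len = len;
	}

	/*
	 * Each readv makes the kernel copy_to_user() into all 16
	 * buffers back to back, hitting enable_kernel_altivec() once
	 * per buffer.
	 */
	for (n = 0; n < iters; n++) {
		if (readv(fd, iov, segs) < 0) {
			perror("readv");
			return 1;
		}
	}

	return 0;
}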
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Parent 8cd3c23d
@@ -40,6 +40,7 @@ static inline void discard_lazy_cpu_state(void)
 #ifdef CONFIG_ALTIVEC
 extern void flush_altivec_to_thread(struct task_struct *);
 extern void giveup_altivec(struct task_struct *);
+extern void giveup_altivec_notask(void);
 #else
 static inline void flush_altivec_to_thread(struct task_struct *t)
 {
...
@@ -124,7 +124,7 @@ void enable_kernel_altivec(void)
 	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
 		giveup_altivec(current);
 	else
-		giveup_altivec(NULL);	/* just enable AltiVec for kernel - force */
+		giveup_altivec_notask();
 #else
 	giveup_altivec(last_task_used_altivec);
 #endif /* CONFIG_SMP */
...
@@ -89,6 +89,16 @@ _GLOBAL(load_up_altivec)
 	/* restore registers and return */
 	blr
 
+_GLOBAL(giveup_altivec_notask)
+	mfmsr	r3
+	andis.	r4,r3,MSR_VEC@h
+	bnelr				/* Already enabled? */
+	oris	r3,r3,MSR_VEC@h
+	SYNC
+	MTMSRD(r3)			/* enable use of VMX now */
+	isync
+	blr
+
 /*
  * giveup_altivec(tsk)
  * Disable VMX for the task given as the argument,
...
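
For readers less fluent in PowerPC assembly, the new fast path is
roughly equivalent to this C sketch (illustrative only; the real
routine stays in assembly, and the SYNC/MTMSRD macros in the patch
handle 32-/64-bit differences that plain mfmsr()/mtmsr() gloss over
here):

#include <asm/reg.h>	/* mfmsr(), mtmsr(), MSR_VEC */
#include <asm/synch.h>	/* isync() */

/*
 * Rough C equivalent of giveup_altivec_notask: skip the relatively
 * expensive mtmsr/isync sequence when MSR_VEC is already set.
 */
void giveup_altivec_notask(void)
{
	unsigned long msr = mfmsr();

	if (msr & MSR_VEC)		/* already enabled: branch out */
		return;

	mtmsr(msr | MSR_VEC);		/* enable use of VMX now */
	isync();
}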