commit d9c9ce34 authored by Sebastian Andrzej Siewior, committed by Borislav Petkov

x86/fpu: Fault-in user stack if copy_fpstate_to_sigframe() fails

In the compacted form, XSAVES may save only the XMM+SSE state but skip
FP (x87 state).

This is denoted by header->xfeatures = 6. The fastpath
(copy_fpregs_to_sigframe()) does that but _also_ initialises the FP
state (cwd to 0x37f, mxcsr as we do, remaining fields to 0).
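
For reference, this is how that header value decodes, assuming the
XFEATURE bit layout from arch/x86/include/asm/fpu/types.h:

  /* header->xfeatures is a bitmap of the saved state components:  */
  /*   bit 0 = FP (x87), bit 1 = SSE (XMM regs), bit 2 = YMM (AVX) */
  #define XFEATURE_MASK_FP	(1 << 0)
  #define XFEATURE_MASK_SSE	(1 << 1)
  #define XFEATURE_MASK_YMM	(1 << 2)

  /* 6 == XFEATURE_MASK_SSE | XFEATURE_MASK_YMM: the FP bit is
   * clear, i.e. XSAVES wrote no x87 state to the frame. */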

The slowpath (copy_xstate_to_user()) leaves most of the FP
state untouched. Only mxcsr and mxcsr_flags are set due to
xfeatures_mxcsr_quirk(). Now that XFEATURE_MASK_FP is set
unconditionally, see

  04944b79 ("x86: xsave: set FP, SSE bits in the xsave header in the user sigcontext"),

on return from the signal, random garbage is loaded as the FP state.

Instead of utilizing copy_xstate_to_user(), fault in the user memory
and retry the fast path. Ideally, the fast path succeeds on the second
attempt but may be retried again if the memory is swapped out due
to memory pressure. If the user memory cannot be faulted in, then
get_user_pages() returns an error so we don't loop forever.
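
Schematically, the control flow this adds looks like the following
(a simplified sketch, not the exact function body; see the diff below):

  retry:
  	fpregs_lock();
  	/* ... make sure the FPU registers are valid ... */
  	pagefault_disable();
  	ret = copy_fpregs_to_sigframe(buf_fx);	/* fast path */
  	pagefault_enable();
  	fpregs_unlock();

  	if (ret) {
  		/* Fast path faulted: fault the pages in and retry. */
  		ret = get_user_pages_unlocked((unsigned long)buf_fx, nr_pages,
  					      NULL, FOLL_WRITE);
  		if (ret == nr_pages)
  			goto retry;
  		/* Not all pages could be faulted in: bail out instead
  		 * of looping forever. */
  		return -EFAULT;
  	}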

Fault in memory via get_user_pages_unlocked() so
copy_fpregs_to_sigframe() succeeds without a fault.
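
The number of pages to fault in has to account for the buffer's offset
within its first page. A self-contained user-space illustration of that
computation (PAGE_SIZE is assumed to be 4 KiB; the address and size are
made-up example values):

  #include <stdio.h>

  #define PAGE_SIZE          4096UL
  #define offset_in_page(p)  ((unsigned long)(p) & (PAGE_SIZE - 1))
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  int main(void)
  {
          /* Hypothetical sigframe address, 0xfc0 bytes into its page. */
          unsigned long buf_fx = 0x7ffd12340fc0UL;
          unsigned long xstate_size = 832;  /* example save-area size */

          unsigned long aligned_size = offset_in_page(buf_fx) + xstate_size;
          unsigned long nr_pages = DIV_ROUND_UP(aligned_size, PAGE_SIZE);

          /* 4032 + 832 = 4864 bytes spill across the page boundary,
           * so two pages must be faulted in. */
          printf("nr_pages = %lu\n", nr_pages);  /* prints 2 */
          return 0;
  }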

Fixes: 69277c98 ("x86/fpu: Always store the registers in copy_fpstate_to_sigframe()")
Reported-by: Kurt Kanzenbach <kurt.kanzenbach@linutronix.de>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jann Horn <jannh@google.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190502171139.mqtegctsg35cir2e@linutronix.de
parent a5eff725
@@ -157,11 +157,9 @@ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
  */
 int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 {
-	struct fpu *fpu = &current->thread.fpu;
-	struct xregs_state *xsave = &fpu->state.xsave;
 	struct task_struct *tsk = current;
 	int ia32_fxstate = (buf != buf_fx);
-	int ret = -EFAULT;
+	int ret;
 
 	ia32_fxstate &= (IS_ENABLED(CONFIG_X86_32) ||
 			 IS_ENABLED(CONFIG_IA32_EMULATION));
@@ -174,11 +172,12 @@ int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 			sizeof(struct user_i387_ia32_struct), NULL,
 			(struct _fpstate_32 __user *) buf) ? -1 : 1;
 
+retry:
 	/*
 	 * Load the FPU registers if they are not valid for the current task.
 	 * With a valid FPU state we can attempt to save the state directly to
-	 * userland's stack frame which will likely succeed. If it does not, do
-	 * the slowpath.
+	 * userland's stack frame which will likely succeed. If it does not,
+	 * resolve the fault in the user memory and try again.
 	 */
 	fpregs_lock();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
@@ -187,20 +186,20 @@ int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 	pagefault_disable();
 	ret = copy_fpregs_to_sigframe(buf_fx);
 	pagefault_enable();
-	if (ret && !test_thread_flag(TIF_NEED_FPU_LOAD))
-		copy_fpregs_to_fpstate(fpu);
-	set_thread_flag(TIF_NEED_FPU_LOAD);
 	fpregs_unlock();
 
 	if (ret) {
-		if (using_compacted_format()) {
-			if (copy_xstate_to_user(buf_fx, xsave, 0, size))
-				return -1;
-		} else {
-			fpstate_sanitize_xstate(fpu);
-			if (__copy_to_user(buf_fx, xsave, fpu_user_xstate_size))
-				return -1;
-		}
+		int aligned_size;
+		int nr_pages;
+
+		aligned_size = offset_in_page(buf_fx) + fpu_user_xstate_size;
+		nr_pages = DIV_ROUND_UP(aligned_size, PAGE_SIZE);
+
+		ret = get_user_pages_unlocked((unsigned long)buf_fx, nr_pages,
+					      NULL, FOLL_WRITE);
+		if (ret == nr_pages)
+			goto retry;
+		return -EFAULT;
 	}
 
 	/* Save the fsave header for the 32-bit frames. */