Commit 644c1541 authored by Vincent Palatin, committed by H. Peter Anvin

x86, fpu: Avoid FPU lazy restore after suspend

When a CPU enters the S3 state, its FPU state is lost.
After resuming from S3, if we try to lazy restore the FPU for a process running
on the same CPU, this will result in a corrupted FPU context.

Ensure that "fpu_owner_task" is properly invalidated when (re-)initializing a CPU,
so nobody will try to lazy restore a state which doesn't exist in the hardware.
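
To see why the stale pointer matters, here is a minimal user-space model of
the ownership check. This is a sketch only: "struct task", the fixed-size
owner array, and the helper names lazy_restore_ok()/disable_lazy_restore()
are hypothetical stand-ins for the kernel's task_struct, the fpu_owner_task
per-cpu variable, and the fpu_lazy_restore()/__cpu_disable_lazy_restore()
helpers shown in the diff below.

#include <stdio.h>

struct task { int last_cpu; };           /* stand-in for task_struct */
static struct task *fpu_owner_task[4];   /* stand-in for the per-cpu variable */

/* Models the fpu_lazy_restore() test from the diff below: the FPU register
 * reload is skipped only if this task was the last FPU user on this CPU. */
static int lazy_restore_ok(struct task *tsk, int cpu)
{
	return tsk == fpu_owner_task[cpu] && cpu == tsk->last_cpu;
}

/* Models __cpu_disable_lazy_restore(): the registers are blank after S3,
 * so forget the recorded owner and force a full restore next time. */
static void disable_lazy_restore(int cpu)
{
	fpu_owner_task[cpu] = NULL;
}

int main(void)
{
	struct task t = { .last_cpu = 0 };

	fpu_owner_task[0] = &t;       /* t last touched the FPU on CPU 0 */

	/* Without the patch: the check still matches after resume, the
	 * reload is skipped, and t runs on whatever the blank registers hold. */
	printf("lazy restore taken: %d\n", lazy_restore_ok(&t, 0));   /* 1 */

	/* With the patch: clearing the owner on CPU (re-)init breaks the
	 * match, so the next switch to t takes the full-restore path. */
	disable_lazy_restore(0);
	printf("lazy restore taken: %d\n", lazy_restore_ok(&t, 0));   /* 0 */
	return 0;
}

Clearing the recorded owner is all the fix needs: the ownership test fails,
and the normal full-restore path reloads valid state instead of trusting
registers that S3 wiped.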

Tested with a 64-bit kernel on a 4-core Ivy Bridge CPU with eagerfpu=off,
by doing thousands of suspend/resume cycles with 4 processes doing FPU
operations running. Without the patch, a process is killed by a SIGFPE
after a few hundred cycles.

Cc: Duncan Laurie <dlaurie@chromium.org>
Cc: Olof Johansson <olofj@chromium.org>
Cc: <stable@kernel.org> v3.4+ # for 3.4 need to replace this_cpu_write by percpu_write
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Link: http://lkml.kernel.org/r/1354306532-1014-1-git-send-email-vpalatin@chromium.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Parent 6662c34f
arch/x86/include/asm/fpu-internal.h
@@ -399,14 +399,17 @@ static inline void drop_init_fpu(struct task_struct *tsk)
 typedef struct { int preload; } fpu_switch_t;
 
 /*
- * FIXME! We could do a totally lazy restore, but we need to
- * add a per-cpu "this was the task that last touched the FPU
- * on this CPU" variable, and the task needs to have a "I last
- * touched the FPU on this CPU" and check them.
+ * Must be run with preemption disabled: this clears the fpu_owner_task,
+ * on this CPU.
  *
- * We don't do that yet, so "fpu_lazy_restore()" always returns
- * false, but some day..
+ * This will disable any lazy FPU state restore of the current FPU state,
+ * but if the current thread owns the FPU, it will still be saved by.
  */
+static inline void __cpu_disable_lazy_restore(unsigned int cpu)
+{
+	per_cpu(fpu_owner_task, cpu) = NULL;
+}
+
 static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
 {
 	return new == this_cpu_read_stable(fpu_owner_task) &&
......
arch/x86/kernel/smpboot.c
@@ -68,6 +68,8 @@
 #include <asm/mwait.h>
 #include <asm/apic.h>
 #include <asm/io_apic.h>
+#include <asm/i387.h>
+#include <asm/fpu-internal.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
 #include <linux/mc146818rtc.h>
@@ -818,6 +820,9 @@ int __cpuinit native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
 
+	/* the FPU context is blank, nobody can own it */
+	__cpu_disable_lazy_restore(cpu);
+
 	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_debug("do_boot_cpu failed %d\n", err);
......