Commit 04633df0 authored by Borislav Petkov, committed by Thomas Gleixner

x86/cpu: Call verify_cpu() after having entered long mode too

When we get loaded by a 64-bit bootloader, the kernel entry point is
startup_64 in head_64.S. We don't trust any and all bootloaders because
some will fiddle with CPU configuration, so we go ahead and massage each
CPU into sanity again.

For example, some Dell BIOSes have this XD disable feature which sets
IA32_MISC_ENABLE[34] and disables NX. This might be some dumb workaround
for other OSes but Linux sure doesn't need it.

A similar thing is present in the Surface 3 firmware - see
https://bugzilla.kernel.org/show_bug.cgi?id=106051 - which sets this bit
only on the BSP:

  # rdmsr -a 0x1a0
  400850089
  850089
  850089
  850089

I know, right?!

There's not even an off switch in there.
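Only bit 34 differs between the BSP and AP readings above. A quick sketch of the bit arithmetic (plain Python, using exactly the MSR values quoted above):

```python
# IA32_MISC_ENABLE is MSR 0x1a0; bit 34 is the "XD Bit Disable" flag
# which, when set, turns off the NX (no-execute) feature.
XD_DISABLE = 1 << 34

bsp = 0x400850089  # value the firmware left on the boot CPU
ap = 0x850089      # value on the application processors

assert bsp & XD_DISABLE         # NX disabled on the BSP...
assert not (ap & XD_DISABLE)    # ...but left alone on the APs
assert bsp & ~XD_DISABLE == ap  # the readings differ only in bit 34
```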

So fix all those cases by sanitizing the 64-bit entry point too. For
that, make verify_cpu() callable in 64-bit mode also.
Requested-and-debugged-by: "H. Peter Anvin" <hpa@zytor.com>
Reported-and-tested-by: Bastien Nocera <bugzilla@hadess.net>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1446739076-21303-1-git-send-email-bp@alien8.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Parent 68accac3
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -65,6 +65,9 @@ startup_64:
 	 * tables and then reload them.
 	 */
 
+	/* Sanitize CPU configuration */
+	call verify_cpu
+
 	/*
 	 * Compute the delta between the address I am compiled to run at and the
 	 * address I am actually running at.
@@ -174,6 +177,9 @@ ENTRY(secondary_startup_64)
 	 * after the boot processor executes this code.
 	 */
 
+	/* Sanitize CPU configuration */
+	call verify_cpu
+
 	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:
@@ -288,6 +294,8 @@ ENTRY(secondary_startup_64)
 	pushq	%rax		# target address in negative space
 	lretq
 
+#include "verify_cpu.S"
+
 #ifdef CONFIG_HOTPLUG_CPU
 /*
  * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -34,10 +34,11 @@
 #include <asm/msr-index.h>
 
 verify_cpu:
-	pushfl				# Save caller passed flags
-	pushl	$0			# Kill any dangerous flags
-	popfl
+	pushf				# Save caller passed flags
+	push	$0			# Kill any dangerous flags
+	popf
 
+#ifndef __x86_64__
 	pushfl				# standard way to check for cpuid
 	popl	%eax
 	movl	%eax,%ebx
@@ -48,6 +49,7 @@ verify_cpu:
 	popl	%eax
 	cmpl	%eax,%ebx
 	jz	verify_cpu_no_longmode	# cpu has no cpuid
+#endif
 
 	movl	$0x0,%eax		# See if cpuid 1 is implemented
 	cpuid
@@ -130,10 +132,10 @@ verify_cpu_sse_test:
 	jmp	verify_cpu_sse_test	# try again
 
 verify_cpu_no_longmode:
-	popfl				# Restore caller passed flags
+	popf				# Restore caller passed flags
 	movl	$1,%eax
 	ret
 
 verify_cpu_sse_ok:
-	popfl				# Restore caller passed flags
+	popf				# Restore caller passed flags
 	xorl	%eax, %eax
 	ret