1. 18 Feb 2016, 1 commit
  2. 17 Feb 2016, 3 commits
    • x86/ftrace, x86/asm: Kill ftrace_caller_end label · f1b92bb6
      Committed by Borislav Petkov
      One of ftrace_caller_end and ftrace_return is redundant so unify them.
      Rename ftrace_return to ftrace_epilogue to mean that everything after
      that label represents, like an afterword, work which happens *after* the
      ftrace call, e.g., the function graph tracer for one.
      
      Steve wants this to rather mean "[a]n event which reflects meaningfully
      on a recently ended conflict or struggle." I can imagine that ftrace can
      be a struggle sometimes.
      
      Anyway, beef up the comment about the code contents and layout before
      the ftrace_epilogue label.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1455612202-14414-4-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f1b92bb6
    • x86/signal/64: Re-add support for SS in the 64-bit signal context · 6c25da5a
      Committed by Andy Lutomirski
      This is a second attempt to make the improvements from c6f20629
      ("x86/signal/64: Fix SS handling for signals delivered to 64-bit
      programs"), which was reverted by 51adbfbba5c6 ("x86/signal/64: Add
      support for SS in the 64-bit signal context").
      
      This adds two new uc_flags flags.  UC_SIGCONTEXT_SS will be set for
      all 64-bit signals (including x32).  It indicates that the saved SS
      field is valid and that the kernel supports the new behavior.
      
      The goal is to fix a problem with signal handling in 64-bit tasks:
      SS wasn't saved in the 64-bit signal context, making it awkward to
      determine what SS was at the time of signal delivery and making it
      impossible to return to a non-flat SS (as calling sigreturn clobbers
      SS).
      
      This also made it extremely difficult for 64-bit tasks to return to
      fully-defined 16-bit contexts, because only the kernel can easily do
      espfix64, but sigreturn was unable to set a non-flat SS:ESP.
      (DOSEMU has a monstrous hack to partially work around this
      limitation.)
      
      If we could go back in time, the correct fix would be to make 64-bit
      signals work just like 32-bit signals with respect to SS: save it
      in signal context, reset it when delivering a signal, and restore
      it in sigreturn.
      
      Unfortunately, doing that (as I tried originally) breaks DOSEMU:
      DOSEMU wouldn't reset the signal context's SS when clearing the LDT
      and changing the saved CS to 64-bit mode, since it predates the SS
      context field existing in the first place.
      
      This patch is a bit more complicated, and it tries to balance a
      bunch of goals.  It makes most cases of changing ucontext->ss during
      signal handling work as expected.
      
      I do this by special-casing the interesting case.  On sigreturn,
      ucontext->ss will be honored by default, unless the ucontext was
      created from scratch by an old program and had a 64-bit CS
      (unfortunately, CRIU can do this) or was the result of changing a
      32-bit signal context to 64-bit without resetting SS (as DOSEMU
      does).
      
      For the benefit of new 64-bit software that uses segmentation (new
      versions of DOSEMU might), the new behavior can be detected with a
      new ucontext flag UC_SIGCONTEXT_SS.
      
      To avoid compilation issues, __pad0 is left as an alias for ss in
      ucontext.
      
      The nitty-gritty details are documented in the header file.
      
      This patch also re-enables the sigreturn_64 and ldt_gdt_64 selftests,
      as the kernel change allows both of them to pass.
      Tested-by: Stas Sergeev <stsp@list.ru>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/749149cbfc3e75cd7fcdad69a854b399d792cc6f.1455664054.git.luto@kernel.org
      [ Small readability edit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6c25da5a
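      The new flag can be probed from user space. Below is a minimal, hedged C
      sketch (not taken from the patch or the selftests) showing how a signal
      handler might detect the new behavior; the fallback #define of
      UC_SIGCONTEXT_SS is an assumption based on the x86 uapi ucontext header.

        #include <signal.h>
        #include <stdio.h>
        #include <ucontext.h>

        #ifndef UC_SIGCONTEXT_SS
        # define UC_SIGCONTEXT_SS 0x2   /* assumed value, per arch/x86 uapi ucontext.h */
        #endif

        static void handler(int sig, siginfo_t *info, void *ctx)
        {
                const ucontext_t *uc = ctx;

                (void)sig;
                (void)info;
                /* demo only; printf is not async-signal-safe */
                if (uc->uc_flags & UC_SIGCONTEXT_SS)
                        printf("kernel saves SS in the 64-bit signal context\n");
                else
                        printf("old behavior: SS is not part of the signal context\n");
        }

        int main(void)
        {
                struct sigaction sa = { .sa_sigaction = handler, .sa_flags = SA_SIGINFO };

                sigaction(SIGUSR1, &sa, NULL);
                raise(SIGUSR1);
                return 0;
        }

      If the flag is clear, new software that cares about SS (future DOSEMU
      versions, for instance) knows it is running on a kernel without this
      change.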
    • x86/signal/64: Fix SS if needed when delivering a 64-bit signal · 8ff5bd2e
      Committed by Andy Lutomirski
      Signals are always delivered to 64-bit tasks with CS set to a long
      mode segment.  In long mode, SS doesn't matter as long as it's a
      present writable segment.
      
      If SS starts out invalid (this can happen if the signal was caused
      by an IRET fault or was delivered on the way out of set_thread_area
      or modify_ldt), then IRET to the signal handler can fail, eventually
      killing the task.
      
      The straightforward fix would be to simply reset SS when delivering
      a signal.  That breaks DOSEMU, though: 64-bit builds of DOSEMU rely
      on SS being set to the faulting SS when signals are delivered.
      
      As a compromise, this patch leaves SS alone so long as it's valid.
      
      The net effect should be that the behavior of successfully delivered
      signals is unchanged.  Some signals that would previously have
      failed to be delivered will now be delivered successfully.
      
      This has no effect for x32 or 32-bit tasks: their signal handlers
      were already called with SS == __USER_DS.
      
      (On Xen, there's a slight hole: if a task sets SS to a writable
       *kernel* data segment, then we will fail to identify it as invalid
       and we'll still kill the task.  If anyone cares, this could be fixed
       with a new paravirt hook.)
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/163c6e1eacde41388f3ff4d2fe6769be651d7b6e.1455664054.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8ff5bd2e
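      The compromise boils down to a simple policy: keep SS when it is usable
      in long mode, otherwise fall back to the flat user data selector. The
      following stand-alone C toy is a hedged, simplified model of that
      decision, not the kernel's code; the selector value and the descriptor
      checks are illustrative.

        #include <stdbool.h>
        #include <stdio.h>

        #define USER_DS 0x2b    /* flat 64-bit user data selector on Linux/x86-64 */

        /* Toy model of the segment-descriptor bits the decision cares about. */
        struct seg {
                bool present;
                bool writable;
                bool code;      /* a code segment cannot be loaded into SS */
                bool user;      /* DPL 3 */
        };

        static unsigned short ss_for_signal(unsigned short ss, const struct seg *d)
        {
                if (ss && d->present && d->user && d->writable && !d->code)
                        return ss;      /* usable: leave it alone (keeps DOSEMU happy) */
                return USER_DS;         /* invalid: reset so IRET to the handler works */
        }

        int main(void)
        {
                struct seg bogus = { 0 };
                struct seg ok = { .present = true, .writable = true, .user = true };

                printf("invalid SS -> %#x\n", ss_for_signal(0x07, &bogus));
                printf("valid SS   -> %#x\n", ss_for_signal(0x0f, &ok));
                return 0;
        }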
  3. 01 Feb 2016, 1 commit
  4. 30 Jan 2016, 4 commits
    • x86/alternatives: Discard dynamic check after init · 2476f2fa
      Committed by Brian Gerst
      Move the code to do the dynamic check to the altinstr_aux
      section so that it is discarded after alternatives have run and
      a static branch has been chosen.
      
      This way we're changing the dynamic branch from C code to
      assembly, which makes it *substantially* smaller while avoiding
      a completely unnecessary call to an out of line function.
      Signed-off-by: Brian Gerst <brgerst@gmail.com>
      [ Changed it to do TESTB, as hpa suggested. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kristen Carlson Accardi <kristen@linux.intel.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1452972124-7380-1-git-send-email-brgerst@gmail.com
      Link: http://lkml.kernel.org/r/20160127084525.GC30712@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2476f2fa
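      A rough user-space analogue of the idea (definitely not the kernel
      implementation, which patches the jump site via the alternatives
      machinery): the slow CPUID-based check is compiled into a separately
      named section, standing in for .altinstr_aux, while the caller only pays
      for a call to it. The section and function names are made up for the
      example.

        #include <cpuid.h>
        #include <stdio.h>

        /* Slow dynamic test, parked in its own section as .altinstr_aux would be.
         * AVX (CPUID leaf 1, ECX bit 28) is used purely as an example feature. */
        __attribute__((section(".text.aux_check"), noinline))
        static int dynamic_feature_check(void)
        {
                unsigned int eax, ebx, ecx, edx;

                if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                        return 0;
                return !!(ecx & (1u << 28));
        }

        int main(void)
        {
                /* In the kernel, alternatives later patch the branch so this
                 * out-of-line check is never reached again and its section can
                 * be discarded; user space has no such step, so we just call it. */
                printf("example feature %s\n",
                       dynamic_feature_check() ? "present" : "absent");
                return 0;
        }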
    • x86/alternatives: Add an auxiliary section · 337e4cc8
      Committed by Borislav Petkov
      Add .altinstr_aux for additional instructions which will be used
      before and/or during patching. All stuff which needs more
      sophisticated patching should go there. See next patch.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1453842730-28463-8-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      337e4cc8
    • x86/cpufeature: Replace the old static_cpu_has() with safe variant · bc696ca0
      Committed by Borislav Petkov
      So the old one didn't work properly before alternatives had run. And it
      was supposed to provide an optimized JMP because the assumption was that
      the offset being jumped to is within a signed byte and thus fits a
      two-byte JMP.
      
      So I did an x86_64 allyesconfig build and dumped all possible
      sites where static_cpu_has() was used. The optimization amounted
      to all in all 12(!) places where static_cpu_has() had generated
      a 2-byte JMP. Which has saved us a whopping 36 bytes!
      
      This clearly is not worth the trouble so we can remove it. The
      only place where the optimization might count - in __switch_to()
      - we will handle differently. But that's not subject of this
      patch.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1453842730-28463-6-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bc696ca0
    • x86/cpufeature: Carve out X86_FEATURE_* · cd4d09ec
      Committed by Borislav Petkov
      Move them to a separate header and have the following
      dependency:
      
        x86/cpufeatures.h <- x86/processor.h <- x86/cpufeature.h
      
      This makes it easier to use the header from asm code without having to
      include the whole of cpufeature.h and add guards for asm.
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1453842730-28463-5-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cd4d09ec
  5. 29 Jan 2016, 5 commits
    • x86/syscalls: Add syscall entry qualifiers · cfcbadb4
      Committed by Andy Lutomirski
      This will let us specify something like 'sys_xyz/foo' instead of
      'sys_xyz' in the syscall table, where the 'foo' qualifier conveys
      some extra information to the C code.
      
      The intent is to allow things like sys_execve/ptregs to indicate
      that sys_execve() touches pt_regs.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/2de06e33dce62556b3ec662006fcb295504e296e.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cfcbadb4
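      The mechanism is the usual X-macro trick: the table script emits the
      qualifier as an extra macro argument and each consumer pastes it onto a
      per-qualifier helper. Here is a self-contained, hedged C illustration;
      the macro names and the "generated" table are made up for the example,
      only the syscall numbers are real.

        #include <stdio.h>

        /* Pretend output of the table-generation script: one row per syscall,
         * with the qualifier ("" or "ptregs") as the last argument. */
        #define SYSCALL_TABLE(X)        \
                X(0, sys_read, )        \
                X(59, sys_execve, ptregs)

        /* Consumer: qualified entries are routed to a "ptregs_" wrapper name. */
        #define QUAL_(sym)              #sym
        #define QUAL_ptregs(sym)        "ptregs_" #sym
        #define PRINT_ENTRY(nr, sym, qual) \
                printf("%3d -> %s\n", nr, QUAL_##qual(sym));

        int main(void)
        {
                SYSCALL_TABLE(PRINT_ENTRY)
                return 0;
        }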
    • x86/syscalls: Move compat syscall entry handling into syscalltbl.sh · 3e65654e
      Committed by Andy Lutomirski
      Rather than duplicating the compat entry handling in all
      consumers of syscalls_BITS.h, handle it directly in
      syscalltbl.sh.  Now we generate entries in syscalls_32.h like:
      
      __SYSCALL_I386(5, sys_open)
      __SYSCALL_I386(5, compat_sys_open)
      
      and all of its consumers implicitly get the right entry point.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/b7c2b501dc0e6e43050e916b95807c3e2e16e9bb.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3e65654e
    • x86/syscalls: Remove __SYSCALL_COMMON and __SYSCALL_X32 · 32324ce1
      Committed by Andy Lutomirski
      The common/64/x32 distinction has no effect other than
      determining which kernels actually support the syscall.  Move
      the logic into syscalltbl.sh.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/58d4a95f40e43b894f93288b4a3633963d0ee22e.1454022279.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      32324ce1
    • perf/x86: De-obfuscate code · 8f04b853
      Committed by Peter Zijlstra
      Get rid of the 'onln' obfuscation.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8f04b853
    • perf/x86: Fix uninitialized value usage · e01d8718
      Committed by Peter Zijlstra
      When calling intel_alt_er() with .idx != EXTRA_REG_RSP_* we will not
      initialize alt_idx and then use this uninitialized value to index an
      array.
      
      While that is not fatal in itself, it can result in an infinite loop in
      its caller, __intel_shared_reg_get_constraints(), with IRQs disabled.

      Alternative error modes are random memory corruption due to the
      cpuc->shared_regs->regs[] array overrun, which manifests as either
      get_constraints or put_constraints doing weird stuff.
      
      Only took 6 hours of painful debugging to find this. Neither GCC nor
      Smatch warnings flagged this bug.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: ae3f011f ("perf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e01d8718
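      For readers unfamiliar with the bug class, here is a hedged, stand-alone
      C reduction (the function and constant names are made up, not the perf
      code): a helper only assigns its result on the cases the author had in
      mind, and callers then index an array with whatever garbage is left.

        #include <stdio.h>

        #define NR_REGS 4

        static int regs[NR_REGS];

        /* Buggy shape: 'alt' stays uninitialized for any idx other than 1 or 2. */
        static int pick_alternate_buggy(int idx)
        {
                int alt;

                if (idx == 1)
                        alt = 2;
                else if (idx == 2)
                        alt = 1;
                return alt;     /* indeterminate on the other paths */
        }

        /* Fixed shape: default to the index we were given. */
        static int pick_alternate_fixed(int idx)
        {
                int alt = idx;

                if (idx == 1)
                        alt = 2;
                else if (idx == 2)
                        alt = 1;
                return alt;
        }

        int main(void)
        {
                /* Only the fixed variant is exercised; calling the buggy one
                 * with idx == 3 would read an uninitialized value. */
                regs[pick_alternate_fixed(3)]++;
                printf("regs[3] = %d\n", regs[3]);
                (void)pick_alternate_buggy;
                return 0;
        }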
  6. 27 Jan 2016, 1 commit
  7. 22 Jan 2016, 1 commit
  8. 21 Jan 2016, 1 commit
  9. 19 Jan 2016, 3 commits
  10. 16 Jan 2016, 1 commit
  11. 15 Jan 2016, 15 commits
  12. 14 Jan 2016, 1 commit
  13. 12 Jan 2016, 3 commits
    • x86/reboot/quirks: Add iMac10,1 to pci_reboot_dmi_table[] · 2f0c0b2d
      Committed by Mario Kleiner
      Without the reboot=pci method, the iMac 10,1 simply
      hangs after printing "Restarting system" at the point
      when it should reboot. This fixes it.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1450466646-26663-1-git-send-email-mario.kleiner.de@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2f0c0b2d
    • x86/boot: Hide local labels in verify_cpu() · aa042141
      Committed by Borislav Petkov
      ... from the final ELF image's symbol table as they're not
      really needed there.
      
      Before:
      
      $ readelf -a vmlinux | grep verify_cpu
          43: ffffffff810001a9     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu
          45: ffffffff8100028f     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_no_longmode
          46: ffffffff810001de     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_noamd
          47: ffffffff8100022b     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_check
          48: ffffffff8100021c     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_clear_xd
          49: ffffffff81000263     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_sse_test
          50: ffffffff81000296     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu_sse_ok
      
      After:
      
      $ readelf -a vmlinux | grep verify_cpu
          43: ffffffff810001a9     0 NOTYPE  LOCAL  DEFAULT    1 verify_cpu
      
      No functionality change.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1451860733-21163-1-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      aa042141
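      The trick is simply the assembler's local-label convention: labels that
      start with .L are never emitted into the object's symbol table. A small
      hedged C file with inline asm (unrelated to verify_cpu itself) shows the
      difference; compile with 'gcc -c' and compare the 'readelf -s' output
      for the two labels.

        /* One label of each kind inside an otherwise useless function. */
        void label_demo(void)
        {
                __asm__ volatile(
                        "visible_label:\n\t"    /* appears as a LOCAL NOTYPE symbol */
                        ".Lhidden_label:\n\t"   /* assembler-local: no symbol emitted */
                        "nop\n\t"
                        ::: "memory");
        }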
    • x86/fpu: Disable AVX when eagerfpu is off · 394db20c
      Committed by Yu-cheng Yu
      When "eagerfpu=off" is given as a command-line input, the kernel
      should disable AVX support.
      
      The Task Switched bit used for lazy context switching does not
      support AVX. If AVX is enabled without eagerfpu context
      switching, one task's AVX state could become corrupted or leak
      to other tasks. This is a bug and has bad security implications.
      
      This only affects systems that have AVX/AVX2/AVX512 and this
      issue will be found only when one actually uses AVX/AVX2/AVX512
      _AND_ does eagerfpu=off.
      
      Reference: Intel Software Developer's Manual Vol. 3A
      
      Sec. 2.5 Control Registers:
      TS Task Switched bit (bit 3 of CR0) -- Allows the saving of the
      x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4 context on a task switch
      to be delayed until an x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4
      instruction is actually executed by the new task.
      
      Sec. 13.4.1 Using the TS Flag to Control the Saving of the X87
      FPU and SSE State
      When the TS flag is set, the processor monitors the instruction
      stream for x87 FPU, MMX, and SSE instructions. When the processor
      detects one of these instructions, it raises a
      device-not-available exception (#NM) prior to executing the
      instruction.
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-5-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      394db20c
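      In effect the policy is: if lazy (TS-based) switching is requested, stop
      advertising the AVX family so no task can ever own AVX state that the
      lazy path cannot track. A hedged, stand-alone C toy of that policy
      follows; the bit positions are illustrative flags, not CPUID or XSAVE
      bits, and the function is not the kernel's.

        #include <stdio.h>

        enum {
                FEAT_SSE    = 1 << 0,
                FEAT_AVX    = 1 << 1,
                FEAT_AVX2   = 1 << 2,
                FEAT_AVX512 = 1 << 3,
        };

        /* Drop features the lazy context-switch path cannot handle. */
        static unsigned int clamp_features(unsigned int feats, int eagerfpu_off)
        {
                if (eagerfpu_off)
                        feats &= ~(FEAT_AVX | FEAT_AVX2 | FEAT_AVX512);
                return feats;
        }

        int main(void)
        {
                unsigned int all = FEAT_SSE | FEAT_AVX | FEAT_AVX2 | FEAT_AVX512;

                printf("eagerfpu=on:  %#x\n", clamp_features(all, 0));
                printf("eagerfpu=off: %#x\n", clamp_features(all, 1));
                return 0;
        }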