1. 13 Apr 2016, 1 commit
  2. 13 Mar 2016, 1 commit
    • F
      x86/cpufeature: Enable new AVX-512 features · d0500494
      Authored by Fenghua Yu
      A few new AVX-512 instruction groups/features are added in cpufeatures.h
      for enumeration: AVX512DQ, AVX512BW, and AVX512VL.
      
      Clear the flags in fpu__xstate_clear_all_cpu_caps().
      
      The specification for the latest AVX-512 extensions, including these features, can be found at:
      
        https://software.intel.com/sites/default/files/managed/07/b7/319433-023.pdf
      
      Note, I didn't enable the flags in KVM. Hopefully the KVM guys can pick up
      the flags and enable them in KVM.
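      
      For reference, a sketch of how such flags typically look in cpufeatures.h
      (word 9 is the CPUID(7):EBX feature word; the bit positions are from the
      Intel documentation, and the comments are illustrative, not quoted from
      the patch):
      
      	#define X86_FEATURE_AVX512DQ	( 9*32+17) /* AVX-512 Doubleword/Quadword insns */
      	#define X86_FEATURE_AVX512BW	( 9*32+30) /* AVX-512 Byte/Word insns */
      	#define X86_FEATURE_AVX512VL	( 9*32+31) /* AVX-512 128/256-bit Vector Length */
      
      Once defined this way, the flags are testable with boot_cpu_has(), e.g.
      boot_cpu_has(X86_FEATURE_AVX512BW).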
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Link: http://lkml.kernel.org/r/1457667498-37357-1-git-send-email-fenghua.yu@intel.com
      [ Added more detailed feature descriptions. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d0500494
  3. 12 Mar 2016, 1 commit
    • B
      x86/fpu: Fix eager-FPU handling on legacy FPU machines · 6e686709
      Authored by Borislav Petkov
      i486-derived cores like Intel Quark support only the very old,
      legacy x87 FPU (FSAVE/FRSTOR, CPUID bit FXSR is not set), and
      our FPU code wasn't handling the saving and restoring there
      properly in the 'eagerfpu' case.
      
      So after we made eagerfpu the default for all CPU types:
      
        58122bf1 x86/fpu: Default eagerfpu=on on all CPUs
      
      these old FPU designs broke. First, Andy Shevchenko reported a splat:
      
        WARNING: CPU: 0 PID: 823 at arch/x86/include/asm/fpu/internal.h:163 fpu__clear+0x8c/0x160
      
      which was us trying to execute FXRSTOR on those machines even though
      they don't support it.
      
      After taking care of that, Bryan O'Donoghue reported that a simple FPU
      test still failed because we weren't initializing the FPU state properly
      on those machines.
      
      Take care of all that.
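      
      A minimal sketch of the distinction involved, loosely modeled on the
      kernel's FPU restore helpers of that era (the names and layout here are
      illustrative, not the literal patch):
      
      	/* Pick the restore instruction the CPU actually supports: on
      	 * i486-class parts without the CPUID FXSR bit we must use the
      	 * legacy FRSTOR path instead of FXRSTOR. */
      	static inline void restore_fpregs(union fpregs_state *fpstate)
      	{
      		if (boot_cpu_has(X86_FEATURE_FXSR))
      			asm volatile("fxrstor %0" : : "m" (fpstate->fxsave));
      		else
      			asm volatile("frstor %0" : : "m" (fpstate->fsave));
      	}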
      Reported-and-tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Reported-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yu-cheng <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/20160311113206.GD4312@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6e686709
  4. 10 Mar 2016, 1 commit
    • Y
      x86/fpu: Revert ("x86/fpu: Disable AVX when eagerfpu is off") · a65050c6
      Authored by Yu-cheng Yu
      Leonid Shatz noticed that the SDM interpretation of the following
      recent commit:
      
        394db20c ("x86/fpu: Disable AVX when eagerfpu is off")
      
      ... is incorrect and that the original behavior of the FPU code was correct.
      
      Because AVX is not mentioned in the CR0.TS bit description, it was
      mistakenly believed not to be supported by lazy context switching. This
      turns out to be false:
      
        Intel Software Developer's Manual Vol. 3A, Sec. 2.5 Control Registers:
      
         'TS Task Switched bit (bit 3 of CR0) -- Allows the saving of the x87 FPU/
          MMX/SSE/SSE2/SSE3/SSSE3/SSE4 context on a task switch to be delayed until
          an x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4 instruction is actually executed
          by the new task.'
      
        Intel Software Developer's Manual Vol. 2A, Sec. 2.4 Instruction Exception
        Specification:
      
         'AVX instructions refer to exceptions by classes that include #NM
          "Device Not Available" exception for lazy context switch.'
      
      So revert the commit.
      Reported-by: Leonid Shatz <leonid.shatz@ravellosystems.com>
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1457569734-3785-1-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a65050c6
  5. 09 Mar 2016, 1 commit
    • A
      x86/fpu: Fix 'no387' regression · f363938c
      Authored by Andy Lutomirski
      After fixing FPU option parsing, we now parse the 'no387' boot option
      too early: no387 clears X86_FEATURE_FPU before it's even probed, so
      the boot CPU promptly re-enables it.
      
      I suspect it gets even more confused on SMP.
      
      Fix the probing code to leave X86_FEATURE_FPU off if it's been
      disabled by setup_clear_cpu_cap().
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Fixes: 4f81cbaf ("x86/fpu: Fix early FPU command-line parsing")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f363938c
  6. 24 Feb 2016, 1 commit
  7. 19 Feb 2016, 3 commits
    • D
      mm/core, x86/mm/pkeys: Add execute-only protection keys support · 62b5f7d0
      Authored by Dave Hansen
      Protection keys provide new page-based protection in hardware.
      But, they have an interesting attribute: they only affect data
      accesses and never affect instruction fetches.  That means that
      if we set up some memory which is set as "access-disabled" via
      protection keys, we can still execute from it.
      
      This patch uses protection keys to set up mappings to do just that.
      If a user calls:
      
      	mmap(..., PROT_EXEC);
      or
      	mprotect(ptr, sz, PROT_EXEC);
      
      (note PROT_EXEC-only without PROT_READ/WRITE), the kernel will
      notice this, and set a special protection key on the memory.  It
      also sets the appropriate bits in the Protection Keys User Rights
      (PKRU) register so that the memory becomes unreadable and
      unwritable.
      
      I haven't found any userspace that does this today.  With this
      facility in place, we expect userspace to move to use it
      eventually.  Userspace _could_ start doing this today.  Any
      PROT_EXEC calls get converted to PROT_READ inside the kernel, and
      would transparently be upgraded to "true" PROT_EXEC with this
      code.  IOW, userspace never has to do any PROT_EXEC runtime
      detection.
      
      This feature provides enhanced protection against leaking
      executable memory contents.  This helps thwart attacks which are
      attempting to find ROP gadgets on the fly.
      
      But, the security provided by this approach is not comprehensive.
      The PKRU register which controls access permissions is a normal
      user register writable from unprivileged userspace.  An attacker
      who can execute the 'wrpkru' instruction can easily disable the
      protection provided by this feature.
      
      The protection key that is used for execute-only support is
      permanently dedicated at compile time.  This is fine for now
      because there is currently no API to set a protection key other
      than this one.
      
      Despite there being a constant PKRU value across the entire
      system, we do not set it unless this feature is in use in a
      process.  That is to preserve the PKRU XSAVE 'init state',
      which can lead to faster context switches.
      
      PKRU *is* a user register and the kernel is modifying it.  That
      means that code doing:
      
      	pkru = rdpkru()
      	pkru |= 0x100;
      	mmap(..., PROT_EXEC);
      	wrpkru(pkru);
      
      could lose the bits in PKRU that enforce execute-only
      permissions.  To avoid this, we suggest avoiding ever calling
      mmap() or mprotect() when the PKRU value is expected to be
      unstable.
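      
      A minimal userspace sketch of the triggering pattern (plain POSIX calls;
      nothing pkey-specific is required from the application):
      
      	#include <string.h>
      	#include <sys/mman.h>
      
      	/* Map a page the program wants to execute from but never read or
      	 * write directly. On pkeys-capable hardware the kernel can now back
      	 * the PROT_EXEC-only mapping with an execute-only protection key. */
      	static void *make_exec_only(const void *code, size_t len)
      	{
      		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
      			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      		if (p == MAP_FAILED)
      			return NULL;
      		memcpy(p, code, len);			/* fill while still writable */
      		if (mprotect(p, len, PROT_EXEC))	/* now PROT_EXEC only */
      			return NULL;
      		return p;
      	}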
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chen Gang <gang.chen.5i5j@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Piotr Kwapulinski <kwapulinski.piotr@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Vladimir Murzin <vladimir.murzin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: keescook@google.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210240.CB4BB5CA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62b5f7d0
    • D
      x86/mm/pkeys: Allow kernel to modify user pkey rights register · 84594296
      Authored by Dave Hansen
      The Protection Key Rights for User memory (PKRU) is a 32-bit
      user-accessible register.  It contains two bits for each
      protection key: one to write-disable (WD) access to memory
      covered by the key and another to access-disable (AD).
      
      Userspace can read/write the register with the RDPKRU and WRPKRU
      instructions.  But, the register is saved and restored with the
      XSAVE family of instructions, which means we have to treat it
      like a floating point register.
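      
      For illustration, the two instructions boil down to the following (the
      .byte sequences are the documented RDPKRU/WRPKRU opcodes; this is a
      sketch, not the patch itself):
      
      	static inline unsigned int rdpkru(void)
      	{
      		unsigned int eax, edx;
      
      		/* RDPKRU: ECX must be 0; PKRU is returned in EAX */
      		asm volatile(".byte 0x0f,0x01,0xee"
      			     : "=a" (eax), "=d" (edx) : "c" (0));
      		return eax;
      	}
      
      	static inline void wrpkru(unsigned int pkru)
      	{
      		/* WRPKRU: writes EAX into PKRU; ECX and EDX must be 0 */
      		asm volatile(".byte 0x0f,0x01,0xef"
      			     : : "a" (pkru), "c" (0), "d" (0));
      	}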
      
      The kernel needs to write to the register if it wants to
      implement execute-only memory or if it implements a system call
      to change PKRU.
      
      To do this, we need to create a 'pkru_state' buffer, read the old
      contents in to it, modify it, and then tell the FPU code that
      there is modified data in there so it can (possibly) move the
      buffer back in to the registers.
      
      This uses the fpu__xfeature_set_state() function that we defined
      in the previous patch.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210236.0BE13217@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      84594296
    • D
      x86/fpu: Allow setting of XSAVE state · b8b9b6ba
      Authored by Dave Hansen
      We want to modify the Protection Key rights inside the kernel, so
      we need to change PKRU's contents.  But, if we do a plain
      'wrpkru', when we return to userspace we might do an XRSTOR and
      wipe out the kernel's 'wrpkru'.  So, we need to go after PKRU in
      the xsave buffer.
      
      We do this by:
      
        1. Ensuring that we have the XSAVE registers (fpregs) in the
           kernel FPU buffer (fpstate)
        2. Looking up the location of a given state in the buffer
        3. Filling in the state
        4. Ensuring that the hardware knows that state is present there
           (basically that the 'init optimization' is not in place).
        5. Copying the newly-modified state back to the registers if
           necessary.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210235.5A3139BF@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b8b9b6ba
  8. 16 Feb 2016, 2 commits
    • D
      x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures · c8df4009
      Authored by Dave Hansen
      The protection keys register (PKRU) is saved and restored using
      xsave.  Define the data structure that we will use to access it
      inside the xsave buffer.
      
      Note that we also have to widen the printk of the xsave feature
      masks since this is feature 0x200 and we only did two characters
      before.
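      
      The component itself is tiny; a sketch of the structure used to address
      it inside the xsave buffer (assuming the layout described here: the
      32-bit PKRU value padded out to 8 bytes):
      
      	/* PKRU is state component 9 (feature mask 0x200). */
      	struct pkru_state {
      		u32	pkru;
      		u32	pad;
      	} __packed;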
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210204.56DF8F7B@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c8df4009
    • D
      x86/fpu: Add placeholder for 'Processor Trace' XSAVE state · 1f96b1ef
      Authored by Dave Hansen
      There is an XSAVE state component for Intel Processor Trace (PT).
      But, we do not currently use it.
      
      We add a placeholder in the code for it so it is not a mystery and
      also so we do not need an explicit enum initialization for Protection
      Keys in a moment.
      
      Why don't we use it?
      
      We might end up using this at _some_ point in the future.  But,
      this is a "system" state which requires using the currently
      unsupported XSAVES feature.  Unlike all the other XSAVE states,
      PT state is also not directly tied to a thread.  You might
      context-switch between threads, but not want to change any of the
      PT state.  Or, you might switch between threads, and *do* want to
      change PT state, all depending on what is being traced.
      
      We currently just manually set some MSRs to do this PT context
      switching, and it is unclear whether replacing our direct MSR use
      with XSAVE will be a net win or loss, both in code complexity and
      performance.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: fenghua.yu@intel.com
      Cc: linux-mm@kvack.org
      Cc: yu-cheng.yu@intel.com
      Link: http://lkml.kernel.org/r/20160212210158.5E4BCAE2@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f96b1ef
  9. 09 Feb 2016, 4 commits
    • A
      x86/fpu: Default eagerfpu=on on all CPUs · 58122bf1
      Authored by Andy Lutomirski
      We have eager and lazy FPU modes, introduced in:
      
        304bceda ("x86, fpu: use non-lazy fpu restore for processors supporting xsave")
      
      The result is rather messy.  There are two code paths in almost all
      of the FPU code, and only one of them (the eager case) is tested
      frequently, since most kernel developers have new enough hardware
      that we use eagerfpu.
      
      It seems that, on any remotely recent hardware, eagerfpu is a win:
      glibc uses SSE2, so laziness is probably overoptimistic, and, in any
      case, manipulating TS is far slower than saving and restoring the
      full state.  (Stores to CR0.TS are serializing and are poorly
      optimized.)
      
      To try to shake out any latent issues on old hardware, this changes
      the default to eager on all CPUs.  If no performance or functionality
      problems show up, a subsequent patch could remove lazy mode entirely.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/ac290de61bf08d9cfc2664a4f5080257ffc1075a.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      58122bf1
    • A
      x86/fpu: Fold fpu_copy() into fpu__copy() · a20d7297
      Authored by Andy Lutomirski
      Splitting it into two functions needlessly obfuscated the code.
      While we're at it, improve the comment slightly.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/3eb5a63a9c5c84077b2677a7dfe684eef96fe59e.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a20d7297
    • A
      x86/fpu: Fix FNSAVE usage in eagerfpu mode · 5ed73f40
      Authored by Andy Lutomirski
      In eager fpu mode, having deactivated FPU without immediately
      reloading some other context is illegal.  Therefore, to recover from
      FNSAVE, we can't just deactivate the state -- we need to reload it
      if we're not actively context switching.
      
      We had this wrong in fpu__save() and fpu__copy().  Fix both.
      __kernel_fpu_begin() was fine -- add a comment.
      
      This fixes a warning triggerable with nofxsr eagerfpu=on.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/60662444e13c76f06e23c15c5dcdba31b4ac3d67.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5ed73f40
    • A
      x86/fpu: Fix math emulation in eager fpu mode · 4ecd16ec
      Authored by Andy Lutomirski
      Systems without an FPU are generally old and therefore use lazy FPU
      switching. Unsurprisingly, math emulation in eager FPU mode is a
      bit buggy. Fix it.
      
      There were two bugs involving kernel code trying to use the FPU
      registers in eager mode even if they didn't exist and one BUG_ON()
      that was incorrect.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/b4b8d112436bd6fab866e1b4011131507e8d7fbe.1453675014.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4ecd16ec
  10. 12 Jan 2016, 4 commits
    • Y
      x86/fpu: Disable AVX when eagerfpu is off · 394db20c
      Authored by yu-cheng yu
      When "eagerfpu=off" is given as a command-line input, the kernel
      should disable AVX support.
      
      The Task Switched bit used for lazy context switching does not
      support AVX. If AVX is enabled without eagerfpu context
      switching, one task's AVX state could become corrupted or leak
      to other tasks. This is a bug and has bad security implications.
      
      This only affects systems that have AVX/AVX2/AVX512 and this
      issue will be found only when one actually uses AVX/AVX2/AVX512
      _AND_ does eagerfpu=off.
      
      Reference: Intel Software Developer's Manual Vol. 3A
      
      Sec. 2.5 Control Registers:
      TS Task Switched bit (bit 3 of CR0) -- Allows the saving of the
      x87 FPU/ MMX/SSE/SSE2/SSE3/SSSE3/SSE4 context on a task switch
      to be delayed until an x87 FPU/MMX/SSE/SSE2/SSE3/SSSE3/SSE4
      instruction is actually executed by the new task.
      
      Sec. 13.4.1 Using the TS Flag to Control the Saving of the X87
      FPU and SSE State
      When the TS flag is set, the processor monitors the instruction
      stream for x87 FPU, MMX, SSE instructions. When the processor
      detects one of these instructions, it raises a
      device-not-available exception (#NM) prior to executing the
      instruction.
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-5-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      394db20c
    • Y
      x86/fpu: Disable MPX when eagerfpu is off · a5fe93a5
      Authored by yu-cheng yu
      This issue is a fallout from the command-line parsing move.
      
      When "eagerfpu=off" is given as a command-line input, the kernel
      should disable MPX support. The decision for turning off MPX was
      made in fpu__init_system_ctx_switch(), which is after the
      selection of the XSAVE format. This patch fixes it by getting
      that decision done earlier in fpu__init_system_xstate().
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-4-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a5fe93a5
    • Y
      x86/fpu: Disable XGETBV1 when no XSAVE · eb7c5f87
      Authored by yu-cheng yu
      When "noxsave" is given as a command-line input, the kernel
      should disable XGETBV1. This issue currently does not cause any
      actual problems. XGETBV1 is only useful if we have something
      using the 'init optimization' (i.e. xsaveopt, xsaves). We
      already clear both of those in fpu__xstate_clear_all_cpu_caps().
      But this is good for completeness.
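      
      For context, XGETBV1 is simply XGETBV executed with ECX=1, which returns
      the XINUSE bitmap (which state components are currently in a non-init
      state). A sketch, using the raw opcode since older assemblers may not
      know the mnemonic:
      
      	static inline unsigned long long xgetbv(unsigned int index)
      	{
      		unsigned int eax, edx;
      
      		/* XGETBV: ECX selects the register (0 = XCR0, 1 = XINUSE) */
      		asm volatile(".byte 0x0f,0x01,0xd0"
      			     : "=a" (eax), "=d" (edx) : "c" (index));
      		return eax | ((unsigned long long)edx << 32);
      	}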
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-3-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      eb7c5f87
    • Y
      x86/fpu: Fix early FPU command-line parsing · 4f81cbaf
      Authored by yu-cheng yu
      The function fpu__init_system() is executed before
      parse_early_param(), which results in an incorrect FPU configuration.
      Fix this by parsing boot_command_line at the beginning of
      fpu__init_system().
      
      With all four patches in this series, each parameter disables
      features as the following:
      
      eagerfpu=off: eagerfpu, avx, avx2, avx512, mpx
      no387: fpu
      nofxsr: fxsr, fxsropt, xmm
      noxsave: xsave, xsaveopt, xsaves, xsavec, avx, avx2, avx512, mpx, xgetbv1
      noxsaveopt: xsaveopt
      noxsaves: xsaves
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-2-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4f81cbaf
  11. 06 Jan 2016, 1 commit
  12. 19 Dec 2015, 1 commit
  13. 27 Nov 2015, 1 commit
  14. 12 Nov 2015, 2 commits
    • H
      x86/fpu: Fix get_xsave_addr() behavior under virtualization · a05917b6
      Authored by Huaitong Han
      KVM uses the get_xsave_addr() function in a different fashion from
      the native kernel, in that the 'xsave' parameter belongs to the guest
      vcpu, not the currently running task.
      
      But 'xsave' was being replaced with the current (host) task's xsave
      structure, so get_xsave_addr() would incorrectly return a bogus xsave
      address to KVM.
      
      Fix it so that the passed in 'xsave' address is used - as intended
      originally.
      Signed-off-by: Huaitong Han <huaitong.han@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave.hansen@intel.com
      Link: http://lkml.kernel.org/r/1446800423-21622-1-git-send-email-huaitong.han@intel.com
      [ Tidied up the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a05917b6
    • D
      x86/fpu: Fix 32-bit signal frame handling · ab6b5294
      Authored by Dave Hansen
      (This should have gone to LKML originally. Sorry for the extra
       noise, folks on the cc.)
      
      Background:
      
      Signal frames on x86 have two formats:
      
        1. For 32-bit executables (whether on a real 32-bit kernel or
           under 32-bit emulation on a 64-bit kernel) we have a
          'fpregset_t' that includes the "FSAVE" registers.
      
        2. For 64-bit executables (on 64-bit kernels obviously), the
           'fpregset_t' is smaller and does not contain the "FSAVE"
           state.
      
      When creating the signal frame, we have to be aware of whether
      we are running a 32 or 64-bit executable so we create the
      correct format signal frame.
      
      Problem:
      
      save_xstate_epilog() uses 'fx_sw_reserved_ia32' whenever it is
      called for a 32-bit executable.  This is for real 32-bit and
      ia32 emulation.
      
      But, fpu__init_prepare_fx_sw_frame() only initializes
      'fx_sw_reserved_ia32' when emulation is enabled, *NOT* for real
      32-bit kernels.
      
      This leads to really weird situations where 32-bit programs
      lose their extended state when returning from a signal handler.
      The kernel copies the uninitialized (zero) 'fx_sw_reserved_ia32'
      out to userspace in save_xstate_epilog().  But when returning
      from the signal, the kernel errors out in check_for_xstate()
      when it does not see FP_XSTATE_MAGIC1 present (because it was
      zeroed).  This leads to the FPU/XSAVE state being initialized.
      
      For MPX, this leads to the most permissive state and means we
      silently lose bounds violations.  I think this would also mean
      that we could lose *ANY* FPU/SSE/AVX state.  I'm not sure why
      no one has spotted this bug.
      
      I believe this was broken by:
      
      	72a671ce ("x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels")
      
      way back in 2012.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave@sr71.net
      Cc: fenghua.yu@intel.com
      Cc: yu-cheng.yu@intel.com
      Link: http://lkml.kernel.org/r/20151111002354.A0799571@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ab6b5294
  15. 14 Sep 2015, 11 commits
    • D
      x86/fpu: Check CPU-provided sizes against struct declarations · ef78f2a4
      Authored by Dave Hansen
      We now have C structures defined for each of the XSAVE state
      components that we support.  This patch adds checks during our
      verification pass to ensure that the CPU-provided data
      enumerated in CPUID leaves matches our C structures.
      
      If not, we warn and dump all the XSAVE CPUID leaves.
      
      Note: this *actually* found an inconsistency with the MPX
      'bndcsr' state.  The hardware pads it out differently from
      our C structures.  This patch caught it and warned.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233131.A8DB36DA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ef78f2a4
    • D
      x86/fpu: Check to ensure increasing-offset xstate offsets · e6e888f9
      Authored by Dave Hansen
      The xstate CPUID leaves enumerate where each state component is
      inside the XSAVE buffer, along with the size of the entire
      buffer.  Our new XSAVE sanity-checking code extrapolates an
      expected _total_ buffer size by looking at the last component
      that it encounters.
      
      That method requires that the highest-numbered component also
      be the one with the highest offset.  This is a pretty safe
      assumption, but let's add some code to ensure it stays true.
      
      To make this check work correctly, we also need to ensure we
      only consider the offsets from enabled features because the
      offset register (ebx) will return 0 on unsupported features.
      
      This also means that we will preserve the -1's that we
      initialized xstate_offsets/sizes[] with.  That will help
      find bugs.
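      
      A sketch of the kind of check being added (CPUID leaf 0xD, sub-leaf N,
      reports a component's size in EAX and its offset in EBX for enabled user
      states; the function and variable names here are illustrative):
      
      	#define XSTATE_CPUID	0x0000000d
      
      	/* Warn if an enabled extended component does not sit at a strictly
      	 * higher offset than the previous one. */
      	static void check_xstate_offsets(u64 xfeatures_mask, int nr_xfeatures)
      	{
      		unsigned int eax, ebx, ecx, edx;
      		int i, last_offset = -1;
      
      		for (i = 2; i < nr_xfeatures; i++) {	/* extended states start at 2 */
      			if (!(xfeatures_mask & (1ULL << i)))
      				continue;		/* EBX reads 0 for disabled features */
      			cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx);
      			WARN_ONCE((int)ebx <= last_offset,
      				  "xstate component %d offset %u is not increasing\n",
      				  i, ebx);
      			last_offset = ebx;
      		}
      	}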
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233130.0843AB15@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e6e888f9
    • D
      x86/fpu: Correct and check XSAVE xstate size calculations · 65ac2e9b
      Authored by Dave Hansen
      Note: our xsaves support is currently broken and disabled.  This
      patch does not fix it, but it is an incremental improvement.
      
      This might be useful to someone backporting the entire set of
      XSAVES patches at some point, but it should not be backported
      alone.
      
      Ingo said he wanted something like this (bullets 2 and 3):
      
        http://lkml.kernel.org/r/20150808091508.GB32641@gmail.com
      
      There are currently two xsave buffer formats: standard and
      compacted.  The standard format is what 'XSAVE' and 'XSAVEOPT'
      produce, while 'XSAVES' and 'XSAVEC' produce a compacted-format
      buffer.  (The kernel never uses XSAVEC.)
      
      But, the XSAVES buffer *ALSO* contains "system state components"
      which are never saved by a plain XSAVE.  So, XSAVES has two
      things that might make its buffer differently-sized from an
      XSAVE-produced one.
      
      The current code assumes that an XSAVES buffer's size is simply
      the sum of the sizes of the (user) states which are supported.
      This seems to work in most cases, but it is not consistent with
      what the SDM says, and it breaks if we 'align' a component in
      the buffer.  The calculation is also unnecessary work since the
      CPU *tells* us the size of the buffer directly.
      
      This patch just reads the size of the buffer right out of the
      CPUID leaf instead of trying to derive it.
      
      But, blindly trusting the CPU like this is dangerous.  We add
      a verification pass in do_extra_xstate_size_checks() to ensure
      that the size we calculate matches with what we see from the
      hardware.  When it comes down to it, we trust but verify the
      CPU.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233130.234FE1EC@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      65ac2e9b
    • D
      x86/fpu: Add xfeature_enabled() helper instead of test_bit() · 633d54c4
      Authored by Dave Hansen
      We currently use test_bit() in a few places to see if an
      xfeature is enabled.  It ends up being a bit ugly because
      'xfeatures_mask' is a u64 and test_bit wants an 'unsigned long'
      so it requires a cast.  The *_bit() functions are also
      technically atomic, which we have no need for here.
      
      So, remove the test_bit()s and replace with the new
      xfeature_enabled() helper.
      
      This also provides a central place to add a comment about the
      future need to support 'system xstates'.
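      
      The helper is essentially a one-liner; roughly:
      
      	/* 'xfeatures_mask' is a u64, so open-code the bit test instead of
      	 * using the (atomic, unsigned-long based) *_bit() helpers. */
      	static int xfeature_enabled(enum xfeature xfeature)
      	{
      		return !!(xfeatures_mask & (1ULL << xfeature));
      	}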
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233129.B1534F86@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      633d54c4
    • D
      x86/fpu: Remove 'xfeature_nr' · ee9ae257
      Authored by Dave Hansen
      xfeature_nr ended up being initialized too late for me to
      use it in the "xsave size sanity check" patch which is
      later in the series.  I tried to move around its initialization
      but realized that it was just as easy to get rid of it.
      
      We only have 9 XFEATURES.  Instead of dynamically calculating
      and storing the last feature, just use the compile-time max:
      XFEATURES_NR_MAX.  Note that even with 'xfeatures_nr' we could
      have had "holes" in the xfeatures_mask that we had to deal with.
      
      We also change a 'leaf' variable to be a plain 'i'.  Although
      it is used to grab a cpuid leaf in this one loop, all of the
      other loops just use an 'i' and I find it much more obvious
      to keep the naming consistent across all the similar loops.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233128.3F30DF5A@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ee9ae257
    • D
      x86/fpu: Rework XSTATE_* macros to remove magic '2' · 8a93c9e0
      Authored by Dave Hansen
      The 'xstate.c' code has a bunch of references to '2'.  This
      is because we have a lot more work to do for the "extended"
      xstates than the "legacy" ones and state component 2 is the
      first "extended" state.
      
      This patch replaces all of the instances of '2' with
      FIRST_EXTENDED_XFEATURE, which clearly explains what is
      going on.
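      
      In other words, something along the lines of:
      
      	/* Components 0 (x87) and 1 (SSE) live in the legacy area of the
      	 * XSAVE buffer; everything from YMM (component 2) upwards is an
      	 * "extended" state with its own CPUID-enumerated offset and size. */
      	#define FIRST_EXTENDED_XFEATURE	XFEATURE_YMM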
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233128.A8C0BF51@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8a93c9e0
    • D
      x86/fpu: Rename XFEATURES_NR_MAX · dad8c4fe
      Authored by Dave Hansen
      This is a logical follow-on to the last patch.  It makes the
      XFEATURE_MAX naming consistent with the other enum values.
      This is what Ingo suggested.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233127.A541448F@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dad8c4fe
    • D
      x86/fpu: Rename XSAVE macros · d91cab78
      Authored by Dave Hansen
      There are two concepts that have some confusing naming:
       1. Extended State Component numbers (currently called
          XFEATURE_BIT_*)
       2. Extended State Component masks (currently called XSTATE_*)
      
      The numbers are (currently) from 0-9.  State component 3 is the
      bounds registers for MPX, for instance.
      
      But when we want to enable "state component 3", we go set a bit
      in XCR0.  The bit we set is 1<<3.  We can check to see if a
      state component feature is enabled by looking at its bit.
      
      The current 'xfeature_bit's are at best xfeature bit _numbers_.
      Calling them bits is at best inconsistent with ending the enum
      list with 'XFEATURES_NR_MAX'.
      
      This patch renames the enum to be 'xfeature'.  These also
      happen to be what the Intel documentation calls a "state
      component".
      
      We also want to differentiate these from the "XSTATE_*" macros.
      The "XSTATE_*" macros are a mask, and we rename them to match.
      
      These macros are reasonably widely used so this patch is a
      wee bit big, but this really is just a rename.
      
      The only non-mechanical part of this is the
      
      	s/XSTATE_EXTEND_MASK/XFEATURE_MASK_EXTEND/
      
      We need a better name for it, but that's another patch.
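      
      After the rename the two concepts look roughly like this (abridged; the
      component numbers follow the SDM):
      
      	enum xfeature {
      		XFEATURE_FP,		/* 0: x87 floating point */
      		XFEATURE_SSE,		/* 1: SSE registers */
      		XFEATURE_YMM,		/* 2: AVX upper halves */
      		XFEATURE_BNDREGS,	/* 3: MPX bounds registers */
      		XFEATURE_BNDCSR,	/* 4: MPX bounds config/status */
      		/* ... */
      		XFEATURE_MAX,
      	};
      
      	#define XFEATURE_MASK_FP	(1 << XFEATURE_FP)
      	#define XFEATURE_MASK_SSE	(1 << XFEATURE_SSE)
      	#define XFEATURE_MASK_YMM	(1 << XFEATURE_YMM)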
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233126.38653250@viggo.jf.intel.com
      [ Ported to v4.3-rc1. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d91cab78
    • D
      x86/fpu: Remove XSTATE_RESERVE · 4109ca06
      Authored by Dave Hansen
      The original purpose of XSTATE_RESERVE was to carve out space
      to store all of the possible extended state components that
      get saved with the XSAVE instruction(s).
      
      However, we are now almost entirely dynamically allocating
      the buffers we use for XSAVE by placing them at the end of
      the task_struct and sizing them at boot.  The one
      exception for that is the init_task.
      
      The maximum extended state component size that we have today
      is on systems with space for AVX-512 and Memory Protection
      Keys: 2696 bytes.  We have reserved a PAGE_SIZE buffer in
      the init_task via fpregs_state->__padding.
      
      This check ensures that even if the component sizes or
      layout were changed (which we do not expect), that we will
      still not overflow the init_task's buffer.
      
      In the case that we detect we might overflow the buffer,
      we completely disable XSAVE support in the kernel and try
      to boot as if we had 'legacy x87 FPU' support in place.
      This is a crippled state without any of the XSAVE-enabled
      features (MPX, AVX, etc.).  But, it at least lets us
      boot safely.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233125.D948D475@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4109ca06
    • D
      x86/fpu: Move XSAVE-disabling code to a helper · 0a265375
      Authored by Dave Hansen
      When we want to _completely_ disable XSAVE support as far as
      the kernel is concerned, we have a big set of feature flags
      to clear.  We currently only do this in cases where the user
      asks for it to be disabled, but we are about to expand the
      places where we do it to handle errors too.
      
      Move the code into xstate.c, and expose it via the xstate.h
      header.  We will use it in the next patch too.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233124.EA9A70E5@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0a265375
    • D
      x86/fpu: Print xfeature buffer size in decimal · b0815359
      Authored by Dave Hansen
      This is utterly a personal taste thing, but I find it way easier
      to read structure sizes in decimal than in hex.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233124.1A8B04A8@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b0815359
  16. 08 Sep 2015, 1 commit
  17. 22 Aug 2015, 2 commits
    • I
      x86/fpu/math-emu: Fix crash in fork() · 827409b2
      Authored by Ingo Molnar
      During later stages of math-emu bootup the following crash triggers:
      
      	 math_emulate: 0060:c100d0a8
      	 Kernel panic - not syncing: Math emulation needed in kernel
      	 CPU: 0 PID: 1511 Comm: login Not tainted 4.2.0-rc7+ #1012
      	 [...]
      	 Call Trace:
      	  [<c181d50d>] dump_stack+0x41/0x52
      	  [<c181c918>] panic+0x77/0x189
      	  [<c1003530>] ? math_error+0x140/0x140
      	  [<c164c2d7>] math_emulate+0xba7/0xbd0
      	  [<c100d0a8>] ? fpu__copy+0x138/0x1c0
      	  [<c1109c3c>] ? __alloc_pages_nodemask+0x12c/0x870
      	  [<c136ac20>] ? proc_clear_tty+0x40/0x70
      	  [<c136ac6e>] ? session_clear_tty+0x1e/0x30
      	  [<c1003530>] ? math_error+0x140/0x140
      	  [<c1003575>] do_device_not_available+0x45/0x70
      	  [<c100d0a8>] ? fpu__copy+0x138/0x1c0
      	  [<c18258e6>] error_code+0x5a/0x60
      	  [<c1003530>] ? math_error+0x140/0x140
      	  [<c100d0a8>] ? fpu__copy+0x138/0x1c0
      	  [<c100c205>] arch_dup_task_struct+0x25/0x30
      	  [<c1048cea>] copy_process.part.51+0xea/0x1480
      	  [<c115a8e5>] ? dput+0x175/0x200
      	  [<c136af70>] ? no_tty+0x30/0x30
      	  [<c1157242>] ? do_vfs_ioctl+0x322/0x540
      	  [<c104a21a>] _do_fork+0xca/0x340
      	  [<c1057b06>] ? SyS_rt_sigaction+0x66/0x90
      	  [<c104a557>] SyS_clone+0x27/0x30
      	  [<c1824a80>] sysenter_do_call+0x12/0x12
      
      The reason is the incorrect assumption in fpu_copy() that FNSAVE
      can be executed on math-emu kernels as well.
      
      Don't try to copy the registers, the soft state will be copied
      by fork anyway, so the child task inherits the parent task's
      soft math state.
      
      With this fix applied, math-emu kernels boot up fine on modern
      hardware with the 'no387 nofxsr' boot options.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Bobby Powers <bobbypowers@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      827409b2
    • I
      x86/fpu/math-emu: Fix math-emu boot crash · 5fc96038
      Authored by Ingo Molnar
      On a math-emu bootup the following crash occurs:
      
      	Initializing CPU#0
      	------------[ cut here ]------------
      	kernel BUG at arch/x86/kernel/traps.c:779!
      	invalid opcode: 0000 [#1] SMP
      	[...]
      	EIP is at do_device_not_available+0xe/0x70
      	[...]
      	Call Trace:
      	 [<c18238e6>] error_code+0x5a/0x60
      	 [<c1002bd0>] ? math_error+0x140/0x140
      	 [<c100bbd9>] ? fpu__init_cpu+0x59/0xa0
      	 [<c1012322>] cpu_init+0x202/0x330
      	 [<c104509f>] ? __native_set_fixmap+0x1f/0x30
      	 [<c1b56ab0>] trap_init+0x305/0x346
      	 [<c1b548af>] start_kernel+0x1a5/0x35d
      	 [<c1b542b4>] i386_start_kernel+0x82/0x86
      
      The reason is that in the following commit:
      
        b1276c48 ("x86/fpu: Initialize fpregs in fpu__init_cpu_generic()")
      
      I failed to consider math-emu's limitation that it cannot execute the
      FNINIT instruction in kernel mode.
      
      The long term fix might be to allow math-emu to execute (certain) kernel
      mode FPU instructions, but for now apply the safe (albeit somewhat ugly)
      fix: initialize the emulation state explicitly without trapping out to
      the FPU emulator.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5fc96038
  18. 21 Jul 2015, 1 commit
  19. 18 Jul 2015, 1 commit