1. 18 June 2016 (1 commit)
  2. 13 April 2016 (2 commits)
  3. 13 March 2016 (1 commit)
    • x86/cpufeature: Enable new AVX-512 features · d0500494
      Fenghua Yu authored
      A few new AVX-512 instruction groups/features are added in cpufeatures.h
      for enumeration: AVX512DQ, AVX512BW, and AVX512VL.
      
      Clear the flags in fpu__xstate_clear_all_cpu_caps().
      
      The specification for the latest AVX-512 extensions, including these features, can be found at:
      
        https://software.intel.com/sites/default/files/managed/07/b7/319433-023.pdf
      
      Note, I didn't enable the flags in KVM. Hopefully the KVM guys can pick up
      the flags and enable them in KVM.
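      For reference, these feature bits come from CPUID.(EAX=7,ECX=0):EBX bits
      17, 30 and 31.  A sketch of the kind of definitions added (treat the
      exact word/bit encoding as illustrative rather than a quote of
      cpufeatures.h):
      
      	#define X86_FEATURE_AVX512DQ	( 9*32+17) /* AVX-512 Doubleword/Quadword instructions */
      	#define X86_FEATURE_AVX512BW	( 9*32+30) /* AVX-512 Byte/Word instructions */
      	#define X86_FEATURE_AVX512VL	( 9*32+31) /* AVX-512 128/256-bit Vector Length extensions */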
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Link: http://lkml.kernel.org/r/1457667498-37357-1-git-send-email-fenghua.yu@intel.com
      [ Added more detailed feature descriptions. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d0500494
  4. 19 February 2016 (3 commits)
    • mm/core, x86/mm/pkeys: Add execute-only protection keys support · 62b5f7d0
      Dave Hansen authored
      Protection keys provide new page-based protection in hardware.
      But, they have an interesting attribute: they only affect data
      accesses and never affect instruction fetches.  That means that
      if we set up some memory which is set as "access-disabled" via
      protection keys, we can still execute from it.
      
      This patch uses protection keys to set up mappings to do just that.
      If a user calls:
      
      	mmap(..., PROT_EXEC);
      or
      	mprotect(ptr, sz, PROT_EXEC);
      
      (note PROT_EXEC-only without PROT_READ/WRITE), the kernel will
      notice this, and set a special protection key on the memory.  It
      also sets the appropriate bits in the Protection Keys User Rights
      (PKRU) register so that the memory becomes unreadable and
      unwritable.
      
      I haven't found any userspace that does this today.  With this
      facility in place, we expect userspace to move to use it
      eventually.  Userspace _could_ start doing this today.  Any
      PROT_EXEC calls get converted to PROT_READ inside the kernel, and
      would transparently be upgraded to "true" PROT_EXEC with this
      code.  IOW, userspace never has to do any PROT_EXEC runtime
      detection.
      
      This feature provides enhanced protection against leaking
      executable memory contents.  This helps thwart attacks which are
      attempting to find ROP gadgets on the fly.
      
      But, the security provided by this approach is not comprehensive.
      The PKRU register which controls access permissions is a normal
      user register writable from unprivileged userspace.  An attacker
      who can execute the 'wrpkru' instruction can easily disable the
      protection provided by this feature.
      
      The protection key that is used for execute-only support is
      permanently dedicated at compile time.  This is fine for now
      because there is currently no API to set a protection key other
      than this one.
      
      Despite there being a constant PKRU value across the entire
      system, we do not set it unless this feature is in use in a
      process.  That is to preserve the PKRU XSAVE 'init state',
      which can lead to faster context switches.
      
      PKRU *is* a user register and the kernel is modifying it.  That
      means that code doing:
      
      	pkru = rdpkru()
      	pkru |= 0x100;
      	mmap(..., PROT_EXEC);
      	wrpkru(pkru);
      
      could lose the bits in PKRU that enforce execute-only
      permissions.  To avoid this, we suggest avoiding ever calling
      mmap() or mprotect() when the PKRU value is expected to be
      unstable.
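      
      A minimal userspace sketch (not part of the patch) of the PROT_EXEC-only
      request described above; on a pkeys-capable kernel with this change the
      mapping becomes execute-only, so a plain load from it is expected to
      fault while execution still works:
      
      	#include <stdio.h>
      	#include <sys/mman.h>
      
      	int main(void)
      	{
      		size_t sz = 4096;
      
      		/* PROT_EXEC without PROT_READ/PROT_WRITE */
      		void *p = mmap(NULL, sz, PROT_EXEC,
      			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      		if (p == MAP_FAILED) {
      			perror("mmap");
      			return 1;
      		}
      		printf("execute-only mapping at %p\n", p);
      		/* Dereferencing (char *)p here should SIGSEGV once
      		 * execute-only pkeys are in effect. */
      		munmap(p, sz);
      		return 0;
      	}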
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chen Gang <gang.chen.5i5j@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Piotr Kwapulinski <kwapulinski.piotr@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Vladimir Murzin <vladimir.murzin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: keescook@google.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210240.CB4BB5CA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62b5f7d0
    • x86/mm/pkeys: Allow kernel to modify user pkey rights register · 84594296
      Dave Hansen authored
      The Protection Key Rights for User memory (PKRU) is a 32-bit
      user-accessible register.  It contains two bits for each
      protection key: one to write-disable (WD) access to memory
      covered by the key and another to access-disable (AD).
      
      Userspace can read/write the register with the RDPKRU and WRPKRU
      instructions.  But, the register is saved and restored with the
      XSAVE family of instructions, which means we have to treat it
      like a floating point register.
      
      The kernel needs to write to the register if it wants to
      implement execute-only memory or if it implements a system call
      to change PKRU.
      
      To do this, we need to create a 'pkru_state' buffer, read the old
      contents in to it, modify it, and then tell the FPU code that
      there is modified data in there so it can (possibly) move the
      buffer back in to the registers.
      
      This uses the fpu__xfeature_set_state() function that we defined
      in the previous patch.
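      
      A sketch of the AD/WD bit layout described above (the helper names are
      illustrative, not necessarily the ones used in the series):
      
      	/* PKRU holds two bits per protection key:
      	 * access-disable at bit 2*pkey, write-disable at bit 2*pkey+1. */
      	#define PKRU_AD_BIT(pkey)	(1u << ((pkey) * 2))
      	#define PKRU_WD_BIT(pkey)	(1u << ((pkey) * 2 + 1))
      
      	static inline u32 pkru_deny_access(u32 pkru, int pkey)
      	{
      		return pkru | PKRU_AD_BIT(pkey);
      	}
      
      	static inline u32 pkru_deny_write(u32 pkru, int pkey)
      	{
      		return pkru | PKRU_WD_BIT(pkey);
      	}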
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210236.0BE13217@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      84594296
    • x86/fpu: Allow setting of XSAVE state · b8b9b6ba
      Dave Hansen authored
      We want to modify the Protection Key rights inside the kernel, so
      we need to change PKRU's contents.  But, if we do a plain
      'wrpkru', when we return to userspace we might do an XRSTOR and
      wipe out the kernel's 'wrpkru'.  So, we need to go after PKRU in
      the xsave buffer.
      
      We do this by:
      
        1. Ensuring that we have the XSAVE registers (fpregs) in the
           kernel FPU buffer (fpstate)
        2. Looking up the location of a given state in the buffer
        3. Filling in the state
        4. Ensuring that the hardware knows that state is present there
           (basically that the 'init optimization' is not in place).
        5. Copying the newly-modified state back to the registers if
           necessary.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210235.5A3139BF@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b8b9b6ba
  5. 16 February 2016 (2 commits)
    • x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures · c8df4009
      Dave Hansen authored
      The protection keys register (PKRU) is saved and restored using
      xsave.  Define the data structure that we will use to access it
      inside the xsave buffer.
      
      Note that we also have to widen the printk of the xsave feature
      masks since this is feature 0x200 and we only did two characters
      before.
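      
      The component itself is small; a sketch of the structure described above
      (field names illustrative):
      
      	/* PKRU's xsave component: the 32-bit register plus padding,
      	 * as enumerated by CPUID leaf 0x0d for the PKRU state. */
      	struct pkru_state {
      		u32	pkru;
      		u32	pad;
      	} __packed;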
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210204.56DF8F7B@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c8df4009
    • x86/fpu: Add placeholder for 'Processor Trace' XSAVE state · 1f96b1ef
      Dave Hansen authored
      There is an XSAVE state component for Intel Processor Trace (PT).
      But, we do not currently use it.
      
      We add a placeholder in the code for it so it is not a mystery and
      also so we do not need an explicit enum initialization for Protection
      Keys in a moment.
      
      Why don't we use it?
      
      We might end up using this at _some_ point in the future.  But,
      this is a "system" state which requires using the currently
      unsupported XSAVES feature.  Unlike all the other XSAVE states,
      PT state is also not directly tied to a thread.  You might
      context-switch between threads, but not want to change any of the
      PT state.  Or, you might switch between threads, and *do* want to
      change PT state, all depending on what is being traced.
      
      We currently just manually set some MSRs to do this PT context
      switching, and it is unclear whether replacing our direct MSR use
      with XSAVE will be a net win or loss, both in code complexity and
      performance.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: fenghua.yu@intel.com
      Cc: linux-mm@kvack.org
      Cc: yu-cheng.yu@intel.com
      Link: http://lkml.kernel.org/r/20160212210158.5E4BCAE2@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f96b1ef
  6. 12 January 2016 (2 commits)
    • x86/fpu: Disable MPX when eagerfpu is off · a5fe93a5
      Yu-cheng Yu authored
      This issue is fallout from the command-line parsing move.
      
      When "eagerfpu=off" is given as a command-line input, the kernel
      should disable MPX support. The decision for turning off MPX was
      made in fpu__init_system_ctx_switch(), which is after the
      selection of the XSAVE format. This patch fixes it by getting
      that decision done earlier in fpu__init_system_xstate().
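      
      A rough sketch of the earlier decision point (illustrative only; the
      'eagerfpu_disabled' flag is a stand-in, and the mask names follow the
      rename done earlier in this series):
      
      	/* While fpu__init_system_xstate() builds xfeatures_mask, and before
      	 * the XSAVE buffer format is chosen: drop MPX state if eagerfpu
      	 * was disabled on the command line. */
      	if (eagerfpu_disabled)
      		xfeatures_mask &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);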
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-4-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a5fe93a5
    • x86/fpu: Disable XGETBV1 when no XSAVE · eb7c5f87
      Yu-cheng Yu authored
      When "noxsave" is given as a command-line input, the kernel
      should disable XGETBV1. This issue currently does not cause any
      actual problems. XGETBV1 is only useful if we have something
      using the 'init optimization' (i.e. xsaveopt, xsaves). We
      already clear both of those in fpu__xstate_clear_all_cpu_caps().
      But this is good for completeness.
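      
      A sketch of the added clearing (per the description above, the call site
      is fpu__xstate_clear_all_cpu_caps()):
      
      	setup_clear_cpu_cap(X86_FEATURE_XGETBV1);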
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1452119094-7252-3-git-send-email-yu-cheng.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      eb7c5f87
  7. 27 November 2015 (1 commit)
  8. 12 November 2015 (1 commit)
  9. 14 September 2015 (11 commits)
    • x86/fpu: Check CPU-provided sizes against struct declarations · ef78f2a4
      Dave Hansen authored
      We now have C structures defined for each of the XSAVE state
      components that we support.  This patch adds checks during our
      verification pass to ensure that the CPU-provided data
      enumerated in CPUID leaves matches our C structures.
      
      If not, we warn and dump all the XSAVE CPUID leaves.
      
      Note: this *actually* found an inconsistency with the MPX
      'bndcsr' state.  The hardware pads it out differently from
      our C structures.  This patch caught it and warned.
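      
      A sketch of the kind of verification described above (kernel-style,
      helper name illustrative):
      
      	/* CPUID leaf 0x0d, sub-leaf 'nr' enumerates xstate component 'nr':
      	 * EAX is its size in bytes, EBX its offset for user states. */
      	static void check_xstate_against_struct_size(int nr, size_t struct_size)
      	{
      		unsigned int eax, ebx, ecx, edx;
      
      		cpuid_count(0x0d, nr, &eax, &ebx, &ecx, &edx);
      		if (eax != struct_size)
      			pr_warn("x86/fpu: xstate %d: CPUID size %u != struct size %zu\n",
      				nr, eax, struct_size);
      	}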
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233131.A8DB36DA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ef78f2a4
    • x86/fpu: Check to ensure increasing-offset xstate offsets · e6e888f9
      Dave Hansen authored
      The xstate CPUID leaves enumerate where each state component is
      inside the XSAVE buffer, along with the size of the entire
      buffer.  Our new XSAVE sanity-checking code extrapolates an
      expected _total_ buffer size by looking at the last component
      that it encounters.
      
      That method requires that the highest-numbered component also
      be the one with the highest offset.  This is a pretty safe
      assumption, but let's add some code to ensure it stays true.
      
      To make this check work correctly, we also need to ensure we
      only consider the offsets from enabled features because the
      offset register (ebx) will return 0 on unsupported features.
      
      This also means that we will preserve the -1's that we
      initialized xstate_offsets/sizes[] with.  That will help
      find bugs.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233130.0843AB15@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e6e888f9
    • x86/fpu: Correct and check XSAVE xstate size calculations · 65ac2e9b
      Dave Hansen authored
      Note: our xsaves support is currently broken and disabled.  This
      patch does not fix it, but it is an incremental improvement.
      
      This might be useful to someone backporting the entire set of
      XSAVES patches at some point, but it should not be backported
      alone.
      
      Ingo said he wanted something like this (bullets 2 and 3):
      
        http://lkml.kernel.org/r/20150808091508.GB32641@gmail.com
      
      There are currently two xsave buffer formats: standard and
      compacted.  The standard format is what 'XSAVE' and 'XSAVEOPT'
      produce, while 'XSAVES' and 'XSAVEC' produce a compacted-format
      buffer.  (The kernel never uses XSAVEC.)
      
      But, the XSAVES buffer *ALSO* contains "system state components"
      which are never saved by a plain XSAVE.  So, XSAVES has two
      things that might make its buffer differently-sized from an
      XSAVE-produced one.
      
      The current code assumes that an XSAVES buffer's size is simply
      the sum of the sizes of the (user) states which are supported.
      This seems to work in most cases, but it is not consistent with
      what the SDM says, and it breaks if we 'align' a component in
      the buffer.  The calculation is also unnecessary work since the
      CPU *tells* us the size of the buffer directly.
      
      This patch just reads the size of the buffer right out of the
      CPUID leaf instead of trying to derive it.
      
      But, blindly trusting the CPU like this is dangerous.  We add
      a verification pass in do_extra_xstate_size_checks() to ensure
      that the size we calculate matches with what we see from the
      hardware.  When it comes down to it, we trust but verify the
      CPU.
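      
      A sketch of reading the size straight out of CPUID, as described above
      (helper name illustrative):
      
      	static unsigned int xsave_size_from_cpuid(bool compacted)
      	{
      		unsigned int eax, ebx, ecx, edx;
      
      		/* Leaf 0x0d: sub-leaf 1 EBX = XSAVES (compacted) size for
      		 * XCR0 | IA32_XSS; sub-leaf 0 EBX = standard-format XSAVE
      		 * size for the currently enabled XCR0 features. */
      		cpuid_count(0x0d, compacted ? 1 : 0, &eax, &ebx, &ecx, &edx);
      		return ebx;
      	}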
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233130.234FE1EC@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      65ac2e9b
    • x86/fpu: Add xfeature_enabled() helper instead of test_bit() · 633d54c4
      Dave Hansen authored
      We currently use test_bit() in a few places to see if an
      xfeature is enabled.  It ends up being a bit ugly because
      'xfeatures_mask' is a u64 and test_bit wants an 'unsigned long'
      so it requires a cast.  The *_bit() functions are also
      technically atomic, which we have no need for here.
      
      So, remove the test_bit()s and replace with the new
      xfeature_enabled() helper.
      
      This also provides a central place to add a comment about the
      future need to support 'system xstates'.
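      
      The helper is essentially a one-liner; a sketch assuming the existing
      u64 xfeatures_mask:
      
      	static bool xfeature_enabled(enum xfeature xfeature)
      	{
      		return !!(xfeatures_mask & (1ULL << xfeature));
      	}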
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233129.B1534F86@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      633d54c4
    • x86/fpu: Remove 'xfeature_nr' · ee9ae257
      Dave Hansen authored
      xfeature_nr ended up being initialized too late for me to
      use it in the "xsave size sanity check" patch which is
      later in the series.  I tried to move around its initialization
      but realized that it was just as easy to get rid of it.
      
      We only have 9 XFEATURES.  Instead of dynamically calculating
      and storing the last feature, just use the compile-time max:
      XFEATURES_NR_MAX.  Note that even with 'xfeatures_nr' we could
      have "holes" in the xfeatures_mask that we had to deal with.
      
      We also change a 'leaf' variable to be a plain 'i'.  Although
      it is used to grab a cpuid leaf in this one loop, all of the
      other loops just use an 'i' and I find it much more obvious
      to keep the naming consistent across all the similar loops.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233128.3F30DF5A@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ee9ae257
    • x86/fpu: Rework XSTATE_* macros to remove magic '2' · 8a93c9e0
      Dave Hansen authored
      The 'xstate.c' code has a bunch of references to '2'.  This
      is because we have a lot more work to do for the "extended"
      xstates than the "legacy" ones and state component 2 is the
      first "extended" state.
      
      This patch replaces all of the instances of '2' with
      FIRST_EXTENDED_XFEATURE, which clearly explains what is
      going on.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233128.A8C0BF51@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8a93c9e0
    • x86/fpu: Rename XFEATURES_NR_MAX · dad8c4fe
      Dave Hansen authored
      This is a logical follow-on to the last patch.  It makes the
      XFEATURE_MAX naming consistent with the other enum values.
      This is what Ingo suggested.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233127.A541448F@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dad8c4fe
    • x86/fpu: Rename XSAVE macros · d91cab78
      Dave Hansen authored
      There are two concepts that have some confusing naming:
       1. Extended State Component numbers (currently called
          XFEATURE_BIT_*)
       2. Extended State Component masks (currently called XSTATE_*)
      
      The numbers are (currently) from 0-9.  State component 3 is the
      bounds registers for MPX, for instance.
      
      But when we want to enable "state component 3", we go set a bit
      in XCR0.  The bit we set is 1<<3.  We can check to see if a
      state component feature is enabled by looking at its bit.
      
      The current 'xfeature_bit's are at best xfeature bit _numbers_.
      Calling them bits is at best inconsistent with ending the enum
      list with 'XFEATURES_NR_MAX'.
      
      This patch renames the enum to be 'xfeature'.  These also
      happen to be what the Intel documentation calls a "state
      component".
      
      We also want to differentiate these from the "XSTATE_*" macros.
      The "XSTATE_*" macros are a mask, and we rename them to match.
      
      These macros are reasonably widely used so this patch is a
      wee bit big, but this really is just a rename.
      
      The only non-mechanical part of this is the
      
      	s/XSTATE_EXTEND_MASK/XFEATURE_MASK_EXTEND/
      
      We need a better name for it, but that's another patch.
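      
      To illustrate the two concepts after the rename (a sketch, not the full
      enum):
      
      	enum xfeature {
      		XFEATURE_FP,
      		XFEATURE_SSE,
      		XFEATURE_YMM,
      		XFEATURE_BNDREGS,	/* component 3: MPX bounds registers */
      		/* ... */
      		XFEATURE_MAX,
      	};
      
      	/* ...and the corresponding mask bit used in XCR0: */
      	#define XFEATURE_MASK_BNDREGS	(1 << XFEATURE_BNDREGS)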
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233126.38653250@viggo.jf.intel.com
      [ Ported to v4.3-rc1. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d91cab78
    • x86/fpu: Remove XSTATE_RESERVE · 4109ca06
      Dave Hansen authored
      The original purpose of XSTATE_RESERVE was to carve out space
      to store all of the possible extended state components that
      get saved with the XSAVE instruction(s).
      
      However, we are now almost entirely dynamically allocating
      the buffers we use for XSAVE by placing them at the end of
      the task_struct and then sizing them at boot.  The one
      exception to that is the init_task.
      
      The maximum extended state component size that we have today
      is on systems with space for AVX-512 and Memory Protection
      Keys: 2696 bytes.  We have reserved a PAGE_SIZE buffer in
      the init_task via fpregs_state->__padding.
      
      This check ensures that even if the component sizes or
      layout were changed (which we do not expect), that we will
      still not overflow the init_task's buffer.
      
      In the case that we detect we might overflow the buffer,
      we completely disable XSAVE support in the kernel and try
      to boot as if we had 'legacy x87 FPU' support in place.
      This is a crippled state without any of the XSAVE-enabled
      features (MPX, AVX, etc.).  But it at least lets us
      boot safely.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233125.D948D475@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4109ca06
    • x86/fpu: Move XSAVE-disabling code to a helper · 0a265375
      Dave Hansen authored
      When we want to _completely_ disable XSAVE support as far as
      the kernel is concerned, we have a big set of feature flags
      to clear.  We currently only do this in cases where the user
      asks for it to be disabled, but we are about to expand the
      places where we do it to handle errors too.
      
      Move the code into xstate.c and declare it in the xstate.h
      header.  We will use it in the next patch too.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233124.EA9A70E5@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0a265375
    • x86/fpu: Print xfeature buffer size in decimal · b0815359
      Dave Hansen authored
      This is utterly a personal taste thing, but I find it way easier
      to read structure sizes in decimal than in hex.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: dave@sr71.net
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150902233124.1A8B04A8@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b0815359
  10. 12 June 2015 (1 commit)
    • x86/fpu: Fix double-increment in setup_xstate_features() · a8424003
      Dave Hansen authored
      I noticed that my MPX tracepoints were producing garbage for the
      lower and upper bounds:
      
      	mpx_bounds_register_exception: address referenced: 0x00007fffffffccb7 bounds: lower: 0x0 ~upper: 0xffffffffffffffff
      	mpx_bounds_register_exception: address referenced: 0x00007fffffffccbf bounds: lower: 0x0 ~upper: 0xffffffffffffffff
      
      This is, of course, bogus because 0x00007fffffffccbf is *within*
      the bounds.  I assumed that my instruction decoder was bad and
      went looking at it.  But I eventually realized that I was
      getting a '0' offset back from xstate_offsets[BNDREGS].
      
      It was being skipped in the initialization, which is obviously
      bogus, so remove the extra leaf++.
      
      This also initializes xstate_offsets/sizes[] to -1 so
      that bugs like this will oops instead of silently failing
      in interesting ways.
      
      This was introduced by:
      
      	39f1acd2 ("x86/fpu/xstate: Don't assume the first zero xfeatures zero bit means the end")
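      
      The defensive initialization looks roughly like this (array bound name
      per the series at the time):
      
      	static unsigned int xstate_offsets[XFEATURES_NR_MAX] = { [0 ... XFEATURES_NR_MAX - 1] = -1 };
      	static unsigned int xstate_sizes[XFEATURES_NR_MAX]   = { [0 ... XFEATURES_NR_MAX - 1] = -1 };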
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave@sr71.net
      Link: http://lkml.kernel.org/r/20150611193400.2E0B00DB@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a8424003
  11. 09 June 2015 (2 commits)
    • x86/fpu/xstate: Wrap get_xsave_addr() to make it safer · 04cd027b
      Dave Hansen authored
      The MPX code is calling a low-level FPU function
      (copy_fpregs_to_fpstate()).  This function cannot be
      called in all contexts, although it is safe to call
      directly in some cases.
      
      Although probably correct, the current code is ugly and
      potentially error-prone.  So, add a wrapper that calls
      the (slightly) higher-level fpu__save() (which is preempt-
      safe) and also ensures that we even *have* an FPU context
      (in the case that this was called when in lazy FPU mode).
      
      Ingo had this to say about the details about when we need
      preemption disabled:
      
      > it's indeed generally unsafe to access/copy FPU registers with preemption enabled,
      > for two reasons:
      >
      >   - on older systems that use FSAVE the instruction destroys FPU register
      >     contents, which has to be handled carefully
      >
      >   - even on newer systems if we copy to FPU registers (which this code doesn't)
      >     then we don't want a context switch to occur in the middle of it, because a
      >     context switch will write to the fpstate, potentially overwriting our new data
      >     with old FPU state.
      >
      > But it's safe to access FPU registers with preemption enabled in a couple of
      > special cases:
      >
      >   - potentially destructively saving FPU registers: the signal handling code does
      >     this in copy_fpstate_to_sigframe(), because it can rely on the signal restore
      >     side to restore the original FPU state.
      >
      >   - reading FPU registers on modern systems: we don't do this anywhere at the
      >     moment, mostly to keep symmetry with older systems where FSAVE is
      >     destructive.
      >
      >   - initializing FPU registers on modern systems: fpu__clear() does this. Here
      >     it's safe because we don't copy from the fpstate.
      >
      >   - directly writing FPU registers from user-space memory (!). We do this in
      >     fpu__restore_sig(), and it's safe because neither context switches nor
      >     irq-handler FPU use can corrupt the source context of the copy (which is
      >     user-space memory).
      >
      > Note that the MPX code's current use of copy_fpregs_to_fpstate() was safe I think,
      > because:
      >
      >  - MPX is predicated on eagerfpu, so the destructive F[N]SAVE instruction won't be
      >    used.
      >
      >  - the code was only reading FPU registers, and was doing it only in places that
      >    guaranteed that an FPU state was already active (i.e. didn't do it in
      >    kthreads)
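      
      The resulting wrapper is small; a sketch along the lines described above
      (the helper in the actual patch may differ in detail):
      
      	void *get_xsave_field_ptr(int xsave_field)
      	{
      		struct fpu *fpu = &current->thread.fpu;
      
      		if (!fpu->fpstate_active)
      			return NULL;
      		/* fpu__save() is preempt-safe and ensures the task's
      		 * register state is flushed out to fpu->state. */
      		fpu__save(fpu);
      
      		return get_xsave_addr(&fpu->state.xsave, xsave_field);
      	}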
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: bp@alien8.de
      Link: http://lkml.kernel.org/r/20150607183700.AA881696@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      04cd027b
    • x86/fpu/xstate: Fix up bad get_xsave_addr() assumptions · 0c4109be
      Dave Hansen authored
      get_xsave_addr() assumes that if an xsave bit is present in the
      hardware (pcntxt_mask) that it is present in a given xsave
      buffer.  Due to a bug in the xsave code on all of the systems
      that have MPX (and thus all the users of this code), that has
      been a true assumption.
      
      But, the bug is getting fixed, so our assumption is not going
      to hold any more.
      
      It's quite possible (and normal) for an enabled state to be
      present on 'pcntxt_mask', but *not* in 'xstate_bv'.  We need
      to consult 'xstate_bv'.
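      
      The fix amounts to checking the buffer's own feature bitmap before
      computing an address; a sketch (field and parameter names illustrative):
      
      	/* The state is enabled in hardware, but this particular buffer
      	 * may not contain it (e.g. due to the init optimization). */
      	if (!(xsave->xsave_hdr.xstate_bv & xstate_feature_mask))
      		return NULL;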
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183700.1E739B34@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0c4109be
  12. 27 May 2015 (1 commit)
    • x86/fpu: Simplify copy_kernel_to_xregs_booting() · d65fcd60
      Ingo Molnar authored
      copy_kernel_to_xregs_booting() has a second parameter that is the mask
      of xfeatures that should be copied - but this parameter is always -1.
      
      Simplify the call site of this function, this also makes it more
      similar to the function call signature of other copy_kernel_to*regs()
      functions.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Bobby Powers <bobbypowers@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d65fcd60
  13. 25 May 2015 (1 commit)
    • x86/fpu: Fix fpu__init_system_xstate() comments · 6e553594
      Ingo Molnar authored
      Remove obsolete comment about __init limitations: in the new code there aren't any.
      
      Also standardize the comment style in the function while at it.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6e553594
  14. 19 May 2015 (11 commits)
    • x86/fpu: Add CONFIG_X86_DEBUG_FPU=y FPU debugging code · e97131a8
      Ingo Molnar authored
      There are various internal FPU state debugging checks that never
      trigger in practice, but which are useful for FPU code development.
      
      Separate these out into CONFIG_X86_DEBUG_FPU=y, and also add a
      couple of new ones.
      
      The size difference is about 0.5K of code on defconfig:
      
         text        data     bss          filename
         15028906    2578816  1638400      vmlinux
         15029430    2578816  1638400      vmlinux
      
      ( Keep this enabled by default until the new FPU code is debugged. )
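      
      One central piece is a config-dependent warning wrapper, roughly:
      
      	/* Heavier FPU sanity checks compile away unless
      	 * CONFIG_X86_DEBUG_FPU=y is selected. */
      	#ifdef CONFIG_X86_DEBUG_FPU
      	# define WARN_ON_FPU(x)		WARN_ON_ONCE(x)
      	#else
      	# define WARN_ON_FPU(x)		({ (void)(x); 0; })
      	#endif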
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e97131a8
    • x86/fpu/init: Propagate __init annotations · 32231879
      Ingo Molnar authored
      Now that all the FPU init function call dependencies are
      cleaned up we can propagate __init annotations deeper.
      
      This shrinks the runtime size of the kernel a bit, and
      also addresses a few section warnings.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      32231879
    • x86/fpu/xstate: Clean up setup_xstate_comp() call · 5fd402df
      Ingo Molnar authored
      So call setup_xstate_comp() from the xstate init code, not
      from the generic fpu__init_system() code.
      
      This allows us to remove the prototype from xstate.h as well.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5fd402df
    • x86/fpu/xstate: Don't assume the first zero xfeatures zero bit means the end · 39f1acd2
      Ingo Molnar authored
      The current xstate code in setup_xstate_features() assumes that
      the first zero bit means the end of xfeatures - but that is not
      so, the SDM clearly states that an arbitrary set of xfeatures
      might be enabled - and it is also clear from the description
      of the compaction feature that holes are possible:
      
        "13-6 Vol. 1MANAGING STATE USING THE XSAVE FEATURE SET
        [...]
      
        Compacted format. Each state component i (i ≥ 2) is located at a byte
        offset from the base address of the XSAVE area based on the XCOMP_BV
        field in the XSAVE header:
      
        — If XCOMP_BV[i] = 0, state component i is not in the XSAVE area.
      
        — If XCOMP_BV[i] = 1, the following items apply:
      
        • If XCOMP_BV[j] = 0 for every j, 2 ≤ j < i, state component i is
          located at a byte offset 576 from the base address of the XSAVE
          area. (This item applies if i is the first bit set in bits 62:2 of
          the XCOMP_BV; it implies that state component i is located at the
          beginning of the extended region.)
      
        • Otherwise, let j, 2 ≤ j < i, be the greatest value such that
          XCOMP_BV[j] = 1. Then state component i is located at a byte offset
          X from the location of state component j, where X is the number of
          bytes required for state component j as enumerated in
          CPUID.(EAX=0DH,ECX=j):EAX. (This item implies that state component i
          immediately follows the preceding state component whose bit is set
          in XCOMP_BV.)"
      
      So don't assume that the first zero xfeatures bit means the end of
      all xfeatures - iterate through all of them.
      
      I'm not aware of hardware that triggers this currently.
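      
      A sketch of the fixed iteration (using the names this code settles on
      later in the series):
      
      	for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
      		if (!(xfeatures_mask & (1ULL << i)))
      			continue;	/* holes are allowed, keep going */
      
      		cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx);
      		xstate_offsets[i] = ebx;
      		xstate_sizes[i]   = eax;
      	}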
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39f1acd2
    • x86/fpu: Change fpu->fpregs_active from 'int' to 'char', add lazy switching comments · aeb997b9
      Ingo Molnar authored
      Improve the memory layout of 'struct fpu':
      
       - change ->fpregs_active from 'int' to 'char' - it's just a single flag
         and modern x86 CPUs can do efficient byte accesses.
      
       - pack related fields closer to each other: often 'fpu->state' will not be
         touched, while the other fields will - so pack them into a group.
      
      Also add comments to each field, describing their purpose, and add
      some background information about lazy restores.
      
      Also fix an obsolete, lazy switching related comment in fpu_copy()'s description.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      aeb997b9
    • x86/fpu: Harmonize FPU register state types · c47ada30
      Ingo Molnar authored
      Use these consistent names:
      
          struct fregs_state           # was: i387_fsave_struct
          struct fxregs_state          # was: i387_fxsave_struct
          struct swregs_state          # was: i387_soft_struct
          struct xregs_state           # was: xsave_struct
          union  fpregs_state          # was: thread_xstate
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c47ada30
    • x86/fpu: Factor out fpu/signal.c · b992c660
      Ingo Molnar authored
      fpu/xstate.c has a lot of generic FPU signal frame handling routines,
      move them into a separate file: fpu/signal.c.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b992c660
    • x86/fpu: Rename all the fpregs, xregs, fxregs and fregs handling functions · c6813144
      Ingo Molnar authored
      Standardize the naming of the various functions that copy register
      content in specific FPU context formats:
      
        copy_fxregs_to_kernel()         # was: fpu_fxsave()
        copy_xregs_to_kernel()          # was: xsave_state()
      
        copy_kernel_to_fregs()          # was: frstor_checking()
        copy_kernel_to_fxregs()         # was: fxrstor_checking()
        copy_kernel_to_xregs()          # was: fpu_xrstor_checking()
        copy_kernel_to_xregs_booting()  # was: xrstor_state_booting()
      
        copy_fregs_to_user()            # was: fsave_user()
        copy_fxregs_to_user()           # was: fxsave_user()
        copy_xregs_to_user()            # was: xsave_user()
      
        copy_user_to_fregs()            # was: frstor_user()
        copy_user_to_fxregs()           # was: fxrstor_user()
        copy_user_to_xregs()            # was: xrestore_user()
        copy_user_to_fpregs_zeroing()   # was: restore_user_xstate()
      
      Eliminate fpu_xrstor_checking(), because it was just a wrapper.
      
      No change in functionality.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c6813144
    • x86/fpu: Generalize 'init_xstate_ctx' · 6f575023
      Ingo Molnar authored
      So the handling of init_xstate_ctx has a layering violation: both
      'struct xsave_struct' and 'union thread_xstate' have a
      'struct i387_fxsave_struct' member:
      
         xsave_struct::i387
         thread_xstate::fxsave
      
      The handling of init_xstate_ctx is generic, it is used on all
      CPUs, with or without XSAVE instruction. So it's confusing how
      the generic code passes around and handles an XSAVE specific
      format.
      
      What we really want is for init_xstate_ctx to be a proper
      fpstate and we use its ::fxsave and ::xsave members, as
      appropriate.
      
      Since xsave_struct::i387 and thread_xstate::fxsave alias
      each other, this is not a functional problem.
      
      So implement this, and move init_xstate_ctx to the generic FPU
      code in the process.
      
      Also, since init_xstate_ctx is not XSAVE specific anymore,
      rename it to init_fpstate, and mark it __read_mostly,
      because it's only modified once during bootup, and used
      as a reference fpstate later on.
      
      There's no change in functionality.
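      
      After this change the reference state is declared as a plain fpstate,
      roughly:
      
      	/* Written once during boot, then used read-only as the
      	 * reference "init" FPU state. */
      	union fpregs_state init_fpstate __read_mostly;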
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6f575023
    • x86/fpu: Create 'union thread_xstate' helper for fpstate_init() · bf935b0b
      Ingo Molnar authored
      fpstate_init() only uses fpu->state, so pass that in to it.
      
      This enables the cleanup we will do in the next patch.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bf935b0b
    • x86/fpu: Remove run-once init quirks · acd58a3a
      Ingo Molnar authored
      Remove various boot quirks that came from the old code.
      
      The new code is cleanly split up into per-system and per-cpu
      init sequences, and system init functions are only called once.
      
      Remove the run-once quirks.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      acd58a3a