1. 30 May 2014 (1 commit)
  2. 12 March 2014 (1 commit)
    • x86, fpu: Check tsk_used_math() in kernel_fpu_end() for eager FPU · 731bd6a9
      Committed by Suresh Siddha
      In non-eager FPU mode, a thread's FPU state is allocated on its first
      FPU use (in the context of the device-not-available exception). This
      path (math_state_restore()) can block, so it re-enables interrupts
      (which were disabled when the exception happened), allocates memory,
      and then disables interrupts again.
      
      Eager-FPU mode, however, calls the same math_state_restore() from
      kernel_fpu_end(), on the assumption that tsk_used_math() is always
      set in eager-FPU mode, thereby avoiding the path that enables
      interrupts, allocates FPU state with a blocking call, and disables
      interrupts again.
      
      But the following issue was noticed by Maarten Baert, Nate Eldredge and
      a few others:
      
      If a user process dumps core on an eCryptfs filesystem while aesni-intel
      is loaded, we get a BUG() in __find_get_block() complaining that it was
      called with interrupts disabled; all further accesses to that eCryptfs
      filesystem then hang and we have to reboot.
      
      The aesni-intel code (encrypting the core file that we are writing) needs
      the FPU and quite properly wraps its code in kernel_fpu_{begin,end}(),
      the latter of which calls math_state_restore(). So after kernel_fpu_end(),
      interrupts may be disabled, which nobody seems to expect, and they stay
      that way until we eventually get to __find_get_block() which barfs.
      
      With eager FPU, tsk_used_math() is true most of the time. In a few
      instances, during thread exit, signal-return handling, etc.,
      tsk_used_math() can be false.
      
      So in kernel_fpu_end(), for eager FPU, call math_state_restore() only
      if tsk_used_math() is set; otherwise, don't bother. Whatever kernel
      code path cleared tsk_used_math() knows what needs to be done with the
      FPU state.
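      A minimal sketch of the resulting logic (the shape of the fix as
      described above; treat details as approximate, not the literal patch):
      
      	void __kernel_fpu_end(void)
      	{
      		if (use_eager_fpu()) {
      			/*
      			 * Only restore if this task actually owns math state;
      			 * whichever path cleared tsk_used_math() handles the rest.
      			 */
      			if (likely(tsk_used_math(current)))
      				math_state_restore();
      		} else {
      			stts();	/* lazy mode: re-arm the #NM trap */
      		}
      	}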
      Reported-by: Maarten Baert <maarten-baert@hotmail.com>
      Reported-by: Nate Eldredge <nate@thatsmathematics.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/1391410583.3801.6.camel@europa
      Cc: George Spelvin <linux@horizon.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  3. 13 November 2013 (1 commit)
  4. 27 July 2013 (1 commit)
    • x86, fpu: correct the asm constraints for fxsave, unbreak mxcsr.daz · eaa5a990
      Committed by H.J. Lu
      GCC will optimize mxcsr_feature_mask_init in arch/x86/kernel/i387.c:
      
      		memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
      		asm volatile("fxsave %0" : : "m" (fx_scratch));
      		mask = fx_scratch.mxcsr_mask;
      		if (mask == 0)
      			mask = 0x0000ffbf;
      
      to
      
      		memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
      		asm volatile("fxsave %0" : : "m" (fx_scratch));
      		mask = 0x0000ffbf;
      
      since the asm statement doesn't say it updates fx_scratch.  As a
      result, the DAZ bit is cleared.  This patch fixes it by marking
      fx_scratch as an output operand.  This bug dates back to at least
      kernel 2.6.12.
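      The corrected asm uses a read-write memory constraint so GCC knows
      fxsave writes fx_scratch (the one-character fix, shown in its
      surrounding context):
      
      		memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
      		asm volatile("fxsave %0" : "+m" (fx_scratch));
      		mask = fx_scratch.mxcsr_mask;
      		if (mask == 0)
      			mask = 0x0000ffbf;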
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@vger.kernel.org>
  5. 15 July 2013 (1 commit)
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly without any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  6. 07 June 2013 (1 commit)
  7. 31 May 2013 (1 commit)
    • x86: Allow FPU to be used at interrupt time even with eagerfpu · 5187b28f
      Committed by Pekka Riikonen
      With the addition of eagerfpu, irq_fpu_usable() now returns false
      negatives, especially for ksoftirqd and an interrupted idle task, two
      common cases of FPU use in, for example, networking/crypto.  With
      eagerfpu=off, FPU use is possible in those contexts.  This is because
      of the eagerfpu check in interrupted_kernel_fpu_idle():
      
      ...
        * For now, with eagerfpu we will return interrupted kernel FPU
        * state as not-idle. TBD: Ideally we can change the return value
        * to something like __thread_has_fpu(current). But we need to
        * be careful of doing __thread_clear_has_fpu() before saving
        * the FPU etc for supporting nested uses etc. For now, take
        * the simple route!
      ...
       	if (use_eager_fpu())
       		return 0;
      
      As eagerfpu is automatically on for CPUs that also have features like
      AES-NI, this patch changes the eagerfpu check to return 1 as long as
      kernel_fpu_begin() has not yet been called.  Once it has been,
      __thread_has_fpu() starts returning 0.
      
      Notice that with eagerfpu, __thread_has_fpu() is always true initially.
      FPU use is thus always possible no matter what task is under us, unless
      the state has already been saved away with kernel_fpu_begin().
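      A sketch of the changed check (as I read the patch; details
      approximate):
      
      	static inline bool interrupted_kernel_fpu_idle(void)
      	{
      		if (use_eager_fpu())
      			/* usable until kernel_fpu_begin() saves the state away */
      			return __thread_has_fpu(current);
      
      		return !__thread_has_fpu(current) &&
      			(read_cr0() & X86_CR0_TS);
      	}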
      
      [ hpa: this is a performance regression, not a correctness regression,
        but since it can be quite serious on CPUs which need encryption at
        interrupt time I am marking this for urgent/stable. ]
      Signed-off-by: Pekka Riikonen <priikone@iki.fi>
      Link: http://lkml.kernel.org/r/alpine.GSO.2.00.1305131356320.18@git.silcnet.org
      Cc: <stable@vger.kernel.org> v3.7+
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  8. 15 November 2012 (1 commit)
  9. 22 September 2012 (1 commit)
    • x86, kvm: fix kvm's usage of kernel_fpu_begin/end() · b1a74bf8
      Committed by Suresh Siddha
      Preemption is disabled between kernel_fpu_begin/end(), and as such it
      is not a good idea to use these routines in kvm_load/put_guest_fpu(),
      which can be called very far apart.
      
      kvm_load/put_guest_fpu() routines are already called with
      preemption disabled and KVM already uses the preempt notifier to save
      the guest fpu state using kvm_put_guest_fpu().
      
      So introduce __kernel_fpu_begin/end() routines which don't touch
      preemption, and use them instead of kernel_fpu_begin/end() for KVM's
      model of saving/restoring guest FPU state.
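      A sketch of the resulting split (the inner routines do only the state
      handling; preemption stays in the outer wrappers):
      
      	static inline void kernel_fpu_begin(void)
      	{
      		WARN_ON_ONCE(!irq_fpu_usable());
      		preempt_disable();
      		__kernel_fpu_begin();	/* saves the current FPU state */
      	}
      
      	static inline void kernel_fpu_end(void)
      	{
      		__kernel_fpu_end();
      		preempt_enable();
      	}
      
      KVM then calls __kernel_fpu_begin/end() directly, since it already runs
      with preemption disabled at those points.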
      
      Also, with this change (and with the eagerFPU model), fix the host
      cr0.TS vm-exit state in the case of VMX.  In the eagerFPU case, host
      cr0.TS is always clear, so there is no need to worry about it.  For the
      traditional lazyFPU restore case, change the host's cr0.TS bit during
      vm-exit to be always clear, and set cr0.TS in __vmx_load_host_state()
      when the FPU state (guest FPU or the host task's FPU) is not active.
      This ensures that the host/guest FPU state is properly saved and
      restored during context switch, and that interrupts (via
      irq_fpu_usable()) do not stomp on the active FPU state.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1348164109.26695.338.camel@sbsiddha-desk.sc.intel.com
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  10. 19 September 2012 (3 commits)
    • x86, fpu: decouple non-lazy/eager fpu restore from xsave · 5d2bd700
      Committed by Suresh Siddha
      Decouple the non-lazy/eager FPU restore policy from the existence of
      the xsave feature.  Introduce a synthetic CPUID flag to represent the
      eagerfpu policy; the "eagerfpu=on" boot parameter enables it.
      Requested-by: H. Peter Anvin <hpa@zytor.com>
      Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1347300665-6209-2-git-send-email-suresh.b.siddha@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, fpu: use non-lazy fpu restore for processors supporting xsave · 304bceda
      Committed by Suresh Siddha
      The fundamental model of the current Linux kernel is to lazily
      initialize and restore FPU state rather than restoring the task's state
      during context switch.  This changes that lazy model to a non-lazy
      model for processors supporting the xsave feature.
      
      Reasons driving this model change are:
      
      i. Newer processors support optimized state save/restore using xsaveopt and
      xrstor by tracking the INIT state and MODIFIED state during context-switch.
      This is faster than modifying the cr0.TS bit which has serializing semantics.
      
      ii. Newer glibc versions use SSE for some of the optimized copy/clear
      routines.  With certain workloads (like boot, kernel compilation, etc.),
      an application completes its work within the first 5 task switches, thus
      taking up to 5 #DNA traps without the kernel getting a chance to apply
      its pre-load heuristic.
      
      iii. Some xstate features (like AMD's LWP feature) don't honor the cr0.TS bit
      and thus will not work correctly in the presence of lazy restore. Non-lazy
      state restore is needed for enabling such features.
      
      Some data on a two socket SNB system:
       * Saved 20K DNA exceptions during boot on a two socket SNB system.
       * Saved 50K DNA exceptions during kernel-compilation workload.
       * Improved throughput of the AVX based checksumming function inside the
         kernel by ~15% as xsave/xrstor is faster than the serializing clts/stts
         pair.
      
      Also, kernel_fpu_begin/end() now relies on the patched alternative
      instructions, so move check_fpu(), which uses kernel_fpu_begin/end(),
      to after alternative_instructions().
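      Conceptually (this is an illustration, not the literal kernel code),
      the eager model turns the context switch into:
      
      	/* sketch: save/restore at switch time instead of trapping on
      	 * first use; helper names are illustrative */
      	static void eager_fpu_switch(struct task_struct *prev,
      				     struct task_struct *next)
      	{
      		xsave_state(&prev->thread.fpu);	 /* xsaveopt skips INIT/unmodified parts */
      		xrstor_state(&next->thread.fpu); /* no cr0.TS write, no serializing clts/stts */
      	}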
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-7-git-send-email-suresh.b.siddha@intel.com
      Merge 32-bit boot fix from,
      Link: http://lkml.kernel.org/r/1347300665-6209-4-git-send-email-suresh.b.siddha@intel.com
      Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels · 72a671ce
      Committed by Suresh Siddha
      Currently for x86 and x86_32 binaries, fpstate in the user sigframe is copied
      to/from the fpstate in the task struct.
      
      In the case of signal delivery for x86_64 binaries, if the fpstate is
      live in the CPU registers, the live state is copied directly to the
      user sigframe; otherwise, the fpstate in the task struct is copied to
      the user sigframe.  During restore, fpstate in the user sigframe is
      restored directly to the live CPU registers.
      
      Historically, different code paths led to different bugs.  For example,
      the x86_64 code path was not preemption-safe until recently.  There is
      also a lot of code duplication to support new features like xsave.
      
      Unify signal handling code paths for x86 and x86_64 kernels.
      
      New strategy is as follows:
      
      Signal delivery: for both 32- and 64-bit frames, align the core math
      frame area to 64 bytes, as needed by xsave (this is where the main
      fpu/extended state gets copied to; it excludes the legacy compatibility
      fsave header for the 32-bit [f]xsave frames).  If the state is live,
      copy the register state directly to the user frame.  If not live, copy
      the state in the thread struct to the user frame.  And for 32-bit
      [f]xsave frames, construct the fsave header separately before the
      actual [f]xsave area.
      
      Signal return: as the 32-bit frames with [f]xstate have an additional
      'fsave' header, copy everything back from the user sigframe to the
      fpstate in the task structure and reconstruct the fxstate from the
      'fsave' header (user-passed pointers may not be correctly aligned for
      any attempt to restore partial state directly).  At the next fpstate
      usage, everything will be restored to the live CPU registers.
      For all the 64-bit frames and the 32-bit fsave frame, restore the state
      from the user sigframe directly to the live CPU registers.  64-bit
      signals always restored the math frame directly, so we can expect the
      math frame pointer to be correctly aligned.  For 32-bit fsave frames
      there are no alignment requirements, so we can restore the state
      directly.
      
      "lat_sig catch" microbenchmark numbers (for x86, x86_64, x86_32 binaries) are
      with in the noise range with this change.
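      A hedged sketch of the unified delivery decision (function and field
      names approximate, not the literal patch):
      
      	if (user_has_fpu()) {
      		/* state is live in registers: save it straight to the sigframe */
      		err = save_user_xstate(buf_fx);
      	} else {
      		/* state lives in the thread struct: copy that instead */
      		err = __copy_to_user(buf_fx, &tsk->thread.fpu.state->xsave,
      				     xstate_size);
      	}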
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1343171129-2747-4-git-send-email-suresh.b.siddha@intel.com
      [ Merged in compilation fix ]
      Link: http://lkml.kernel.org/r/1344544736.8326.17.camel@sbsiddha-desk.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  11. 15 May 2012 (1 commit)
  12. 17 April 2012 (1 commit)
  13. 22 February 2012 (2 commits)
  14. 21 July 2011 (1 commit)
    • treewide: fix potentially dangerous trailing ';' in #defined values/expressions · 497888cf
      Committed by Phil Carmody
      All these are instances of
        #define NAME value;
      or
        #define NAME(params_opt) value;
      
      These of course fail to build when used in contexts like
        if(foo $OP NAME)
        while(bar $OP NAME)
      and may silently generate the wrong code in contexts such as
        foo = NAME + 1;    /* foo = value; + 1; */
        bar = NAME - 1;    /* bar = value; - 1; */
        baz = NAME & quux; /* baz = value; & quux; */
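      
      The fix in each case is simply to drop the trailing semicolon so the
      macro expands to a pure value (illustration with a made-up macro name):
      
        /* broken: the ';' becomes part of every expansion */
        #define FOO_FLAG_BAD	0x10;
        /* fixed: a value-like macro expands to a plain value */
        #define FOO_FLAG	0x10
      
        int has_flag(int x)
        {
        	return (x & FOO_FLAG) != 0; /* would not compile with FOO_FLAG_BAD */
        }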
      
      Reported on comp.lang.c,
      Message-ID: <ab0d55fe-25e5-482b-811e-c475aa6065c3@c29g2000yqd.googlegroups.com>
      Initial analysis of the dangers provided by Keith Thompson in that thread.
      
      There are many more instances of more complicated macros having
      unnecessary trailing semicolons, but this pile seems to cover all of
      the simple-value cases suffering from the problem (that is, things
      likely to be found in one of the contexts above; more complicated ones
      aren't).
      Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  15. 18 March 2011 (1 commit)
  16. 12 January 2011 (1 commit)
  17. 10 September 2010 (3 commits)
  18. 15 August 2010 (1 commit)
    • KVM: fix poison overwritten caused by using wrong xstate size · f45755b8
      Committed by Xiaotian Feng
      fpu.state is allocated from task_xstate_cachep, whose object size is
      xstate_size.  xstate_size is derived from the cpuid instruction and is
      often smaller than sizeof(struct xsave_struct).  KVM was using
      sizeof(struct xsave_struct) to fill in/out fpu.state.xsave; since the
      allocation for fpu.state is only xstate_size bytes, the kernel writes
      past the end of the object, causing poison/redzone/padding-overwritten
      warnings.
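      The fix is to copy only the bytes that were actually allocated; a
      sketch of the KVM get-xsave path per the description above (surrounding
      context abbreviated):
      
      	memcpy(guest_xsave->region,
      	       &vcpu->arch.guest_fpu.state->xsave,
      	       xstate_size);	/* was sizeof(struct xsave_struct) */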
      Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
      Reviewed-by: Sheng Yang <sheng@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Sheng Yang <sheng@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  19. 13 August 2010 (1 commit)
  20. 01 August 2010 (1 commit)
  21. 22 July 2010 (1 commit)
  22. 21 July 2010 (1 commit)
  23. 20 July 2010 (1 commit)
    • x86, xsave: Sync xsave memory layout with its header for user handling · 29104e10
      Committed by Suresh Siddha
      With xsaveopt, if a processor implementation discerns that a processor
      state component is in its initialized state, it may clear the
      corresponding bit in xsave_hdr.xstate_bv to '0' without modifying the
      corresponding memory layout.  Hence, while presenting the xstate
      information to the user, we always ensure that the memory layout of a
      feature is in the init state if the corresponding header bit is zero.
      This ensures consistency and avoids the user seeing stale state in the
      memory layout during signal handling, debugging, etc.
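      A simplified sketch of the idea (the real helper covers every feature
      plus the legacy FP defaults; this shows only the FP/SSE cases, with
      approximate names):
      
      	static void sanitize_xstate_for_user(struct xsave_struct *xsave)
      	{
      		u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
      
      		/* header bit clear => present that area as init state */
      		if (!(xstate_bv & XSTATE_FP))
      			memset(xsave->i387.st_space, 0,
      			       sizeof(xsave->i387.st_space));
      		if (!(xstate_bv & XSTATE_SSE))
      			memset(xsave->i387.xmm_space, 0,
      			       sizeof(xsave->i387.xmm_space));
      	}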
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <20100719230205.351459480@sbs-t61.sc.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  24. 11 May 2010 (2 commits)
    • x86: Introduce 'struct fpu' and related API · 86603283
      Committed by Avi Kivity
      Currently all FPU state access is through tsk->thread.xstate.  Since we
      wish to generalize FPU access to non-task contexts, wrap the state in a
      new 'struct fpu' and convert existing accesses to use an fpu API.
      
      Signal frame handlers are not converted to the API, since they will
      remain task-context-only things.
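      The shape of the new wrapper, plus a hypothetical accessor illustrating
      the API direction (callers pass a struct fpu instead of dereferencing
      tsk->thread.xstate themselves):
      
      	struct fpu {
      		union thread_xstate *state;
      	};
      
      	static inline struct i387_fxsave_struct *fpu_fxsave_state(struct fpu *fpu)
      	{
      		return &fpu->state->fxsave;
      	}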
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1273135546-29690-3-git-send-email-avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86: Eliminate TS_XSAVE · c9ad4882
      Committed by Avi Kivity
      The FPU code currently uses current->thread_info->status & TS_XSAVE as
      a way to distinguish XSAVE-capable processors from older processors.
      The decision is not really task-specific; we use the task status merely
      to avoid a global memory reference - the value should be the same
      across all threads.
      
      Eliminate this tie-in to the task structure by using an alternative
      instruction keyed off the XSAVE CPU feature; this results in shorter
      and faster code, without introducing a global memory reference.
      
      [ hpa: in the future, this probably should use an asm jmp ]
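      The replacement check, as I recall it from this commit (the asm-jmp
      variant hpa mentions came later):
      
      	static inline bool use_xsave(void)
      	{
      		u8 has_xsave;
      
      		/* patched at boot: the 'mov $1' variant is installed
      		 * only on CPUs with X86_FEATURE_XSAVE */
      		alternative_io("mov $0, %0", "mov $1, %0", X86_FEATURE_XSAVE,
      			       "=qm" (has_xsave));
      
      		return has_xsave;
      	}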
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  25. 30 March 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
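      
      The typical edit this produces looks like the following (file contents
      invented for illustration):
      
      	/* before: relied on percpu.h pulling in slab.h indirectly */
      	#include <linux/percpu.h>
      
      	/* after: the dependency is stated explicitly */
      	#include <linux/percpu.h>
      	#include <linux/slab.h>	/* for kmalloc()/kfree() used in this file */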
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that they could be applied
         as a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  26. 24 February 2010 (2 commits)
  27. 12 February 2010 (1 commit)
    • x86, ptrace: regset extensions to support xstate · 5b3efd50
      Committed by Suresh Siddha
      Add the xstate regset support which helps extend the kernel ptrace and the
      core-dump interfaces to support AVX state etc.
      
      This regset interface is designed to support all the future state that gets
      supported using xsave/xrstor infrastructure.
      
      Looking at the memory layout saved by "xsave", one can't tell which
      state is represented in the layout: if a particular state is in its
      init state, the corresponding bit in the xsave header can be '0'.  So
      from the xsave header alone we can't say whether a state is in its init
      state or simply was not saved in the memory layout.
      
      Hence the xsave memory layout available through this regset interface
      uses the SW-usable bytes [464..511] to convey what state is represented
      in the layout.  The first 8 of those bytes ([464..471]) are set to the
      OS-enabled xstate mask (the same 64-bit mask returned by xgetbv for
      XCR0).
      
      The note NT_X86_XSTATE represents the extended state information in the
      core file, using the memory layout described above.
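      A sketch of how the regset read path stamps those SW-usable bytes
      (names per my memory of the patch, not verified):
      
      	/*
      	 * Copy the software-defined bytes (OS-enabled xstate mask plus
      	 * layout info) into the fxsave area's sw_reserved words before
      	 * copying the whole image out to the ptrace/core-dump user.
      	 */
      	memcpy(&target->thread.xstate->fxsave.sw_reserved,
      	       xstate_fx_sw_bytes, sizeof(xstate_fx_sw_bytes));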
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <20100211195614.802495327@sbs-t61.sc.intel.com>
      Signed-off-by: Hongjiu Lu <hjl.tools@gmail.com>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  28. 05 March 2009 (1 commit)
    • x86, math-emu: fix init_fpu for task != current · ab9e1858
      Committed by Daniel Glöckner
      Impact: fix math-emu related crash while using GDB/ptrace
      
      init_fpu() calls finit to initialize a task's xstate, while finit
      always works on the current task.  If we use PTRACE_GETFPREGS on
      another process and neither process has used floating point yet, we get
      a null pointer exception in finit.
      
      This patch creates a new function, finit_task, that takes a task_struct
      parameter; finit becomes a wrapper that simply calls finit_task with
      current.  On the plus side, this avoids many calls to get_current,
      each of which would resolve to an inline assembler mov instruction.
      
      An empty finit_task has been added to i387.h to avoid linker errors in
      case the compiler still emits the call in init_fpu when
      CONFIG_MATH_EMULATION is not defined.
      
      The declaration of finit in i387.h has been removed as the remaining
      code using this function gets its prototype from fpu_proto.h.
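      The resulting shape (a sketch; the real finit_task performs the actual
      soft-FPU register initialization on tsk):
      
      	int finit_task(struct task_struct *tsk)
      	{
      		/* operate on tsk's state rather than current's */
      		return 0;	/* body elided in this sketch */
      	}
      
      	void finit(void)
      	{
      		finit_task(current);	/* old entry point, now a thin wrapper */
      	}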
      Signed-off-by: Daniel Glöckner <dg@emlix.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Bill Metzenthen <billm@melbpc.org.au>
      LKML-Reference: <E1Lew31-0004il-Fg@mailer.emlix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  29. 20 November 2008 (1 commit)
    • x86: fix __cpuinit/__init tangle in init_thread_xstate() · 9bc646f1
      Committed by Rakib Mullick
      Impact:	fix incorrect __init annotation
      
      This patch removes a section mismatch warning.  A patch set was sent
      previously (http://lkml.org/lkml/2008/11/10/407) but introduced another
      problem, reported by Rufus (http://lkml.org/lkml/2008/11/11/46).  Ingo
      Molnar then suggested that it's best to remove __init from
      xsave_cntxt_init(void); that is the second patch in this series.  This
      one removes the following warning:
      
      WARNING: arch/x86/kernel/built-in.o(.cpuinit.text+0x2237): Section
      mismatch in reference from the function cpu_init() to the function
      .init.text:init_thread_xstate()
      The function __cpuinit cpu_init() references
      a function __init init_thread_xstate().
      If init_thread_xstate is only used by cpu_init then
      annotate init_thread_xstate with a matching annotation.
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  30. 08 October 2008 (1 commit)
    • x86: xsave: set FP, SSE bits in the xsave header in the user sigcontext · 04944b79
      Committed by Suresh Siddha
      If a processor implementation discerns that a processor state component
      is in its initialized state, it may clear the corresponding bit in the
      xsave header's xstate_bv to '0'.  State in the memory layout set up by
      'xsave' will be consistent with the bit values in the header.
      
      During signal handling, legacy applications may change the FP/SSE bits
      in the sigcontext memory layout without touching the FP/SSE header bits
      in the xsave header. So always set FP/SSE bits in the xsave header
      while saving the sigcontext state to the user space. During signal return,
      this will enable the kernel to capture any changes to the FP/SSE bits by the
      legacy applications which don't touch xsave headers.
      
      xsave-aware apps can change the xstate_bv in the xsave header as well
      as any contents in the memory layout.  xrstor as part of sigreturn will
      capture all the changes.
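      A sketch of the save-side adjustment described above (field names
      approximate):
      
      	u64 xstate_bv;
      
      	/* force FP/SSE on in the user-visible header so sigreturn's
      	 * xrstor picks up FP/SSE bytes a legacy app may have edited */
      	err |= __get_user(xstate_bv, &user_xsave->xsave_hdr.xstate_bv);
      	xstate_bv |= XSTATE_FPSSE;	/* XSTATE_FP | XSTATE_SSE */
      	err |= __put_user(xstate_bv, &user_xsave->xsave_hdr.xstate_bv);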
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  31. 31 July 2008 (3 commits)