1. 23 Mar 2015, 1 commit
  2. 13 Mar 2015, 1 commit
• x86/fpu: Avoid math_state_restore() without used_math() in __restore_xstate_sig() · a7c80ebc
Authored by Oleg Nesterov
      math_state_restore() assumes it is called with irqs disabled,
      but this is not true if the caller is __restore_xstate_sig().
      
      This means that if ia32_fxstate == T and __copy_from_user()
      fails, __restore_xstate_sig() returns with irqs disabled too.
      
      This triggers:
      
        BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:41
         dump_stack
         ___might_sleep
         ? _raw_spin_unlock_irqrestore
         __might_sleep
         down_read
         ? _raw_spin_unlock_irqrestore
         print_vma_addr
         signal_fault
         sys32_rt_sigreturn
      
      Change __restore_xstate_sig() to call set_used_math()
      unconditionally. This avoids enabling and disabling interrupts
      in math_state_restore(). If copy_from_user() fails, we can
      simply do fpu_finit() by hand.
      
[ Note: this is only the first step. math_state_restore() should
        not check used_math(), it should set this flag. While
        init_fpu() should simply die. ]
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150307153844.GB25954@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a7c80ebc
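A minimal sketch of what the fixed tail of __restore_xstate_sig() looks like, per the description above; the locals (fpu, buf_fx, state_size, err) and the exact call shapes are assumptions, not the verbatim patch:

    set_used_math();                        /* now set unconditionally */
    if (__copy_from_user(&fpu->state->fxsave, buf_fx, state_size)) {
            fpu_finit(fpu);                 /* re-init the FPU state by hand, */
            err = -1;                       /* not via math_state_restore(),  */
    }                                       /* so irqs are never toggled here */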
3. 23 Feb 2015, 1 commit
  4. 04 Feb 2015, 1 commit
  5. 05 Dec 2014, 1 commit
  6. 03 Sep 2014, 2 commits
  7. 31 May 2014, 1 commit
  8. 30 May 2014, 3 commits
  9. 07 Dec 2013, 1 commit
  10. 15 Jul 2013, 1 commit
• x86: delete __cpuinit usage from all x86 files · 148f9bb8
Authored by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      148f9bb8
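The change itself is mechanical; a hypothetical before/after (the callback name is invented for illustration):

    /* before: tagged for the discardable .cpuinit section */
    static int __cpuinit example_cpu_callback(struct notifier_block *nfb,
                                              unsigned long action, void *hcpu);

    /* after: the annotation is simply deleted and the code stays in .text */
    static int example_cpu_callback(struct notifier_block *nfb,
                                    unsigned long action, void *hcpu);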
11. 07 Jun 2013, 1 commit
  12. 26 Sep 2012, 1 commit
  13. 19 Sep 2012, 8 commits
  14. 05 Sep 2012, 1 commit
  15. 06 Jun 2012, 1 commit
  16. 17 May 2012, 1 commit
• x86, xsave: remove thread_has_fpu() bug check in __sanitize_i387_state() · d75f1b39
Authored by Suresh Siddha
      Code paths like fork(), exit() and signal handling flush the fpu
      state explicitly to the structures in memory.
      
The BUG_ON() in __sanitize_i387_state() checks that the fpu state is
no longer live. But on preempt kernels, the task can be scheduled out
and back in at any point, and the preload_fpu logic during context
switch can make the fpu registers live again.
      
For example, consider a 64-bit task-A which uses the fpu frequently, so its
fpu_counter is mostly non-zero. During its time slice, the kernel used the
fpu via kernel_fpu_begin()/kernel_fpu_end(). After this, in the same
scheduling slice, task-A got a signal to handle. During the signal setup
path we then got preempted just before the sanitize_i387_state() call in
arch/x86/kernel/xsave.c:save_i387_xstate(), and when we come back the fpu
registers are live again and can hit the BUG_ON().
      
Similarly, during a core dump other threads can context-switch in and out
(because of spurious wakeups while waiting for the coredump to finish in
kernel/exit.c:exit_mm()), and the main thread dumping core can run into this
bug when it finds some other thread with its fpu registers live on some other cpu.
      
      So remove the paranoid check for now, even though it caught a bug in the
      multi-threaded core dump case (fixed in the previous patch).
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1336692811-30576-3-git-send-email-suresh.b.siddha@intel.com
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d75f1b39
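The removed check amounts to something like the following sketch; the exact predicate in arch/x86/kernel/xsave.c at the time may differ:

    void __sanitize_i387_state(struct task_struct *tsk)
    {
            /*
             * Removed: on CONFIG_PREEMPT kernels the task can be scheduled
             * out and back in at any point, and the context-switch preload
             * logic can make the fpu registers live again, so "the state
             * is no longer live" cannot be asserted here.
             *
             * BUG_ON(__thread_has_fpu(tsk));
             */

            /* ... sanitize the in-memory xsave image as before ... */
    }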
17. 22 Feb 2012, 1 commit
• i387: Split up <asm/i387.h> into exported and internal interfaces · 1361b83a
Authored by Linus Torvalds
      While various modules include <asm/i387.h> to get access to things we
      actually *intend* for them to use, most of that header file was really
      pretty low-level internal stuff that we really don't want to expose to
      others.
      
      So split the header file into two: the small exported interfaces remain
      in <asm/i387.h>, while the internal definitions that are only used by
      core architecture code are now in <asm/fpu-internal.h>.
      
      The guiding principle for this was to expose functions that we export to
      modules, and leave them in <asm/i387.h>, while stuff that is used by
      task switching or was marked GPL-only is in <asm/fpu-internal.h>.
      
      The fpu-internal.h file could be further split up too, especially since
      arch/x86/kvm/ uses some of the remaining stuff for its module.  But that
      kvm usage should probably be abstracted out a bit, and at least now the
      internal FPU accessor functions are much more contained.  Even if it
      isn't perhaps as contained as it _could_ be.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1202211340330.5354@i5.linux-foundation.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      1361b83a
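Illustratively, the split looks like this; which declarations landed where is defined by the patch itself, these are just representative examples:

    /* <asm/i387.h>: the small interface modules are meant to use */
    extern bool irq_fpu_usable(void);
    extern void kernel_fpu_begin(void);
    extern void kernel_fpu_end(void);

    /* <asm/fpu-internal.h>: low-level state accessors and task-switch
     * helpers used only by core architecture code (and, for now, kvm) */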
18. 19 Feb 2012, 1 commit
• i387: move TS_USEDFPU flag from thread_info to task_struct · f94edacf
Authored by Linus Torvalds
      This moves the bit that indicates whether a thread has ownership of the
      FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
      (called 'has_fpu') in task_struct->thread.has_fpu.
      
      This fixes two independent bugs at the same time:
      
       - changing 'thread_info->status' from the scheduler causes nasty
         problems for the other users of that variable, since it is defined to
         be thread-synchronous (that's what the "TS_" part of the naming was
         supposed to indicate).
      
         So perfectly valid code could (and did) do
      
      	ti->status |= TS_RESTORE_SIGMASK;
      
         and the compiler was free to do that as separate load, or and store
         instructions.  Which can cause problems with preemption, since a task
         switch could happen in between, and change the TS_USEDFPU bit. The
         change to TS_USEDFPU would be overwritten by the final store.
      
         In practice, this seldom happened, though, because the 'status' field
         was seldom used more than once, so gcc would generally tend to
         generate code that used a read-modify-write instruction and thus
         happened to avoid this problem - RMW instructions are naturally low
         fat and preemption-safe.
      
       - On x86-32, the current_thread_info() pointer would, during interrupts
         and softirqs, point to a *copy* of the real thread_info, because
         x86-32 uses %esp to calculate the thread_info address, and thus the
         separate irq (and softirq) stacks would cause these kinds of odd
         thread_info copy aliases.
      
         This is normally not a problem, since interrupts aren't supposed to
         look at thread information anyway (what thread is running at
         interrupt time really isn't very well-defined), but it confused the
         heck out of irq_fpu_usable() and the code that tried to squirrel
         away the FPU state.
      
         (It also caused untold confusion for us poor kernel developers).
      
      It also turns out that using 'task_struct' is actually much more natural
      for most of the call sites that care about the FPU state, since they
      tend to work with the task struct for other reasons anyway (ie
      scheduling).  And the FPU data that we are going to save/restore is
      found there too.
      
      Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
      the %esp issue.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
Tested-by: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94edacf
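Spelled out, the store-clobber race from the first point above looks like this (a sketch of what the compiler was free to emit):

    struct thread_info *ti = current_thread_info();
    unsigned long tmp;

    /* "ti->status |= TS_RESTORE_SIGMASK;" as three separate operations: */
    tmp = ti->status;               /* load   */
    tmp |= TS_RESTORE_SIGMASK;      /* modify */
    /* <-- a task switch here can change TS_USEDFPU in ti->status */
    ti->status = tmp;               /* store: the concurrent TS_USEDFPU
                                     * update is silently overwritten */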
19. 17 Feb 2012, 2 commits
• i387: don't ever touch TS_USEDFPU directly, use helper functions · 6d59d7a9
Authored by Linus Torvalds
      This creates three helper functions that do the TS_USEDFPU accesses, and
      makes everybody that used to do it by hand use those helpers instead.
      
      In addition, there's a couple of helper functions for the "change both
      CR0.TS and TS_USEDFPU at the same time" case, and the places that do
      that together have been changed to use those.  That means that we have
      fewer random places that open-code this situation.
      
      The intent is partly to clarify the code without actually changing any
      semantics yet (since we clearly still have some hard to reproduce bug in
      this area), but also to make it much easier to use another approach
      entirely to caching the CR0.TS bit for software accesses.
      
      Right now we use a bit in the thread-info 'status' variable (this patch
      does not change that), but we might want to make it a full field of its
      own or even make it a per-cpu variable.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d59d7a9
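A plausible shape for those helpers; the names follow the commit's description, and the bodies are a sketch that assumes the flag still lives in thread_info->status at this point in the series:

    static inline bool __thread_has_fpu(struct task_struct *tsk)
    {
            return task_thread_info(tsk)->status & TS_USEDFPU;
    }

    static inline void __thread_set_has_fpu(struct task_struct *tsk)
    {
            task_thread_info(tsk)->status |= TS_USEDFPU;
    }

    static inline void __thread_clear_has_fpu(struct task_struct *tsk)
    {
            task_thread_info(tsk)->status &= ~TS_USEDFPU;
    }

    /* the combined "flag + CR0.TS" case, changed together */
    static inline void __thread_fpu_begin(struct task_struct *tsk)
    {
            clts();                         /* clear CR0.TS */
            __thread_set_has_fpu(tsk);
    }

    static inline void __thread_fpu_end(struct task_struct *tsk)
    {
            __thread_clear_has_fpu(tsk);
            stts();                         /* set CR0.TS */
    }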
• i387: fix x86-64 preemption-unsafe user stack save/restore · 15d8791c
Authored by Linus Torvalds
      Commit 5b1cbac3 ("i387: make irq_fpu_usable() tests more robust")
      added a sanity check to the #NM handler to verify that we never cause
      the "Device Not Available" exception in kernel mode.
      
      However, that check actually pinpointed a (fundamental) race where we do
      cause that exception as part of the signal stack FPU state save/restore
      code.
      
      Because we use the floating point instructions themselves to save and
      restore state directly from user mode, we cannot do that atomically with
      testing the TS_USEDFPU bit: the user mode access itself may cause a page
      fault, which causes a task switch, which saves and restores the FP/MMX
      state from the kernel buffers.
      
      This kind of "recursive" FP state save is fine per se, but it means that
      when the signal stack save/restore gets restarted, it will now take the
      '#NM' exception we originally tried to avoid.  With preemption this can
      happen even without the page fault - but because of the user access, we
      cannot just disable preemption around the save/restore instruction.
      
      There are various ways to solve this, including using the
      "enable/disable_page_fault()" helpers to not allow page faults at all
      during the sequence, and fall back to copying things by hand without the
      use of the native FP state save/restore instructions.
      
      However, the simplest thing to do is to just allow the #NM from kernel
      space, but fix the race in setting and clearing CR0.TS that this all
      exposed: the TS bit changes and the TS_USEDFPU bit absolutely have to be
      atomic wrt scheduling, so while the actual state save/restore can be
      interrupted and restarted, the act of actually clearing/setting CR0.TS
      and the TS_USEDFPU bit together must not.
      
      Instead of just adding random "preempt_disable/enable()" calls to what
      is already excessively ugly code, this introduces some helper functions
      that mostly mirror the "kernel_fpu_begin/end()" functionality, just for
      the user state instead.
      
      Those helper functions should probably eventually replace the other
      ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it
      some more: the task switching functionality in particular needs to
      expose the difference between the 'prev' and 'next' threads, while the
      new helper functions intentionally were written to only work with
      'current'.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15d8791c
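A sketch of the new helpers, mirroring kernel_fpu_begin()/kernel_fpu_end() for the user-state case; user_has_fpu() and the __thread_fpu_begin()/__thread_fpu_end() pair are assumed from the surrounding series:

    static inline void user_fpu_begin(void)
    {
            preempt_disable();      /* CR0.TS and TS_USEDFPU must change
                                     * atomically wrt scheduling ...       */
            if (!user_has_fpu())
                    __thread_fpu_begin(current);
            preempt_enable();       /* ... while the actual FP save/restore
                                     * may still fault and be restarted    */
    }

    static inline void user_fpu_end(void)
    {
            preempt_disable();
            __thread_fpu_end(current);
            preempt_enable();
    }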
20. 18 Mar 2011, 1 commit
  21. 14 Dec 2010, 1 commit
  22. 22 Jul 2010, 6 commits
  23. 21 Jul 2010, 1 commit
  24. 20 Jul 2010, 1 commit
• x86, xsave: Sync xsave memory layout with its header for user handling · 29104e10
Authored by Suresh Siddha
With xsaveopt, if a processor implementation discerns that a processor state
component is in its initialized state, it may set the corresponding bit in
xsave_hdr.xstate_bv to '0' without modifying the corresponding memory
layout. Hence, while presenting the xstate information to the user, we always
ensure that the memory layout of a feature will be in the init state if the
corresponding header bit is zero. This ensures consistency and avoids the
user seeing stale state in the memory layout during signal handling,
debugging etc.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <20100719230205.351459480@sbs-t61.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      29104e10
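The rule described above, as a sketch: for every state component whose header bit is clear, overwrite its slice of the xsave area with that component's init values before exposing it to the user. The offset/size tables and init_xstate_buf are assumed helper names, not necessarily the exact kernel identifiers of the time:

    static void sanitize_xstate_sketch(struct xsave_struct *xsave)
    {
            u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
            int feature;

            /* FP (bit 0) and SSE (bit 1) carry extra legacy-area rules;
             * only the generic extended features are shown here */
            for (feature = 2; feature < num_xstate_features; feature++) {
                    if (xstate_bv & (1ULL << feature))
                            continue;       /* memory image is current */
                    /* header bit clear: xsaveopt may have skipped writing
                     * this area, so force the init values into the layout */
                    memcpy((void *)xsave + xstate_offsets[feature],
                           (void *)init_xstate_buf + xstate_offsets[feature],
                           xstate_sizes[feature]);
            }
    }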