1. 23 March 2015, 3 commits
    • x86/fpu: Fold __drop_fpu() into its sole user · d2d0ac9a
      Committed by Borislav Petkov
      Fold it into drop_fpu(). Phew, one less FPU function to pay attention
      to.
      
      No functionality change.
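      For reference, a minimal sketch of the folded result (based on the
      drop_fpu()/__drop_fpu() pair of that era; illustrative, not the
      verbatim patch):

          static inline void drop_fpu(struct task_struct *tsk)
          {
                  preempt_disable();
                  tsk->thread.fpu_counter = 0;
                  /* body of the former __drop_fpu(): */
                  if (__thread_has_fpu(tsk)) {
                          /* ignore delayed exceptions from user space */
                          asm volatile("1: fwait\n"
                                       "2:\n"
                                       _ASM_EXTABLE(1b, 2b));
                          __thread_fpu_end(tsk);
                  }
                  clear_stopped_child_used_math(tsk);
                  preempt_enable();
          }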
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Introduce restore_init_xstate() · 8f4d8186
      Committed by Oleg Nesterov
      Extract the "use_eager_fpu()" code from drop_init_fpu() into a new,
      simple helper restore_init_xstate(). The next patch adds another user.
      
      - It is not clear why we do not check use_fxsr() like fpu_restore_checking()
        does. eager_fpu_init_bp() calls setup_init_fpu_buf() too, and we have the
        "eagerfpu=on" kernel option.
      
      - Ignoring the fact that init_xstate_buf is "struct xsave_struct *", not
        "union thread_xstate *", it is not clear why we cannot simply use
        fpu_restore_checking() and avoid the code duplication.
      
      - It is not clear why we can't call setup_init_fpu_buf() unconditionally
        to always create init_xstate_buf. Then the do_device_not_available()
        path (at least) could use restore_init_xstate() too. It doesn't need
        to initialize fpu->state; its contents don't matter until
        unlazy_fpu()/__switch_to()/etc., which overwrite this memory anyway.
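      A minimal sketch of the extracted helper, assuming the xsave/fxrstor
      primitives of that era (illustrative):

          static inline void restore_init_xstate(void)
          {
                  if (use_xsave())
                          xrstor_state(init_xstate_buf, -1);
                  else
                          fxrstor_checking(&init_xstate_buf->i387);
          }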
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150311173429.GD5032@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Document user_fpu_begin() · fb14b4ea
      Committed by Oleg Nesterov
      Currently, user_fpu_begin() has a single caller, and it is not clear
      why we actually need it, or why we should not worry about preemption
      right after preempt_enable().
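      For context, the helper being documented looks roughly like this
      (sketch): __thread_fpu_begin() here is only an optimization to avoid a
      #NM fault, so losing the FPU to preemption right after preempt_enable()
      is harmless, merely suboptimal:

          static inline void user_fpu_begin(void)
          {
                  preempt_disable();
                  if (!user_has_fpu())
                          __thread_fpu_begin(current);
                  preempt_enable();
          }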
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150311173409.GC5032@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 13 March 2015, 1 commit
  3. 10 March 2015, 1 commit
  4. 19 February 2015, 6 commits
  5. 23 December 2014, 1 commit
  6. 03 September 2014, 1 commit
  7. 19 June 2014, 1 commit
  8. 30 May 2014, 1 commit
  9. 17 April 2014, 1 commit
  10. 12 January 2014, 1 commit
  11. 13 November 2013, 1 commit
  12. 21 June 2013, 1 commit
    • x86, fpu: Use static_cpu_has_safe before alternatives · 5f8c4218
      Committed by Borislav Petkov
      The call stack below shows how this happens: basically eager_fpu_init()
      calls __thread_fpu_begin(current) which then does if (!use_eager_fpu()),
      which, in turn, uses static_cpu_has.
      
      We're executing before alternatives have been applied, so
      static_cpu_has() doesn't work there yet.
      
      Use the safe variant on this path; it still becomes the optimal code
      once alternatives have run.
      
      WARNING: at arch/x86/kernel/cpu/common.c:1368 warn_pre_alternatives+0x1e/0x20()
      You're using static_cpu_has before alternatives have run!
      Modules linked in:
      Pid: 0, comm: swapper Not tainted 3.9.0-rc8+ #1
      Call Trace:
       warn_slowpath_common
       warn_slowpath_fmt
       ? fpu_finit
       warn_pre_alternatives
       eager_fpu_init
       fpu_init
       cpu_init
       trap_init
       start_kernel
       ? repair_env_string
       x86_64_start_reservations
       x86_64_start_kernel
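      A minimal sketch of the fix (illustrative):

          static __always_inline __pure bool use_eager_fpu(void)
          {
                  /* the _safe variant falls back to a dynamic test
                   * until alternatives have patched it */
                  return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
          }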
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-6-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  13. 07 June 2013, 1 commit
  14. 14 February 2013, 1 commit
  15. 01 December 2012, 1 commit
    • x86, fpu: Avoid FPU lazy restore after suspend · 644c1541
      Committed by Vincent Palatin
      When a CPU enters the S3 state, its FPU state is lost.
      
      After resuming from S3, if we try to lazy-restore the FPU for a process
      running on the same CPU, this will result in a corrupted FPU context.
      
      Ensure that "fpu_owner_task" is properly invalidated when
      (re-)initializing a CPU, so nobody will try to lazy-restore a state
      which no longer exists in the hardware.
      
      Tested with a 64-bit kernel on a 4-core Ivybridge CPU with eagerfpu=off,
      by doing thousands of suspend/resume cycles with 4 FPU-intensive
      processes running. Without the patch, a process is killed by a SIGFPE
      after a few hundred cycles.
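      A minimal sketch of the idea, hooked into CPU bring-up (helper name and
      call site are illustrative):

          static inline void __cpu_disable_lazy_restore(unsigned int cpu)
          {
                  per_cpu(fpu_owner_task, cpu) = NULL;
          }

          /* in __cpu_up(), before booting the CPU: */
          /* the FPU context is blank, nobody can own it */
          __cpu_disable_lazy_restore(cpu);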
      
      Cc: Duncan Laurie <dlaurie@chromium.org>
      Cc: Olof Johansson <olofj@chromium.org>
      Cc: <stable@kernel.org> v3.4+ # for 3.4 need to replace this_cpu_write by percpu_write
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Link: http://lkml.kernel.org/r/1354306532-1014-1-git-send-email-vpalatin@chromium.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  16. 26 September 2012, 1 commit
  17. 22 September 2012, 1 commit
  18. 19 September 2012, 8 commits
  19. 15 May 2012, 1 commit
  20. 22 February 2012, 2 commits
  21. 21 February 2012, 3 commits
    • i387: support lazy restore of FPU state · 7e16838d
      Committed by Linus Torvalds
      This makes us recognize when we try to restore FPU state that matches
      what we already have in the FPU on this CPU, and avoids the restore
      entirely if so.
      
      To do this, we add two new data fields:
      
       - a percpu 'fpu_owner_task' variable that gets written any time we
         update the "has_fpu" field, and thus acts as a kind of back-pointer
         to the task that owns the CPU.  The exception is when we save the FPU
         state as part of a context switch - if the save can keep the FPU
         state around, we leave the 'fpu_owner_task' variable pointing at the
         task whose FP state still remains on the CPU.
      
       - a per-thread 'last_cpu' field, that indicates which CPU that thread
         used its FPU on last.  We update this on every context switch
         (writing an invalid CPU number if the last context switch didn't
         leave the FPU in a lazily usable state), so we know that *that*
         thread has done nothing else with the FPU since.
      
      These two fields together can be used when next switching back to the
      task to see if the CPU still matches: if 'fpu_owner_task' matches the
      task we are switching to, we know that no other task (or kernel FPU
      usage) touched the FPU on this CPU in the meantime, and if the current
      CPU number matches the 'last_cpu' field, we know that this thread did no
      other FP work on any other CPU, so the FPU state on the CPU must match
      what was saved on last context switch.
      
      In that case, we can avoid the 'f[x]rstor' entirely, and just clear the
      CR0.TS bit.
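      Put together, the lazy-restore test at context switch reduces to a
      sketch like this (accessor names follow the percpu API of that era;
      illustrative):

          static inline int fpu_lazy_restore(struct task_struct *new,
                                             unsigned int cpu)
          {
                  /* no other task (or kernel FPU use) touched this CPU's FPU,
                   * and this task did no FP work on any other CPU */
                  return new == percpu_read_stable(fpu_owner_task) &&
                          cpu == new->thread.fpu.last_cpu;
          }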
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: use 'restore_fpu_checking()' directly in task switching code · 80ab6f1e
      Committed by Linus Torvalds
      This inlines what is usually just a couple of instructions, but more
      importantly it also fixes the theoretical error case (can that FPU
      restore really ever fail? Maybe we should remove the checking).
      
      We can't start sending signals from within the scheduler; we're much too
      deep in the kernel and are holding the runqueue lock, etc.  So don't
      bother even trying.
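      A sketch of the resulting switch-time handling (illustrative): if the
      restore fails, we just drop the FPU state instead of trying to signal:

          static inline void switch_fpu_finish(struct task_struct *new,
                                               fpu_switch_t fpu)
          {
                  if (fpu.preload) {
                          if (unlikely(restore_fpu_checking(new)))
                                  /* too deep to signal; just lose the FPU */
                                  __thread_fpu_end(new);
                  }
          }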
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: fix up some fpu_counter confusion · cea20ca3
      Committed by Linus Torvalds
      This makes sure we clear the FPU usage counter for newly created tasks,
      just so that we start off in a known state (for example, don't try to
      preload the FPU state on the first task switch etc).
      
      It also fixes a thinko about when we increment the fpu_counter at task
      switch time, introduced by commit 34ddc81a ("i387: re-introduce FPU
      state preloading at context switch time").  We should increment the
      *new* task's fpu_counter, not the old task's, and only if we decide to
      use that state (whether lazily or preloaded).
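      In sketch form (illustrative), the two fixes:

          /* at fork: start the new task in a known state */
          p->fpu_counter = 0;

          /* at context switch, in switch_fpu_prepare(): credit the task
           * whose state we decided to use -- the incoming one */
          if (fpu.preload)
                  new->fpu_counter++;        /* not old->fpu_counter++ */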
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 19 February 2012, 2 commits
    • i387: re-introduce FPU state preloading at context switch time · 34ddc81a
      Committed by Linus Torvalds
      After all the FPU state cleanups and finally finding the problem that
      caused all our FPU save/restore problems, this re-introduces the
      preloading of FPU state that was removed in commit b3b0870e ("i387:
      do not preload FPU state at task switch time").
      
      However, instead of simply reverting the removal, this reimplements
      preloading with several fixes, most notably
      
       - properly abstracted as a true FPU state switch, rather than as
         open-coded save and restore with various hacks.
      
         In particular, implementing it as a proper FPU state switch allows us
         to optimize the CR0.TS flag accesses: there is no reason to set the
         TS bit only to then almost immediately clear it again.  CR0 accesses
         are quite slow and expensive, don't flip the bit back and forth for
         no good reason.
      
       - Make sure that the same model works for both x86-32 and x86-64, so
         that there are no gratuitous differences between the two due to the
         way they save and restore segment state differently due to
         architectural differences that really don't matter to the FPU state.
      
       - Avoid exposing the "preload" state to the context switch routines,
         and in particular allow the concept of lazy state restore: if nothing
         else has used the FPU in the meantime, and the process is still on
         the same CPU, we can avoid restoring state from memory entirely, just
         re-expose the state that is still in the FPU unit.
      
         That optimized lazy restore isn't actually implemented here, but the
         infrastructure is set up for it.  Of course, older CPUs that use
         'fnsave' to save the state cannot take advantage of this, since the
         state saving also trashes the state.
      
      In other words, there is now an actual _design_ to the FPU state saving,
      rather than just random historical baggage.  Hopefully it's easier to
      follow as a result.
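      A condensed sketch of the resulting preparation step (illustrative; the
      real code also handles save failures and the lazy-restore hooks):

          static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old,
                                                        struct task_struct *new)
          {
                  fpu_switch_t fpu;

                  /* preload only if the task has state and uses the FPU often */
                  fpu.preload = tsk_used_math(new) && new->fpu_counter > 5;
                  if (__thread_has_fpu(old)) {
                          __save_init_fpu(old);        /* save outgoing state */
                          old->thread.has_fpu = 0;
                          if (fpu.preload)
                                  __thread_set_has_fpu(new); /* CR0.TS stays clear */
                          else
                                  stts();              /* nobody owns the FPU now */
                  } else if (fpu.preload) {
                          __thread_fpu_begin(new);     /* clears CR0.TS once */
                  }
                  return fpu;
          }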
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: move TS_USEDFPU flag from thread_info to task_struct · f94edacf
      Committed by Linus Torvalds
      This moves the bit that indicates whether a thread has ownership of the
      FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
      (called 'has_fpu') in task_struct->thread.has_fpu.
      
      This fixes two independent bugs at the same time:
      
       - changing 'thread_info->status' from the scheduler causes nasty
         problems for the other users of that variable, since it is defined to
         be thread-synchronous (that's what the "TS_" part of the naming was
         supposed to indicate).
      
         So perfectly valid code could (and did) do
      
      	ti->status |= TS_RESTORE_SIGMASK;
      
         and the compiler was free to do that as separate load, or, and store
         instructions.  Which can cause problems with preemption, since a task
         switch could happen in between and change the TS_USEDFPU bit. The
         change to TS_USEDFPU would then be overwritten by the final store.
      
         In practice, this seldom happened, though, because the 'status' field
         was seldom used more than once, so gcc would generally tend to
         generate code that used a read-modify-write instruction and thus
         happened to avoid this problem - RMW instructions are naturally low
         fat and preemption-safe.
      
       - On x86-32, the current_thread_info() pointer would, during interrupts
         and softirqs, point to a *copy* of the real thread_info, because
         x86-32 uses %esp to calculate the thread_info address, and thus the
         separate irq (and softirq) stacks would cause these kinds of odd
         thread_info copy aliases.
      
         This is normally not a problem, since interrupts aren't supposed to
         look at thread information anyway (what thread is running at
         interrupt time really isn't very well-defined), but it confused the
         heck out of irq_fpu_usable() and the code that tried to squirrel
         away the FPU state.
      
         (It also caused untold confusion for us poor kernel developers).
      
      It also turns out that using 'task_struct' is actually much more natural
      for most of the call sites that care about the FPU state, since they
      tend to work with the task struct for other reasons anyway (ie
      scheduling).  And the FPU data that we are going to save/restore is
      found there too.
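      The first bug in sketch form (illustrative): without a guaranteed
      read-modify-write instruction, a preemption point can hide between the
      load and the store:

          ti->status |= TS_RESTORE_SIGMASK;

          /* may legitimately compile to:
           *
           *    load  ti->status
           *    or    TS_RESTORE_SIGMASK
           *      <-- preemption: __switch_to() flips TS_USEDFPU -->
           *    store ti->status      (the TS_USEDFPU update is lost)
           */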
      
      Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
      the %esp issue.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
      Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Tested-by: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>