1. 24 Apr, 2014 (1 commit)
    • kprobes/x86: Call exception handlers directly from do_int3/do_debug · 6f6343f5
      Masami Hiramatsu authored
      To avoid a kernel crash caused by probing on lockdep code, call
      kprobe_int3_handler() and kprobe_debug_handler() (which was
      formerly called post_kprobe_handler()) directly from
      do_int3() and do_debug().
      
      Currently kprobes uses notify_die() to hook the int3/debug
      exceptions. Since there is locking code in notify_die(), the
      lockdep code can be invoked, and because lockdep involves
      printk()-related code we would, theoretically, need to prohibit
      probing on all of that as well, which means a much longer
      blacklist. Instead, hooking int3/debug for kprobes before
      notify_die() avoids this problem.
      
      Anyway, most of the int3 handlers in the kernel are already
      called from do_int3 directly, e.g. ftrace_int3_handler,
      poke_int3_handler, kgdb_ll_trap. Actually only
      kprobe_exceptions_notify is on the notifier_call_chain.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Jonathan Lebon <jlebon@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20140417081733.26341.24423.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6f6343f5
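      A minimal sketch of the resulting dispatch order, with simplified
      signatures (the real handlers also manage preemption and per-CPU
      kprobe state):

          /* Sketch only: simplified shape of do_int3() after this change. */
          dotraplinkage void do_int3(struct pt_regs *regs, long error_code)
          {
          #ifdef CONFIG_KPROBES
                  /*
                   * Call the kprobes breakpoint handler directly, before
                   * notify_die(), so a kprobe hit never enters the
                   * lock-taking notifier code and lockdep/printk need not
                   * be blacklisted.
                   */
                  if (kprobe_int3_handler(regs))
                          return;
          #endif
                  if (notify_die(DIE_INT3, "int3", regs, error_code,
                                 X86_TRAP_BP, SIGTRAP) == NOTIFY_STOP)
                          return;

                  /* ... normal SIGTRAP delivery follows ... */
          }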
  2. 12 Dec, 2013 (1 commit)
  3. 13 Nov, 2013 (1 commit)
  4. 09 Nov, 2013 (1 commit)
  5. 25 Sep, 2013 (1 commit)
  6. 23 Jul, 2013 (1 commit)
    • kprobes/x86: Call out into INT3 handler directly instead of using notifier · 17f41571
      Jiri Kosina authored
      In fd4363ff ("x86: Introduce int3 (breakpoint)-based
      instruction patching"), the mechanism introduced for notifying
      the alternatives code from the int3 exception handler that an
      exception occurred was the die notifier.
      
      This is problematic, however, as early code might be using jump
      labels even before the notifier registration has been performed,
      which then leads to an oops due to an unhandled exception. One
      such occurrence was encountered by Fengguang:
      
       int3: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       Modules linked in:
       CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.11.0-rc1-01429-g04bf576 #8
       task: ffff88000da1b040 ti: ffff88000da1c000 task.ti: ffff88000da1c000
       RIP: 0010:[<ffffffff811098cc>]  [<ffffffff811098cc>] ttwu_do_wakeup+0x28/0x225
       RSP: 0000:ffff88000dd03f10  EFLAGS: 00000006
       RAX: 0000000000000000 RBX: ffff88000dd12940 RCX: ffffffff81769c40
       RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000001
       RBP: ffff88000dd03f28 R08: ffffffff8176a8c0 R09: 0000000000000002
       R10: ffffffff810ff484 R11: ffff88000dd129e8 R12: ffff88000dbc90c0
       R13: ffff88000dbc90c0 R14: ffff88000da1dfd8 R15: ffff88000da1dfd8
       FS:  0000000000000000(0000) GS:ffff88000dd00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: 00000000ffffffff CR3: 0000000001c88000 CR4: 00000000000006e0
       Stack:
        ffff88000dd12940 ffff88000dbc90c0 ffff88000da1dfd8 ffff88000dd03f48
        ffffffff81109e2b ffff88000dd12940 0000000000000000 ffff88000dd03f68
        ffffffff81109e9e 0000000000000000 0000000000012940 ffff88000dd03f98
       Call Trace:
        <IRQ>
        [<ffffffff81109e2b>] ttwu_do_activate.constprop.56+0x6d/0x79
        [<ffffffff81109e9e>] sched_ttwu_pending+0x67/0x84
        [<ffffffff8110c845>] scheduler_ipi+0x15a/0x2b0
        [<ffffffff8104dfb4>] smp_reschedule_interrupt+0x38/0x41
        [<ffffffff8173bf5d>] reschedule_interrupt+0x6d/0x80
        <EOI>
        [<ffffffff810ff484>] ? __atomic_notifier_call_chain+0x5/0xc1
        [<ffffffff8105cc30>] ? native_safe_halt+0xd/0x16
        [<ffffffff81015f10>] default_idle+0x147/0x282
        [<ffffffff81017026>] arch_cpu_idle+0x3d/0x5d
        [<ffffffff81127d6a>] cpu_idle_loop+0x46d/0x5db
        [<ffffffff81127f5c>] cpu_startup_entry+0x84/0x84
        [<ffffffff8104f4f8>] start_secondary+0x3c8/0x3d5
        [...]
      
      Fix this by directly calling poke_int3_handler() from the int3
      exception handler (analogously to what ftrace already does),
      instead of relying on the notifier, whose registration might not
      yet be finalized by the time of the first trap.
      Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1307231007490.14024@pobox.suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17f41571
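      A sketch of the handler side, simplified from the mechanism that
      fd4363ff introduced (one patching site at a time; the bp_*
      variables are set up by the text-poking code before the
      breakpoint is armed):

          int poke_int3_handler(struct pt_regs *regs)
          {
                  /* Not patching: tell do_int3() to keep processing. */
                  if (likely(!bp_patching_in_progress))
                          return 0;

                  if (user_mode(regs) ||
                      regs->ip != (unsigned long)bp_int3_addr)
                          return 0;

                  /* Our breakpoint: divert to the out-of-line handler. */
                  regs->ip = (unsigned long)bp_int3_handler;
                  return 1;
          }

      Because do_int3() calls this as a plain function, it works even
      for traps taken before any notifier could have been registered.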
  7. 17 Jul, 2013 (1 commit)
    • x86: Make sure IDT is page aligned · 4df05f36
      Kees Cook authored
      Since the IDT is referenced from a fixmap, make sure it is page aligned.
      Merge it with the 32-bit one, which was already page aligned to deal
      with the F00F bug. Since the BSS is cleared before IDT setup, the table
      can live there. This also moves the other *_idt_table variables into
      common locations.
      
      This avoids the risk of the IDT ever being moved in the bss and having
      the mapping be offset, resulting in calling incorrect handlers. In the
      current upstream kernel this is not a manifested bug, but heavily patched
      kernels (such as those using the PaX patch series) did encounter this bug.
      
      The tables other than idt_table technically do not need to be page
      aligned, at least not at the current time, but using a common
      declaration avoids mistakes.  On 64 bits the table is exactly one page
      long, anyway.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Link: http://lkml.kernel.org/r/20130716183441.GA14232@www.outflux.net
      Reported-by: PaX Team <pageexec@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      4df05f36
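      The shape of the resulting declaration, as a sketch (the actual
      patch consolidates the other *_idt_table variables the same way):

          /* One page-aligned IDT in BSS; BSS is zeroed before
           * trap_init() runs, so no explicit initialization is needed. */
          gate_desc idt_table[NR_VECTORS] __page_aligned_bss;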
  8. 21 Jun, 2013 (1 commit)
  9. 19 Jun, 2013 (1 commit)
  10. 14 May, 2013 (1 commit)
  11. 12 Apr, 2013 (1 commit)
    • x86: Use a read-only IDT alias on all CPUs · 4eefbe79
      Kees Cook authored
      Make a copy of the IDT (as seen via the "sidt" instruction) read-only.
      This primarily removes the IDT from being a target for arbitrary memory
      write attacks, and has the added benefit of also not leaking the kernel
      base offset, if it has been relocated.
      
      We already did this for vendor == Intel and family == 5 because of the
      F0 0F bug -- regardless of whether a particular CPU had the F0 0F bug
      or not.  Since the workaround was so cheap, there simply was no reason
      to be very specific.  This patch extends the read-only alias to all
      CPUs, but does not activate the #PF-to-#UD conversion code needed to
      deliver the proper exception in the F0 0F case except on Intel family
      5 processors.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Link: http://lkml.kernel.org/r/20130410192422.GA17344@www.outflux.net
      Cc: Eric Northup <digitaleric@google.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      4eefbe79
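      A minimal sketch of the idea, assuming a FIX_RO_IDT fixmap slot;
      the helper name is hypothetical:

          static void __init map_idt_readonly(void)
          {
                  /* Alias the IDT read-only through the fixmap... */
                  __set_fixmap(FIX_RO_IDT, __pa_symbol(idt_table),
                               PAGE_KERNEL_RO);

                  /* ...and load the descriptor with the alias, so 'sidt'
                   * (and any write attempt) only ever sees the read-only
                   * mapping, not the kernel's real load address. */
                  idt_descr.address = fix_to_virt(FIX_RO_IDT);
                  load_idt(&idt_descr);
          }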
  12. 08 Mar, 2013 (2 commits)
    • context_tracking: Restore correct previous context state on exception exit · 6c1e0256
      Frederic Weisbecker authored
      On exception exit, we restore the previous context tracking state based
      on the regs of the interrupted frame. Iff that frame is in user mode,
      as reported by the user_mode() helper, we restore context tracking
      user mode.
      
      However, there is a tiny chunk of low-level arch code after we pass
      through user_enter() and until the CPU eventually resumes userspace.
      If an exception happens in this tiny area, exception_enter() correctly
      exits the context tracking user mode, but exception_exit() won't
      restore it because of the value returned by user_mode(regs).
      
      As a result we may return to userspace with the wrong context tracking
      state.
      
      To fix this, change exception_enter() to return the context tracking state
      prior to its call and pass this saved state to exception_exit(). This restores
      the real context tracking state of the interrupted frame.
      
      (Maybe this patch was suggested to me; I don't recall exactly. If so,
      sorry for the missing credit.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Mats Liljegren <mats.liljegren@enea.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      6c1e0256
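      A sketch of the resulting API, with simplified names:
      exception_enter() returns the state it found, and exception_exit()
      restores exactly that state rather than trusting user_mode(regs):

          enum ctx_state { IN_KERNEL = 0, IN_USER };

          static inline enum ctx_state exception_enter(void)
          {
                  enum ctx_state prev_ctx;

                  prev_ctx = this_cpu_read(context_tracking.state);
                  context_tracking_user_exit();  /* leave user mode if set */
                  return prev_ctx;
          }

          static inline void exception_exit(enum ctx_state prev_ctx)
          {
                  if (prev_ctx == IN_USER)
                          context_tracking_user_enter();
          }

          /*
           * Typical use in an exception handler:
           *      prev_state = exception_enter();
           *      ... handle the exception ...
           *      exception_exit(prev_state);
           */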
    • context_tracking: Move exception handling to generic code · 56dd9470
      Frederic Weisbecker authored
      Exception handling under context tracking should share common
      treatment: on entry we exit user mode if the exception triggered
      in that context, and on exception exit we return to that previous
      context.
      
      Generalize this to avoid duplication across archs.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Mats Liljegren <mats.liljegren@enea.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      56dd9470
  13. 30 Jan, 2013 (1 commit)
    • x86, 64bit: Use a #PF handler to materialize early mappings on demand · 8170e6be
      H. Peter Anvin authored
      Linear mode (CR0.PG = 0) is mutually exclusive with 64-bit mode; all
      64-bit code has to use page tables.  This makes it awkward, before we
      have properly set up all-covering page tables, to access objects that
      are outside the static kernel range.
      
      So far we have dealt with that simply by mapping a fixed amount of
      low memory, but that fails in at least two upcoming use cases:
      
      1. We will support loading and running the kernel, struct boot_params,
         ramdisk, command line, etc. above the 4 GiB mark.
      2. We need to access the ramdisk early to get microcode, so the update
         can happen as early as possible.
      
      We could use early_iomap to access them too, but that would make the
      code messy and hard to unify with 32-bit.
      
      Hence, set up a #PF handler and use a fixed number of buffers to set up
      page tables on demand.  If the buffers fill up, we simply flush them
      and start over.  These buffers are all in __initdata, so they do not
      increase RAM usage at runtime.
      
      Thus, with the help of the #PF handler, we can set the final kernel
      mapping from blank, and switch to init_level4_pgt later.
      
      During the switchover in head_64.S, before #PF handler is available,
      we use three pages to handle kernel crossing 1G, 512G boundaries with
      sharing page by playing games with page aliasing: the same page is
      mapped twice in the higher-level tables with appropriate wraparound.
      The kernel region itself will be properly mapped; other mappings may
      be spurious.
      
      early_make_pgtable() uses the kernel high-mapping address to access the
      pages it uses to build the page tables.
      
      -v4: Add phys_base offset to make kexec happy, and add
           init_mapping_kernel().  - Yinghai
      -v5: Fix compiling with Xen, and add back the ident level3 and level2
           for Xen; also move init_level4_pgt back from BSS to DATA again,
           because we have to clear it anyway.  - Yinghai
      -v6: Switch to init_level4_pgt in init_mem_mapping().  - Yinghai
      -v7: Remove the unneeded clear_page() for init_level4_page; it is
           already filled with 512,8,0 in head_64.S.  - Yinghai
      -v8: Keep the handler alive until init_mem_mapping() and don't let
           early_trap_init() trash the early #PF handler; split
           early_trap_pf_init() out and move it down.  - Yinghai
      -v9: Make the switchover cover only kernel space instead of 1G, to
           avoid touching possible memory holes.  - Yinghai
      -v11: Change the far jmp back to a far return to initial_code; this is
           needed to fix a failure reported by Konrad on AMD systems.  - Yinghai
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1359058816-7615-12-git-send-email-yinghai@kernel.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      8170e6be
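      A loose sketch of the on-demand mechanism, with simplified names
      and bookkeeping (wiring the new page into the upper levels of the
      hierarchy is elided):

          /* Fixed pool of early page-table pages, all __initdata. */
          static pmd_t early_pgt_pool[64][PTRS_PER_PMD] __initdata;
          static unsigned int next_early_pgt __initdata;

          /* Called from the early #PF handler for an unmapped address. */
          int __init early_make_pgtable(unsigned long address)
          {
                  unsigned long physaddr = address - __PAGE_OFFSET;
                  pmd_t *pmd_p;

                  if (next_early_pgt >= ARRAY_SIZE(early_pgt_pool)) {
                          /* Pool exhausted: flush and start over. */
                          reset_early_page_tables();  /* assumed helper */
                          next_early_pgt = 0;
                  }
                  pmd_p = early_pgt_pool[next_early_pgt++];

                  /* Map the faulting 2M region with a large-page entry;
                   * the faulting instruction is then simply retried. */
                  pmd_p[pmd_index(address)] =
                          __pmd((physaddr & PMD_MASK) + __PAGE_KERNEL_LARGE);
                  return 0;
          }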
  14. 18 Dec, 2012 (1 commit)
  15. 01 Dec, 2012 (1 commit)
    • context_tracking: New context tracking subsystem · 91d1aa43
      Frederic Weisbecker authored
      Create a new subsystem that probes kernel boundaries to keep
      track of the transitions between context levels, with two basic
      initial contexts: user or kernel.
      
      This is an abstraction of some RCU code that uses such tracking
      to implement its userspace extended quiescent state.
      
      We need to pull this up from RCU into this new level of indirection
      because this tracking is also going to be used to implement an
      "on-demand" generic virtual cputime accounting, a necessary step
      towards shutting down the tick while still accounting the cputime.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Gilad Ben-Yossef <gilad@benyossef.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      [ paulmck: fix whitespace error and email address. ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      91d1aa43
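      A sketch of the core probe, simplified (per-CPU state plus
      symmetric user_enter()/user_exit() hooks; RCU is the first user):

          struct context_tracking {
                  bool active;
                  enum { IN_KERNEL = 0, IN_USER } state;
          };
          static DEFINE_PER_CPU(struct context_tracking, context_tracking);

          void user_enter(void)
          {
                  unsigned long flags;

                  local_irq_save(flags);
                  if (__this_cpu_read(context_tracking.active) &&
                      __this_cpu_read(context_tracking.state) != IN_USER) {
                          __this_cpu_write(context_tracking.state, IN_USER);
                          rcu_user_enter();  /* extended quiescent state */
                  }
                  local_irq_restore(flags);
          }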
  16. 26 Sep, 2012 (3 commits)
    • x86: Exception hooks for userspace RCU extended QS · 6ba3c97a
      Frederic Weisbecker authored
      Add the necessary hooks to the x86 exception handlers for userspace
      RCU extended quiescent state support.
      
      This includes traps, page faults, debug exceptions, etc.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Alessio Igor Bogani <abogani@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Kevin Hilman <khilman@ti.com>
      Cc: Max Krasnyansky <maxk@qualcomm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sven-Thorsten Dietrich <thebigcorporation@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      6ba3c97a
    • x86: Unspaghettize do_general_protection() · ef3f6288
      Frederic Weisbecker authored
      There is some unnatural label-based layout in this function.
      Convert the unnecessary gotos into readable conditional blocks.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      ef3f6288
    • x86: Unspaghettize do_trap() · c416ddf5
      Frederic Weisbecker authored
      Clean up the label maze in this function. Having a
      separate function that first handles the traps which don't
      generate a signal makes it easier to convert the rest into
      more readable conditional paths (see the sketch after this
      entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1348577479-2564-1-git-send-email-fweisbec@gmail.com
      [ Fixed 32-bit build failure. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c416ddf5
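      The conversion pattern, sketched with simplified logic: the
      non-signal cases move into a helper whose return value tells
      do_trap() whether a signal still has to be delivered:

          static int __kprobes
          do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
                            struct pt_regs *regs, long error_code)
          {
                  if (!user_mode(regs)) {
                          if (!fixup_exception(regs)) {
                                  tsk->thread.error_code = error_code;
                                  tsk->thread.trap_nr = trapnr;
                                  die(str, regs, error_code);
                          }
                          return 0;   /* handled in-kernel, no signal */
                  }
                  return -1;          /* user mode: caller sends signal */
          }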
  17. 19 Sep, 2012 (2 commits)
  18. 06 Jun, 2012 (1 commit)
  19. 01 Jun, 2012 (1 commit)
    • ftrace: Synchronize variable setting with breakpoints · a192cd04
      Steven Rostedt authored
      When the function tracer starts modifying the code via breakpoints
      it sets a variable (modifying_ftrace_code) to inform the breakpoint
      handler to call the ftrace int3 code.
      
      But there is no synchronization between setting this variable and the
      handler, thus it is possible for the handler to be called on another
      CPU before it sees the variable set. This will cause a kernel crash,
      as the int3 handler will not know what to do with the breakpoint.
      
      I originally added smp_mb()'s to force the visibility of the variable
      but H. Peter Anvin suggested that I just make it atomic.
      
      [ Added comments as suggested by Peter Zijlstra ]
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a192cd04
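      A sketch of the pairing (simplified from the ftrace/int3 paths):
      the code modifier increments the atomic before arming breakpoints,
      and the trap path only takes the ftrace branch while it is set:

          atomic_t modifying_ftrace_code __read_mostly;

          /* int3 path, as called from do_int3() -- sketch only. */
          static int try_ftrace_int3(struct pt_regs *regs)
          {
                  if (unlikely(atomic_read(&modifying_ftrace_code)) &&
                      ftrace_int3_handler(regs))
                          return 1;   /* ftrace breakpoint: resume */
                  return 0;           /* not ours: keep processing */
          }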
  20. 18 May, 2012 (1 commit)
    • MCA: delete all remaining traces of microchannel bus support. · bb8187d3
      Paul Gortmaker authored
      Hardware with MCA bus is limited to 386 and 486 class machines
      that are now 20+ years old and typically with less than 32MB
      of memory.  A quick search on the internet shows that even the MCA
      hobbyist/enthusiast community lost interest in the early 2000s and
      never really moved ahead from the 2.4 kernels to the 2.6 series.
      
      This deletes anything remaining related to CONFIG_MCA from core
      kernel code and from the x86 architecture.  There is no point in
      carrying this any further into the future.
      
      One complication to watch for is inadvertently scooping up
      stuff relating to machine check, since there is overlap in
      the TLA name space (e.g. arch/x86/boot/mca.c).
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      bb8187d3
  21. 28 Apr, 2012 (1 commit)
    • ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine · 08d636b6
      Steven Rostedt authored
      This method changes x86 to add a breakpoint to the mcount locations
      instead of calling stop machine.
      
      Now that iret can be handled by NMIs, we perform the following to
      update code:
      
      1) Add a breakpoint to all locations that will be modified
      
      2) Sync all cores
      
      3) Update all locations to be either a nop or call (except breakpoint
         op)
      
      4) Sync all cores
      
      5) Remove the breakpoint with the new code.
      
      6) Sync all cores
      
      [
        Added updates that Masami suggested:
         Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
         Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
      ]
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08d636b6
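      A sketch of the sequence for a single mcount site, under assumed
      helper names (the real code batches all sites at each step):

          static const unsigned char brk = 0xcc;  /* int3 opcode */

          static int update_ftrace_site(unsigned long ip, const char *new)
          {
                  add_break(ip, &brk);    /* 1) first byte becomes int3 */
                  run_sync();             /* 2) sync all cores          */
                  add_update(ip, new);    /* 3) patch bytes after int3  */
                  run_sync();             /* 4) sync all cores          */
                  finish_update(ip, new); /* 5) replace int3 itself     */
                  run_sync();             /* 6) sync all cores          */
                  return 0;
          }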
  22. 29 Mar, 2012 (1 commit)
  23. 13 Mar, 2012 (1 commit)
  24. 10 Mar, 2012 (1 commit)
  25. 22 Feb, 2012 (1 commit)
    • i387: Split up <asm/i387.h> into exported and internal interfaces · 1361b83a
      Linus Torvalds authored
      While various modules include <asm/i387.h> to get access to things we
      actually *intend* for them to use, most of that header file was really
      pretty low-level internal stuff that we really don't want to expose to
      others.
      
      So split the header file into two: the small exported interfaces remain
      in <asm/i387.h>, while the internal definitions that are only used by
      core architecture code are now in <asm/fpu-internal.h>.
      
      The guiding principle for this was to expose functions that we export to
      modules, and leave them in <asm/i387.h>, while stuff that is used by
      task switching or was marked GPL-only is in <asm/fpu-internal.h>.
      
      The fpu-internal.h file could be further split up too, especially since
      arch/x86/kvm/ uses some of the remaining stuff for its module.  But that
      kvm usage should probably be abstracted out a bit, and at least now the
      internal FPU accessor functions are much more contained.  Even if it
      isn't perhaps as contained as it _could_ be.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1202211340330.5354@i5.linux-foundation.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      1361b83a
  26. 21 Feb, 2012 (1 commit)
  27. 19 Feb, 2012 (2 commits)
    • i387: re-introduce FPU state preloading at context switch time · 34ddc81a
      Linus Torvalds authored
      After all the FPU state cleanups and finally finding the problem that
      caused all our FPU save/restore problems, this re-introduces the
      preloading of FPU state that was removed in commit b3b0870e ("i387:
      do not preload FPU state at task switch time").
      
      However, instead of simply reverting the removal, this reimplements
      preloading with several fixes, most notably
      
       - properly abstracted as a true FPU state switch, rather than as
         open-coded save and restore with various hacks.
      
         In particular, implementing it as a proper FPU state switch allows us
         to optimize the CR0.TS flag accesses: there is no reason to set the
         TS bit only to then almost immediately clear it again.  CR0 accesses
         are quite slow and expensive, don't flip the bit back and forth for
         no good reason.
      
       - Make sure that the same model works for both x86-32 and x86-64, so
         that there are no gratuitous differences between the two due to the
         way they save and restore segment state differently due to
         architectural differences that really don't matter to the FPU state.
      
       - Avoid exposing the "preload" state to the context switch routines,
         and in particular allow the concept of lazy state restore: if nothing
         else has used the FPU in the meantime, and the process is still on
         the same CPU, we can avoid restoring state from memory entirely, just
         re-expose the state that is still in the FPU unit.
      
         That optimized lazy restore isn't actually implemented here, but the
         infrastructure is set up for it.  Of course, older CPUs that use
         'fnsave' to save the state cannot take advantage of this, since the
         state saving also trashes the state.
      
      In other words, there is now an actual _design_ to the FPU state saving,
      rather than just random historical baggage.  Hopefully it's easier to
      follow as a result.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      34ddc81a
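      A sketch of the abstraction, under assumed helper names: a
      "prepare" half run in the task switch and a "finish" half run
      afterwards, with CR0.TS touched at most once:

          typedef struct { int preload; } fpu_switch_t;

          static inline fpu_switch_t
          switch_fpu_prepare(struct task_struct *old, struct task_struct *new)
          {
                  fpu_switch_t fpu;

                  /* Preload only if the incoming task keeps using the FPU. */
                  fpu.preload = tsk_used_math(new) && new->fpu_counter > 5;

                  if (__thread_has_fpu(old)) {
                          __save_init_fpu(old);
                          __thread_clear_has_fpu(old);
                          if (fpu.preload)
                                  __thread_set_has_fpu(new); /* TS stays clear */
                          else
                                  stts();    /* set CR0.TS exactly once */
                  } else if (fpu.preload) {
                          clts();            /* clear CR0.TS exactly once */
                          __thread_set_has_fpu(new);
                  }
                  return fpu;
          }

          static inline void
          switch_fpu_finish(struct task_struct *new, fpu_switch_t fpu)
          {
                  if (fpu.preload)
                          __math_state_restore(new);  /* assumed helper */
          }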
    • i387: move TS_USEDFPU flag from thread_info to task_struct · f94edacf
      Linus Torvalds authored
      This moves the bit that indicates whether a thread has ownership of the
      FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
      (called 'has_fpu') in task_struct->thread.has_fpu.
      
      This fixes two independent bugs at the same time:
      
       - changing 'thread_info->status' from the scheduler causes nasty
         problems for the other users of that variable, since it is defined to
         be thread-synchronous (that's what the "TS_" part of the naming was
         supposed to indicate).
      
         So perfectly valid code could (and did) do
      
      	ti->status |= TS_RESTORE_SIGMASK;
      
         and the compiler was free to do that as separate load, or and store
         instructions.  Which can cause problems with preemption, since a task
         switch could happen in between, and change the TS_USEDFPU bit. The
         change to TS_USEDFPU would be overwritten by the final store.
      
         In practice, this seldom happened, though, because the 'status' field
         was seldom used more than once, so gcc would generally tend to
         generate code that used a read-modify-write instruction and thus
         happened to avoid this problem - RMW instructions are naturally low
         fat and preemption-safe.
      
       - On x86-32, the current_thread_info() pointer would, during interrupts
         and softirqs, point to a *copy* of the real thread_info, because
         x86-32 uses %esp to calculate the thread_info address, and thus the
         separate irq (and softirq) stacks would cause these kinds of odd
         thread_info copy aliases.
      
         This is normally not a problem, since interrupts aren't supposed to
         look at thread information anyway (what thread is running at
         interrupt time really isn't very well-defined), but it confused the
         heck out of irq_fpu_usable() and the code that tried to squirrel
         away the FPU state.
      
         (It also caused untold confusion for us poor kernel developers).
      
      It also turns out that using 'task_struct' is actually much more natural
      for most of the call sites that care about the FPU state, since they
      tend to work with the task struct for other reasons anyway (ie
      scheduling).  And the FPU data that we are going to save/restore is
      found there too.
      
      Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
      the %esp issue.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
      Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Tested-by: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94edacf
  28. 17 Feb, 2012 (4 commits)
    • i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from save to restore · 4903062b
      Linus Torvalds authored
      The AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is
      pending.  In order to not leak FIP state from one process to another, we
      need to do a floating point load after the fxsave of the old process,
      and before the fxrstor of the new FPU state.  That resets the state to
      the (uninteresting) kernel load, rather than some potentially sensitive
      user information.
      
      We used to do this directly after the FPU state save, but that is
      actually very inconvenient, since it
      
       (a) corrupts what is potentially perfectly good FPU state that we might
           want to lazily avoid restoring later and
      
       (b) on x86-64 it resulted in a very annoying ordering constraint, where
           "__unlazy_fpu()" in the task switch needs to be delayed until after
           the DS segment has been reloaded just to get the new DS value.
      
      Coupling it to the fxrstor instead of the fxsave automatically avoids
      both of these issues, and also ensures that we only do it when actually
      necessary (the FP state after a save may never actually get used).  It's
      simply a much more natural place for the leaked state cleanup.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4903062b
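      A sketch of the restore-side placement (simplified; the
      alternative_input() form follows the kernel's existing FXSAVE-leak
      workaround, applied only on CPUs with the erratum):

          static unsigned long safe_address;  /* any L1-resident variable */

          static inline void fpu_fxrstor_checking(struct fpu *fpu)
          {
                  /* AMD K7/K8 don't save/restore FDP/FIP/FOP unless an
                   * exception is pending: do a dummy x87 load first, so
                   * the previous task's FIP state cannot leak into the
                   * state we are about to restore. */
                  alternative_input(
                          ASM_NOP8 ASM_NOP2,
                          "emms\n\t"          /* clear FPU stack tags */
                          "fildl %P[addr]",   /* load sets FIP/FDP/FOP */
                          X86_FEATURE_FXSAVE_LEAK,
                          [addr] "m" (safe_address));

                  asm volatile("fxrstor %[fx]"
                               : : [fx] "m" (fpu->state->fxsave));
          }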
    • i387: do not preload FPU state at task switch time · b3b0870e
      Linus Torvalds authored
      Yes, taking the trap to re-load the FPU/MMX state is expensive, but so
      is spending several days looking for a bug in the state save/restore
      code.  And the preload code has some rather subtle interactions with
      both paravirtualization support and segment state restore, so it's not
      nearly as simple as it should be.
      
      Also, now that we no longer necessarily depend on a single bit (ie
      TS_USEDFPU) for keeping track of the state of the FPU, we might be able
      to do better.  If we are really switching between two processes that
      keep touching the FP state, save/restore is inevitable, but in the case
      of having one process that does most of the FPU usage, we may actually
      be able to do much better than the preloading.
      
      In particular, we may be able to keep track of which CPU the process ran
      on last, and also per CPU keep track of which process' FP state that CPU
      has.  For modern CPU's that don't destroy the FPU contents on save time,
      that would allow us to do a lazy restore by just re-enabling the
      existing FPU state - with no restore cost at all!
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3b0870e
    • i387: don't ever touch TS_USEDFPU directly, use helper functions · 6d59d7a9
      Linus Torvalds authored
      This creates three helper functions that do the TS_USEDFPU accesses, and
      makes everybody that used to do it by hand use those helpers instead.
      
      In addition, there are a couple of helper functions for the "change both
      CR0.TS and TS_USEDFPU at the same time" case, and the places that do
      that together have been changed to use those.  That means that we have
      fewer random places that open-code this situation.
      
      The intent is partly to clarify the code without actually changing any
      semantics yet (since we clearly still have some hard to reproduce bug in
      this area), but also to make it much easier to use another approach
      entirely to caching the CR0.TS bit for software accesses.
      
      Right now we use a bit in the thread-info 'status' variable (this patch
      does not change that), but we might want to make it a full field of its
      own or even make it a per-cpu variable.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d59d7a9
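      A sketch of the helpers (the bit still lives in thread_info->status
      at this point):

          static inline bool __thread_has_fpu(struct task_struct *tsk)
          {
                  return task_thread_info(tsk)->status & TS_USEDFPU;
          }

          static inline void __thread_set_has_fpu(struct task_struct *tsk)
          {
                  task_thread_info(tsk)->status |= TS_USEDFPU;
          }

          static inline void __thread_clear_has_fpu(struct task_struct *tsk)
          {
                  task_thread_info(tsk)->status &= ~TS_USEDFPU;
          }

          /* The "change CR0.TS and TS_USEDFPU together" pairs: */
          static inline void __thread_fpu_begin(struct task_struct *tsk)
          {
                  clts();                 /* clear CR0.TS: FPU usable */
                  __thread_set_has_fpu(tsk);
          }

          static inline void __thread_fpu_end(struct task_struct *tsk)
          {
                  __thread_clear_has_fpu(tsk);
                  stts();                 /* set CR0.TS: next use traps */
          }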
    • i387: fix x86-64 preemption-unsafe user stack save/restore · 15d8791c
      Linus Torvalds authored
      Commit 5b1cbac3 ("i387: make irq_fpu_usable() tests more robust")
      added a sanity check to the #NM handler to verify that we never cause
      the "Device Not Available" exception in kernel mode.
      
      However, that check actually pinpointed a (fundamental) race where we do
      cause that exception as part of the signal stack FPU state save/restore
      code.
      
      Because we use the floating point instructions themselves to save and
      restore state directly from user mode, we cannot do that atomically with
      testing the TS_USEDFPU bit: the user mode access itself may cause a page
      fault, which causes a task switch, which saves and restores the FP/MMX
      state from the kernel buffers.
      
      This kind of "recursive" FP state save is fine per se, but it means that
      when the signal stack save/restore gets restarted, it will now take the
      '#NM' exception we originally tried to avoid.  With preemption this can
      happen even without the page fault - but because of the user access, we
      cannot just disable preemption around the save/restore instruction.
      
      There are various ways to solve this, including using the
      "enable/disable_page_fault()" helpers to not allow page faults at all
      during the sequence, and fall back to copying things by hand without the
      use of the native FP state save/restore instructions.
      
      However, the simplest thing to do is to just allow the #NM from kernel
      space, but fix the race in setting and clearing CR0.TS that this all
      exposed: the TS bit changes and the TS_USEDFPU bit absolutely have to be
      atomic wrt scheduling, so while the actual state save/restore can be
      interrupted and restarted, the act of actually clearing/setting CR0.TS
      and the TS_USEDFPU bit together must not.
      
      Instead of just adding random "preempt_disable/enable()" calls to what
      is already excessively ugly code, this introduces some helper functions
      that mostly mirror the "kernel_fpu_begin/end()" functionality, just for
      the user state instead.
      
      Those helper functions should probably eventually replace the other
      ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it
      some more: the task switching functionality in particular needs to
      expose the difference between the 'prev' and 'next' threads, while the
      new helper functions intentionally were written to only work with
      'current'.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15d8791c
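      A sketch of those helpers, with assumed names mirroring
      kernel_fpu_begin/end(): only the CR0.TS + TS_USEDFPU flip is made
      atomic wrt scheduling, while the save/restore instructions
      themselves may still fault and be restarted:

          static inline void user_fpu_begin(void)
          {
                  preempt_disable();
                  if (!__thread_has_fpu(current))
                          __thread_fpu_begin(current);
                  preempt_enable();
          }

          static inline void user_fpu_end(void)
          {
                  preempt_disable();
                  __thread_fpu_end(current);
                  preempt_enable();
          }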
  29. 14 Feb, 2012 (2 commits)
    • i387: make irq_fpu_usable() tests more robust · 5b1cbac3
      Linus Torvalds authored
      Some code - especially the crypto layer - wants to use the x86
      FP/MMX/AVX register set in what may be interrupt (typically softirq)
      context.
      
      That *can* be ok, but the tests for when it was ok were somewhat
      suspect.  We cannot touch the thread-specific status bits either, so
      we'd better check that we're not going to try to save FP state or
      anything like that.
      
      Now, it may be that the TS bit is always cleared *before* we set the
      USEDFPU bit (and only set when we had already cleared the USEDFPU bit
      before), so the TS bit test may actually have been sufficient, but it
      certainly was not obviously so.
      
      So this explicitly verifies that we will not touch the TS_USEDFPU bit,
      and adds a few related sanity-checks, because it seems that somehow
      AES-NI is corrupting user FP state.  The cause is not clear, and this
      patch doesn't fix it, but while debugging it I really wanted the code to
      be more obviously correct and robust.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b1cbac3
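      The resulting checks, sketched (names as in the kernel of the time,
      lightly simplified):

          /* Safe if the interrupted kernel code owned no FPU state:
           * not the FPU owner, and CR0.TS set so any FPU use traps. */
          static inline bool interrupted_kernel_fpu_idle(void)
          {
                  return !__thread_has_fpu(current) &&
                          (read_cr0() & X86_CR0_TS);
          }

          /* Safe if we interrupted user mode: no kernel FPU state live. */
          static inline bool interrupted_user_mode(void)
          {
                  struct pt_regs *regs = get_irq_regs();
                  return regs && user_mode(regs);
          }

          bool irq_fpu_usable(void)
          {
                  return !in_interrupt() ||
                          interrupted_user_mode() ||
                          interrupted_kernel_fpu_idle();
          }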
    • i387: math_state_restore() isn't called from asm · be98c2cd
      Linus Torvalds authored
      It was marked asmlinkage for some really old and stale legacy reasons.
      Fix that and the equally stale comment.
      
      Noticed when debugging the irq_fpu_usable() bugs.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be98c2cd
  30. 22 Dec, 2011 (2 commits)
    • x86: Add counter when debug stack is used with interrupts enabled · 42181186
      Steven Rostedt authored
      Mathieu Desnoyers pointed out a case that can cause issues with
      NMIs running on the debug stack:
      
        int3 -> interrupt -> NMI -> int3
      
      Because the interrupt changes the stack, the NMI will not see that
      it preempted the debug stack. Looking deeper at this case,
      interrupts only happen when the int3 is from userspace or from
      a location in the exception table (fixup):
      
        userspace -> int3 -> interrupt -> NMI -> int3
      
      All other int3s that happen in the kernel should be processed
      without ever enabling interrupts, as the do_trap() call will
      panic the kernel if it is called to process any other location
      within the kernel.
      
      Adding a counter around the sections that enable interrupts while
      using the debug stack allows the NMI to also check that case.
      If the NMI sees that it either interrupted a task using the debug
      stack or that the debug counter is non-zero, then it will have to
      change the IDT table so that the int3 does not change stacks (which
      would corrupt the stack if it did).
      
      Note, I had to move the debug_usage functions out of processor.h
      and into debugreg.h because of the static inlined functions to
      inc and dec the debug_usage counter. __get_cpu_var() requires
      smp.h which includes processor.h, and would fail to build.
      
      Link: http://lkml.kernel.org/r/1323976535.23971.112.camel@gandalf.stny.rr.com
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      42181186
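      A sketch of the counter helpers described above (per-CPU, bumped
      around the interrupts-enabled windows, and inline -- hence the
      move into debugreg.h):

          DECLARE_PER_CPU(int, debug_stack_usage);

          static inline void debug_stack_usage_inc(void)
          {
                  __get_cpu_var(debug_stack_usage)++;
          }

          static inline void debug_stack_usage_dec(void)
          {
                  __get_cpu_var(debug_stack_usage)--;
          }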
    • x86: Keep current stack in NMI breakpoints · 228bdaa9
      Steven Rostedt authored
      We want to allow NMI handlers to have breakpoints to be able to
      remove stop_machine from ftrace, kprobes and jump_labels. But if
      an NMI interrupts a current breakpoint, and then it triggers a
      breakpoint itself, it will switch to the breakpoint stack and
      corrupt the data on it for the breakpoint processing that it
      interrupted.
      
      Instead, have the NMI check if it interrupted breakpoint processing
      by checking if the stack that is currently used is a breakpoint
      stack. If it is, then load a special IDT that changes the IST
      for the debug exception to keep the same stack in kernel context.
      When the NMI is done, it puts it back.
      
      This way, if the NMI does trigger a breakpoint, it will keep
      using the same stack and not stomp on the breakpoint data for
      the breakpoint it interrupted.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      228bdaa9
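      A sketch of the NMI-side bracketing, under assumed helper names:
      if the NMI interrupted breakpoint processing, it loads an IDT
      whose debug/int3 IST entries are zero, so a nested breakpoint
      keeps the current stack:

          static DEFINE_PER_CPU(bool, nmi_swapped_idt);  /* assumed flag */

          static void nmi_nesting_preprocess(struct pt_regs *regs)
          {
                  if (unlikely(is_debug_stack(regs->sp))) {
                          debug_stack_set_zero();  /* special IDT: IST=0 */
                          __this_cpu_write(nmi_swapped_idt, true);
                  }
          }

          static void nmi_nesting_postprocess(void)
          {
                  if (unlikely(__this_cpu_read(nmi_swapped_idt))) {
                          debug_stack_reset();     /* restore normal IDT */
                          __this_cpu_write(nmi_swapped_idt, false);
                  }
          }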