1. 02 Nov 2012, 1 commit
    • x86: Don't clobber top of pt_regs in nested NMI · 28696f43
      Salman Qazi authored
      The nested NMI modifies the place (instruction, flags and stack)
      that the first NMI will iret to.  However, the copy of registers
      it modifies is exactly the one that forms part of pt_regs in
      the first NMI.  This can change the behaviour of the first NMI.
      
      In particular, Google's arch_trigger_all_cpu_backtrace handler
      also prints regions of memory surrounding addresses appearing in
      registers.  This results in handled exceptions, after which nested NMIs
      start coming in.  These nested NMIs change the value of registers
      in pt_regs.  This can cause the original NMI handler to produce
      incorrect output.
      
      We solve this problem by interchanging the position of the preserved
      copy of the iret registers ("saved") and the copy subject to being
      trampled by nested NMI ("copied").
      
      Link: http://lkml.kernel.org/r/20121002002919.27236.14388.stgit@dungbeetle.mtv.corp.google.com
      Signed-off-by: Salman Qazi <sqazi@google.com>
      [ Added a needed CFI_ADJUST_CFA_OFFSET ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      28696f43
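
      A hedged C illustration of the swap (the struct and the ordering notation
      are schematic, not the kernel's actual definitions; the full stack layout
      is drawn in the 3f3c8b8c entry below):

        /* An x86-64 iret frame, fields listed top-of-diagram first: */
        struct iret_frame {
                unsigned long ss, rsp, rflags, cs, rip;
        };

        /* Before: [saved][copied][rest of pt_regs]
         *   "copied" doubles as the iret block of the first NMI's pt_regs,
         *   yet nested NMIs scribble on it, so the first NMI's pt_regs
         *   changes under its feet.
         * After:  [copied][saved][rest of pt_regs]
         *   nested NMIs still scribble on "copied", but pt_regs is now
         *   backed by the untouched "saved" copy. */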
  2. 20 Oct 2012, 1 commit
    • xen/x86: don't corrupt %eip when returning from a signal handler · a349e23d
      David Vrabel authored
      In 32-bit guests, if a userspace process has %eax == -ERESTARTSYS
      (-512) or -ERESTARTNOINTR (-513) when it is interrupted by an event
      /and/ the process has a pending signal then %eip (and %eax) are
      corrupted when returning to the main process after handling the
      signal.  The application may then crash with a SIGSEGV or a SIGILL, or it
      may have subtly incorrect behaviour (depending on what instruction it
      returned to).
      
      This occurs because handle_signal() incorrectly thinks that there
      is a system call that needs to be restarted, so it adjusts %eip and %eax
      to re-execute the system call instruction (even though user space had
      not done a system call).
      
      If %eax == -ERESTARTNOHAND (-514) or -ERESTART_RESTARTBLOCK
      (-516), then handle_signal() only corrupted %eax (by setting it to
      -EINTR).  This may cause the application to crash or have incorrect
      behaviour.
      
      handle_signal() assumes that regs->orig_ax >= 0 means a system call so
      any kernel entry point that is not for a system call must push a
      negative value for orig_ax.  For example, for physical interrupts on
      bare metal the inverse of the vector is pushed and page_fault() sets
      regs->orig_ax to -1, overwriting the hardware provided error code.
      
      xen_hypervisor_callback() was incorrectly pushing 0 for orig_ax
      instead of -1.
      
      Classic Xen kernels pushed %eax, which works as %eax cannot be both
      non-negative and -ERESTARTSYS (etc.), but using -1 is consistent with
      other non-system call entry points and avoids some of the tests in
      handle_signal().
      
      There were similar bugs in xen_failsafe_callback() of both 32 and
      64-bit guests. If the fault was corrected and the normal return path
      was used then 0 was incorrectly pushed as the value for orig_ax.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Jan Beulich <JBeulich@suse.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      a349e23d
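
      A hedged paraphrase in C of the restart logic that makes orig_ax matter
      (condensed from what handle_signal() does in this era; variable names are
      illustrative, not the verbatim kernel source):

        if (regs->orig_ax >= 0) {               /* "we came from a syscall" */
                switch (regs->ax) {
                case -ERESTART_RESTARTBLOCK:
                case -ERESTARTNOHAND:
                        regs->ax = -EINTR;      /* clobbers %eax */
                        break;
                case -ERESTARTSYS:
                        if (!(ka->sa.sa_flags & SA_RESTART)) {
                                regs->ax = -EINTR;
                                break;
                        }
                        /* fall through */
                case -ERESTARTNOINTR:
                        regs->ax = regs->orig_ax;
                        regs->ip -= 2;          /* clobbers %eip: re-run insn */
                }
        }

      With orig_ax == 0 pushed by xen_hypervisor_callback(), the outer test
      wrongly succeeded whenever %eax happened to hold one of the restart
      codes; pushing -1 keeps the whole block from running.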
  3. 13 Oct 2012, 1 commit
  4. 01 Oct 2012, 2 commits
  5. 26 Sep 2012, 2 commits
    • x86: Use the new schedule_user API on userspace preemption · 0430499c
      Frederic Weisbecker authored
      This way we can exit the RCU extended quiescent state before
      we schedule a new task from irq/exception exit.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Alessio Igor Bogani <abogani@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Kevin Hilman <khilman@ti.com>
      Cc: Max Krasnyansky <maxk@qualcomm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sven-Thorsten Dietrich <thebigcorporation@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      0430499c
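
      The helper is tiny; a sketch of its assumed shape, matching the
      description above:

        asmlinkage void schedule_user(void)
        {
                rcu_user_exit();        /* leave RCU extended quiescent state */
                schedule();
                rcu_user_enter();       /* re-enter it before resuming userspace */
        }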
    • x86_64: Work around old GAS bug · 1b2b23d8
      Tao Guo authored
      GAS in binutils (2.16.91) could not parse parentheses within
      macro parameters unless they were fully parenthesized; this is a
      workaround to make old GAS work without generating the errors below:
      
       arch/x86/kernel/entry_64.S: Assembler messages:
       arch/x86/kernel/entry_64.S:387: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:389: Error: too many positional arguments
       [...]
      Signed-off-by: Tao Guo <glorioustao@gmail.com>
      Reluctantly-Acked-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1348648102-12653-1-git-send-email-glorioustao@gmail.com
      [ Jan argues that these old GAS versions are fragile - which is so, but let's give them a chance. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1b2b23d8
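
      A hedged illustration of the parse problem, wrapped in C file-scope asm
      (the macro name is invented; the behaviour is as the message describes):

        asm(".macro load_at offset, reg\n"
            "movq \\offset(%rsp), \\reg\n"
            ".endm\n");
        /* "load_at (5*8), %rdx"   - old GAS splits the parenthesized
         *    expression and reports: too many positional arguments
         * "load_at ((5*8)), %rdx" - fully parenthesized: accepted     */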
  6. 22 Sep 2012, 1 commit
  7. 14 Sep 2012, 1 commit
  8. 13 Sep 2012, 1 commit
  9. 23 Aug 2012, 1 commit
    • ftrace/x86: Add support for -mfentry to x86_64 · d57c5d51
      Steven Rostedt authored
      If the kernel is compiled with gcc 4.6.0 or later, which supports
      -mfentry, then use that instead of mcount.
      
      With mcount, frame pointers are forced with the -pg option and we
      get something like:
      
      <can_vma_merge_before>:
             55                      push   %rbp
             48 89 e5                mov    %rsp,%rbp
             53                      push   %rbx
             41 51                   push   %r9
             e8 fe 6a 39 00          callq  ffffffff81483d00 <mcount>
             31 c0                   xor    %eax,%eax
             48 89 fb                mov    %rdi,%rbx
             48 89 d7                mov    %rdx,%rdi
             48 33 73 30             xor    0x30(%rbx),%rsi
             48 f7 c6 ff ff ff f7    test   $0xfffffffff7ffffff,%rsi
      
      With -mfentry, frame pointers are no longer forced and the call looks
      like this:
      
      <can_vma_merge_before>:
             e8 33 af 37 00          callq  ffffffff81461b40 <__fentry__>
             53                      push   %rbx
             48 89 fb                mov    %rdi,%rbx
             31 c0                   xor    %eax,%eax
             48 89 d7                mov    %rdx,%rdi
             41 51                   push   %r9
             48 33 73 30             xor    0x30(%rbx),%rsi
             48 f7 c6 ff ff ff f7    test   $0xfffffffff7ffffff,%rsi
      
      This adds the ftrace hook at the beginning of the function before a
      frame is set up, and allows the function callbacks to be able to access
      parameters. As kprobes now can use function tracing (at least on x86)
      this speeds up the kprobe hooks that are at the beginning of the
      function.
      
      Link: http://lkml.kernel.org/r/20120807194100.130477900@goodmis.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d57c5d51
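
      On the C side the switch can be as small as picking the hook address; a
      sketch of the assumed arch-header shape (CC_USING_FENTRY being the define
      the build sets when gcc accepts -mfentry):

        extern void __fentry__(void);
        extern void mcount(void);

        #ifdef CC_USING_FENTRY
        # define MCOUNT_ADDR ((unsigned long)(__fentry__))
        #else
        # define MCOUNT_ADDR ((unsigned long)(mcount))
        #endif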
  10. 31 Jul 2012, 1 commit
  11. 20 Jul 2012, 2 commits
    • ftrace/x86: Add separate function to save regs · 08f6fba5
      Steven Rostedt authored
      Add a way to have different functions calling different trampolines.
      If an ftrace_ops wants regs saved on the return, then have only the
      functions with ops registered to save regs. Functions registered by
      other ops would not be affected, unless the functions overlap.
      
      If one ftrace_ops registered functions A, B and C, and another ops
      registered functions to save regs on A and D, then only functions
      A and D would be saving regs. Functions B and C would work as normal.
      Although A is registered by both ops (one normal, one saving regs), this is
      fine, as saving the regs is needed to satisfy one of the ops that calls it,
      while the regs are simply ignored by the other ops' function.
      
      x86_64 implements the full regs saving, and i386 just passes a NULL
      for regs to satisfy the calling convention, in which an arch must supply
      both the regs and ftrace_ops parameters, even if regs is just NULL.
      
      It is OK for an arch to pass NULL regs. All function trace users that
      require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when
      registering the ftrace_ops. If the arch does not support saving regs
      then the ftrace_ops will fail to register. The flag
      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set to prevent the
      ftrace_ops from failing to register. In this case, the handler may
      either check whether regs is not NULL or check whether
      ARCH_SUPPORTS_FTRACE_SAVE_REGS is defined.
      If the arch supports passing regs it will set this macro and pass regs
      for ops that request them. All other archs will just pass NULL.
      
      Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
      
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08f6fba5
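
      A hedged sketch of a user of the new flag (callback name and output are
      invented; the flags and callback signature are the ones described above
      and in the 2f5f6ad9 entry below):

        #include <linux/ftrace.h>
        #include <linux/printk.h>

        static void my_func(unsigned long ip, unsigned long parent_ip,
                            struct ftrace_ops *op, struct pt_regs *regs)
        {
                if (regs)       /* NULL when the arch cannot save regs */
                        pr_info("ip=%lx regs->ip=%lx\n", ip, regs->ip);
        }

        static struct ftrace_ops my_ops = {
                .func  = my_func,
                .flags = FTRACE_OPS_FL_SAVE_REGS,
                /* or, to tolerate arches without support:
                 *      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED */
        };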
    • ftrace: Pass ftrace_ops as third parameter to function trace callback · 2f5f6ad9
      Steven Rostedt authored
      Currently the function trace callback receives only the ip and parent_ip
      of the function that it traced. It would be more powerful to also pass
      the ops that registered the function. This allows the same function
      to act differently depending on which ftrace_ops registered it.
      
      Link: http://lkml.kernel.org/r/20120612225424.267254552@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2f5f6ad9
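
      Why the extra parameter helps, in a hedged sketch: one callback body can
      now serve many ftrace_ops by recovering per-ops state from the pointer
      (names invented):

        #include <linux/ftrace.h>
        #include <linux/kernel.h>

        struct my_tracer {
                struct ftrace_ops ops;
                unsigned long     hits;
        };

        static void shared_cb(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *op, struct pt_regs *regs)
        {
                struct my_tracer *t = container_of(op, struct my_tracer, ops);
                t->hits++;      /* behaviour keyed off which ops fired */
        }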
  12. 28 Jun 2012, 1 commit
  13. 07 Jun 2012, 1 commit
  14. 01 Jun 2012, 1 commit
    • ftrace/x86: Do not change stacks in DEBUG when calling lockdep · 5963e317
      Steven Rostedt authored
      When both DYNAMIC_FTRACE and LOCKDEP are set, the TRACE_IRQS_ON/OFF
      will call into the lockdep code. The lockdep code can call lots of
      functions that may be traced by ftrace. When ftrace is updating its
      code and hits a breakpoint, the breakpoint handler will call into
      lockdep. If lockdep happens to call a function that also has a breakpoint
      attached, it will jump back into the breakpoint handler, resetting
      the stack to the debug stack and corrupting the contents currently on
      that stack.
      
      The 'do_sym' call that calls do_int3() is protected by modifying the
      IST table to point to a different location if another breakpoint is
      hit. But the TRACE_IRQS_OFF/ON are outside that protection, and if
      a breakpoint is hit from those, the stack will get corrupted, and
      the kernel will crash:
      
      [ 1013.243754] BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
      [ 1013.272665] IP: [<ffff880145cc0000>] 0xffff880145cbffff
      [ 1013.285186] PGD 1401b2067 PUD 14324c067 PMD 0
      [ 1013.298832] Oops: 0010 [#1] PREEMPT SMP
      [ 1013.310600] CPU 2
      [ 1013.317904] Modules linked in: ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables crc32c_intel ghash_clmulni_intel microcode usb_debug serio_raw pcspkr iTCO_wdt i2c_i801 iTCO_vendor_support e1000e nfsd nfs_acl auth_rpcgss lockd sunrpc i915 video i2c_algo_bit drm_kms_helper drm i2c_core [last unloaded: scsi_wait_scan]
      [ 1013.401848]
      [ 1013.407399] Pid: 112, comm: kworker/2:1 Not tainted 3.4.0+ #30
      [ 1013.437943] RIP: 8eb8:[<ffff88014630a000>]  [<ffff88014630a000>] 0xffff880146309fff
      [ 1013.459871] RSP: ffffffff8165e919:ffff88014780f408  EFLAGS: 00010046
      [ 1013.477909] RAX: 0000000000000001 RBX: ffffffff81104020 RCX: 0000000000000000
      [ 1013.499458] RDX: ffff880148008ea8 RSI: ffffffff8131ef40 RDI: ffffffff82203b20
      [ 1013.521612] RBP: ffffffff81005751 R08: 0000000000000000 R09: 0000000000000000
      [ 1013.543121] R10: ffffffff82cdc318 R11: 0000000000000000 R12: ffff880145cc0000
      [ 1013.564614] R13: ffff880148008eb8 R14: 0000000000000002 R15: ffff88014780cb40
      [ 1013.586108] FS:  0000000000000000(0000) GS:ffff880148000000(0000) knlGS:0000000000000000
      [ 1013.609458] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [ 1013.627420] CR2: 0000000000000002 CR3: 0000000141f10000 CR4: 00000000001407e0
      [ 1013.649051] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 1013.670724] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [ 1013.692376] Process kworker/2:1 (pid: 112, threadinfo ffff88013fe0e000, task ffff88014020a6a0)
      [ 1013.717028] Stack:
      [ 1013.724131]  ffff88014780f570 ffff880145cc0000 0000400000004000 0000000000000000
      [ 1013.745918]  cccccccccccccccc ffff88014780cca8 ffffffff811072bb ffffffff81651627
      [ 1013.767870]  ffffffff8118f8a7 ffffffff811072bb ffffffff81f2b6c5 ffffffff81f11bdb
      [ 1013.790021] Call Trace:
      [ 1013.800701] Code: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a <e7> d7 64 81 ff ff ff ff 01 00 00 00 00 00 00 00 65 d9 64 81 ff
      [ 1013.861443] RIP  [<ffff88014630a000>] 0xffff880146309fff
      [ 1013.884466]  RSP <ffff88014780f408>
      [ 1013.901507] CR2: 0000000000000002
      
      The solution was to reuse the NMI functions that change the IDT table, so
      that a breakpoint hit in kernel mode keeps the current stack rather than
      switching to the debug stack:
      
        call debug_stack_set_zero
        TRACE_IRQS_ON
        call debug_stack_reset
      
      If the TRACE_IRQS_ON happens to hit a breakpoint then it will keep the current stack
      and not crash the box.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5963e317
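
      A hedged sketch of the helpers' assumed shape: a per-cpu counter plus a
      spare IDT whose #DB/#BP gates use no IST slot, so a breakpoint keeps the
      current stack while the counter is raised (names follow the NMI code this
      reuses, but treat them as assumptions):

        static DEFINE_PER_CPU(u32, debug_stack_use_ctr);

        void debug_stack_set_zero(void)
        {
                this_cpu_inc(debug_stack_use_ctr);
                load_idt((const struct desc_ptr *)&nmi_idt_descr);
        }

        void debug_stack_reset(void)
        {
                if (this_cpu_dec_return(debug_stack_use_ctr) == 0)
                        load_idt((const struct desc_ptr *)&idt_descr);
        }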
  15. 21 Apr 2012, 1 commit
  16. 27 Feb 2012, 1 commit
  17. 25 Feb 2012, 3 commits
  18. 21 Feb 2012, 4 commits
    • x86: Specify a size for the cmp in the NMI handler · a38449ef
      Steven Rostedt authored
      Linus noticed that the cmp used to check whether the code segment is
      __KERNEL_CS did not specify a size. Perhaps it does not matter,
      as H. Peter Anvin noted that user space cannot set the bottom two
      bits of the %cs register. But it's best not to let the assembler choose,
      and risk changes between different versions of gas; instead, just
      pick the size.
      
      Four bytes are used to compare the saved code segment against
      __KERNEL_CS. Perhaps this might mess up Xen, but we can fix that when
      the time comes.
      
      Also I noticed that there was another size-unspecified cmp that checks
      whether the special stack variable is 1 or 0. Here too it probably
      doesn't matter which cmp is used, but this patch uses cmpl just to make
      it unambiguous.
      
      Link: http://lkml.kernel.org/r/CA+55aFxfAn9MWRgS3O5k2tqN5ys1XrhSFVO5_9ZAoZKDVgNfGA@mail.gmail.com
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a38449ef
    • x32: Handle process creation · d1a797f3
      H. Peter Anvin authored
      Allow an x32 process to be started.
      Originally-by: H. J. Lu <hjl.tools@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      d1a797f3
    • x32: Signal-related system calls · c5a37394
      H. Peter Anvin authored
      x32 uses the 64-bit signal frame format, obviously, but there are some
      structures which mix that with pointers or sizeof(long) types; as
      such we have to create a handful of system calls specific to x32.  By
      and large these are a mixture of the 64-bit and the compat system
      calls.
      Originally-by: H. J. Lu <hjl.tools@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      c5a37394
    • x32: Handle the x32 system call flag · fca460f9
      H. Peter Anvin authored
      x32 shares most system calls with x86-64, but unfortunately some
      subsystems (the input subsystem is the chief offender) require
      is_compat() when operating with a 32-bit userspace.  The input system
      actually has text files in sysfs whose meaning is dependent on
      sizeof(long) in userspace!
      
      We could solve this by having two completely disjoint system call
      tables; requiring that each system call be duplicated.  This patch
      takes a different approach: we add a flag to the system call number;
      this flag doesn't affect the system call dispatch but requests compat
      treatment from affected subsystems for the duration of the system call.
      
      The change of cmpq to cmpl is safe since it immediately follows the
      and.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      fca460f9
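
      The flag scheme in miniature (a hedged sketch: the mask value matches the
      x32 ABI's bit 30; the helper name is invented):

        #define __X32_SYSCALL_BIT 0x40000000UL

        /* The dispatcher masks the bit off before indexing the 64-bit
         * syscall table; affected subsystems only ask whether it was set: */
        static inline int syscall_nr_is_x32(unsigned long nr)
        {
                return (nr & __X32_SYSCALL_BIT) != 0;
        }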
  19. 20 Feb 2012, 1 commit
    • x86/nmi: Test saved %cs in NMI to determine nested NMI case · 45d5a168
      Steven Rostedt authored
      Currently, the NMI handler tests if it is nested by checking the
      special variable saved on the stack (set during NMI handling)
      and whether the saved stack is the NMI stack as well (to prevent
      the race when the variable is set to zero).
      
      But userspace may set its %rsp to any value as long as it does
      not dereference it, and it may make it point to the NMI stack,
      which will prevent NMIs from triggering while the userspace app
      is running. (I tested this, and it is indeed the case.)
      
      Add another check to determine nested NMIs by looking at the
      saved %cs (code segment register) and making sure that it is the
      kernel code segment.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: <stable@kernel.org>
      Link: http://lkml.kernel.org/r/1329687817.1561.27.camel@acer.local.home
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      45d5a168
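
      A C-level paraphrase of the added test (the real check is a couple of
      assembly instructions against the iret frame; this sketch is
      illustrative only):

        /* Only a kernel-mode %cs can mean a nested NMI; an NMI that
         * interrupted userspace is never nested, no matter where the
         * user's %rsp points: */
        static int could_be_nested_nmi(unsigned long saved_cs)
        {
                return (saved_cs & 0xffff) == __KERNEL_CS;
        }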
  20. 18 Jan 2012, 2 commits
    • audit: inline audit_syscall_entry to reduce burden on archs · b05d8447
      Eric Paris authored
      Every arch calls:
      
      if (unlikely(current->audit_context))
      	audit_syscall_entry()
      
      which requires knowledge about audit (the existence of audit_context) in
      the arch code.  Just do it all in a static inline in audit.h so that
      arches can remain blissfully ignorant.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      b05d8447
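
      The static inline described above, roughly as it would read (the argument
      list is assumed from the audit API of this era):

        /* include/linux/audit.h, sketched: */
        static inline void audit_syscall_entry(int arch, int major,
                                               unsigned long a0, unsigned long a1,
                                               unsigned long a2, unsigned long a3)
        {
                if (unlikely(current->audit_context))
                        __audit_syscall_entry(arch, major, a0, a1, a2, a3);
        }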
    • Audit: push audit success and retcode into arch ptrace.h · d7e7528b
      Eric Paris authored
      The audit system previously expected arches calling audit_syscall_exit to
      supply as arguments whether the syscall was a success and what the return
      code was.  Audit also provides a helper, AUDITSC_RESULT, which was supposed
      to simplify things by converting from negative retcodes to an audit-internal
      magic value stating success or failure.  This helper was wrong and could
      indicate that a valid pointer returned to userspace was a failed syscall.
      The fix is to fix the layering foolishness.  We now pass audit_syscall_exit
      a struct pt_regs, and it in turn calls back into arch code to collect the
      return value and to
      determine if the syscall was a success or failure.  We also define a generic
      is_syscall_success() macro which determines success/failure based on if the
      value is < -MAX_ERRNO.  This works for arches like x86 which do not use a
      separate mechanism to indicate syscall failure.
      
      We make both the is_syscall_success() and regs_return_value() static inlines
      instead of macros.  The reason is that the audit function must take a void*
      for the regs.  (uml calls theirs struct uml_pt_regs instead of just struct
      pt_regs so audit_syscall_exit can't take a struct pt_regs).  Since the audit
      function takes a void* we need to use static inlines to cast it back to the
      arch correct structure to dereference it.
      
      The other major change is that on some arches, like ia64, MIPS and ppc, we
      change regs_return_value() to give us the negative value on syscall failure.
      The only other user of this macro, kretprobe_example.c, won't notice, and it
      makes the value signed consistently for the audit functions across all archs.
      
      In arch/sh/kernel/ptrace_64.c I see that we were using regs[9] in the old
      audit code as the return value.  But the ptrace_64.h code defined the macro
      regs_return_value() as regs[3].  I have no idea which one is correct, but this
      patch now uses the regs_return_value() function, so it now uses regs[3].
      
      For powerpc we previously used regs->result but now use the
      regs_return_value() function, which uses regs->gprs[3].  regs->gprs[3] is
      always positive, so regs_return_value(), much like on ia64, makes it negative
      before calling the audit code when appropriate.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com> [for x86 portion]
      Acked-by: Tony Luck <tony.luck@intel.com> [for ia64]
      Acked-by: Richard Weinberger <richard@nod.at> [for uml]
      Acked-by: David S. Miller <davem@davemloft.net> [for sparc]
      Acked-by: Ralf Baechle <ralf@linux-mips.org> [for mips]
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [for ppc]
      d7e7528b
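
      Two of the pieces described above, sketched (IS_ERR_VALUE is the standard
      "within -MAX_ERRNO" test; treat the exact shapes as assumptions):

        static inline int is_syscall_success(struct pt_regs *regs)
        {
                return !IS_ERR_VALUE((unsigned long)regs_return_value(regs));
        }

        /* Takes void * because uml hands in struct uml_pt_regs rather
         * than struct pt_regs: */
        static inline void audit_syscall_exit(void *pt_regs)
        {
                if (unlikely(current->audit_context)) {
                        int success = is_syscall_success(pt_regs);
                        long return_code = regs_return_value(pt_regs);
                        __audit_syscall_exit(success, return_code);
                }
        }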
  21. 22 Dec 2011, 3 commits
    • x86: Add workaround to NMI iret woes · 3f3c8b8c
      Steven Rostedt authored
      In x86, when an NMI goes off, the CPU goes into an NMI context that
      prevents other NMIs from triggering on that CPU. If an NMI is supposed to
      trigger, it has to wait till the previous NMI leaves NMI context.
      At that time, the next NMI can trigger (note, only one more NMI will
      trigger, as only one can be latched at a time).
      
      The way x86 gets out of NMI context is by calling iret. The problem
      is that this causes trouble if the NMI handler either
      triggers an exception or a breakpoint. Both the exception and the
      breakpoint handlers will finish with an iret. If this happens while
      in NMI context, the CPU will leave NMI context and a new NMI may come
      in. As NMI handlers are not made to be re-entrant, this can cause
      havoc with the system, not to mention that the nested NMI will write
      all over the previous NMI's stack.
      
      Linus Torvalds proposed the following workaround to this problem:
      
      https://lkml.org/lkml/2010/7/14/264
      
      "In fact, I wonder if we couldn't just do a software NMI disable
      instead? Hav ea per-cpu variable (in the _core_ percpu areas that get
      allocated statically) that points to the NMI stack frame, and just
      make the NMI code itself do something like
      
       NMI entry:
       - load percpu NMI stack frame pointer
       - if non-zero we know we're nested, and should ignore this NMI:
          - we're returning to kernel mode, so return immediately by using
      "popf/ret", which also keeps NMI's disabled in the hardware until the
      "real" NMI iret happens.
          - before the popf/iret, use the NMI stack pointer to make the NMI
      return stack be invalid and cause a fault
        - set the NMI stack pointer to the current stack pointer
      
       NMI exit (not the above "immediate exit because we nested"):
         clear the percpu NMI stack pointer
         Just do the iret.
      
      Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
      we'll take a fault, and that re-does our "delayed" NMI - and NMI's
      will stay masked.
      
      And if we didn't have a nested NMI, that iret will now unmask NMI's,
      and everything is happy."
      
      I first tried to follow this advice but as I started implementing this
      code, a few gotchas showed up.
      
      One is accessing per-cpu variables in the NMI handler.
      
      The problem is that per-cpu variables use the %gs register to get the
      variable for the given CPU. But as the NMI may happen in userspace,
      we must first perform a SWAPGS to get to it. The NMI handler already
      does this later in the code, but it's too late, as we have saved off
      all the registers and we don't want to do that for a disabled NMI.
      
      Peter Zijlstra suggested keeping all variables on the stack. This
      simplifies things greatly and it has the added benefit of cache locality.
      
      Two is faulting on the iret.
      
      I really wanted to make this work, but it was becoming very hacky, and
      I never got it to be stable. The iret already had a fault handler for
      userspace faulting with bad segment registers, and getting NMI to trigger
      a fault and detect it was very tricky. But for strange reasons, the system
      would usually take a double fault and crash. I never figured out why
      and decided to go with a simple "jmp" approach. The new approach I took
      also simplified things.
      
      Finally, the last problem with Linus's approach was having the nested
      NMI handler do a ret instead of an iret to give the first NMI its
      NMI context back.
      
      The problem is that ret is much more limited than an iret. I couldn't figure
      out how to get the stack back where it belonged. I could have copied the
      current stack, pushed the return onto it, but my fear here is that there
      may be some place that writes data below the stack pointer. I know that
      is not something code should depend on, but I don't want to chance it.
      I may add this feature later, but for now, an NMI handler that loses NMI
      context will not get it back.
      
      Here's what is done:
      
      When an NMI comes in, the HW pushes the interrupt stack frame onto the
      per cpu NMI stack that is selected by the IST.
      
      A special location on the NMI stack holds a variable that is set when
      the first NMI handler runs. If this variable is set then we know that
      this is a nested NMI and we process the nested NMI code.
      
      There is still a race when this variable is cleared and an NMI comes
      in just before the first NMI does the return. For this case, if the
      variable is cleared, we also check if the interrupted stack is the
      NMI stack. If it is, then we process the nested NMI code.
      
      Why the two tests and not just test the interrupted stack?
      
      If the first NMI hits a breakpoint and loses NMI context, and then it
      hits another breakpoint and while processing that breakpoint we get a
      nested NMI. When processing a breakpoint, the stack changes to the
      breakpoint stack. If another NMI comes in here we can't rely on the
      interrupted stack to be the NMI stack.
      
      If the variable is not set and the interrupted task's stack is not the
      NMI stack, then we know this is the first NMI and we can process things
      normally. But in order to do so, we need to do a few things first.
      
      1) Set the stack variable that tells us that we are in an NMI handler
      
      2) Make two copies of the interrupt stack frame.
         One copy is used to return on iret
         The other is used to restore the first one if we have a nested NMI.
      
      This is what the stack will look like:
      
      	  +-------------------------+
      	  | original SS             |
      	  | original Return RSP     |
      	  | original RFLAGS         |
      	  | original CS             |
      	  | original RIP            |
      	  +-------------------------+
      	  | temp storage for rdx    |
      	  +-------------------------+
      	  | NMI executing variable  |
      	  +-------------------------+
      	  | Saved SS                |
      	  | Saved Return RSP        |
      	  | Saved RFLAGS            |
      	  | Saved CS                |
      	  | Saved RIP               |
      	  +-------------------------+
      	  | copied SS               |
      	  | copied Return RSP       |
      	  | copied RFLAGS           |
      	  | copied CS               |
      	  | copied RIP              |
      	  +-------------------------+
      	  | pt_regs                 |
      	  +-------------------------+
      
      The original stack frame contains what the HW put in when we entered
      the NMI.
      
      We store %rdx as a temp variable to use. Both the original HW stack
      frame and this %rdx storage will be clobbered by nested NMIs so we
      can not rely on them later in the first NMI handler.
      
      The next item is the special stack variable that is set when we execute
      the rest of the NMI handler.
      
      Then we have two copies of the interrupt stack. The second copy is
      modified by any nested NMIs to let the first NMI know that we triggered
      a second NMI (latched) and that we should repeat the NMI handler.
      
      If the first NMI hits an exception or breakpoint that takes it out of
      NMI context, if a second NMI comes in before the first one finishes,
      it will update the copied interrupt stack to point to a fix up location
      to trigger another NMI.
      
      When the first NMI calls iret, it will instead jump to the fix up
      location. This fix up location will copy the saved interrupt stack back
      to the copy and execute the nmi handler again.
      
      Note, the nested NMI knows enough to check if it preempted a previous
      NMI handler while it is in the fixup location. If it has, it will not
      modify the copied interrupt stack and will just leave as if nothing
      happened. As the NMI handler is about to execute again, there's no reason
      to latch now.
      
      To test all this, I forced the NMI handler to call iret and take itself
      out of NMI context. I also added assembly code to write to the serial port to
      make sure that it hits the nested path as well as the fix up path.
      Everything seems to be working fine.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3f3c8b8c
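
      The nested-NMI test, paraphrased in C against the layout above (a hedged
      sketch; the real code is assembly that uses only the stack, no per-cpu
      data, and the helper names are invented):

        static int nmi_is_nested(unsigned long nmi_executing,
                                 unsigned long interrupted_rsp,
                                 unsigned long nmi_stack_lo,
                                 unsigned long nmi_stack_hi)
        {
                if (nmi_executing)      /* special stack variable is set */
                        return 1;
                /* Variable already cleared: close the race by asking
                 * whether we interrupted the NMI stack itself. */
                return interrupted_rsp >= nmi_stack_lo &&
                       interrupted_rsp <  nmi_stack_hi;
        }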
    • x86: Document the NMI handler about not using paranoid_exit · 1fd466ef
      Steven Rostedt authored
      Linus cleaned up the NMI handler but it still needs some comments to
      explain why it uses save_paranoid but not paranoid_exit. Just to keep
      others from adding that in the future, document why it's not used.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1fd466ef
    • x86: Do not schedule while still in NMI context · 549c89b9
      Linus Torvalds authored
      The NMI handler uses the paranoid_exit routine that checks the
      NEED_RESCHED flag, and if it is set and the return is for userspace,
      then interrupts are enabled, the stack is swapped to the thread's stack,
      and schedule is called. The problem with this is that we are still in an
      NMI context until an iret is executed. This means that any new NMIs are
      now starved until an interrupt or exception occurs and does the iret.
      
      As NMIs cannot be masked and can interrupt any location, they are
      treated as a special case. NEED_RESCHED should not be set in an NMI
      handler. The interruption by the NMI should not disturb the work flow
      for scheduling. Any IPI sent to a processor after setting
      NEED_RESCHED would have to wait for the NMI anyway, and after the IPI
      finishes the schedule would be called as required.
      
      There is no reason to do anything special leaving an NMI. Remove the
      call to paranoid_exit and do a simple return. This not only fixes the
      bug of starved NMIs, but it also cleans up the code.
      
      Link: http://lkml.kernel.org/r/CA+55aFzgM55hXTs4griX5e9=v_O+=ue+7Rj0PTD=M7hFYpyULQ@mail.gmail.com
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      549c89b9
  22. 06 Dec 2011, 5 commits
  23. 29 Sep 2011, 1 commit
    • x86-64: Fix CFI data for interrupt frames · eab9e613
      Jan Beulich authored
      The patch titled "x86: Don't use frame pointer to save old stack
      on irq entry" did not properly adjust CFI directives, so this
      patch is a follow-up to that one.
      
      With the old stack pointer no longer stored in a callee-saved
      register (plus some offset), we now have to use a CFA expression
      to describe the memory location where it is found. This
      requires the use of .cfi_escape (allowing arbitrary byte streams
      to be emitted into .eh_frame), as there is no
      .cfi_def_cfa_expression (which also cannot reasonably be
      expected, as it would require a full expression parser).
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/4E8360200200007800058467@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      eab9e613
  24. 11 Aug 2011, 1 commit
  25. 03 Jul 2011, 1 commit
    • x86: Don't use frame pointer to save old stack on irq entry · a2bbe750
      Frederic Weisbecker authored
      rbp is used in SAVE_ARGS_IRQ to save the old stack pointer
      in order to restore it later in ret_from_intr.
      
      It is convenient because we save its value in the irq regs
      and it's easily restored using the leave instruction.
      
      However this is a kind of abuse of the frame pointer, whose
      role is to help unwind the kernel by chaining frames
      together, each node following the return address to the
      previous frame.
      
      But although we are breaking the frame by changing the stack
      pointer, there is no preceding return address before the new
      frame. Hence using the frame pointer to link the two stacks
      breaks the stack unwinders that find a random value instead of
      a return address here.
      
      There is no workaround that can work in every case. We are using
      the fixup_bp_irq_link() function to dereference that abused frame
      pointer in the case of a non-nesting interrupt (which means the stack
      changed).
      But that doesn't fix the case of interrupts that don't change the
      stack (where we still have the unconditional frame link), which is
      the case of a hardirq interrupting a softirq. We have no way to detect
      this transition, so the frame irq link is treated as a real frame
      pointer and the return address is dereferenced, but it is still a
      spurious one.
      
      There are two possible results of this: either the spurious return
      address, a random stack value, luckily belongs to the kernel text
      and then the unwinding can continue and we just have a weird entry
      in the stack trace. Or it doesn't belong to the kernel text and
      unwinding stops there.
      
      This is the reason why stacktraces (including perf callchains) on
      irqs that interrupted softirqs don't work very well.
      
      To solve this, we don't save the old stack pointer on rbp anymore
      but we save it to a scratch register that we push on the new
      stack and that we pop back later on irq return.
      
      This preserves the whole frame chain without spurious return addresses
      in the middle and drops the need for the horrid fixup_bp_irq_link()
      workaround.
      
      And finally, irqs that interrupt softirqs are sanely unwound.
      
      Before:
      
          99.81%         perf  [kernel.kallsyms]  [k] perf_pending_event
                         |
                         --- perf_pending_event
                             irq_work_run
                             smp_irq_work_interrupt
                             irq_work_interrupt
                            |
                            |--41.60%-- __read
                            |          |
                            |          |--99.90%-- create_worker
                            |          |          bench_sched_messaging
                            |          |          cmd_bench
                            |          |          run_builtin
                            |          |          main
                            |          |          __libc_start_main
                            |           --0.10%-- [...]
      
      After:
      
           1.64%  swapper  [kernel.kallsyms]  [k] perf_pending_event
                  |
                  --- perf_pending_event
                      irq_work_run
                      smp_irq_work_interrupt
                      irq_work_interrupt
                     |
                     |--95.00%-- arch_irq_work_raise
                     |          irq_work_queue
                     |          __perf_event_overflow
                     |          perf_swevent_overflow
                     |          perf_swevent_event
                     |          perf_tp_event
                     |          perf_trace_softirq
                     |          __do_softirq
                     |          call_softirq
                     |          do_softirq
                     |          irq_exit
                     |          |
                     |          |--73.68%-- smp_apic_timer_interrupt
                     |          |          apic_timer_interrupt
                     |          |          |
                     |          |          |--96.43%-- amd_e400_idle
                     |          |          |          cpu_idle
                     |          |          |          start_secondary
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jan Beulich <JBeulich@novell.com>
      a2bbe750