1. 17 Feb 2012 (4 commits)
    • i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from save to restore · 4903062b
      Committed by Linus Torvalds
      The AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is
      pending.  In order to not leak FIP state from one process to another, we
      need to do a floating point load after the fxsave of the old process,
      and before the fxrstor of the new FPU state.  That resets the state to
      the (uninteresting) kernel load, rather than some potentially sensitive
      user information.
      
      We used to do this directly after the FPU state save, but that is
      actually very inconvenient, since it
      
       (a) corrupts what is potentially perfectly good FPU state that we might
           want to avoid restoring later (lazily), and
      
       (b) on x86-64 it resulted in a very annoying ordering constraint, where
           "__unlazy_fpu()" in the task switch needs to be delayed until after
           the DS segment has been reloaded just to get the new DS value.
      
      Coupling it to the fxrstor instead of the fxsave automatically avoids
      both of these issues, and also ensures that we only do it when actually
      necessary (the FP state after a save may never actually get used).  It's
      simply a much more natural place for the leaked state cleanup.
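      
      The workaround itself is tiny.  A minimal sketch of how it looks when
      coupled to the restore path (hedged: safe_address, the feature-flag
      name and the exact asm are reconstructed from memory, not quoted from
      the patch):
      
        static inline void fpu_restore(struct task_struct *tsk)
        {
                /*
                 * AMD K7/K8 don't save/restore FDP/FIP/FOP unless an
                 * exception is pending.  Do a dummy x87 load first, so
                 * FIP/FDP point at harmless kernel data ("safe_address"
                 * is any kernel variable likely to be in L1) instead of
                 * the previous task's last FP instruction.
                 */
                if (static_cpu_has(X86_FEATURE_FXSAVE_LEAK))
                        asm volatile("emms\n\t"       /* clear stack tags */
                                     "fildl %P[addr]" /* set FIP/FDP */
                                     : : [addr] "m" (safe_address));
        
                asm volatile("fxrstor %0"
                             : : "m" (tsk->thread.fpu.state->fxsave));
        }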
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: do not preload FPU state at task switch time · b3b0870e
      Committed by Linus Torvalds
      Yes, taking the trap to re-load the FPU/MMX state is expensive, but so
      is spending several days looking for a bug in the state save/restore
      code.  And the preload code has some rather subtle interactions with
      both paravirtualization support and segment state restore, so it's not
      nearly as simple as it should be.
      
      Also, now that we no longer necessarily depend on a single bit (i.e.
      TS_USEDFPU) for keeping track of the state of the FPU, we might be able
      to do better.  If we are really switching between two processes that
      keep touching the FP state, save/restore is inevitable, but in the case
      of having one process that does most of the FPU usage, we may actually
      be able to do much better than the preloading.
      
      In particular, we may be able to keep track of which CPU the process ran
      on last, and also per CPU keep track of which process' FP state that CPU
      has.  For modern CPUs that don't destroy the FPU contents at save time,
      that would allow us to do a lazy restore by just re-enabling the
      existing FPU state - with no restore cost at all!
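      
      A hedged sketch of that bookkeeping (fpu_owner_task and fpu.last_cpu
      are assumed names for illustration; the kernel later grew fields of
      exactly this shape):
      
        static DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
        
        static inline int fpu_lazy_restore(struct task_struct *next,
                                           unsigned int cpu)
        {
                /*
                 * If this CPU's FPU registers still hold next's state
                 * (it was the last FPU user here and has not run on any
                 * other CPU since), just re-enable the FPU instead of
                 * doing a full restore.
                 */
                return next == per_cpu(fpu_owner_task, cpu) &&
                       cpu == next->thread.fpu.last_cpu;
        }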
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: don't ever touch TS_USEDFPU directly, use helper functions · 6d59d7a9
      Committed by Linus Torvalds
      This creates three helper functions that do the TS_USEDFPU accesses, and
      makes everybody that used to do it by hand use those helpers instead.
      
      In addition, there are a couple of helper functions for the "change both
      CR0.TS and TS_USEDFPU at the same time" case, and the places that do
      that together have been changed to use those.  That means that we have
      fewer random places that open-code this situation.
      
      The intent is partly to clarify the code without actually changing any
      semantics yet (since we clearly still have some hard to reproduce bug in
      this area), but also to make it much easier to use another approach
      entirely to caching the CR0.TS bit for software accesses.
      
      Right now we use a bit in the thread-info 'status' variable (this patch
      does not change that), but we might want to make it a full field of its
      own or even make it a per-cpu variable.
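      
      The helpers look roughly like this (hedged reconstruction from the
      description above, not the verbatim patch):
      
        static inline bool __thread_has_fpu(struct task_struct *tsk)
        {
                return task_thread_info(tsk)->status & TS_USEDFPU;
        }
        
        static inline void __thread_set_has_fpu(struct task_struct *tsk)
        {
                task_thread_info(tsk)->status |= TS_USEDFPU;
        }
        
        static inline void __thread_clear_has_fpu(struct task_struct *tsk)
        {
                task_thread_info(tsk)->status &= ~TS_USEDFPU;
        }
        
        /* the "change CR0.TS and TS_USEDFPU at the same time" pair */
        static inline void __thread_fpu_begin(struct task_struct *tsk)
        {
                clts();                 /* clear CR0.TS: FPU is usable */
                __thread_set_has_fpu(tsk);
        }
        
        static inline void __thread_fpu_end(struct task_struct *tsk)
        {
                __thread_clear_has_fpu(tsk);
                stts();                 /* set CR0.TS: next FPU op traps */
        }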
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: fix x86-64 preemption-unsafe user stack save/restore · 15d8791c
      Committed by Linus Torvalds
      Commit 5b1cbac3 ("i387: make irq_fpu_usable() tests more robust")
      added a sanity check to the #NM handler to verify that we never cause
      the "Device Not Available" exception in kernel mode.
      
      However, that check actually pinpointed a (fundamental) race where we do
      cause that exception as part of the signal stack FPU state save/restore
      code.
      
      Because we use the floating point instructions themselves to save and
      restore state directly from user mode, we cannot do that atomically with
      testing the TS_USEDFPU bit: the user mode access itself may cause a page
      fault, which causes a task switch, which saves and restores the FP/MMX
      state from the kernel buffers.
      
      This kind of "recursive" FP state save is fine per se, but it means that
      when the signal stack save/restore gets restarted, it will now take the
      '#NM' exception we originally tried to avoid.  With preemption this can
      happen even without the page fault - but because of the user access, we
      cannot just disable preemption around the save/restore instruction.
      
      There are various ways to solve this, including using the
      "enable/disable_page_fault()" helpers to not allow page faults at all
      during the sequence, and fall back to copying things by hand without the
      use of the native FP state save/restore instructions.
      
      However, the simplest thing to do is to just allow the #NM from kernel
      space, but fix the race in setting and clearing CR0.TS that this all
      exposed: changes to the TS bit and the TS_USEDFPU bit absolutely have to
      be atomic wrt scheduling, so while the actual state save/restore can be
      interrupted and restarted, the act of actually clearing/setting CR0.TS
      and the TS_USEDFPU bit together must not be.
      
      Instead of just adding random "preempt_disable/enable()" calls to what
      is already excessively ugly code, this introduces some helper functions
      that mostly mirror the "kernel_fpu_begin/end()" functionality, just for
      the user state instead.
      
      Those helper functions should probably eventually replace the other
      ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it
      some more: the task switching functionality in particular needs to
      expose the difference between the 'prev' and 'next' threads, while the
      new helper functions intentionally were written to only work with
      'current'.
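      
      A sketch of the new helpers, which mirror kernel_fpu_begin/end() for
      the user-state case, assuming the __thread_fpu_begin/end() primitives
      from the earlier patch (hedged, not the verbatim code):
      
        /* make the CR0.TS and TS_USEDFPU changes atomic wrt scheduling */
        static inline void user_fpu_begin(void)
        {
                preempt_disable();
                if (!__thread_has_fpu(current))
                        __thread_fpu_begin(current);
                preempt_enable();
        }
        
        static inline void user_fpu_end(void)
        {
                preempt_disable();
                __thread_fpu_end(current);
                preempt_enable();
        }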
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 14 Feb 2012 (2 commits)
    • i387: make irq_fpu_usable() tests more robust · 5b1cbac3
      Committed by Linus Torvalds
      Some code - especially the crypto layer - wants to use the x86
      FP/MMX/AVX register set in what may be interrupt (typically softirq)
      context.
      
      That *can* be ok, but the tests for when it was ok were somewhat
      suspect.  We cannot touch the thread-specific status bits either, so
      we'd better check that we're not going to try to save FP state or
      anything like that.
      
      Now, it may be that the TS bit is always cleared *before* we set the
      USEDFPU bit (and only set when we had already cleared the USEDFPU bit
      before), so the TS bit test may actually have been sufficient, but it
      certainly was not obviously so.
      
      So this explicitly verifies that we will not touch the TS_USEDFPU bit,
      and adds a few related sanity-checks, because it seems that somehow
      AES-NI is corrupting user FP state.  The cause is not clear, and this
      patch doesn't fix it, but while debugging it I really wanted the code to
      be more obviously correct and robust.
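      
      The strengthened test ends up with this shape (hedged reconstruction;
      the helper names follow the commit as far as I recall):
      
        /* in interrupt context the FPU is only safe to use if the
         * interrupted code was not in the middle of using it */
        static inline bool interrupted_kernel_fpu_idle(void)
        {
                return !__thread_has_fpu(current) &&
                       (read_cr0() & X86_CR0_TS);
        }
        
        static inline bool interrupted_user_mode(void)
        {
                struct pt_regs *regs = get_irq_regs();
                return regs && user_mode_vm(regs);
        }
        
        bool irq_fpu_usable(void)
        {
                return !in_interrupt() ||
                       interrupted_user_mode() ||
                       interrupted_kernel_fpu_idle();
        }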
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i387: math_state_restore() isn't called from asm · be98c2cd
      Committed by Linus Torvalds
      It was marked asmlinkage for some really old and stale legacy reasons.
      Fix that and the equally stale comment.
      
      Noticed when debugging the irq_fpu_usable() bugs.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 07 Feb 2012 (1 commit)
    • perf: Fix double start/stop in x86_pmu_start() · f39d47ff
      Committed by Stephane Eranian
      This patch fixes a bug introduced by the following
      commit:
      
              e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      The patch caused the following warning to pop up depending on
      the sampling frequency adjustments:
      
        ------------[ cut here ]------------
        WARNING: at arch/x86/kernel/cpu/perf_event.c:995 x86_pmu_start+0x79/0xd4()
      
      It was caused by the following call sequence:
      
      perf_adjust_freq_unthr_context.part() {
           stop()
           if (delta > 0) {
                perf_adjust_period() {
                    if (period > 8*...) {
                        stop()
                        ...
                        start()
                    }
                }
            }
            start()
      }
      
      Which caused a double start and a double stop, thus triggering
      the assert in x86_pmu_start().
      
      The patch fixes the problem by avoiding the double calls. We
      pass a new argument to perf_adjust_period() to indicate whether
      or not the event is already stopped. We can't just remove the
      start/stop from that function because it's called from
      __perf_event_overflow where the event needs to be reloaded via a
      stop/start back-to-back call.
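      
      The resulting function looks roughly like this (hedged; the
      period-recomputation math is paraphrased from the perf core of
      that era):
      
        static void perf_adjust_period(struct perf_event *event, u64 nsec,
                                       u64 count, bool disable)
        {
                struct hw_perf_event *hwc = &event->hw;
                s64 period, sample_period;
                s64 delta;
        
                period = perf_calculate_period(event, nsec, count);
        
                delta = (s64)(period - hwc->sample_period);
                delta = (delta + 7) / 8;        /* low-pass filter */
        
                sample_period = hwc->sample_period + delta;
                if (!sample_period)
                        sample_period = 1;
                hwc->sample_period = sample_period;
        
                if (local64_read(&hwc->period_left) > 8 * sample_period) {
                        if (disable)    /* skip if the caller already stopped it */
                                event->pmu->stop(event, PERF_EF_UPDATE);
                        local64_set(&hwc->period_left, 0);
                        if (disable)
                                event->pmu->start(event, PERF_EF_RELOAD);
                }
        }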
      
      The patch reintroduces the assertion in x86_pmu_start() which
      was removed by commit:
      
      	84f2b9b2 ("perf: Remove deprecated WARN_ON_ONCE()")
      
      In this second version, we've added calls to disable/enable the PMU
      during unthrottling or frequency adjustment, based on a bug report
      of spurious NMI interrupts from Eric Dumazet.
      Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: markus@trippelsdorf.de
      Cc: paulus@samba.org
      Link: http://lkml.kernel.org/r/20120207133956.GA4932@quad
      [ Minor edits to the changelog and to the code ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 03 Feb 2012 (1 commit)
    • perf: Remove deprecated WARN_ON_ONCE() · 84f2b9b2
      Committed by Stephane Eranian
      With the new throttling/unthrottling code introduced with
      commit:
      
        e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      we occasionally hit two WARN_ON_ONCE() checks in:
      
        - intel_pmu_pebs_enable()
        - intel_pmu_lbr_enable()
        - x86_pmu_start()
      
      The assertions are no longer problematic. There is a valid
      path where they can trigger but it is harmless.
      
      The assertion can be triggered with:
      
        $ perf record -e instructions:pp ....
      
      Leading to paths:
      
        intel_pmu_pebs_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context
        perf_event_task_tick
        scheduler_tick
      
      And:
      
        intel_pmu_lbr_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context
        perf_event_task_tick
        scheduler_tick
      
      cpuc->enabled is always on because when we get to
      perf_adjust_freq_unthr_context() the PMU is not totally
      disabled. Furthermore when we need to adjust a period,
      we only stop the event we need to change and not the
      entire PMU. Thus, when we re-enable, cpuc->enabled is
      already set. Note that when we stop the event, both
      pebs and lbr are stopped if necessary (and possible).
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/20120202110401.GA30911@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 30 Jan 2012 (2 commits)
    • x86/reboot: Remove VersaLogic Menlow reboot quirk · e6d36a65
      Committed by Michael D Labriola
      This commit removes the reboot quirk originally added by commit
      e19e074b ("x86: Fix reboot problem on VersaLogic Menlow boards").
      
      Testing with a VersaLogic Ocelot (VL-EPMs-21a rev 1.00 w/ BIOS
      6.5.102) revealed the following regarding the reboot hang
      problem:
      
      - v2.6.37: reboot=bios was needed.
      
      - v2.6.38-rc1: behavior changed, reboot=acpi is needed,
        reboot=kbd and reboot=bios result in a system hang.
      
      - v2.6.38: VersaLogic patch (e19e074b "x86: Fix reboot problem on
        VersaLogic Menlow boards") was applied prior to v2.6.38-rc7.  This
        patch sets a quirk for VersaLogic Menlow boards that forces the use
        of reboot=bios, which doesn't work anymore.
      
      - v3.2: It seems that commit 660e34ce ("x86: Reorder reboot method
        preferences") changed the default reboot method to acpi prior to
        v3.0-rc1, which means the default behavior is appropriate for the
        Ocelot.  No VersaLogic quirk is required.
      
      The Ocelot board used for testing can successfully reboot w/out
      having to pass any reboot= arguments for all 3 current versions
      of the BIOS.
      Signed-off-by: Michael D Labriola <michael.d.labriola@gmail.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Michael D Labriola <mlabriol@gdeb.com>
      Cc: Kushal Koolwal <kushalkoolwal@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/87vcnub9hu.fsf@gmail.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86/reboot: Skip DMI checks if reboot set by user · 5955633e
      Committed by Michael D Labriola
      Skip DMI checks for vendor specific reboot quirks if the user
      passed in a reboot= arg on the command line - we should never
      override user choices.
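      
      The mechanism is a single flag (hedged sketch of the fix):
      
        static int reboot_default = 1;  /* cleared once the user chooses */
        
        static int __init reboot_setup(char *str)
        {
                /* any reboot= argument disables the DMI quirks below */
                reboot_default = 0;
                /* ... parse bios/acpi/kbd/... as before ... */
                return 1;
        }
        __setup("reboot=", reboot_setup);
        
        static int __init reboot_init(void)
        {
                /* consult vendor quirks only if the user expressed no choice */
                if (reboot_default)
                        dmi_check_system(reboot_dmi_table);
                return 0;
        }
        core_initcall(reboot_init);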
      Signed-off-by: Michael D Labriola <michael.d.labriola@gmail.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Michael D Labriola <mlabriol@gdeb.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/87wr8ab9od.fsf@gmail.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 28 Jan 2012 (1 commit)
  7. 27 Jan 2012 (1 commit)
    • bugs, x86: Fix printk levels for panic, softlockups and stack dumps · b0f4c4b3
      Committed by Prarit Bhargava
      rsyslog will display KERN_EMERG messages on a connected
      terminal.  However, these messages are useless/undecipherable
      for a general user.
      
      For example, after a softlockup we get:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Stack:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Call Trace:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Code: ff ff a8 08 75 25 31 d2 48 8d 86 38 e0 ff ff 48 89
       d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e0 0f 01 c9 <e8> ea 69 dd ff 4c 29 e8 48 89 c7 e8 0f bc da ff 49 89 c4 49 89
      
      This happens because the printk levels for these messages are
      incorrect. Only an informational message should be displayed on
      a terminal.
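      
      The flavor of the change (hedged; the actual patch touches several
      call sites, e.g. in arch/x86/kernel/dumpstack*.c):
      
        /* before: rsyslog forwards KERN_EMERG lines to every terminal */
        printk(KERN_EMERG "Stack:\n");
        
        /* after: the console and /var/log/messages still get the data,
         * but logged-in terminals are no longer spammed */
        printk(KERN_DEFAULT "Stack:\n");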
      
      I modified the printk levels for various messages in the kernel
      and tested the output by using the drivers/misc/lkdtm.c kernel
      modules (ie, softlockups, panics, hard lockups, etc.) and
      confirmed that the console output was still the same and that
      the output to the terminals was correct.
      
      For example, in the case of a softlockup we now see the much
      more informative:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 10:18:06 ...
       BUG: soft lockup - CPU4 stuck for 60s!
      
      instead of the above confusing messages.
      
      AFAICT, the messages no longer have to be KERN_EMERG.  In the
      most important case of a panic we set console_verbose().  As for
      the other less severe cases the correct data is output to the
      console and /var/log/messages.
      
      Successfully tested by me using the drivers/misc/lkdtm.c module.
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: dzickus@redhat.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1327586134-11926-1-git-send-email-prarit@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 26 Jan 2012 (1 commit)
  9. 18 Jan 2012 (4 commits)
    • x86-32: Fix build failure with AUDIT=y, AUDITSYSCALL=n · 6015ff10
      Committed by Al Viro
      JONGMAN HEO reports:
      
        With current linus git (commit a25a2b84), I got following build error,
      
        arch/x86/kernel/vm86_32.c: In function 'do_sys_vm86':
        arch/x86/kernel/vm86_32.c:340: error: implicit declaration of function '__audit_syscall_exit'
        make[3]: *** [arch/x86/kernel/vm86_32.o] Error 1
      
      OK, I can reproduce it (32bit allmodconfig with AUDIT=y, AUDITSYSCALL=n)
      
      It's due to commit d7e7528b: "Audit: push audit success and retcode
      into arch ptrace.h".
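      
      The usual shape of such a fix is to give the AUDITSYSCALL=n
      configuration a no-op stub, so callers compile either way (hedged
      illustration of the pattern; the real fix adjusts the #ifdef
      structure in include/linux/audit.h):
      
        #ifdef CONFIG_AUDITSYSCALL
        extern void __audit_syscall_exit(int success, long return_code);
        #else
        static inline void __audit_syscall_exit(int success, long return_code)
        {
        }
        #endif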
      Reported-by: JONGMAN HEO <jongman.heo@samsung.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86, tsc: Fix SMI induced variation in quick_pit_calibrate() · 68f30fbe
      Committed by Linus Torvalds
      pit_expect_msb() wrongly returns success in the SMI scenario below:
      
      a. pit_verify_msb() has not yet seen the MSB transition.
      
      b. we are close to the MSB transition though and got an SMI immediately after
         returning from pit_verify_msb() which didn't see the MSB transition. PIT MSB
         transition has happened somewhere during SMI execution.
      
      c. returned from SMI and we noted down the 'tsc', saw the pit MSB change now and
         exited the loop to calculate 'deltatsc'. Instead of noting the TSC at the MSB
         transition, we are way off because of the SMI.  And as the SMI happened
         between the pit_verify_msb() and before the 'tsc' is recorded in the
         for loop, 'deltatsc' (d1/d2 in quick_pit_calibrate()) will be small and
         quick_pit_calibrate() will not notice this error.
      
      Depending on whether SMI disturbance happens while computing d1 or d2, we will
      see the TSC calibrated value smaller or bigger than the expected value. As a
      result, in a cluster we were seeing a variation of approximately +/- 20MHz in
      the calibrated values, resulting in NTP failures.
      
        [ As far as the SMI source is concerned, this is a periodic SMI that gets
          disabled after ACPI is enabled by the OS. But the TSC calibration happens
          before the ACPI is enabled. ]
      
      To address this, change pit_expect_msb() so that
      
       - the 'tsc' is the TSC in between the two reads that read the MSB
      change from the PIT (same as before)
      
       - the 'delta' is the difference in TSC from *before* the MSB changed
      to *after* the MSB changed.
      
      Now the delta is twice as big as before (it covers four PIT accesses,
      roughly 4us) and quick_pit_calibrate() will loop a bit longer to get
      the calibrated value within the 500ppm precision. As the delta (d1/d2)
      covers four PIT accesses, the actual calibrated result might be closer to
      250ppm precision.
      
      As the loop now takes longer to stabilize, double MAX_QUICK_PIT_MS to 50.
      
      SMI disturbance will show up as much larger deltas and the loop will take
      longer than usual for the result to be within the accepted precision. Or it
      will fall back to slow PIT calibration if it takes more than 50msec.
      
      Also, while we are at it, remove the calibration correction that aims to
      get the result to the middle of the error bars. We really don't know which
      direction to correct into, so remove it.
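      
      The reworked helper, as I read the fix (hedged sketch):
      
        static inline int pit_expect_msb(unsigned char val, u64 *tscp,
                                         unsigned long *deltap)
        {
                int count;
                u64 tsc = 0, prev_tsc = 0;
        
                for (count = 0; count < 50000; count++) {
                        if (!pit_verify_msb(val))
                                break;
                        prev_tsc = tsc;
                        tsc = get_cycles();
                }
                /* delta now spans from before the MSB changed to after */
                *deltap = get_cycles() - prev_tsc;
                *tscp = tsc;
        
                /* we require _some_ success, but the quality control is
                 * based on the error terms on the TSC values */
                return count > 5;
        }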
      Reported-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1326843337.5291.4.camel@sbsiddha-mobl2
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • audit: inline audit_syscall_entry to reduce burden on archs · b05d8447
      Committed by Eric Paris
      Every arch calls:
      
      if (unlikely(current->audit_context))
      	audit_syscall_entry()
      
      which requires knowledge about audit (the existence of audit_context) in
      the arch code.  Just do it all in a static inline in audit.h so that archs
      can remain blissfully ignorant.
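      
      The wrapper is then simply (hedged sketch of the new inline):
      
        static inline void audit_syscall_entry(int arch, int major,
                                               unsigned long a0, unsigned long a1,
                                               unsigned long a2, unsigned long a3)
        {
                if (unlikely(current->audit_context))
                        __audit_syscall_entry(arch, major, a0, a1, a2, a3);
        }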
      Signed-off-by: Eric Paris <eparis@redhat.com>
    • Audit: push audit success and retcode into arch ptrace.h · d7e7528b
      Committed by Eric Paris
      The audit system previously expected arches calling audit_syscall_exit to
      supply, as arguments, whether the syscall was a success and what the return
      code was.  Audit also provided a helper, AUDITSC_RESULT, which was supposed
      to simplify things by converting from negative retcodes to an audit-internal
      magic value stating success or failure.  This helper was wrong and could
      indicate that a valid pointer returned to userspace was a failed syscall.
      The fix is to fix the layering foolishness.  We now pass audit_syscall_exit
      a struct pt_regs and it in turn calls back into arch code to collect the
      return value and to determine if the syscall was a success or failure.  We
      also define a generic is_syscall_success() macro which determines
      success/failure based on whether the value is < -MAX_ERRNO.  This works for
      arches like x86 which do not use a separate mechanism to indicate syscall
      failure.
      
      We make both the is_syscall_success() and regs_return_value() static inlines
      instead of macros.  The reason is that the audit function must take a void*
      for the regs.  (uml calls theirs struct uml_pt_regs instead of just struct
      pt_regs so audit_syscall_exit can't take a struct pt_regs).  Since the audit
      function takes a void* we need to use static inlines to cast it back to the
      arch-correct structure to dereference it.
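      
      Putting the pieces together, the x86 flavor might look like this
      (hedged sketch, not the verbatim patch):
      
        /* x86: a return value in -MAX_ERRNO..-1 is an error; anything
         * else, including valid userspace pointers, is success */
        static inline bool is_syscall_success(struct pt_regs *regs)
        {
                return !IS_ERR_VALUE((unsigned long)regs_return_value(regs));
        }
        
        /* the generic wrapper takes void* so odd arches (uml) can pass
         * their own register structure */
        static inline void audit_syscall_exit(void *pt_regs)
        {
                if (unlikely(current->audit_context))
                        __audit_syscall_exit(is_syscall_success(pt_regs),
                                             regs_return_value(pt_regs));
        }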
      
      The other major change is that on some arches, like ia64, MIPS and ppc, we
      change regs_return_value() to give us the negative value on syscall failure.
      The only other user of this macro, kretprobe_example.c, won't notice, and it
      makes the value signed consistently for the audit functions across all archs.
      
      In arch/sh/kernel/ptrace_64.c I see that we were using regs[9] in the old
      audit code as the return value.  But the ptrace_64.h code defined the macro
      regs_return_value() as regs[3].  I have no idea which one is correct, but this
      patch now uses the regs_return_value() function, so it now uses regs[3].
      
      For powerpc we previously used regs->result but now use the
      regs_return_value() function, which uses regs->gpr[3].  regs->gpr[3] is
      always positive, so regs_return_value(), much like ia64, makes it negative
      before calling the audit code when appropriate.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com> [for x86 portion]
      Acked-by: Tony Luck <tony.luck@intel.com> [for ia64]
      Acked-by: Richard Weinberger <richard@nod.at> [for uml]
      Acked-by: David S. Miller <davem@davemloft.net> [for sparc]
      Acked-by: Ralf Baechle <ralf@linux-mips.org> [for mips]
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [for ppc]
  10. 17 Jan 2012 (2 commits)
  11. 14 Jan 2012 (1 commit)
  12. 13 Jan 2012 (1 commit)
  13. 11 Jan 2012 (1 commit)
  14. 08 Jan 2012 (1 commit)
  15. 07 Jan 2012 (3 commits)
  16. 04 Jan 2012 (1 commit)
  17. 27 Dec 2011 (1 commit)
  18. 24 Dec 2011 (6 commits)
  19. 22 Dec 2011 (6 commits)
    • driver-core: remove sysdev.h usage. · edbaa603
      Committed by Kay Sievers
      The sysdev.h file should not be needed by any in-kernel code, so remove
      the .h file from these random files that seem to still want to include
      it.
      
      The sysdev code will be going away soon, so this include needs to be
      removed no matter what.
      
      Cc: Jiandong Zheng <jdzheng@broadcom.com>
      Cc: Scott Branden <sbranden@broadcom.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: David Brown <davidb@codeaurora.org>
      Cc: Daniel Walker <dwalker@fifo99.com>
      Cc: Bryan Huntsman <bryanh@codeaurora.org>
      Cc: Ben Dooks <ben-linux@fluff.org>
      Cc: Wan ZongShun <mcuos.com@gmail.com>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: "Venkatesh Pallipadi
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Richard Purdie <rpurdie@rpsys.net>
      Cc: Matthew Garrett <mjg@redhat.com>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
    • cpu: convert 'cpu' and 'machinecheck' sysdev_class to a regular subsystem · 8a25a2fd
      Committed by Kay Sievers
      This moves the 'cpu sysdev_class' over to a regular 'cpu' subsystem
      and converts the devices to regular devices. The sysdev drivers are
      implemented as subsystem interfaces now.
      
      After all sysdev classes are ported to regular driver core entities, the
      sysdev implementation will be entirely removed from the kernel.
      
      Userspace relies on events and generic sysfs subsystem infrastructure
      from sysdev devices, which are made available with this conversion.
      
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@amd64.org>
      Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • x86: Add counter when debug stack is used with interrupts enabled · 42181186
      Committed by Steven Rostedt
      Mathieu Desnoyers pointed out a case that can cause issues with
      NMIs running on the debug stack:
      
        int3 -> interrupt -> NMI -> int3
      
      Because the interrupt changes the stack, the NMI will not see that
      it preempted the debug stack. Looking deeper at this case,
      interrupts only happen when the int3 is from userspace or in
      a location in the exception table (fixup).
      
        userspace -> int3 -> interrupt -> NMI -> int3
      
      All other int3s that happen in the kernel should be processed
      without ever enabling interrupts, as the do_trap() call will
      panic the kernel if it is called to process any other location
      within the kernel.
      
      Adding a counter around the sections that enable interrupts while
      using the debug stack allows the NMI to also check that case.
      If the NMI sees that it either interrupted a task using the debug
      stack or the debug counter is non-zero, then it will have to
      change the IDT table to make the int3 not change stacks (which will
      corrupt the stack if it does).
      
      Note, I had to move the debug_usage functions out of processor.h
      and into debugreg.h because of the static inlined functions to
      inc and dec the debug_usage counter. __get_cpu_var() requires
      smp.h which includes processor.h, and would fail to build.
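      
      The counter and the check the NMI performs are small (hedged
      reconstruction; debug_stack_addr and DEBUG_STKSZ are the per-cpu
      debug-stack base and size):
      
        DECLARE_PER_CPU(int, debug_stack_usage);
        
        static inline void debug_stack_usage_inc(void)
        {
                __get_cpu_var(debug_stack_usage)++;
        }
        
        static inline void debug_stack_usage_dec(void)
        {
                __get_cpu_var(debug_stack_usage)--;
        }
        
        /* the NMI asks: did I interrupt debug-stack processing? */
        int is_debug_stack(unsigned long addr)
        {
                return __get_cpu_var(debug_stack_usage) ||
                       (addr <= __get_cpu_var(debug_stack_addr) &&
                        addr > (__get_cpu_var(debug_stack_addr) - DEBUG_STKSZ));
        }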
      
      Link: http://lkml.kernel.org/r/1323976535.23971.112.camel@gandalf.stny.rr.com
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Allow NMIs to hit breakpoints in i386 · ccd49c23
      Committed by Steven Rostedt
      On i386, NMIs and breakpoints use the current stack, and they do not
      reset the stack pointer to a fixed point that might corrupt a previous
      NMI or breakpoint (as happens on x86_64).  But NMIs are still not made
      to be re-entrant, so we need to make sure that an NMI hitting a
      breakpoint (which does an iret, re-enabling NMIs) does not allow
      another NMI to run on top of the first.
      
      The fix is to let the NMI be in 3 different states:
      
      1) not running
      2) executing
      3) latched
      
      When no NMI is executing on a given CPU, the state is "not running".
      When the first NMI comes in, the state is switched to "executing".
      On exit of that NMI, a cmpxchg is performed to switch the state
      back to "not running" and if that fails, the NMI is restarted.
      
      If a breakpoint is hit and does an iret, which re-enables NMIs,
      and another NMI comes in before the first NMI finished, it will
      detect that the state is not in the "not running" state and the
      current NMI is nested. In this case, the state is switched to "latched"
      to let the interrupted NMI know to restart the NMI handler, and
      the nested NMI exits without doing anything.
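      
      Expressed in C, the state machine is roughly (hedged sketch;
      handle_nmi() is a placeholder for the real handler body):
      
        enum nmi_states { NMI_NOT_RUNNING, NMI_EXECUTING, NMI_LATCHED };
        static DEFINE_PER_CPU(int, nmi_state);
        
        static void do_nmi_outer(struct pt_regs *regs)
        {
                /* nested NMI: ask the interrupted handler to re-run */
                if (__get_cpu_var(nmi_state) != NMI_NOT_RUNNING) {
                        __get_cpu_var(nmi_state) = NMI_LATCHED;
                        return;
                }
        restart:
                __get_cpu_var(nmi_state) = NMI_EXECUTING;
        
                handle_nmi(regs);
        
                /* if a nested NMI latched while we ran, run again */
                if (cmpxchg(&__get_cpu_var(nmi_state),
                            NMI_EXECUTING, NMI_NOT_RUNNING) != NMI_EXECUTING)
                        goto restart;
        }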
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Keep current stack in NMI breakpoints · 228bdaa9
      Committed by Steven Rostedt
      We want to allow NMI handlers to have breakpoints to be able to
      remove stop_machine from ftrace, kprobes and jump_labels. But if
      an NMI interrupts a current breakpoint, and then it triggers a
      breakpoint itself, it will switch to the breakpoint stack and
      corrupt the data on it for the breakpoint processing that it
      interrupted.
      
      Instead, have the NMI check if it interrupted breakpoint processing
      by checking if the stack that is currently used is a breakpoint
      stack. If it is, then load a special IDT that changes the IST
      for the debug exception to keep the same stack in kernel context.
      When the NMI is done, it puts it back.
      
      This way, if the NMI does trigger a breakpoint, it will keep
      using the same stack and not stomp on the breakpoint data for
      the breakpoint it interrupted.
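      
      The stack-keeping trick reduces to swapping IDTs around the handler
      (hedged sketch; nmi_idt_descr is assumed to describe a copy of the
      IDT whose debug gate uses no IST, i.e. it stays on the current
      stack):
      
        void debug_stack_set_zero(void)
        {
                load_idt((const struct desc_ptr *)&nmi_idt_descr);
        }
        
        void debug_stack_reset(void)
        {
                load_idt((const struct desc_ptr *)&idt_descr);
        }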
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86: Add workaround to NMI iret woes · 3f3c8b8c
      Committed by Steven Rostedt
      In x86, when an NMI goes off, the CPU goes into an NMI context that
      prevents other NMIs from triggering on that CPU. If an NMI is supposed to
      trigger, it has to wait till the previous NMI leaves NMI context.
      At that time, the next NMI can trigger (note, only one more NMI will
      trigger, as only one can be latched at a time).
      
      The way x86 gets out of NMI context is by calling iret. The problem
      with this is that it causes problems if the NMI handler triggers
      either an exception or a breakpoint. Both the exception and the
      breakpoint handlers will finish with an iret. If this happens while
      in NMI context, the CPU will leave NMI context and a new NMI may come
      in. As NMI handlers are not made to be re-entrant, this can cause
      havoc with the system, not to mention, the nested NMI will write
      all over the previous NMI's stack.
      
      Linus Torvalds proposed the following workaround to this problem:
      
      https://lkml.org/lkml/2010/7/14/264
      
      "In fact, I wonder if we couldn't just do a software NMI disable
      instead? Have a per-cpu variable (in the _core_ percpu areas that get
      allocated statically) that points to the NMI stack frame, and just
      make the NMI code itself do something like
      
       NMI entry:
       - load percpu NMI stack frame pointer
       - if non-zero we know we're nested, and should ignore this NMI:
          - we're returning to kernel mode, so return immediately by using
      "popf/ret", which also keeps NMI's disabled in the hardware until the
      "real" NMI iret happens.
          - before the popf/iret, use the NMI stack pointer to make the NMI
      return stack be invalid and cause a fault
        - set the NMI stack pointer to the current stack pointer
      
       NMI exit (not the above "immediate exit because we nested"):
         clear the percpu NMI stack pointer
         Just do the iret.
      
      Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
      we'll take a fault, and that re-does our "delayed" NMI - and NMI's
      will stay masked.
      
      And if we didn't have a nested NMI, that iret will now unmask NMI's,
      and everything is happy."
      
      I first tried to follow this advice but as I started implementing this
      code, a few gotchas showed up.
      
      One, is accessing per-cpu variables in the NMI handler.
      
      The problem is that per-cpu variables use the %gs register to get the
      variable for the given CPU. But as the NMI may happen in userspace,
      we must first perform a SWAPGS to get to it. The NMI handler already
      does this later in the code, but it's too late as we have saved off
      all the registers and we don't want to do that for a disabled NMI.
      
      Peter Zijlstra suggested to keep all variables on the stack. This
      simplifies things greatly and it has the added benefit of cache locality.
      
      Two, faulting on the iret.
      
      I really wanted to make this work, but it was becoming very hacky, and
      I never got it to be stable. The iret already had a fault handler for
      userspace faulting with bad segment registers, and getting NMI to trigger
      a fault and detect it was very tricky. But for strange reasons, the system
      would usually take a double fault and crash. I never figured out why
      and decided to go with a simple "jmp" approach. The new approach I took
      also simplified things.
      
      Finally, the last problem with Linus's approach was to have the nested
      NMI handler do a ret instead of an iret to give the first NMI NMI-context
      again.
      
      The problem is that ret is much more limited than an iret. I couldn't figure
      out how to get the stack back where it belonged. I could have copied the
      current stack, pushed the return onto it, but my fear here is that there
      may be some place that writes data below the stack pointer. I know that
      is not something code should depend on, but I don't want to chance it.
      I may add this feature later, but for now, an NMI handler that loses NMI
      context will not get it back.
      
      Here's what is done:
      
      When an NMI comes in, the HW pushes the interrupt stack frame onto the
      per cpu NMI stack that is selected by the IST.
      
      A special location on the NMI stack holds a variable that is set when
      the first NMI handler runs. If this variable is set then we know that
      this is a nested NMI and we process the nested NMI code.
      
      There is still a race when this variable is cleared and an NMI comes
      in just before the first NMI does the return. For this case, if the
      variable is cleared, we also check if the interrupted stack is the
      NMI stack. If it is, then we process the nested NMI code.
      
      Why the two tests and not just test the interrupted stack?
      
      If the first NMI hits a breakpoint and loses NMI context, and then it
      hits another breakpoint and while processing that breakpoint we get a
      nested NMI. When processing a breakpoint, the stack changes to the
      breakpoint stack. If another NMI comes in here we can't rely on the
      interrupted stack to be the NMI stack.
      
      If the variable is not set and the interrupted task's stack is not the
      NMI stack, then we know this is the first NMI and we can process things
      normally. But in order to do so, we need to do a few things first.
      
      1) Set the stack variable that tells us that we are in an NMI handler
      
      2) Make two copies of the interrupt stack frame.
         One copy is used to return on iret
         The other is used to restore the first one if we have a nested NMI.
      
      This is what the stack will look like:
      
      	  +-------------------------+
      	  | original SS             |
      	  | original Return RSP     |
      	  | original RFLAGS         |
      	  | original CS             |
      	  | original RIP            |
      	  +-------------------------+
      	  | temp storage for rdx    |
      	  +-------------------------+
      	  | NMI executing variable  |
      	  +-------------------------+
      	  | Saved SS                |
      	  | Saved Return RSP        |
      	  | Saved RFLAGS            |
      	  | Saved CS                |
      	  | Saved RIP               |
      	  +-------------------------+
      	  | copied SS               |
      	  | copied Return RSP       |
      	  | copied RFLAGS           |
      	  | copied CS               |
      	  | copied RIP              |
      	  +-------------------------+
      	  | pt_regs                 |
      	  +-------------------------+
      
      The original stack frame contains what the HW put in when we entered
      the NMI.
      
      We store %rdx as a temp variable to use. Both the original HW stack
      frame and this %rdx storage will be clobbered by nested NMIs so we
      cannot rely on them later in the first NMI handler.
      
      The next item is the special stack variable that is set when we execute
      the rest of the NMI handler.
      
      Then we have two copies of the interrupt stack. The second copy is
      modified by any nested NMIs to let the first NMI know that we triggered
      a second NMI (latched) and that we should repeat the NMI handler.
      
      If the first NMI hits an exception or breakpoint that takes it out of
      NMI context, if a second NMI comes in before the first one finishes,
      it will update the copied interrupt stack to point to a fixup location
      to trigger another NMI.
      
      When the first NMI calls iret, it will instead jump to the fixup
      location. This fixup location will copy the saved interrupt stack back
      to the copy and execute the nmi handler again.
      
      Note, the nested NMI knows enough to check if it preempted a previous
      NMI handler while it is in the fixup location. If it has, it will not
      modify the copied interrupt stack and will just leave as if nothing
      happened. As the NMI handler is about to execute again, there's no reason
      to latch now.
      
      To test all this, I forced the NMI handler to call iret and take itself
      out of NMI context. I also added assembly code to write to the serial
      port to make sure that it hits the nested path as well as the fixup path.
      Everything seems to be working fine.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul Turner <pjt@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>