1. 07 Jan 2011, 2 commits
  2. 05 Jan 2011, 1 commit
  3. 10 Dec 2010, 1 commit
  4. 18 Nov 2010, 2 commits
    • x86, nmi_watchdog: Remove all stub function calls from old nmi_watchdog · 072b198a
      Committed by Don Zickus
      Now that the bulk of the old nmi_watchdog is gone, remove all
      the stub variables and hooks associated with it.
      
      This touches lots of files mainly because of how the io_apic
      nmi_watchdog was implemented.  Now that the io_apic nmi_watchdog
      is forever gone, remove all its fingers.
      
      Most of this code was not being exercised by virtue of
      nmi_watchdog != NMI_IO_APIC, so there shouldn't be anything too
      risky here.
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: fweisbec@gmail.com
      Cc: gorcunov@openvz.org
      LKML-Reference: <1289578944-28564-3-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, nmi_watchdog: Remove the old nmi_watchdog · 5f2b0ba4
      Committed by Don Zickus
      Now that we have a new nmi_watchdog that is more generic and
      sits on top of the perf subsystem, we really do not need the old
      nmi_watchdog any more.
      
      In addition, the old nmi_watchdog doesn't really work if you are
      using the default clocksource, hpet.  The old nmi_watchdog code
      relied on local apic interrupts to determine if the cpu is still
      alive.  With hpet as the clocksource, these interrupts don't
      increment any more and the old nmi_watchdog triggers false
      positives.
      
      This piece removes the old nmi_watchdog code and stubs out any
      variables and function calls.  The stubs are the same ones used
      by the new nmi_watchdog code, so it should be well tested.
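
      In practice the stubbing looks roughly like this (a sketch; the
      identifiers are illustrative, not the literal patch contents):

        /* Empty definitions keep the old call sites compiling while
         * the real work now happens in the perf-based watchdog. */
        int nmi_watchdog_enabled;

        void touch_nmi_watchdog(void)
        {
                touch_softlockup_watchdog();    /* defer to the new code */
        }

        void stop_apic_nmi_watchdog(void *unused)
        {
                /* no-op stub */
        }
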
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: fweisbec@gmail.com
      Cc: gorcunov@openvz.org
      LKML-Reference: <1289578944-28564-2-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 24 Sep 2010, 1 commit
    • x86, vm86: Fix preemption bug for int1 debug and int3 breakpoint handlers. · 6554287b
      Committed by Bart Oldeman
      Impact: fix kernel bugs such as:
      BUG: scheduling while atomic: dosemu.bin/19680/0x00000004
      See also Ubuntu bug 455067 at
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/455067
      
      Commits 4915a35e
      ("Use preempt_conditional_sti/cli in do_int3, like on x86_64.")
      and 3d2a71a5
      ("x86, traps: converge do_debug handlers")
      started disabling preemption in int1 and int3 handlers on i386.
      The problem with vm86 is that the call to handle_vm86_trap() may jump
      straight to entry_32.S and never return, so preemption is never enabled
      again, and there is an imbalance in the preempt count.
      
      Commit be716615 ("x86, vm86:
      fix preemption bug"), which was later (accidentally?) reverted by commit
      08d68323 ("hw-breakpoints: modifying
      generic debug exception to use thread-specific debug registers"),
      fixed the problem for debug exceptions but not for breakpoints.
      
      There are three solutions to this problem.
      
      1. Re-enable preemption before calling handle_vm86_trap(). This
      was the approach that was later reverted.
      
      2. Do not disable preemption for i386 in breakpoint and debug handlers.
      This was the situation before October 2008. As far as I understand,
      preemption only needs to be disabled on x86_64 because a separate stack is
      used, but it's nice to have things work the same way on
      i386 and x86_64.
      
      3. Let handle_vm86_trap() return instead of jumping to assembly code.
      By setting a flag in _TIF_WORK_MASK, either TIF_IRET or TIF_NOTIFY_RESUME,
      the code in entry_32.S is instructed to return to 32 bit mode from
      V86 mode. The logic in entry_32.S was already present to handle signals.
      (I chose TIF_IRET because it's slightly more efficient in
      do_notify_resume() in signal.c, but in fact TIF_IRET can probably be
      replaced by TIF_NOTIFY_RESUME everywhere.)
      
      I'm submitting approach 3, because I believe it is the most elegant
      and prevents future confusion. Still, an obvious
      preempt_conditional_cli(regs); is necessary in traps.c to correct the
      bug.
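
      The core of approach 3 is small; a sketch under assumed names (not
      the literal diff):

        /* In the vm86 trap path: instead of jumping to entry_32.S
         * (which skipped the preempt re-enable), flag the task and
         * return; do_notify_resume()/entry_32.S then finish the
         * return to 32-bit mode. */
        if (trapno == 1 || trapno == 3) {
                set_thread_flag(TIF_IRET);
                return 0;       /* unwinds normally, preempt count balanced */
        }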
      
      [ hpa: This is technically a regression, but because:
        1. the regression is so old,
        2. the patch seems relatively high risk, justifying more testing, and
        3. we're late in the 2.6.36-rc cycle,
      
        I'm queuing it up for the 2.6.37 merge window.  It might, however,
        justify a -stable backport at a later time, hence Cc: stable. ]
      Signed-off-by: Bart Oldeman <bartoldeman@users.sourceforge.net>
      LKML-Reference: <alpine.DEB.2.00.1009231312330.4732@localhost.localdomain>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Cc: <stable@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  6. 10 Sep 2010, 2 commits
  7. 30 Jun 2010, 1 commit
    • x86: Send a SIGTRAP for user icebp traps · a1e80faf
      Committed by Frederic Weisbecker
      Before we had a generic breakpoint layer, x86 used to send a
      sigtrap for any debug event that happened in userspace,
      except if it was caused by lazy dr7 switches.
      
      Currently we only send such signal for single step or breakpoint
      events.
      
      However, there are three other kinds of debug exceptions:
      
      - debug register access detected: triggers an exception if the
        next instruction touches the debug registers. We don't use
        it.
      - task switch, but we don't use tss.
      - icebp/int01 trap. This instruction (0xf1) is undocumented and
        generates an int 1 exception. Unlike single-stepping via the TF
        flag, it doesn't set the single step origin of the exception
        in dr6.
      
      icebp traps then used to be reported to userspace using trap signals,
      but this was incidentally broken by the new breakpoint
      code. Re-enable it. Since this is the only debug event that
      doesn't set anything in dr6, this is all we have to check.
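
      A simplified sketch of that check in do_debug() (variable names
      are assumptions):

        int user_icebp = 0;

        /* An icebp trap from user space leaves dr6 completely empty,
         * so treat that case as a SIGTRAP source alongside single-step
         * and breakpoint events. */
        if (!dr6 && user_mode(regs))
                user_icebp = 1;

        if ((dr6 & DR_STEP) || user_icebp)
                send_sigtrap(tsk, regs, error_code, si_code);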
      
      This fixes a regression in Wine where World Of Warcraft got broken
      as it uses this mechanism for software protection checks, and
      other applications probably do too.
      Reported-and-tested-by: Alexandre Julliard <julliard@winehq.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: 2.6.33.x 2.6.34.x <stable@kernel.org>
  8. 21 May 2010, 2 commits
  9. 13 May 2010, 1 commit
    • lockup_detector: Combine nmi_watchdog and softlockup detector · 58687acb
      Committed by Don Zickus
      The new nmi_watchdog (which uses the perf event subsystem) is very
      similar in structure to the softlockup detector.  Using Ingo's
      suggestion, I combined the two functionalities into one file:
      kernel/watchdog.c.
      
      Now both the nmi_watchdog (or hardlockup detector) and softlockup
      detector sit on top of the perf event subsystem, which is run every
      60 seconds or so to see if there are any lockups.
      
      To detect hardlockups, cpus not responding to interrupts, I
      implemented an hrtimer that runs 5 times for every perf event
      overflow event.  If that stops counting on a cpu, then the cpu is
      most likely in trouble.
      
      To detect softlockups, tasks not yielding to the scheduler, I used the
      previous kthread idea that now gets kicked every time the hrtimer fires.
      If the kthread isn't being scheduled, neither is anyone else, and the
      warning is printed to the console.
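
      A condensed sketch of that interlock (illustrative names, not the
      exact kernel/watchdog.c code):

        /* Perf NMI callback: the hrtimer count must keep moving,
         * otherwise interrupts are stuck -> hard lockup. */
        if (__this_cpu_read(hrtimer_interrupts) ==
            __this_cpu_read(hrtimer_interrupts_saved))
                WARN(1, "Watchdog detected hard LOCKUP on cpu %d",
                     smp_processor_id());

        /* hrtimer callback: the watchdog kthread must keep updating
         * its timestamp, otherwise nothing gets scheduled -> soft lockup. */
        if (is_softlockup(__this_cpu_read(watchdog_touch_ts)))
                printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck!\n",
                       smp_processor_id());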
      
      I tested this on x86_64 and both the softlockup and hardlockup paths
      work.
      
      V2:
      - cleaned up the Kconfig and softlockup combination
      - surrounded hardlockup cases with #ifdef CONFIG_PERF_EVENTS_NMI
      - separated out the softlockup case from the perf event subsystem
      - re-arranged the enabling/disabling nmi watchdog from proc space
      - added cpumasks for hardlockup failure cases
      - removed fallback to soft events if no PMU exists for hard events
      
      V3:
      - comment cleanups
      - drop support for older softlockup code
      - per_cpu cleanups
      - completely remove software clock base hardlockup detector
      - use per_cpu masking on hard/soft lockup detection
      - #ifdef cleanups
      - rename config option NMI_WATCHDOG to LOCKUP_DETECTOR
      - documentation additions
      
      V4:
      - documentation fixes
      - convert per_cpu to __get_cpu_var
      - powerpc compile fixes
      
      V5:
      - split apart warn flags for hard and soft lockups
      
      TODO:
      - figure out how to make an arch-agnostic clock2cycles call
        (if possible) to feed into perf events as a sample period
      
      [fweisbec: merged conflict patch]
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      LKML-Reference: <1273266711-18706-2-git-send-email-dzickus@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  10. 04 May 2010, 4 commits
  11. 26 Mar 2010, 2 commits
    • x86, ptrace: Fix block-step · ea8e61b7
      Committed by Peter Zijlstra
      Implement ptrace block-step using TIF_BLOCKSTEP, which, when set
      for a task, sets DEBUGCTLMSR_BTF while preserving any other
      DEBUGCTLMSR bits.
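
      A sketch of the bit-preserving MSR update (close to the usual
      __switch_to_xtra() pattern, but illustrative):

        unsigned long debugctl = get_debugctlmsr();

        if (test_tsk_thread_flag(task, TIF_BLOCKSTEP))
                debugctl |= DEBUGCTLMSR_BTF;    /* branch-trap flag on */
        else
                debugctl &= ~DEBUGCTLMSR_BTF;   /* off, other bits intact */

        update_debugctlmsr(debugctl);
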
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100325135414.017536066@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Committed by Peter Zijlstra
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS,
      as Linus noticed not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its only users are ptrace-block-step and ptrace-bts; ptrace-bts
      was never used, and ptrace-block-step can be implemented using a
      much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 25 Feb 2010, 1 commit
    • nmi_watchdog: Clean up various small details · 47195d57
      Committed by Don Zickus
      Mostly copy/paste whitespace fixes, plus a couple of nitpicks flagged by
      the checkpatch script. Also fix the struct definition, as requested by Ingo.
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: peterz@infradead.org
      Cc: gorcunov@gmail.com
      Cc: aris@redhat.com
      LKML-Reference: <1266880143-24943-1-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      --
       arch/x86/kernel/apic/hw_nmi.c |   14 +++++------
       arch/x86/kernel/traps.c       |    6 ++--
       include/linux/nmi.h           |    2 -
       kernel/nmi_watchdog.c         |   51 ++++++++++++++++++++----------------------
       4 files changed, 36 insertions(+), 37 deletions(-)
  13. 08 Feb 2010, 2 commits
    • nmi_watchdog: Config option to enable new nmi_watchdog · 84e478c6
      Committed by Don Zickus
      These are the bits that enable the new nmi_watchdog and safely
      isolate the old nmi_watchdog.  Only one or the other can run,
      not both at the same time.
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: gorcunov@gmail.com
      Cc: aris@redhat.com
      Cc: peterz@infradead.org
      LKML-Reference: <1265424425-31562-4-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Move notify_die from nmi.c to traps.c · e40b1720
      Committed by Don Zickus
      In order to handle a new nmi_watchdog approach, I need to move
      the notify_die() routine out of nmi_watchdog_tick() and into
      default_do_nmi(). This lets me easily swap out the old
      nmi_watchdog with the new one with just a config change.
      
      The change probably makes sense from a high level perspective
      because the nmi_watchdog shouldn't be handling notify_die
      routines anyway.  However, this move does change the semantics a
      little bit.  Instead of checking on every nmi interrupt if the
      cpus are stuck, only check them on the nmi_watchdog interrupts.
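
      Schematically (simplified; surrounding logic omitted):

        /* Before: nmi_watchdog_tick() invoked the die chain itself.
         * After: default_do_nmi() owns the notification. */
        static void default_do_nmi(struct pt_regs *regs)
        {
                unsigned char reason = get_nmi_reason();

                if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT)
                                == NOTIFY_STOP)
                        return;
                /* remaining NMI reason handling */
        }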
      
       v2: Move notify_die call into #ifdef block
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: gorcunov@gmail.com
      Cc: aris@redhat.com
      Cc: peterz@infradead.org
      LKML-Reference: <1265424425-31562-2-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 29 Jan 2010, 1 commit
  15. 24 Sep 2009, 1 commit
  16. 20 Sep 2009, 1 commit
  17. 19 Sep 2009, 1 commit
  18. 31 Aug 2009, 1 commit
  19. 20 Jul 2009, 1 commit
  20. 10 Jul 2009, 1 commit
  21. 26 Jun 2009, 1 commit
    • x86: Add sysctl to allow panic on IOCK NMI error · 5211a242
      Committed by Kurt Garloff
      This patch introduces a new sysctl:
      
          /proc/sys/kernel/panic_on_io_nmi
      
      which defaults to 0 (off).
      
      When enabled, the kernel panics when it receives an NMI
      caused by an IO error.
      
      An IO-error-triggered NMI indicates a serious system
      condition, which could result in IO data corruption. Rather
      than continuing, panicking and dumping might be a better choice,
      so one can figure out what's causing the IO error.
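
      The resulting behavior is essentially (a simplified sketch of the
      IO-check NMI path):

        int panic_on_io_nmi;    /* /proc/sys/kernel/panic_on_io_nmi */

        static void io_check_error(unsigned char reason, struct pt_regs *regs)
        {
                if (panic_on_io_nmi)
                        panic("NMI IOCK error: Not continuing");

                /* otherwise just report and try to clear the error */
        }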
      
      This could be especially important to companies running IO
      intensive applications where corruption must be avoided, e.g. a
      bank's databases.
      
      [ SuSE has been shipping it for a while, it was done at the
        request of a large database vendor, for their users. ]
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Signed-off-by: Roberto Angelino <robertangelino@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      LKML-Reference: <20090624213211.GA11291@kroah.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 18 Jun 2009, 1 commit
    • x86: split out core __math_state_restore · e6e9cac8
      Committed by Jeremy Fitzhardinge
      Split the core fpu state restoration out into __math_state_restore, which
      assumes that cr0.TS is clear and that the fpu context has been initialized.
      
      This will be used during context switch.  There are two reasons this is
      desirable:
      
      - There's a small clarification.  When __switch_to() calls math_state_restore,
        it relies on the fact that tsk_used_math() returns true, and so will
        never do a blocking init_fpu().  __math_state_restore() does not have
        (or need) that logic, so the question never arises.
      
      - It allows the clts() to be moved earlier in __switch_to() so it can be performed
        while cpu context updates are batched (will be done in a later patch).
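
      The shape of the split is roughly (illustrative, details trimmed):

        void __math_state_restore(void)
        {
                struct thread_info *thread = current_thread_info();
                struct task_struct *tsk = thread->task;

                /* caller guarantees cr0.TS is clear and context is valid */
                restore_fpu_checking(tsk);
                thread->status |= TS_USEDFPU;
                tsk->fpu_counter++;
        }

        asmlinkage void math_state_restore(void)
        {
                /* keeps the clts() and lazy init_fpu() handling, then: */
                __math_state_restore();
        }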
      
      [ Impact: refactor code to make reuse cleaner; no functional change ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
  23. 17 Jun 2009, 1 commit
  24. 15 Jun 2009, 1 commit
    • x86: add hooks for kmemcheck · f8561296
      Committed by Vegard Nossum
      The hooks that we modify are:
      - Page fault handler (to handle kmemcheck faults)
      - Debug exception handler (to hide pages after single-stepping
        the instruction that caused the page fault)
      
      Also redefine memset() to use the optimized version if kmemcheck is
      enabled.
      
      (Thanks to Pekka Enberg for minimizing the impact on the page fault
      handler.)
      
      As kmemcheck doesn't handle MMX/SSE instructions (yet), we also disable
      the optimized xor code, and rely instead on the generic C implementation
      in order to avoid false-positive warnings.
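
      For the page fault side, the hook amounts to roughly this
      (simplified):

        /* Early in do_page_fault(): give kmemcheck first look at the
         * fault before the normal handling runs. */
        if (kmemcheck_active(regs))
                kmemcheck_hide(regs);   /* re-hide page after single-step */

        if (!user_mode(regs) && kmemcheck_fault(regs, address, error_code))
                return;                 /* fault fully handled */
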
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
      
      [whitespace fixlet]
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      
      [rebased for mainline inclusion]
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
  25. 03 Jun 2009, 2 commits
  26. 29 May 2009, 2 commits
    • x86, mce: enable MCE_INTEL for 32bit new MCE · 7856f6cc
      Committed by Andi Kleen
      Enable the 64bit MCE_INTEL code (CMCI, thermal interrupts) for 32bit NEW_MCE.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: use 64bit machine check code on 32bit · 4efc0670
      Committed by Andi Kleen
      The 64bit machine check code is in many ways much better than
      the 32bit machine check code: it is more specification compliant,
      is cleaner, only has a single code base versus one per CPU,
      has better infrastructure for recovery, has a cleaner way to communicate
      with user space, and so on.
      
      Use the 64bit code for 32bit too.
      
      This is the second attempt to do this. There was one a couple of years
      ago to unify this code for 32bit and 64bit.  Back then this ran into some
      trouble with K7s and was reverted.
      
      I believe this time the K7 problems (and some others) are addressed.
      I went over the old handlers and was very careful to retain
      all quirks.
      
      But of course this needs a lot of testing on old systems. On newer
      64bit capable systems I don't expect many problems, because they have
      already been tested with the 64bit kernel.
      
      I made this a CONFIG option for now that still allows selecting the old
      machine check code. This is mostly to make testing easier,
      if someone runs into a problem we can ask them to try
      with the CONFIG switched.
      
      The new code is default y for more coverage.
      
      Once there is confidence the 64bit code works well on older hardware
      too, the CONFIG_X86_OLD_MCE option and the associated code can be easily
      removed.
      
      This causes a behaviour change for 32bit installations. They now
      have to install the mcelog package to be able to log
      corrected machine checks.
      
      The 64bit machine check code only handles CPUs which support the
      standard Intel machine check architecture described in the IA32 SDM.
      The 32bit code has special support for some older CPUs which
      have non-standard machine check architectures, in particular
      WinChip C3 and Intel P5.  I made those a separate CONFIG option
      and kept them for now. The WinChip variant could probably be
      removed without too much pain; it doesn't really do anything
      interesting. P5 is also disabled by default (like it
      was before) because many motherboards have it miswired, but
      according to Alan Cox a few embedded setups use that one.
      
      Forward-ported and heavily changed version of the old patch; the original
      patch included review/fixes from Thomas Gleixner and Bert Wesarg.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  27. 10 Apr 2009, 1 commit
  28. 08 Apr 2009, 1 commit
  29. 02 Mar 2009, 1 commit
    • x86-32: use non-lazy io bitmap context switching · db949bba
      Committed by Jeremy Fitzhardinge
      Impact: remove 32-bit optimization to prepare unification
      
      x86-32 and -64 differ in the way they context-switch tasks
      with io permission bitmaps.  x86-64 simply copies the next
      task's io bitmap into place (if any) on context switch.  x86-32
      invalidates the bitmap on context switch, so that the next
      IO instruction will fault; at that point it installs the
      appropriate IO bitmap.
      
      This makes context switching IO-bitmap-using tasks a bit
      less expensive, at the cost of making the next IO instruction
      slower due to the extra fault.  This tradeoff only makes sense
      if IO-bitmap-using processes are relatively common, but they
      don't actually use IO instructions very often.
      
      However, in a typical desktop system, the only process likely
      to be using IO bitmaps is the X server, and on a server nothing
      uses them at all.  Therefore the lazy context switch doesn't really win
      all that much, and it's just a gratuitous difference from
      64-bit code.
      
      This patch removes the lazy context switch, with a view to
      unifying this code in a later change.
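
      After the change, the 32-bit context-switch path handles the bitmap
      the way x86-64 always has; roughly:

        /* Copy the incoming task's io bitmap into the TSS, or poison
         * the old one when the next task has no bitmap (sketch). */
        if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
                memcpy(tss->io_bitmap, next->io_bitmap_ptr,
                       max(prev->io_bitmap_max, next->io_bitmap_max));
        } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
                memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
        }
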
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>