1. 24 Sep 2009 · 1 commit
  2. 20 Sep 2009 · 1 commit
  3. 31 Aug 2009 · 1 commit
  4. 20 Jul 2009 · 1 commit
  5. 10 Jul 2009 · 1 commit
  6. 26 Jun 2009 · 1 commit
    • x86: Add sysctl to allow panic on IOCK NMI error · 5211a242
      Authored by Kurt Garloff
      This patch introduces a new sysctl:
      
          /proc/sys/kernel/panic_on_io_nmi
      
      which defaults to 0 (off).
      
      When enabled, the kernel panics when it receives an NMI
      caused by an IO error.

      An IO-error-triggered NMI indicates a serious system
      condition, which could result in IO data corruption. Rather
      than continuing, panicking and dumping might be a better choice,
      so one can figure out what is causing the IO error.

      This could be especially important to companies running
      IO-intensive applications where corruption must be avoided,
      e.g. a bank's databases.
      
      [ SuSE has been shipping it for a while, it was done at the
        request of a large database vendor, for their users. ]
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Signed-off-by: Roberto Angelino <robertangelino@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      LKML-Reference: <20090624213211.GA11291@kroah.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
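      A minimal sketch of the mechanism described above, condensed from the
      io_check_error() NMI path in arch/x86/kernel/traps.c of that era; the
      exact surrounding context is assumed, not the verbatim diff:

        int panic_on_io_nmi;  /* surfaced as /proc/sys/kernel/panic_on_io_nmi */

        static void io_check_error(unsigned char reason, struct pt_regs *regs)
        {
                printk(KERN_EMERG "NMI: IOCK error (debug interrupt?)\n");
                show_registers(regs);

                if (panic_on_io_nmi)
                        panic("NMI IOCK error: Not continuing");

                /* Otherwise clear and re-enable the IOCK line and carry on. */
                reason = (reason & 0xf) | 8;
                outb(reason, 0x61);
        }

      At runtime the flag can be toggled with
      "sysctl -w kernel.panic_on_io_nmi=1" or by writing 1 to the proc file.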
  7. 18 Jun 2009 · 1 commit
    • x86: split out core __math_state_restore · e6e9cac8
      Authored by Jeremy Fitzhardinge
      Split the core fpu state restoration out into __math_state_restore, which
      assumes that cr0.TS is clear and that the fpu context has been initialized.
      
      This will be used during context switch.  There are two reasons this is
      desirable:
      
      - There's a small clarification.  When __switch_to() calls math_state_restore,
        it relies on the fact that tsk_used_math() returns true, and so will
        never do a blocking init_fpu().  __math_state_restore() does not have
        (or need) that logic, so the question never arises.
      
      - It allows the clts() to be moved earlier in __switch_to() so it can be
        performed while cpu context updates are batched (will be done in a later
        patch).
      
      [ Impact: refactor code to make reuse cleaner; no functional change ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
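      A hedged sketch of the resulting split, condensed from the era's
      arch/x86/kernel/traps.c (the TS_USEDFPU/fpu_counter details are period
      specifics and simplified here):

        void __math_state_restore(void)
        {
                struct thread_info *thread = current_thread_info();
                struct task_struct *tsk = thread->task;

                /* Caller guarantees cr0.TS is clear and the context exists. */
                if (unlikely(restore_fpu_checking(tsk))) {
                        stts();
                        force_sig(SIGSEGV, tsk);
                        return;
                }

                thread->status |= TS_USEDFPU;  /* so we fnsave on switch_to() */
                tsk->fpu_counter++;
        }

        asmlinkage void math_state_restore(void)
        {
                struct task_struct *tsk = current_thread_info()->task;

                if (!tsk_used_math(tsk))
                        init_fpu(tsk);  /* may block; never reached from __switch_to() */

                clts();                 /* allow math ops (or we recurse) */
                __math_state_restore();
        }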
  8. 17 Jun 2009 · 1 commit
  9. 15 Jun 2009 · 1 commit
    • x86: add hooks for kmemcheck · f8561296
      Authored by Vegard Nossum
      The hooks that we modify are:
      - Page fault handler (to handle kmemcheck faults)
      - Debug exception handler (to hide pages after single-stepping
        the instruction that caused the page fault)
      
      Also redefine memset() to use the optimized version if kmemcheck is
      enabled.
      
      (Thanks to Pekka Enberg for minimizing the impact on the page fault
      handler.)
      
      As kmemcheck doesn't handle MMX/SSE instructions (yet), we also disable
      the optimized xor code, and rely instead on the generic C implementation
      in order to avoid false-positive warnings.
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>

      [whitespace fixlet]
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>

      [rebased for mainline inclusion]
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
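      A hedged sketch of how the page fault hook is wired in, simplified from
      the era's arch/x86/mm/fault.c (surrounding logic elided):

        dotraplinkage void __kprobes
        do_page_fault(struct pt_regs *regs, unsigned long error_code)
        {
                unsigned long address = read_cr2();

                /* Hide pages again if we faulted while single-stepping one. */
                if (kmemcheck_active(regs))
                        kmemcheck_hide(regs);

                if (unlikely(fault_in_kernel_space(address))) {
                        if (!(error_code & (PF_RSVD | PF_USER | PF_PROT))) {
                                if (vmalloc_fault(address) >= 0)
                                        return;
                                /* Let kmemcheck claim faults on shadowed pages. */
                                if (kmemcheck_fault(regs, address, error_code))
                                        return;
                        }
                        /* ... normal kernel-space fault handling ... */
                }
                /* ... normal user-space fault handling ... */
        }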
  10. 29 May 2009 · 2 commits
    • x86, mce: enable MCE_INTEL for 32bit new MCE · 7856f6cc
      Authored by Andi Kleen
      Enable the 64bit MCE_INTEL code (CMCI, thermal interrupts) for 32bit NEW_MCE.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: use 64bit machine check code on 32bit · 4efc0670
      Authored by Andi Kleen
      The 64bit machine check code is in many ways much better than
      the 32bit machine check code: it is more specification compliant,
      is cleaner, has only a single code base versus one per CPU,
      has better infrastructure for recovery, has a cleaner way to
      communicate with user space, and so on.
      
      Use the 64bit code for 32bit too.
      
      This is the second attempt to do this. There was one a couple of years
      ago to unify this code for 32bit and 64bit.  Back then this ran into some
      trouble with K7s and was reverted.
      
      I believe this time the K7 problems (and some others) are addressed.
      I went over the old handlers and was very careful to retain
      all quirks.
      
      But of course this needs a lot of testing on old systems. On newer
      64bit capable systems I don't expect many problems because they have
      already been tested with the 64bit kernel.
      
      I made this a CONFIG for now that still allows selecting the old
      machine check code. This is mostly to make testing easier;
      if someone runs into a problem we can ask them to try
      with the CONFIG switched.
      
      The new code is default y for more coverage.
      
      Once there is confidence the 64bit code works well on older hardware
      too the CONFIG_X86_OLD_MCE and the associated code can be easily
      removed.
      
      This causes a behaviour change for 32bit installations. They now
      have to install the mcelog package to be able to log
      corrected machine checks.
      
      The 64bit machine check code only handles CPUs which support the
      standard Intel machine check architecture described in the IA32 SDM.
      The 32bit code has special support for some older CPUs which
      have non-standard machine check architectures, in particular
      WinChip C3 and Intel P5.  I made those a separate CONFIG option
      and kept them for now. The WinChip variant could probably be
      removed without too much pain, since it doesn't really do anything
      interesting. P5 is also disabled by default (as it
      was before) because many motherboards have it miswired, but
      according to Alan Cox a few embedded setups use that one.
      
      Forward ported/heavily changed version of old patch, original patch
      included review/fixes from Thomas Gleixner, Bert Wesarg.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  11. 10 Apr 2009 · 1 commit
  12. 08 Apr 2009 · 1 commit
  13. 02 Mar 2009 · 1 commit
    • x86-32: use non-lazy io bitmap context switching · db949bba
      Authored by Jeremy Fitzhardinge
      Impact: remove 32-bit optimization to prepare unification
      
      x86-32 and -64 differ in the way they context-switch tasks
      with io permission bitmaps.  x86-64 simply copies the next
      task's io bitmap into place (if any) on context switch.  x86-32
      invalidates the bitmap on context switch, so that the next
      IO instruction will fault; at that point it installs the
      appropriate IO bitmap.
      
      This makes context switching IO-bitmap-using tasks a bit
      less expensive, at the cost of making the next IO instruction
      slower due to the extra fault.  This tradeoff only makes sense
      if IO-bitmap-using processes are relatively common, but
      don't actually use IO instructions very often.
      
      However, in a typical desktop system, the only process likely
      to be using IO bitmaps is the X server, and nothing at all on
      a server.  Therefore the lazy context switch doesn't really win
      all that much, and it's just a gratuitous difference from the
      64-bit code.
      
      This patch removes the lazy context switch, with a view to
      unifying this code in a later change.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
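      A hedged sketch of the eager, x86-64-style copy the patch switches to,
      condensed from __switch_to_xtra() of that era:

        if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
                /*
                 * Copy the relevant range of the IO bitmap.
                 * Normally this is 128 bytes or less:
                 */
                memcpy(tss->io_bitmap, next->io_bitmap_ptr,
                       max(prev->io_bitmap_max, next->io_bitmap_max));
        } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
                /* Clear any possible leftover bits: */
                memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
        }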
  14. 23 Feb 2009 · 1 commit
    • x86: refactor x86_quirks support · 8e6dafd6
      Authored by Ingo Molnar
      Impact: cleanup
      
      Make x86_quirks support more transparent. The highlevel
      methods are now named:
      
        extern void x86_quirk_pre_intr_init(void);
        extern void x86_quirk_intr_init(void);
      
        extern void x86_quirk_trap_init(void);
      
        extern void x86_quirk_pre_time_init(void);
        extern void x86_quirk_time_init(void);
      
      This makes it clear that if some platform extension has to
      do something here, it is considered ... weird, and is
      discouraged.
      
      Also remove arch_hooks.h and move it into setup.h (and other
      header files where appropriate).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
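      A hedged sketch of what one of the renamed wrappers looks like; the
      struct x86_quirks field name is taken from that era's code and the
      exact body is assumed:

        void __init x86_quirk_trap_init(void)
        {
                if (x86_quirks->arch_trap_init) {
                        if (x86_quirks->arch_trap_init())
                                return;
                }
        }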
  15. 22 Feb 2009 · 1 commit
  16. 15 Feb 2009 · 1 commit
    • x86, vm86: fix preemption bug · be716615
      Authored by Thomas Gleixner
      Commit 3d2a71a5 ("x86, traps: converge
      do_debug handlers") changed the preemption disable logic of do_debug(),
      so vm86_handle_trap() is called with preemption disabled, resulting in:
      
       BUG: sleeping function called from invalid context at include/linux/kernel.h:155
       in_atomic(): 1, irqs_disabled(): 0, pid: 3005, name: dosemu.bin
       Pid: 3005, comm: dosemu.bin Tainted: G        W  2.6.29-rc1 #51
       Call Trace:
        [<c050d669>] copy_to_user+0x33/0x108
        [<c04181f4>] save_v86_state+0x65/0x149
        [<c0418531>] handle_vm86_trap+0x20/0x8f
        [<c064e345>] do_debug+0x15b/0x1a4
        [<c064df1f>] debug_stack_correct+0x27/0x2c
        [<c040365b>] sysenter_do_call+0x12/0x2f
       BUG: scheduling while atomic: dosemu.bin/3005/0x10000001
      
      Restore the original calling convention and reenable preemption before
      calling handle_vm86_trap().
      Reported-by: Michal Suchanek <hramrach@centrum.cz>
      Cc: stable@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
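      A hedged sketch of the restored convention in do_debug(), assuming the
      era's dec_preempt_count()/conditional_cli() helpers:

        if (regs->flags & X86_VM_MASK) {
                /* Re-enable preemption: handle_vm86_trap() might sleep. */
                dec_preempt_count();
                handle_vm86_trap((struct kernel_vm86_regs *) regs,
                                 error_code, 1);
                conditional_cli(regs);
                return;
        }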
  17. 11 Feb 2009 · 1 commit
  18. 10 Feb 2009 · 1 commit
    • x86: fix math_emu register frame access · d315760f
      Authored by Tejun Heo
      do_device_not_available() is the handler for #NM and it declares that
      it takes an unsigned long and calls math_emu(), which takes a long
      argument and surprisingly expects that the stack frame starting at its
      zeroth argument matches struct math_emu_info, which isn't true
      regardless of configuration in the current code.

      This patch makes do_device_not_available() take struct pt_regs like
      other exception handlers, initialize struct math_emu_info with a
      pointer to it, and pass a pointer to the math_emu_info to
      math_emulate() the way normal C functions pass arguments.  This way,
      unless gcc makes a copy of struct pt_regs in
      do_device_not_available(), the register frame is accessed correctly
      regardless of kernel configuration or compiler used.
      
      This doesn't fix all math_emu problems but it at least gets it
      somewhat working.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
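      A hedged sketch of the reworked handler; the CONFIG guard and helper
      names follow the era's traps.c, with details assumed:

        dotraplinkage void
        do_device_not_available(struct pt_regs *regs, long error_code)
        {
        #ifdef CONFIG_MATH_EMULATION
                if (read_cr0() & X86_CR0_EM) {
                        struct math_emu_info info = { };

                        conditional_sti(regs);
                        info.regs = regs;  /* explicit pointer, no guessed frame layout */
                        math_emulate(&info);
                        return;
                }
        #endif
                math_state_restore();      /* interrupts still off */
                conditional_sti(regs);
        }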
  19. 29 Jan 2009 · 1 commit
  20. 27 Jan 2009 · 1 commit
  21. 20 Jan 2009 · 1 commit
  22. 07 Jan 2009 · 1 commit
  23. 04 Jan 2009 · 1 commit
  24. 26 Dec 2008 · 2 commits
  25. 25 Dec 2008 · 1 commit
  26. 24 Dec 2008 · 1 commit
    • x86: fix lguest used_vectors breakage, -v2 · b77b881f
      Authored by Yinghai Lu
      Impact: fix lguest, clean up
      
      32-bit lguest used used_vectors to record allocated vectors, but that
      model of allocating vectors broke after we changed vector allocation
      to a per_cpu array.

      Enable it for 64-bit too; the bitmap is used for all vectors that
      are not managed by the vector_irq per_cpu array.

      Also kill system_vectors[], which is now a duplication of the
      used_vectors bitmap.
      
      [ merged in cpus4096 due to io_apic.c cpumask changes. ]
      [ -v2, fix build failure ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
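      A hedged sketch of the bitmap usage described above, condensed from the
      era's trap_init(); SYSCALL_VECTOR is the 32-bit name (64-bit uses
      IA32_SYSCALL_VECTOR), and the surrounding setup is assumed:

        DECLARE_BITMAP(used_vectors, NR_VECTORS);

        void __init trap_init(void)
        {
                int i;

                /* ... set_intr_gate() calls for the CPU traps ... */

                /* Reserve the trap vectors so nobody re-allocates them. */
                for (i = 0; i < FIRST_EXTERNAL_VECTOR; i++)
                        set_bit(i, used_vectors);

                set_bit(SYSCALL_VECTOR, used_vectors);
        }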
  27. 23 Dec 2008 · 1 commit
    • x86: prioritize the FPU traps for the error code · adf77bac
      Authored by H. Peter Anvin
      In the case of multiple FPU errors, prioritize the error codes,
      instead of returning __SI_FAULT, which ends up pushing a 0 as the
      error code to userspace, a POSIX violation.
      
      For i386, we will simply return if there are no errors at all; for
      x86-64 this is probably a "can't happen" (and the code should be
      unified), but for this patch, return __SI_FAULT|SI_KERNEL if this ever
      happens.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
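      A hedged sketch of the prioritized decode, condensed from the era's
      math_error(); the masks are the x87 status-word exception bits:

        err = swd & ~cwd;                /* unmasked exceptions only */

        if (err & 0x001) {               /* Invalid op */
                info.si_code = FPE_FLTINV;
        } else if (err & 0x004) {        /* Divide by zero */
                info.si_code = FPE_FLTDIV;
        } else if (err & 0x008) {        /* Overflow */
                info.si_code = FPE_FLTOVF;
        } else if (err & 0x012) {        /* Denormal, underflow */
                info.si_code = FPE_FLTUND;
        } else if (err & 0x020) {        /* Precision */
                info.si_code = FPE_FLTRES;
        } else {
                /* No bits set: i386 simply returns; x86-64 reports SI_KERNEL. */
                info.si_code = __SI_FAULT | SI_KERNEL;
        }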
  28. 12 Dec 2008 · 1 commit
  29. 22 Oct 2008 · 1 commit
  30. 13 Oct 2008 · 9 commits