1. 28 Sep 2012, 1 commit
  2. 22 Aug 2012, 1 commit
  3. 26 Jul 2012, 1 commit
    • x86/ioapic: Fix NULL pointer dereference on CPU hotplug after disabling irqs · 1d44b30f
      Authored by Tomoki Sekiyama
      In the current kernel, the percpu variable `vector_irq' is not always
      cleared when a CPU is offlined. If a CPU that still has disabled
      irqs recorded in vector_irq is hotplugged again, __setup_vector_irq()
      hits an invalid irq vector and may crash.
      
      This bug can be reproduced as follows:
      
       # echo 0 > /sys/devices/system/cpu/cpu7/online
       # modprobe -r some_driver_using_interrupts     # vector_irq@cpu7 uncleared
       # echo 1 > /sys/devices/system/cpu/cpu7/online # kernel may crash
      
      To fix this problem, this patch clears vector_irq in
      __fixup_irqs() when the CPU is offlined.
      
      This also reverts commit f6175f5b, which partially fixed
      this bug by clearing the vector in __clear_irq_vector(). But in
      environments with an IOMMU IRQ remapper it could fail, because
      cfg->domain doesn't contain offlined CPUs. With this patch, the
      fix in __clear_irq_vector() can be reverted, because every
      vector_irq entry is already cleared in __fixup_irqs() on offlined CPUs.
      Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama.qu@hitachi.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Link: http://lkml.kernel.org/r/20120726104732.2889.19144.stgit@kvmdev
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
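The shape of this fix can be sketched with a small user-space model (hypothetical names and table size; the real kernel clears the per-cpu `vector_irq` table when the CPU is offlined):

```c
#include <assert.h>

#define NR_MODEL_VECTORS 16
#define VECTOR_UNUSED    (-1)

/* One CPU's vector_irq table: maps vector -> irq number. */
struct cpu_vectors {
    int vector_irq[NR_MODEL_VECTORS];
};

/* The fix, modeled: when a CPU goes offline, clear every mapping so
 * that nothing stale survives until the CPU is onlined again. */
void clear_vectors_on_offline(struct cpu_vectors *cpu)
{
    for (int v = 0; v < NR_MODEL_VECTORS; v++)
        cpu->vector_irq[v] = VECTOR_UNUSED;
}

/* Online-time check, modeled: __setup_vector_irq() must never find a
 * leftover entry; in the real kernel a stale entry can crash it. */
int table_is_clean(const struct cpu_vectors *cpu)
{
    for (int v = 0; v < NR_MODEL_VECTORS; v++)
        if (cpu->vector_irq[v] != VECTOR_UNUSED)
            return 0;
    return 1;
}
```

The reproducer above maps onto this model: the driver unload leaves an entry behind on the offline CPU, and clearing on offline restores the invariant the online path relies on.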
  4. 06 Jun 2012, 1 commit
  5. 29 Mar 2012, 1 commit
    • x86: Preserve lazy irq disable semantics in fixup_irqs() · 99dd5497
      Authored by Liu, Chuansheng
      The default irq_disable() semantics are to mark the interrupt disabled,
      but keep it unmasked. If the interrupt is delivered while marked
      disabled, the low level interrupt handler masks it and marks it
      pending. This is important for detecting wakeup interrupts during
      suspend and for edge type interrupts to avoid losing interrupts.
      
      fixup_irqs() moves the interrupts away from an offlined cpu. For
      certain interrupt types it needs to mask the interrupt line before
      changing the affinity. After affinity has changed the interrupt line
      is unmasked again, but only if it is not marked disabled.
      
      This breaks the lazy irq disable semantics and causes problems in
      suspend as the interrupt can be lost or wakeup functionality is
      broken.
      
      Check irqd_irq_masked() instead of irqd_irq_disabled(), because
      irqd_irq_masked() is only set when the core code actually masked the
      interrupt line. If it's not set, we unmask the interrupt and let the
      lazy irq disable logic deal with any interrupt that eventually arrives.
      
      [ tglx: Massaged changelog and added a comment ]
      Signed-off-by: liu chuansheng <chuansheng.liu@intel.com>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/27240C0AC20F114CBF8149A2696CBE4A05DFB3@SHSMSX101.ccr.corp.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
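The core of the change can be modeled with two predicates (a sketch with made-up field names; the kernel reads these bits via irqd_irq_masked()/irqd_irq_disabled()):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified irq state: "masked" is set only when core code actually
 * masked the line; "disabled" may be set lazily, line still unmasked. */
struct irq_state {
    bool masked;
    bool disabled;
};

/* Old check in fixup_irqs(): unmask after the affinity change unless
 * the irq is marked disabled. A lazily disabled irq (disabled=1,
 * masked=0) is then left masked, breaking wakeup detection. */
bool unmask_after_move_old(const struct irq_state *s)
{
    return !s->disabled;
}

/* New check: unmask unless core code itself masked the line. The
 * lazily disabled irq is unmasked again, preserving lazy semantics. */
bool unmask_after_move_new(const struct irq_state *s)
{
    return !s->masked;
}
```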
  6. 18 Dec 2011, 1 commit
  7. 14 Dec 2011, 1 commit
    • x86: Add per-cpu stat counter for APIC ICR read tries · 346b46be
      Authored by Fernando Luis Vázquez Cao
      In the IPI delivery slow path (NMI delivery) we retry the ICR
      read to check for delivery completion a limited number of times.
      
      [ The reason for the limited retries is that in some of the places
        where it is used (cpu boot, kdump, etc) IPI delivery might not
        succeed (due to a firmware bug or system crash, for example)
        and in such a case it is better to give up and resume
        execution of other code. ]
      
      This patch adds a new entry to /proc/interrupts, RTR, which
      tells user space the number of times we retried the ICR read in
      the IPI delivery slow path.
      
      This should give some insight into how well the APIC
      message delivery hardware is working - if the counts are way
      too large then we are hitting a (very-) slow path way too
      often.
      Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
      Cc: Jörn Engel <joern@logfs.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/n/tip-vzsp20lo2xdzh5f70g0eis2s@git.kernel.org
      [ extended the changelog ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
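The bounded retry loop plus counter can be sketched like this (illustrative names and bound; the real counter feeds the new RTR line in /proc/interrupts):

```c
#include <assert.h>

#define ICR_READ_RETRIES 4 /* illustrative bound */

static unsigned long icr_read_retries; /* models the per-cpu RTR counter */

/* still_pending(i) models reading the ICR delivery-status bit on
 * attempt i; nonzero means the previous IPI was not yet accepted. */
int wait_icr_idle_bounded(int (*still_pending)(int))
{
    for (int i = 0; i < ICR_READ_RETRIES; i++) {
        if (!still_pending(i))
            return 1;       /* delivered */
        icr_read_retries++; /* slow path: record the retry */
    }
    return 0; /* give up (e.g. target crashed) and resume other work */
}

/* Example delivery behaviors for the model. */
int pending_twice(int i)   { return i < 2; }
int pending_forever(int i) { (void)i; return 1; }
```

A counter that is "way too large" relative to the number of IPIs sent is exactly the signal the changelog describes: the slow path is being hit too often.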
  8. 12 Dec 2011, 1 commit
    • x86: Call idle notifier after irq_enter() · 98ad1cc1
      Authored by Frederic Weisbecker
      Interrupts notify the idle exit state before calling irq_enter().
      But the notifier code calls rcu_read_lock() and this is not
      allowed while rcu is in an extended quiescent state. We need
      to wait for irq_enter() -> rcu_idle_exit() to be called before
      doing so, otherwise this results in a grumpy RCU:
      
      [    0.099991] WARNING: at include/linux/rcupdate.h:194 __atomic_notifier_call_chain+0xd2/0x110()
      [    0.099991] Hardware name: AMD690VM-FMH
      [    0.099991] Modules linked in:
      [    0.099991] Pid: 0, comm: swapper Not tainted 3.0.0-rc6+ #255
      [    0.099991] Call Trace:
      [    0.099991]  <IRQ>  [<ffffffff81051c8a>] warn_slowpath_common+0x7a/0xb0
      [    0.099991]  [<ffffffff81051cd5>] warn_slowpath_null+0x15/0x20
      [    0.099991]  [<ffffffff817d6fa2>] __atomic_notifier_call_chain+0xd2/0x110
      [    0.099991]  [<ffffffff817d6ff1>] atomic_notifier_call_chain+0x11/0x20
      [    0.099991]  [<ffffffff81001873>] exit_idle+0x43/0x50
      [    0.099991]  [<ffffffff81020439>] smp_apic_timer_interrupt+0x39/0xa0
      [    0.099991]  [<ffffffff817da253>] apic_timer_interrupt+0x13/0x20
      [    0.099991]  <EOI>  [<ffffffff8100ae67>] ? default_idle+0xa7/0x350
      [    0.099991]  [<ffffffff8100ae65>] ? default_idle+0xa5/0x350
      [    0.099991]  [<ffffffff8100b19b>] amd_e400_idle+0x8b/0x110
      [    0.099991]  [<ffffffff810cb01f>] ? rcu_enter_nohz+0x8f/0x160
      [    0.099991]  [<ffffffff810019a0>] cpu_idle+0xb0/0x110
      [    0.099991]  [<ffffffff817a7505>] rest_init+0xe5/0x140
      [    0.099991]  [<ffffffff817a7468>] ? rest_init+0x48/0x140
      [    0.099991]  [<ffffffff81cc5ca3>] start_kernel+0x3d1/0x3dc
      [    0.099991]  [<ffffffff81cc5321>] x86_64_start_reservations+0x131/0x135
      [    0.099991]  [<ffffffff81cc5412>] x86_64_start_kernel+0xed/0xf4
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Henroid <andrew.d.henroid@intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  9. 01 Nov 2011, 1 commit
    • x86: Fix files explicitly requiring export.h for EXPORT_SYMBOL/THIS_MODULE · 69c60c88
      Authored by Paul Gortmaker
      These files were implicitly getting EXPORT_SYMBOL via device.h
      which was including module.h, but that will be fixed up shortly.
      
      By fixing these now, we can avoid seeing things like:
      
      arch/x86/kernel/rtc.c:29: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
      arch/x86/kernel/pci-dma.c:20: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
      arch/x86/kernel/e820.c:69: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’
      
      [ with input from Randy Dunlap <rdunlap@xenotime.net> and also
        from Stephen Rothwell <sfr@canb.auug.org.au> ]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  10. 19 May 2011, 2 commits
  11. 29 Mar 2011, 1 commit
    • x86: Stop including <linux/delay.h> in two asm header files · ca444564
      Authored by Jean Delvare
      Stop including <linux/delay.h> in x86 header files which don't
      need it. This lets the compiler complain when a source file that
      should include this header does not, so that
      contributors can fix the problem before builds on other
      architectures start to fail.
      
      Credits go to Geert for the idea.
      Signed-off-by: Jean Delvare <khali@linux-fr.org>
      Cc: James E.J. Bottomley <James.Bottomley@suse.de>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      LKML-Reference: <20110325152014.297890ec@endymion.delvare>
      [ this also fixes an upstream build bug in drivers/media/rc/ite-cir.c ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 12 Mar 2011, 2 commits
  13. 24 Feb 2011, 1 commit
    • x86: Add device tree support · da6b737b
      Authored by Sebastian Andrzej Siewior
      This patch adds minimal support for device tree on x86. The device
      tree blob is passed to the kernel via setup_data which requires at
      least boot protocol 2.09.
      
      Memory size, restricted memory regions, and boot arguments are
      gathered the traditional way, so things like cmd_line are just here
      to let the code compile.
      
      The current plan is to use the device tree as an extension and to
      gather information which cannot be enumerated and would otherwise
      have to be hardcoded. This includes things like:
         - which devices are on this I2C/SPI bus?
         - how are the interrupts wired to IO APIC?
         - where could my hpet be?
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Dirk Brandewie <dirk.brandewie@gmail.com>
      Acked-by: Grant Likely <grant.likely@secretlab.ca>
      Cc: sodaville@linutronix.de
      Cc: devicetree-discuss@lists.ozlabs.org
      LKML-Reference: <1298405266-1624-3-git-send-email-bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  14. 18 Feb 2011, 1 commit
    • x86: Eliminate pointless adjustment attempts in fixup_irqs() · 58bff947
      Authored by Jan Beulich
      There is no need to try to adjust an IRQ's affinity not only when
      it equals cpu_online_mask, but also when it is a subset thereof.
      This particularly avoids adjustment attempts during system shutdown
      for IRQs bound to CPU#0.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      LKML-Reference: <4D5D52C2020000780003272C@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
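The subset test is a one-liner over cpumasks. Modeling masks as plain bitmaps (hypothetical helper names mirroring the kernel's cpumask_subset()):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t cpumask_t; /* bit n set = CPU n is in the mask */

int mask_subset(cpumask_t a, cpumask_t b)
{
    return (a & ~b) == 0; /* every cpu in a is also in b */
}

/* Old condition: attempt the move unless affinity equals the online
 * mask -- needlessly touching irqs bound to a still-online subset. */
int needs_move_old(cpumask_t affinity, cpumask_t online)
{
    return affinity != online;
}

/* New condition: attempt the move only if some cpu in the affinity
 * mask is actually going away. */
int needs_move_new(cpumask_t affinity, cpumask_t online)
{
    return !mask_subset(affinity, online);
}
```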
  15. 12 Feb 2011, 1 commit
  16. 30 Dec 2010, 1 commit
  17. 16 Dec 2010, 1 commit
  18. 19 Oct 2010, 1 commit
    • irq_work: Add generic hardirq context callbacks · e360adbe
      Authored by Peter Zijlstra
      Provide a mechanism that allows running code in IRQ context. It is
      most useful for NMI code that needs to interact with the rest of the
      system -- such as waking up a task to drain buffers.
      
      Perf currently has such a mechanism, so extract that and provide it as
      a generic feature, independent of perf so that others may also
      benefit.
      
      The IRQ context callback is generated through self-IPIs where
      possible, or on architectures like powerpc the decrementer (the
      built-in timer facility) is set to generate an interrupt immediately.
      
      Architectures that don't have anything like this fall back to a
      callback from the timer tick. These architectures can call
      irq_work_run() at the tail of any IRQ handlers that might enqueue such
      work (like the perf IRQ handler) to avoid undue latencies in
      processing the work.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Kyle McMartin <kyle@mcmartin.ca>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      [ various fixes ]
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      LKML-Reference: <1287036094.7768.291.camel@yhuang-dev>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
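The extracted mechanism boils down to an NMI-safe "queue now, run later" list. A user-space sketch of the idea (the kernel version uses lock-free per-cpu lists and a self-IPI; names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct irq_work {
    void (*func)(struct irq_work *);
    struct irq_work *next;
    int pending;
};

static struct irq_work *pending_list; /* models the per-cpu list */

/* Enqueue a callback; re-queueing already pending work is a no-op. */
void irq_work_queue_model(struct irq_work *w)
{
    if (w->pending)
        return;
    w->pending = 1;
    w->next = pending_list;
    pending_list = w;
}

/* Runs in "hardirq context" -- from the self-IPI where available, or
 * from the timer tick / tail of an IRQ handler otherwise. */
void irq_work_run_model(void)
{
    struct irq_work *w = pending_list;
    pending_list = NULL;
    while (w) {
        struct irq_work *next = w->next;
        w->pending = 0;
        w->func(w);
        w = next;
    }
}

static int runs;
static void count_run(struct irq_work *w) { (void)w; runs++; }
```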
  19. 12 Oct 2010, 1 commit
  20. 15 Dec 2009, 1 commit
  21. 24 Nov 2009, 1 commit
    • x86: Tighten conditionals on MCE related statistics · 0444c9bd
      Authored by Jan Beulich
      irq_thermal_count is only maintained when X86_THERMAL_VECTOR is
      enabled, and neither X86_THERMAL_VECTOR nor
      X86_MCE_THRESHOLD needs extra wrapping in X86_MCE
      conditionals.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Yong Wang <yong.y.wang@intel.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      LKML-Reference: <4B06AFA902000078000211F8@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 02 Nov 2009, 4 commits
    • x86: Remove local_irq_enable()/local_irq_disable() in fixup_irqs() · 5231a686
      Authored by Suresh Siddha
      To ensure that we handle all the pending interrupts (destined
      for this cpu that is going down) in the interrupt subsystem
      before the cpu goes offline, fixup_irqs() does:
      
      	local_irq_enable();
      	mdelay(1);
      	local_irq_disable();
      
      Enabling interrupts is not a good thing as this cpu is already
      offline. So this patch replaces that logic with:
      
      	mdelay(1);
      	check APIC_IRR bits
      	Retrigger the irq at the new destination if any interrupt has arrived
      	via IPI.
      
      For IO-APIC level triggered interrupts, this retrigger IPI will
      appear as an edge interrupt. ack_apic_level() will detect this
      condition and the IO-APIC RTE's remoteIRR is cleared using a
      directed EOI (via the IO-APIC EOI register) on Intel platforms;
      for others, the existing mask+edge logic followed by
      unmask+level is used.
      
      We could also remove the mdelay() and then send spurious interrupts
      to the new cpu targets for all the irqs that were previously handled
      by the cpu that is going offline. While that works, I have seen
      spurious interrupt messages (nothing wrong, but still annoying
      messages during cpu offline, which can be seen during
      suspend/resume etc).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230002.043281924@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
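The replacement logic -- scan APIC_IRR and retrigger each pending vector at its new destination instead of briefly enabling interrupts on the dying cpu -- can be sketched as a toy model (a 32-bit IRR here; the real IRR spans 256 vectors):

```c
#include <assert.h>
#include <stdint.h>

struct dying_cpu_apic {
    uint32_t irr;         /* pending-vector bits on the cpu going down */
    uint32_t retriggered; /* vectors re-sent via IPI to new targets */
};

/* Instead of local_irq_enable()/local_irq_disable() on an offline
 * cpu, inspect IRR and retrigger every pending vector elsewhere. */
void drain_pending_via_retrigger(struct dying_cpu_apic *apic)
{
    for (int v = 0; v < 32; v++) {
        uint32_t bit = 1u << v;
        if (apic->irr & bit) {
            apic->irr &= ~bit;        /* no longer handled here */
            apic->retriggered |= bit; /* edge IPI to the new target */
        }
    }
}
```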
    • x86: Force irq complete move during cpu offline · a5e74b84
      Authored by Suresh Siddha
      When a cpu goes offline, fixup_irqs() tries to move irqs
      currently destined to the offline cpu to a new cpu. But this
      attempt will fail if the irq was recently moved to this cpu and
      the irq still hasn't arrived at this cpu (for non intr-remapping
      platforms this is when we free the vector allocation at the
      previous destination) that is about to go offline.
      
      This will end up with the interrupt subsystem still pointing the
      irq to the offline cpu, causing that irq to not work any more.
      
      Fix this by forcing the irq to complete its move (it has been a
      long time since we moved the irq to this cpu which we are
      offlining now) and then moving this irq to a new cpu before this
      cpu goes offline.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230001.848830905@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, intr-remap: Avoid irq_chip mask/unmask in fixup_irqs() for intr-remapping · 84e21493
      Authored by Suresh Siddha
      In the presence of interrupt-remapping, irqs will be migrated in
      the process context and we don't do (and there is no need to)
      irq_chip mask/unmask while migrating the interrupt.
      
      Similarly fix fixup_irqs(), which gets called during cpu
      offline, to avoid calling irq_chip mask/unmask for irqs that are
      ok to be migrated in the process context.
      
      While we didn't observe any race condition with the existing
      code, this change takes complete advantage of
      interrupt-remapping in the newer generation platforms and avoids
      any potential HW lockups (that often worry Eric :)
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: garyhade@us.ibm.com
      LKML-Reference: <20091026230001.661423939@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Unify fixup_irqs() for 32-bit and 64-bit kernels · 7a7732bc
      Authored by Suresh Siddha
      There is no reason to have different fixup_irqs() implementations
      for 32-bit and 64-bit kernels. Unify them by using the superior
      64-bit version for both.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230001.562512739@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 15 Oct 2009, 1 commit
  24. 14 Oct 2009, 1 commit
    • x86, perf_event: Rename 'performance counter interrupt' · 89ccf465
      Authored by Li Hong
      In commit cdd6c482 we renamed
      Performance Counters -> Performance Events.
      
      The name shown in /proc/interrupts also needs to change. I use
      PMI (performance monitoring interrupt) here, since it is the
      official name used in Intel's documents.
      Signed-off-by: Li Hong <lihong.hi@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091014105039.GA22670@uhli>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 09 Oct 2009, 1 commit
    • Revert "x86, timers: Check for pending timers after (device) interrupts" · e7ab0f7b
      Authored by Ingo Molnar
      This reverts commit 9bcbdd9c.
      
      The real bug producing LatencyTop latencies has been fixed in:
      
        f5dc3753: sched: Update the clock of runqueue select_task_rq() selected
      
      And the commit being reverted here triggers local timer processing
      from every device IRQ. If device IRQs come in at a high frequency,
      this could cause a performance regression.
      
      The commit being reverted here purely 'fixed' the reported latency
      as a side effect, because CPUs were being moved out of idle more
      often.
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 08 Oct 2009, 1 commit
    • x86, timers: Check for pending timers after (device) interrupts · 9bcbdd9c
      Authored by Arjan van de Ven
      Now that range timers and deferred timers are common, I found a
      problem with these using the "perf timechart" tool. Frans Pop also
      reported high scheduler latencies via LatencyTop, when using
      iwlagn.
      
      It turns out that on x86 these two 'opportunistic' timers only get
      checked when another "real" timer fires. These opportunistic
      timers aim to save power by hitchhiking on other wakeups, so as
      to avoid causing CPU wakeups themselves as much as possible.
      
      The change in this patch runs this check not only at timer
      interrupts, but at all (device) interrupts. The effect is that:
      
       1) the deferred timers/range timers get delayed less
      
       2) the range timers cause less wakeups by themselves because
          the percentage of hitchhiking on existing wakeup events goes up.
      
      I've verified the patch using "perf timechart"; the
      originally exposed bug is gone with this patch. Frans also reported
      success - the latencies are now down in the expected ~10 msec
      range.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Tested-by: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  27. 10 Jul 2009, 1 commit
  28. 04 Jun 2009, 3 commits
  29. 29 May 2009, 1 commit
    • x86, mce: use 64bit machine check code on 32bit · 4efc0670
      Authored by Andi Kleen
      The 64bit machine check code is in many ways much better than
      the 32bit machine check code: it is more specification compliant,
      is cleaner, only has a single code base versus one per CPU,
      has better infrastructure for recovery, has a cleaner way to communicate
      with user space etc. etc.
      
      Use the 64bit code for 32bit too.
      
      This is the second attempt to do this. There was one a couple of years
      ago to unify this code for 32bit and 64bit.  Back then this ran into some
      trouble with K7s and was reverted.
      
      I believe this time the K7 problems (and some others) are addressed.
      I went over the old handlers and was very careful to retain
      all quirks.
      
      But of course this needs a lot of testing on old systems. On newer
      64bit capable systems I don't expect many problems because they
      have already been tested with the 64bit kernel.
      
      I made this a CONFIG option for now that still allows selecting the
      old machine check code. This is mostly to make testing easier:
      if someone runs into a problem we can ask them to try
      with the CONFIG switched.
      
      The new code is default y for more coverage.
      
      Once there is confidence the 64bit code works well on older hardware
      too the CONFIG_X86_OLD_MCE and the associated code can be easily
      removed.
      
      This causes a behaviour change for 32bit installations. They now
      have to install the mcelog package to be able to log
      corrected machine checks.
      
      The 64bit machine check code only handles CPUs which support the
      standard Intel machine check architecture described in the IA32 SDM.
      The 32bit code has special support for some older CPUs which
      have non standard machine check architectures, in particular
      WinChip C3 and Intel P5.  I made those a separate CONFIG option
      and kept them for now. The WinChip variant could probably be
      removed without too much pain; it doesn't really do anything
      interesting. P5 is also disabled by default (like it
      was before) because many motherboards have it miswired, but
      according to Alan Cox a few embedded setups use that one.
      
      Forward ported/heavily changed version of old patch, original patch
      included review/fixes from Thomas Gleixner, Bert Wesarg.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  30. 14 Apr 2009, 1 commit
  31. 13 Apr 2009, 2 commits
    • x86: apic - introduce dummy apic operations · 08306ce6
      Authored by Cyrill Gorcunov
      Impact: refactor, speed up and robustize code
      
      In case the apic was disabled by a kernel option
      or by hardware limits, we can use dummy operations
      in apic->write to simplify the ack_APIC_irq() code.
      
      At the same time the patch fixes the missed EOI in
      the do_IRQ function (which takes place if the kernel is compiled
      as X86-32 and an interrupt without a handler happens where the
      apic was not asked to be disabled via a kernel option).
      
      Note that native_apic_write_dummy() consists of
      WARN_ON_ONCE to catch any buggy writes on enabled
      APICs. Could be removed after some time of testing.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <20090412165058.724788431@openvz.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
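The dummy-ops idea can be sketched in a few lines (illustrative names and register constant; the kernel's native_apic_write_dummy() additionally carries a WARN_ON_ONCE): once a no-op write is installed for a disabled APIC, ack_APIC_irq() can issue its EOI unconditionally, with no "is the apic enabled?" branch.

```c
#include <assert.h>

#define APIC_EOI_REG 0xB0 /* EOI register offset, for illustration */

struct apic_ops {
    void (*write)(unsigned int reg, unsigned int val);
};

static unsigned int hw_writes; /* counts writes reaching "hardware" */

static void native_write(unsigned int reg, unsigned int val)
{
    (void)reg; (void)val;
    hw_writes++;
}

/* Installed when the apic is disabled: swallow the write. */
static void dummy_write(unsigned int reg, unsigned int val)
{
    (void)reg; (void)val;
}

static struct apic_ops apic = { .write = native_write };

/* No enabled/disabled branch needed here any more. */
void ack_apic_irq_model(void)
{
    apic.write(APIC_EOI_REG, 0);
}
```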
    • x86: irq.c - tiny cleanup · edea7148
      Authored by Cyrill Gorcunov
      Impact: cleanup, robustization
      
       1) guard ack_bad_irq with printk_ratelimit since there is no
          guarantee we will not be flooded one day
      
       2) use pr_emerg() helper
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <20090412165058.277579847@openvz.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  32. 12 Apr 2009, 1 commit
    • x86: clean up declarations and variables · 2c1b284e
      Authored by Jaswinder Singh Rajput
      Impact: cleanup, no code changed
      
       - syscalls.h       update declarations due to unifications
       - irq.c            declare smp_generic_interrupt() before it gets used
       - process.c        declare sys_fork() and sys_vfork() before they get used
       - tsc.c            rename tsc_khz shadowed variable
       - apic/probe_32.c  declare apic_default before it gets used
       - apic/nmi.c       prev_nmi_count should be unsigned
       - apic/io_apic.c   declare smp_irq_move_cleanup_interrupt() before it gets used
       - mm/init.c        declare direct_gbpages and free_initrd_mem before they get used
      Signed-off-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>