1. 12 Mar 2011 (2 commits)
  2. 24 Feb 2011 (1 commit)
    • x86: Add device tree support · da6b737b
      Authored by Sebastian Andrzej Siewior
      This patch adds minimal support for device tree on x86. The device
      tree blob is passed to the kernel via setup_data, which requires at
      least boot protocol 2.09.
      
      Memory size, restricted memory regions, and boot arguments are
      gathered the traditional way, so fields like cmd_line are just here
      to let the code compile.
      
      The current plan is to use the device tree as an extension and to
      gather information which cannot be enumerated and would otherwise
      have to be hardcoded. This includes things like:
         - which devices are on this I2C/SPI bus?
         - how are the interrupts wired to the IO-APIC?
         - where could my HPET be?
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Dirk Brandewie <dirk.brandewie@gmail.com>
      Acked-by: Grant Likely <grant.likely@secretlab.ca>
      Cc: sodaville@linutronix.de
      Cc: devicetree-discuss@lists.ozlabs.org
      LKML-Reference: <1298405266-1624-3-git-send-email-bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
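      A minimal sketch of how such a blob can be located, assuming the
      kernel's setup_data/SETUP_DTB definitions (the helper below is
      hypothetical, not the committed code):

      	#include <linux/init.h>
      	#include <linux/printk.h>
      	#include <asm/setup.h>
      	#include <asm/io.h>

      	/* Walk the setup_data chain from boot_params and report a
      	 * device tree blob if the boot loader passed one. */
      	static void __init find_dtb(void)
      	{
      		u64 pa = boot_params.hdr.setup_data;

      		while (pa) {
      			struct setup_data *sd =
      				early_memremap(pa, sizeof(*sd));

      			if (sd->type == SETUP_DTB)
      				pr_info("DTB at 0x%llx, %u bytes\n",
      					pa, sd->len);
      			pa = sd->next;
      			early_memunmap(sd, sizeof(*sd));
      		}
      	}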
  3. 18 Feb 2011 (1 commit)
    • x86: Eliminate pointless adjustment attempts in fixup_irqs() · 58bff947
      Authored by Jan Beulich
      There is no need to actually try to adjust an IRQ's affinity not
      only when it equals cpu_online_mask, but also when it is a subset
      thereof. In particular, this avoids adjustment attempts during
      system shutdown for any IRQs bound to CPU#0.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      LKML-Reference: <4D5D52C2020000780003272C@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
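      A sketch of the test this change amounts to, assuming the generic
      cpumask API (the helper name is made up):

      	#include <linux/cpumask.h>

      	/* Adjusting affinity is pointless when every CPU in the IRQ's
      	 * mask stays online, i.e. when the mask is a subset of
      	 * cpu_online_mask. */
      	static bool affinity_needs_fixup(const struct cpumask *affinity)
      	{
      		return !cpumask_subset(affinity, cpu_online_mask);
      	}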
  4. 12 Feb 2011 (1 commit)
  5. 30 Dec 2010 (1 commit)
  6. 16 Dec 2010 (1 commit)
  7. 19 Oct 2010 (1 commit)
    • irq_work: Add generic hardirq context callbacks · e360adbe
      Authored by Peter Zijlstra
      Provide a mechanism that allows running code in IRQ context. It is
      most useful for NMI code that needs to interact with the rest of the
      system -- like waking up a task to drain buffers.
      
      Perf currently has such a mechanism, so extract that and provide it as
      a generic feature, independent of perf so that others may also
      benefit.
      
      The IRQ context callback is generated through self-IPIs where
      possible, or on architectures like powerpc the decrementer (the
      built-in timer facility) is set to generate an interrupt immediately.
      
      Architectures that don't have anything like this have to make do
      with a callback from the timer tick. These architectures can call
      irq_work_run() at the tail of any IRQ handlers that might enqueue
      such work (like the perf IRQ handler) to avoid undue latencies in
      processing the work.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Kyle McMartin <kyle@mcmartin.ca>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      [ various fixes ]
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      LKML-Reference: <1287036094.7768.291.camel@yhuang-dev>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
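      A hedged usage sketch of the new API (the my_* names are made up):

      	#include <linux/init.h>
      	#include <linux/irq_work.h>

      	static void my_work_func(struct irq_work *work)
      	{
      		/* Runs later in hardirq context; it is safe here to
      		 * wake tasks, drain buffers, etc. */
      	}

      	static struct irq_work my_work;

      	static int __init my_setup(void)
      	{
      		init_irq_work(&my_work, my_work_func);
      		return 0;
      	}

      	/* From NMI context, hand the heavy lifting off: */
      	static void my_nmi_handler(void)
      	{
      		irq_work_queue(&my_work);  /* no-op if already queued */
      	}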
  8. 12 Oct 2010 (1 commit)
  9. 15 Dec 2009 (1 commit)
  10. 24 Nov 2009 (1 commit)
    • x86: Tighten conditionals on MCE related statistics · 0444c9bd
      Authored by Jan Beulich
      irq_thermal_count is only maintained when X86_THERMAL_VECTOR is
      enabled, and neither X86_THERMAL_VECTOR nor X86_MCE_THRESHOLD needs
      extra wrapping in X86_MCE conditionals.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Yong Wang <yong.y.wang@intel.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      LKML-Reference: <4B06AFA902000078000211F8@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
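      A sketch of the tightened shape, as it would sit in the
      /proc/interrupts code in arch/x86/kernel/irq.c (irq_stats() is that
      file's local helper; this is not the committed hunk):

      	#include <linux/seq_file.h>
      	#include <asm/hardirq.h>

      	/* Each statistic is guarded only by the option that actually
      	 * maintains it; no surrounding CONFIG_X86_MCE #ifdef. */
      	static void show_mce_stats(struct seq_file *p, int j)
      	{
      	#ifdef CONFIG_X86_THERMAL_VECTOR
      		seq_printf(p, "%10u ", irq_stats(j)->irq_thermal_count);
      	#endif
      	#ifdef CONFIG_X86_MCE_THRESHOLD
      		seq_printf(p, "%10u ", irq_stats(j)->irq_threshold_count);
      	#endif
      	}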
  11. 02 Nov 2009 (4 commits)
    • x86: Remove local_irq_enable()/local_irq_disable() in fixup_irqs() · 5231a686
      Authored by Suresh Siddha
      To ensure that we handle all the pending interrupts (destined
      for this cpu that is going down) in the interrupt subsystem
      before the cpu goes offline, fixup_irqs() does:
      
      	local_irq_enable();
      	mdelay(1);
      	local_irq_disable();
      
      Enabling interrupts is not a good thing as this cpu is already
      offline. So this patch replaces that logic with:
      
      	mdelay(1);
      	check APIC_IRR bits
      	Retrigger the irq at the new destination if any interrupt has arrived
      	via IPI.
      
      For IO-APIC level triggered interrupts, this retrigger IPI will
      appear as an edge interrupt. ack_apic_level() will detect this
      condition and IO-APIC RTE's remoteIRR is cleared using directed
      EOI(using IO-APIC EOI register) on Intel platforms and for
      others it uses the existing mask+edge logic followed by
      unmask+level.
      
      We could also remove the mdelay() and simply send spurious
      interrupts to the new cpu targets for all the irqs that were
      previously handled by the cpu going offline. While that works, I
      have seen spurious interrupt messages (nothing wrong, but still
      annoying messages during cpu offline, e.g. visible during
      suspend/resume).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230002.043281924@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
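      A sketch of the IRR probe (the helper name is made up; the layout
      is the APIC's: one bit per vector, in 32-bit registers spaced 0x10
      apart):

      	#include <asm/apic.h>

      	static bool vector_pending_in_irr(unsigned int vector)
      	{
      		u32 irr = apic_read(APIC_IRR + (vector / 32 * 0x10));

      		return irr & (1U << (vector % 32));
      	}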
    • x86: Force irq complete move during cpu offline · a5e74b84
      Authored by Suresh Siddha
      When a cpu goes offline, fixup_irqs() tries to move irqs currently
      destined for the offline cpu to a new cpu. But this attempt will
      fail if the irq was recently moved to this cpu and still hasn't
      arrived there (on non intr-remapping platforms, arrival is the
      point at which we free the vector allocation at the previous
      destination).
      
      This ends up with the interrupt subsystem still pointing the irq
      at the offline cpu, causing that irq to not work any more.
      
      Fix this by forcing the irq to complete its move (it has been a
      long time since we moved the irq to the cpu we are offlining now)
      and then moving this irq to a new cpu before this cpu goes
      offline.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230001.848830905@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
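      A sketch of the resulting ordering, using 2009-era API names
      (irq_force_complete_move() is the helper this patch introduces;
      fixup_one_irq() is made up):

      	#include <linux/interrupt.h>
      	#include <linux/irq.h>

      	static void fixup_one_irq(unsigned int irq)
      	{
      		/* finish any in-flight vector move first ... */
      		irq_force_complete_move(irq);
      		/* ... then it is safe to retarget the irq */
      		irq_set_affinity(irq, cpu_online_mask);
      	}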
    • x86, intr-remap: Avoid irq_chip mask/unmask in fixup_irqs() for intr-remapping · 84e21493
      Authored by Suresh Siddha
      In the presence of interrupt-remapping, irqs are migrated in
      process context, and we don't do (and there is no need to do)
      irq_chip mask/unmask while migrating the interrupt.
      
      Similarly, fix fixup_irqs(), which gets called during cpu offline,
      to avoid calling irq_chip mask/unmask for irqs that are ok to be
      migrated in process context.
      
      While we didn't observe any race condition with the existing code,
      this change takes complete advantage of interrupt-remapping on the
      newer generation platforms and avoids any potential HW lockups
      (the kind that often worry Eric :))
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: garyhade@us.ibm.com
      LKML-Reference: <20091026230001.661423939@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
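      A sketch of the resulting logic in today's genirq terms (the
      original operated on desc->chip directly; the helper name is made
      up):

      	#include <linux/irq.h>

      	static void migrate_one_irq(struct irq_desc *desc,
      				    const struct cpumask *mask)
      	{
      		struct irq_data *d = &desc->irq_data;
      		bool masked = !irqd_can_move_in_process_context(d);

      		/* with interrupt remapping, the move is safe in
      		 * process context, so skip the mask/unmask dance */
      		if (masked)
      			d->chip->irq_mask(d);

      		d->chip->irq_set_affinity(d, mask, false);

      		if (masked)
      			d->chip->irq_unmask(d);
      	}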
    • x86: Unify fixup_irqs() for 32-bit and 64-bit kernels · 7a7732bc
      Authored by Suresh Siddha
      There is no reason to have different fixup_irqs() for 32-bit and
      64-bit kernels. Unify them by using the superior 64-bit version
      for both kernels.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Gary Hade <garyhade@us.ibm.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      LKML-Reference: <20091026230001.562512739@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 15 Oct 2009 (1 commit)
  13. 14 Oct 2009 (1 commit)
    • x86, perf_event: Rename 'performance counter interrupt' · 89ccf465
      Authored by Li Hong
      In commit 'cdd6c482', we renamed
      Performance Counters -> Performance Events.
      
      The name shown in /proc/interrupts also needs a change. I use PMI
      (performance monitoring interrupt) here, since it is the official
      name used in Intel's documents.
      Signed-off-by: Li Hong <lihong.hi@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091014105039.GA22670@uhli>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 09 Oct 2009 (1 commit)
    • Revert "x86, timers: Check for pending timers after (device) interrupts" · e7ab0f7b
      Authored by Ingo Molnar
      This reverts commit 9bcbdd9c.
      
      The real bug producing LatencyTop latencies has been fixed in:
      
        f5dc3753: sched: Update the clock of runqueue select_task_rq() selected
      
      And the commit being reverted here triggers local timer processing
      from every device IRQ. If device IRQs come in at a high frequency,
      this could cause a performance regression.
      
      The commit being reverted here purely 'fixed' the reported latency
      as a side effect, because CPUs were being moved out of idle more
      often.
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 08 Oct 2009 (1 commit)
    • x86, timers: Check for pending timers after (device) interrupts · 9bcbdd9c
      Authored by Arjan van de Ven
      Now that range timers and deferred timers are common, I found a
      problem with these using the "perf timechart" tool. Frans Pop also
      reported high scheduler latencies via LatencyTop, when using
      iwlagn.
      
      It turns out that on x86, these two 'opportunistic' timers only get
      checked when another "real" timer happens. These opportunistic
      timers aim to save power by hitchhiking on other wakeups, so as to
      avoid CPU wakeups of their own as much as possible.
      
      The change in this patch runs this check not only at timer
      interrupts, but at all (device) interrupts. The effect is that:
      
       1) the deferred timers/range timers get delayed less
      
       2) the range timers cause less wakeups by themselves because
          the percentage of hitchhiking on existing wakeup events goes up.
      
      I've verified the working of the patch using "perf timechart"; the
      originally exposed bug is gone with this patch. Frans also reported
      success - the latencies are now down in the expected ~10 msec
      range.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Tested-by: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
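      A sketch of the idea only, not the committed hunk (the helper name
      is made up):

      	#include <linux/timer.h>

      	/* At the tail of a device interrupt, poke the local timer
      	 * code so expired deferrable/range timers piggyback on this
      	 * wakeup instead of waiting for the next timer tick. */
      	static inline void check_timers_on_device_irq(void)
      	{
      		run_local_timers();
      	}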
  16. 10 Jul 2009 (1 commit)
  17. 04 Jun 2009 (3 commits)
  18. 29 May 2009 (1 commit)
    • x86, mce: use 64bit machine check code on 32bit · 4efc0670
      Authored by Andi Kleen
      The 64bit machine check code is in many ways much better than the
      32bit machine check code: it is more specification compliant, is
      cleaner, has only a single code base versus one per CPU, has better
      infrastructure for recovery, has a cleaner way to communicate with
      user space, etc.
      
      Use the 64bit code for 32bit too.
      
      This is the second attempt to do this. There was an attempt a couple
      of years ago to unify this code for 32bit and 64bit. Back then it
      ran into some trouble with K7s and was reverted.
      
      I believe this time the K7 problems (and some others) are addressed.
      I went over the old handlers and was very careful to retain
      all quirks.
      
      But of course this needs a lot of testing on old systems. On newer
      64bit capable systems I don't expect much problems because they have been
      already tested with the 64bit kernel.
      
      I made this a CONFIG for now that still allows selecting the old
      machine check code. This is mostly to make testing easier: if
      someone runs into a problem, we can ask them to try with the CONFIG
      switched.
      
      The new code is default y for more coverage.
      
      Once there is confidence that the 64bit code works well on older
      hardware too, CONFIG_X86_OLD_MCE and the associated code can easily
      be removed.
      
      This causes a behaviour change for 32bit installations. They now
      have to install the mcelog package to be able to log
      corrected machine checks.
      
      The 64bit machine check code only handles CPUs which support the
      standard Intel machine check architecture described in the IA32 SDM.
      The 32bit code has special support for some older CPUs which have
      non-standard machine check architectures, in particular WinChip C3
      and Intel P5. I made those a separate CONFIG option and kept them
      for now. The WinChip variant could probably be removed without too
      much pain; it doesn't really do anything interesting. P5 is also
      disabled by default (as it was before) because many motherboards
      have it miswired, but according to Alan Cox a few embedded setups
      use that one.
      
      Forward ported/heavily changed version of old patch, original patch
      included review/fixes from Thomas Gleixner, Bert Wesarg.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
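      A sketch of the resulting split (the function names mirror the
      kernel's MCE init code, but the wiring here is illustrative, not
      the committed code):

      	#include <asm/mce.h>
      	#include <asm/processor.h>

      	static void mcheck_init(struct cpuinfo_x86 *c)
      	{
      		if (cpu_has(c, X86_FEATURE_MCA)) {
      			/* standard IA32-SDM machine check architecture:
      			 * handled by the (formerly 64bit-only) code */
      			mcheck_cpu_init(c);
      			return;
      		}
      	#ifdef CONFIG_X86_ANCIENT_MCE
      		if (c->x86_vendor == X86_VENDOR_INTEL && c->x86 == 5)
      			intel_p5_mcheck_init(c);   /* Pentium quirk */
      		if (c->x86_vendor == X86_VENDOR_CENTAUR && c->x86 == 5)
      			winchip_mcheck_init(c);    /* WinChip C3 quirk */
      	#endif
      	}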
  19. 14 Apr 2009 (1 commit)
  20. 13 Apr 2009 (2 commits)
    • x86: apic - introduce dummy apic operations · 08306ce6
      Authored by Cyrill Gorcunov
      Impact: refactor, speed up and robustize code
      
      If the apic was disabled by a kernel option or by hardware limits,
      we can use dummy operations in apic->write to simplify the
      ack_APIC_irq() code.
      
      At the same time the patch fixes the missed EOI in the do_IRQ
      function (which happens if the kernel is compiled for X86-32 and
      an interrupt without a handler arrives while the apic was not
      asked to be disabled via a kernel option).
      
      Note that native_apic_write_dummy() consists of a WARN_ON_ONCE()
      to catch any buggy writes on enabled APICs. It could be removed
      after some time of testing.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <20090412165058.724788431@openvz.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
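      A sketch close to the idea (not necessarily the committed code):

      	#include <linux/bug.h>
      	#include <asm/apic.h>

      	/* A do-nothing apic->write hook, so ack_APIC_irq() can stay
      	 * unconditional; warn once if a write ever happens while a
      	 * working APIC is actually present. */
      	static void native_apic_write_dummy(u32 reg, u32 v)
      	{
      		WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_APIC));
      	}

      	/* installed when the APIC is disabled:
      	 *	apic->write = native_apic_write_dummy;
      	 */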
    • x86: irq.c - tiny cleanup · edea7148
      Authored by Cyrill Gorcunov
      Impact: cleanup, robustization
      
       1) guard ack_bad_irq() with printk_ratelimit(), since there is no
          guarantee we will not be flooded one day
      
       2) use the pr_emerg() helper
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <20090412165058.277579847@openvz.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
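      The cleanup's shape, roughly (the real ack_bad_irq() also ACKs the
      local APIC so the vector doesn't stay blocked):

      	#include <linux/printk.h>

      	void ack_bad_irq(unsigned int irq)
      	{
      		if (printk_ratelimit())
      			pr_emerg("unexpected IRQ trap at vector %02x\n",
      				 irq);
      	}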
  21. 12 Apr 2009 (1 commit)
    • x86: clean up declarations and variables · 2c1b284e
      Authored by Jaswinder Singh Rajput
      Impact: cleanup, no code changed
      
       - syscalls.h       update declarations due to unifications
       - irq.c            declare smp_generic_interrupt() before it gets used
       - process.c        declare sys_fork() and sys_vfork() before they get used
       - tsc.c            rename tsc_khz shadowed variable
       - apic/probe_32.c  declare apic_default before it gets used
       - apic/nmi.c       prev_nmi_count should be unsigned
       - apic/io_apic.c   declare smp_irq_move_cleanup_interrupt() before it gets used
       - mm/init.c        declare direct_gbpages and free_initrd_mem before they get used
      Signed-off-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 09 4月, 2009 2 次提交
  23. 07 4月, 2009 1 次提交
  24. 23 3月, 2009 2 次提交
  25. 13 3月, 2009 1 次提交
  26. 05 3月, 2009 1 次提交
  27. 18 2月, 2009 1 次提交
  28. 17 2月, 2009 1 次提交
  29. 09 2月, 2009 1 次提交
  30. 18 1月, 2009 1 次提交
  31. 04 1月, 2009 1 次提交