1. 31 Mar 2012, 1 commit
  2. 31 Mar 2011, 1 commit
  3. 29 Mar 2011, 2 commits
  4. 28 Mar 2011, 1 commit
  5. 19 Feb 2011, 6 commits
  6. 03 Feb 2011, 1 commit
    • genirq: Prevent irq storm on migration · f1a06390
      Thomas Gleixner authored
      move_native_irq() masks and unmasks the interrupt line
      unconditionally, but the interrupt line might be masked due to a
      threaded oneshot handler in progress. Unmasking the line in that case
      can lead to interrupt storms. Observed on PREEMPT_RT.
      
      Originally-from: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org
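      A sketch of the fix's shape in move_native_irq() (status flag and chip
      callback names are approximations of that era's API, not verbatim
      kernel code):

        void move_native_irq(int irq)
        {
                struct irq_desc *desc = irq_to_desc(irq);
                bool masked;

                if (likely(!(desc->status & IRQ_MOVE_PENDING)))
                        return;
                if (unlikely(desc->status & IRQ_DISABLED))
                        return;

                /*
                 * A threaded oneshot handler keeps the line masked until
                 * the thread completes: remember the state and only
                 * mask/unmask when we masked the line ourselves.
                 */
                masked = desc->status & IRQ_MASKED;
                if (!masked)
                        desc->irq_data.chip->irq_mask(&desc->irq_data);
                move_masked_irq(irq);
                if (!masked)
                        desc->irq_data.chip->irq_unmask(&desc->irq_data);
        }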
  7. 04 Oct 2010, 4 commits
  8. 15 Dec 2009, 1 commit
  9. 21 Jul 2009, 1 commit
    • genirq: Delegate irq affinity setting to the irq thread · 591d2fb0
      Thomas Gleixner authored
      irq_set_thread_affinity() calls set_cpus_allowed_ptr() which might
      sleep, but irq_set_thread_affinity() is called with desc->lock held
      and can be called from hard interrupt context as well. The code has
      another bug as it does not hold a ref on the task struct as required
      by set_cpus_allowed_ptr().
      
      Just set the IRQTF_AFFINITY bit in action->thread_flags. The next time
      the thread runs it migrates itself. Solves all of the above problems
      nicely.
      
      Add kerneldoc to irq_set_thread_affinity() while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
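      A sketch of the delegation pattern under the IRQTF_AFFINITY flag named
      above (helper shape and lock details approximate, not the verbatim
      kernel code):

        /* called with desc->lock held, possibly in hard irq context:
         * never sleep here, just record the request for the thread */
        void irq_set_thread_affinity(struct irq_desc *desc,
                                     const struct cpumask *cpumask)
        {
                struct irqaction *action = desc->action;

                while (action) {
                        if (action->thread)
                                set_bit(IRQTF_AFFINITY, &action->thread_flags);
                        action = action->next;
                }
        }

        /* in the irq thread, where sleeping is fine: migrate ourselves */
        if (test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags)) {
                spin_lock_irq(&desc->lock);
                cpumask_copy(mask, desc->affinity); /* mask: preallocated cpumask_var_t */
                spin_unlock_irq(&desc->lock);
                set_cpus_allowed_ptr(current, mask);
        }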
  10. 28 Apr 2009, 1 commit
    • irq: only update affinity if ->set_affinity() is successful · 57b150cc
      Yinghai Lu authored
      irq_set_affinity() and move_masked_irq() try to assign affinity
      before calling chip set_affinity(). Some archs are assigning it
      in ->set_affinity() again.
      
      We do something like:
      
       cpumask_copy(desc->affinity, mask);
       desc->chip->set_affinity(mask);
      
      But in the failure path, affinity should not be touched - otherwise
      we'll end up with a different affinity mask despite the failure to
      migrate the IRQ.
      
      So update the affinity only if set_affinity() returns 0, and call
      irq_set_thread_affinity() accordingly.
      
      v2: update after "irq, x86: Remove IRQ_DISABLED check in process context IRQ move"
      v3: according to Ingo, make set_affinity() in irq_chip return int.
      v4: update comments by removing moving irq_desc code.
      
      [ Impact: fix /proc/irq/*/smp_affinity setting corner case bug ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F65509.60307@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
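      The resulting pattern, sketched with that era's desc->chip callbacks:

        /* commit the new mask only if the chip accepted it */
        if (!desc->chip->set_affinity(irq, cpumask)) {
                cpumask_copy(desc->affinity, cpumask);
                irq_set_thread_affinity(desc, cpumask);
        }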
  11. 12 Jan 2009, 1 commit
    • cpumask: update irq_desc to use cpumask_var_t · 7f7ace0c
      Mike Travis authored
      Impact: reduce memory usage, use new cpumask API.
      
      Replace the affinity and pending_masks with cpumask_var_t's.  This adds
      to the significant size reduction done with the SPARSE_IRQS changes.
      
      The added functions (init_alloc_desc_masks & init_copy_desc_masks) are
      in the include file so they can be inlined (and optimized out for the
      !CONFIG_CPUMASK_OFFSTACK case.)  [Naming chosen to be consistent with
      the other init*irq functions, as well as the backwards arg declaration
      of "from, to" instead of the more common "to, from" standard.]
      
      Includes a slight change to the declaration of struct irq_desc to embed
      the pending_mask within ifdef(CONFIG_SMP) to be consistent with other
      references, and some small changes to Xen.
      
      Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
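      A sketch of what the conversion means in practice (assuming
      CONFIG_CPUMASK_OFFSTACK semantics: cpumask_var_t is a pointer that must
      be allocated when offstack, and a plain embedded cpumask otherwise, so
      the allocation compiles away):

        struct irq_desc {
                ...
        #ifdef CONFIG_SMP
                cpumask_var_t   affinity;       /* was: cpumask_t affinity */
        #ifdef CONFIG_GENERIC_PENDING_IRQ
                cpumask_var_t   pending_mask;   /* was: cpumask_t pending_mask */
        #endif
        #endif
        };

        /* offstack variants must be allocated before first use */
        if (!alloc_cpumask_var(&desc->affinity, GFP_ATOMIC))
                return false;
        cpumask_setall(desc->affinity);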
  12. 13 Dec 2008, 1 commit
  13. 10 Nov 2008, 1 commit
  14. 16 Oct 2008, 1 commit
  15. 27 Feb 2007, 1 commit
    • [PATCH] genirq: Mask irqs when migrating them. · 2a786b45
      Eric W. Biederman authored
      move_native_irq tries to do the right thing when migrating irqs
      by disabling them.  However disabling them is a software logical
      thing, not a hardware thing.  This has always been a little flaky
      and after Ingo's latest round of changes it is guaranteed to not
      mask the apic.
      
      So this patch fixes move_native_irq to directly call the mask and
      unmask chip methods to guarantee that we mask the irq when we
      are migrating it.  We must do this as it is required by
      all code that calls into the path.
      
      Since we don't know the masked status when IRQ_DISABLED is set, we
      will not be able to restore it.  The patch makes the code just give
      up and try again the next time this routine is called.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
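      The resulting shape of move_native_irq() (a sketch; chip method names
      of that era, approximate):

        void move_native_irq(int irq)
        {
                struct irq_desc *desc = irq_desc + irq;

                if (likely(!(desc->status & IRQ_MOVE_PENDING)))
                        return;

                /* masked state is unknown while disabled: give up, retry
                 * the next time this routine is called */
                if (unlikely(desc->status & IRQ_DISABLED))
                        return;

                desc->chip->mask(irq);
                move_masked_irq(irq);
                desc->chip->unmask(irq);
        }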
  16. 04 Oct 2006, 2 commits
    • [PATCH] genirq: irq: add move_masked_irq · e7b946e9
      Eric W. Biederman authored
      Currently move_native_irq disables and re-enables the irq we are migrating to
      ensure we don't take that irq when we are actually doing the migration
      operation.  Disabling the irq needs to happen, but sometimes doing the work
      in move_native_irq is too late.
      
      On x86 with ioapics the irq move sequence needs to be:
      edge_triggered:
        mask irq.
        move irq.
        unmask irq.
        ack irq.
      level_triggered:
        mask irq.
        ack irq.
        move irq.
        unmask irq.
      
      We can easily perform the edge triggered sequence with the current definition
      of move_native_irq.  However the level triggered case does not map well.  For
      that I have added move_masked_irq (sketched at the end of this entry), to
      allow me to disable the irqs around both the ack and the move.
      
      Q: Why have we not seen this problem earlier?
      
      A: The only symptom I have been able to reproduce is that if we change
         the vector before acknowledging an irq the wrong irq is acknowledged.
         Since we currently are not reprogramming the irq vector during
         migration no problems show up.
      
         We have to mask the irq before we acknowledge the irq or else we could
         hit a window where an irq is asserted just before we acknowledge it.
      
         Edge triggered irqs do not have this problem because acknowledgements
         do not propagate in the same way.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Rajesh Shah <rajesh.shah@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: "Protasevich, Natalie" <Natalie.Protasevich@UNISYS.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
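      A sketch of move_masked_irq() under the 2006-era cpumask API (the
      caller is responsible for the mask/ack ordering listed above; names
      approximate):

        /* caller has already masked (and, for level triggered, acked) the irq */
        void move_masked_irq(int irq)
        {
                struct irq_desc *desc = irq_desc + irq;
                cpumask_t tmp;

                if (likely(!(desc->status & IRQ_MOVE_PENDING)))
                        return;
                desc->status &= ~IRQ_MOVE_PENDING;

                /* never retarget to an offline cpu */
                cpus_and(tmp, desc->pending_mask, cpu_online_map);
                if (likely(!cpus_empty(tmp)))
                        desc->chip->set_affinity(irq, tmp);

                cpus_clear(desc->pending_mask);
        }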
    • [PATCH] genirq: irq: convert the move_irq flag from a 32bit word to a single bit · a24ceab4
      Eric W. Biederman authored
      The primary aim of this patchset is to remove maintenance problems caused by
      the irq infrastructure.  The two big issues I address are an artificially
      small cap on the number of irqs, and that MSI assumes vector == irq.  My
      primary focus is on x86_64 but I have touched other architectures where
      necessary to keep them from breaking.
      
      - To increase the number of irqs I modify the code to look at the (cpu,
        vector) pair instead of just looking at the vector.
      
        With a large number of irqs available, systems with a large irq count no
        longer need to compress their irq numbers to fit, removing a lot of
        brittle special cases.
      
        For acpi guys the result is that irq == gsi.
      
      - Addressing the fact that MSI assumes irq == vector takes a few more
        patches.  But suffice it to say when I am done none of the generic irq code
        even knows what a vector is.
      
      In quick testing on a large Unisys x86_64 machine we stumbled over at least
      one driver that assumed that NR_IRQS could always fit into an 8 bit number.
      This driver is clearly buggy today.  But this has become a class of bugs that
      it is now much easier to hit.
      
      This patch:
      
      This is a minor space optimization.  In practice I don't think this has any
      effect because of our alignment constraints and the other fields, but there
      is no point in chewing up an unnecessary word, and since we already read the
      flags field this should improve the cache hit ratio of the irq handler.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Rajesh Shah <rajesh.shah@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: "Protasevich, Natalie" <Natalie.Protasevich@UNISYS.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
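      The change itself is tiny; a sketch of the before/after shape (the bit
      value is illustrative, not the real one):

        /* before: a whole word in struct irq_desc */
        unsigned int    move_irq;       /* need to re-target irq destination */

        /* after: a single bit in the status word the handler reads anyway */
        #define IRQ_MOVE_PENDING        0x00020000      /* illustrative value */

        desc->status |= IRQ_MOVE_PENDING;       /* request a retarget */
        if (unlikely(desc->status & IRQ_MOVE_PENDING))
                move_native_irq(irq);           /* handled in the irq path */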
  17. 30 Jun 2006, 4 commits
  18. 23 Jun 2006, 1 commit
  19. 11 Apr 2006, 1 commit
  20. 26 Mar 2006, 2 commits
    • [PATCH] IRQ: prevent enabling of previously disabled interrupt · 501f2499
      Bryan Holty authored
      This fix prevents re-disabling and enabling of a previously disabled
      interrupt.  On an SMP system with irq balancing enabled, if an interrupt is
      disabled from within its own interrupt context with disable_irq_nosync and is
      also earmarked for processor migration, the interrupt is blindly moved to the
      other processor and enabled without regard for its current "enabled" state.
      If there is an interrupt pending, it will unexpectedly invoke the irq handler
      on the new irq-owning processor (even though the irq was previously disabled).
      
      The more intuitive fix would be to invoke disable_irq_nosync and
      enable_irq, but since we already hold desc->lock from __do_IRQ, we
      cannot call them directly.  Instead we can use the same disable and
      enable logic found in disable_irq_nosync and enable_irq, with regard
      to desc->depth.
      
      This now prevents a disabled interrupt from being re-disabled, and more
      importantly prevents a disabled interrupt from being incorrectly enabled on
      a different processor.
      Signed-off-by: Bryan Holty <lgeek@frontiernet.net>
      Cc: Andi Kleen <ak@muc.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
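      A sketch of the depth-based logic described above, mirroring
      disable_irq_nosync()/enable_irq() while desc->lock is already held
      (field and handler names from that era, approximate):

        /* disable only if not already disabled, like disable_irq_nosync() */
        if (desc->depth++ == 0) {
                desc->status |= IRQ_DISABLED;
                desc->handler->disable(irq);
        }

        /* ... migrate the irq to its new cpu ... */

        /* re-enable only when the last disable is undone, like enable_irq() */
        if (--desc->depth == 0) {
                desc->status &= ~IRQ_DISABLED;
                desc->handler->enable(irq);
        }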
    • [PATCH] irq: uninline migration functions · c777ac55
      Andrew Morton authored
      Uninline some massive IRQ migration functions.  Put them in the new
      kernel/irq/migration.c.
      
      Cc: Andi Kleen <ak@muc.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>