1. 18 Aug 2017, 2 commits
  2. 27 Jul 2017, 1 commit
    • genirq/cpuhotplug: Revert "Set force affinity flag on hotplug migration" · 83979133
      Authored by Thomas Gleixner
      That commit was part of the changes moving x86 to the generic CPU hotplug
      interrupt migration code. The force flag was required on x86 before the
      hierarchical irqdomain rework; the invocation of set_affinity() with
      force=true stayed around afterwards, but had no side effects there.
      
      At some point in the past, the force flag got repurposed to support
      setting the exynos timer interrupt's affinity to a not-yet-online CPU, in
      which case the interrupt controller callback does not verify the supplied
      affinity mask against cpu_online_mask.
      
      Setting the flag in the CPU hotplug code causes the CPU online-mask check
      to be skipped on these irq controllers and can result in affining an
      interrupt to the CPU which is being unplugged, i.e. instead of moving it
      away, it is just reassigned to it.
      
      As the force flag is no longer needed on x86, it's safe to revert that
      patch so the ARM irqchips which use the force flag work again.
      
      Add comments to that effect, so this won't happen again.
      
      Note: The online mask handling should be done in the generic code, and the
      force flag and the masking in the irq chips removed altogether, but
      that's not a change that can be made for 4.13.
      
      Fixes: 77f85e66 ("genirq/cpuhotplug: Set force affinity flag on hotplug migration")
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1707271217590.3109@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
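      The pattern the changelog refers to can be sketched as follows: a minimal,
      illustrative sketch of a GIC-style irq_set_affinity() callback
      (my_chip_set_affinity() is hypothetical, not kernel code). With force=true
      the supplied mask is used as-is, so an offline CPU can end up as the target.

      static int my_chip_set_affinity(struct irq_data *d,
                                      const struct cpumask *mask_val, bool force)
      {
              unsigned int cpu;

              /* The force flag bypasses the online-mask check. */
              if (!force)
                      cpu = cpumask_any_and(mask_val, cpu_online_mask);
              else
                      cpu = cpumask_first(mask_val);  /* may pick an offline CPU */

              if (cpu >= nr_cpu_ids)
                      return -EINVAL;

              /* ... program the interrupt router for 'cpu' ... */
              return IRQ_SET_MASK_OK_DONE;
      }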
  3. 23 Jun 2017, 9 commits
    • genirq: Introduce IRQD_SINGLE_TARGET flag · d52dd441
      Authored by Thomas Gleixner
      Many interrupt chips allow only a single CPU as interrupt target. The core
      code has no knowledge of that. That's unfortunate, because with that
      knowledge it could avoid trying to re-add a newly online CPU to the
      effective affinity mask.
      
      Add the status flag and the necessary accessors.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.352343969@linutronix.de
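      A minimal sketch of what the flag and accessors look like, assuming the
      existing __irqd_to_state() helper pattern of include/linux/irq.h:

      static inline bool irqd_is_single_target(struct irq_data *d)
      {
              return __irqd_to_state(d) & IRQD_SINGLE_TARGET;
      }

      static inline void irqd_set_single_target(struct irq_data *d)
      {
              __irqd_to_state(d) |= IRQD_SINGLE_TARGET;
      }

      An irqchip whose hardware routes each interrupt to exactly one CPU sets
      the flag when the interrupt is mapped; the hotplug online path can then
      skip the pointless attempt to re-add the new CPU to the effective
      affinity mask.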
    • genirq/cpuhotplug: Handle managed IRQs on CPU hotplug · c5cb83bb
      Authored by Thomas Gleixner
      If a CPU goes offline, interrupts affine to that CPU are moved away. If the
      outgoing CPU is the last CPU in the affinity mask, the migration code breaks
      the affinity and sets it to all online CPUs.
      
      This is a problem for affinity-managed interrupts, as CPU hotplug is often
      used for power management purposes. If the affinity is broken, the
      interrupt is no longer affine to the CPUs to which it was allocated.
      
      Affinity spreading makes it possible to lay out multi-queue devices so that
      each queue is assigned to a single CPU or a group of CPUs. If the last of
      those CPUs goes offline, the queue is no longer used, so the interrupt can
      be shut down gracefully and parked until one of the assigned CPUs comes
      online again.
      
      Add a graceful shutdown mechanism to the irq-affinity-breaking code path:
      mark the irq as MANAGED_SHUTDOWN and leave the affinity mask unmodified.
      
      In the online path, scan the active interrupts for managed interrupts. If
      the interrupt is functional and the newly online CPU is part of its
      affinity mask, restart it if it is marked MANAGED_SHUTDOWN; if the
      interrupt is already started up, try to add the CPU back to the effective
      affinity mask.
      Originally-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170619235447.273417334@linutronix.de
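      A condensed sketch of the two paths described above. The callback name
      irq_affinity_online_cpu() follows kernel/irq/cpuhotplug.c, while
      migrate_is_managed_and_parked() and irq_restore_affinity_of_irq() are
      illustrative stand-ins; details are approximate.

      /* Offline path (sketch of the check inside the migration code). */
      static bool migrate_is_managed_and_parked(struct irq_desc *desc)
      {
              struct irq_data *d = irq_desc_get_irq_data(desc);

              if (irqd_affinity_is_managed(d) &&
                  !cpumask_intersects(irq_data_get_affinity_mask(d),
                                      cpu_online_mask)) {
                      irqd_set_managed_shutdown(d);
                      irq_shutdown(desc);     /* park: mask and shut down */
                      return true;            /* nothing left to migrate */
              }
              return false;
      }

      /* Online path (sketch of the hotplug callback): walk the active
       * interrupts and restart parked managed irqs whose affinity mask
       * contains the newly online CPU. */
      int irq_affinity_online_cpu(unsigned int cpu)
      {
              struct irq_desc *desc;
              unsigned int irq;

              irq_lock_sparse();
              for_each_active_irq(irq) {
                      desc = irq_to_desc(irq);
                      raw_spin_lock_irq(&desc->lock);
                      irq_restore_affinity_of_irq(desc, cpu);
                      raw_spin_unlock_irq(&desc->lock);
              }
              irq_unlock_sparse();
              return 0;
      }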
    • genirq: Handle managed irqs gracefully in irq_startup() · 761ea388
      Authored by Thomas Gleixner
      Affinity-managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently and put these interrupts into a managed shutdown
      state when the last CPU of the assigned affinity mask goes offline. The
      interrupt will be restarted when one of the CPUs in the assigned affinity
      mask comes back online.
      
      Add the necessary logic to irq_startup(). If an interrupt is requested and
      started up, the code checks whether it is affinity managed and, if so,
      whether a CPU in the interrupt's affinity mask is online. If not, it
      puts the interrupt into managed shutdown state.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235447.189851170@linutronix.de
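      A sketch of the check added to the startup path. The IRQ_STARTUP_* values
      and the helper name follow kernel/irq/chip.c, but details are approximate.

      static int __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff)
      {
              struct irq_data *d = irq_desc_get_irq_data(desc);

              if (!irqd_affinity_is_managed(d))
                      return IRQ_STARTUP_NORMAL;

              irqd_clr_managed_shutdown(d);

              /* No CPU of the assigned mask is online: park the interrupt. */
              if (cpumask_any_and(aff, cpu_online_mask) >= nr_cpu_ids) {
                      irqd_set_managed_shutdown(d);
                      return IRQ_STARTUP_ABORT;
              }
              return IRQ_STARTUP_MANAGED;
      }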
    • genirq: Introduce IRQD_MANAGED_SHUTDOWN · 54fdf6a0
      Authored by Thomas Gleixner
      Affinity-managed interrupts should keep their assigned affinity across CPU
      hotplug. To avoid magic hackery in device drivers, the core code shall
      manage them transparently. This will put these interrupts into a managed
      shutdown state when the last CPU of the assigned affinity mask goes
      offline. The interrupt will be restarted when one of the CPUs in the
      assigned affinity mask comes back online.
      
      Introduce the necessary state flag and the accessor functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.954523476@linutronix.de
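      A minimal sketch of the flag and its accessors, following the irqd_*
      convention (the bit position is illustrative):

      enum {
              /* ... existing IRQD_* bits ... */
              IRQD_MANAGED_SHUTDOWN   = (1 << 23),    /* bit position illustrative */
      };

      static inline void irqd_set_managed_shutdown(struct irq_data *d)
      {
              __irqd_to_state(d) |= IRQD_MANAGED_SHUTDOWN;
      }

      static inline void irqd_clr_managed_shutdown(struct irq_data *d)
      {
              __irqd_to_state(d) &= ~IRQD_MANAGED_SHUTDOWN;
      }

      /* The bit is only ever set on managed interrupts, so one test suffices. */
      static inline bool irqd_is_managed_and_shutdown(struct irq_data *d)
      {
              return __irqd_to_state(d) & IRQD_MANAGED_SHUTDOWN;
      }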
    • genirq: Introduce effective affinity mask · 0d3f5425
      Authored by Thomas Gleixner
      There is currently no way to evaluate the effective affinity mask of a
      given interrupt. Many irq chips allow only a single target CPU or a subset
      of CPUs in the affinity mask.
      
      Updating the affinity mask itself to that subset at irq_set_affinity()
      time would be counterproductive, because the information about the
      assigned affinities which CPU hotplug relies on would be lost. On CPU
      hotplug it is also pointless to force-migrate an interrupt which does not
      effectively target the outgoing CPU, but currently that information is
      not available.
      
      Provide a separate mask to be updated by the irq_chip->irq_set_affinity()
      implementations. Implement read-only proc files so the user can see the
      effective mask as well, without having to deduce it from /proc/interrupts.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235446.247834245@linutronix.de
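      A sketch of how an irq_set_affinity() implementation reports what it
      actually programmed, assuming CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
      (my_chip_set_affinity() is again hypothetical):

      static int my_chip_set_affinity(struct irq_data *d,
                                      const struct cpumask *mask, bool force)
      {
              unsigned int cpu = cpumask_first_and(mask, cpu_online_mask);

              if (cpu >= nr_cpu_ids)
                      return -EINVAL;

              /* ... route the interrupt to 'cpu' in hardware ... */

              /* Record the effective target; the user-visible affinity
               * mask stays untouched. */
              irq_data_update_effective_affinity(d, cpumask_of(cpu));
              return IRQ_SET_MASK_OK_DONE;
      }

      Userspace can then read /proc/irq/<N>/effective_affinity instead of
      guessing from /proc/interrupts.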
    • genirq: Move irq_fixup_move_pending() to core · 36d84fb4
      Authored by Thomas Gleixner
      Now that x86 uses the generic code, the function declaration and inline
      stub can move to the core internal header.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.928156166@linutronix.de
    • genirq/cpuhotplug: Add support for cleaning up move in progress · f0383c24
      Authored by Thomas Gleixner
      In order to move x86 to the generic hotplug migration code, add support for
      cleaning up move-in-progress bits.
      
      On architectures which do not have this x86-specific (mis)feature, the
      code is optimized out by the compiler.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235445.525817311@linutronix.de
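      The "optimized out" claim relies on constant-folding of inline stubs; a
      sketch, assuming the CONFIG_GENERIC_PENDING_IRQ guard the core code uses
      (migrate_one_irq_sketch() is illustrative):

      #ifdef CONFIG_GENERIC_PENDING_IRQ
      static inline bool irq_move_pending(struct irq_data *data)
      {
              return irqd_is_setaffinity_pending(data);
      }
      #else
      static inline bool irq_move_pending(struct irq_data *data)
      {
              return false;   /* constant: the branch below is compiled away */
      }
      #endif

      static bool migrate_one_irq_sketch(struct irq_desc *desc)
      {
              struct irq_data *d = irq_desc_get_irq_data(desc);

              /* Clean up an x86-style move-in-progress before choosing
               * the migration target. */
              if (irq_move_pending(d))
                      irq_fixup_move_pending(desc, true);
              /* ... pick a target and call irq_do_set_affinity() ... */
              return true;
      }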
    • genirq: Provide irq_fixup_move_pending() · cdd16365
      Authored by Thomas Gleixner
      If a CPU goes offline, its interrupts are migrated away, but a pending
      interrupt move which has not yet been made effective is kept pending even
      if the outgoing CPU is the sole target of the pending affinity mask.
      Worse, the pending affinity mask is discarded even if it contains a valid
      subset of the online CPUs.
      
      Implement a helper function which allows these issues to be avoided.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.691345468@linutronix.de
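      A sketch of the helper along the lines of kernel/irq/migration.c (details
      approximate):

      bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear)
      {
              struct irq_data *data = irq_desc_get_irq_data(desc);

              if (!irqd_is_setaffinity_pending(data))
                      return false;

              /* The outgoing CPU could be the last online target of the
               * pending move. The pending mask is useless then: drop it. */
              if (cpumask_any_and(desc->pending_mask, cpu_online_mask) >= nr_cpu_ids) {
                      irqd_clr_move_pending(data);
                      return false;
              }
              if (force_clear)
                      irqd_clr_move_pending(data);
              return true;
      }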
    • genirq: Add missing comment for IRQD_STARTED · 1bb04016
      Authored by Thomas Gleixner
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Link: http://lkml.kernel.org/r/20170619235444.614913014@linutronix.de
  4. 21 Jun 2017, 4 commits
  5. 04 Jun 2017, 1 commit
    • genirq: Handle NOAUTOEN interrupt setup proper · 201d7f47
      Authored by Thomas Gleixner
      If an interrupt is marked NOAUTOEN, then request_irq() installs the action
      but does not enable the interrupt via irq_startup(). The interrupt is
      enabled later from the driver via enable_irq(), and enable_irq() calls
      irq_enable().
      
      That means that for interrupts which have an irq_startup() callback, this
      callback is never invoked. Neither is irq_domain_activate_irq() invoked for
      such interrupts.
      
      If an interrupt depends on irq_startup() or irq_domain_activate_irq() then
      the enable via irq_enable() is not enough.
      
      Add a status flag IRQD_IRQ_STARTED and use it to select the proper
      mechanism in enable_irq(). Use the flag also to avoid pointless calls into
      the low-level functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: dianders@chromium.org
      Cc: jeffy <jeffy.chen@rock-chips.com>
      Cc: Brian Norris <briannorris@chromium.org>
      Cc: tfiga@chromium.org
      Link: http://lkml.kernel.org/r/20170531100212.130986205@linutronix.de
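      A condensed sketch of the resulting selection in the enable path
      (__enable_irq_sketch() is illustrative; the real logic lives in
      kernel/irq/chip.c and kernel/irq/manage.c):

      static void __enable_irq_sketch(struct irq_desc *desc)
      {
              if (!irqd_irq_started(&desc->irq_data)) {
                      /* First enable of a NOAUTOEN interrupt: run the full
                       * startup, which invokes the chip's irq_startup()
                       * callback and activates the irqdomain. */
                      irq_startup(desc, true);
              } else {
                      /* Already started: a plain enable/unmask suffices. */
                      irq_enable(desc);
              }
      }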
  6. 10 Feb 2017, 1 commit
  7. 30 Jan 2017, 1 commit
  8. 13 Sep 2016, 1 commit
  9. 03 Sep 2016, 1 commit
    • genirq/generic_chip: Verify irqs_per_chip <= 32 · f88eecfe
      Authored by Sebastian Frias
      Most (if not all) code here implicitly assumes that the maximum number of
      IRQs per chip will be 32, and thus uses 'u32' or 'unsigned long' for many
      tasks (for example "struct irq_data" declares its 'mask' field as 'u32',
      and "struct irq_chip_generic" declares its 'installed' field as 'unsigned
      long').
      
      However, there is no check to verify that irqs_per_chip is <= 32. Hence,
      calling irq_alloc_domain_generic_chips() with a bigger value leads to
      unexpected behaviour.
      
      Provide a wrapper with a MAYBE_BUILD_BUG_ON(irqs_per_chip > 32) to catch
      such cases.
      
      [ tglx: Reduced changelog to the essential information ]
      Signed-off-by: Sebastian Frias <sf84@laposte.net>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Mason <slash.tmp@free.fr>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Link: http://lkml.kernel.org/r/57B31D94.5040701@laposte.net
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
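      The wrapper amounts to a compile-time guard around the allocation call; a
      sketch (the real macro lives in include/linux/irq.h and the exact bound
      check may differ):

      /* MAYBE_BUILD_BUG_ON() triggers at build time when the argument is a
       * compile-time constant and degrades to a runtime BUG_ON() otherwise. */
      #define irq_alloc_domain_generic_chips(d, irqs_per_chip, num_ct, name,  \
                                             handler, clr, set, gcflags)      \
      ({                                                                      \
              MAYBE_BUILD_BUG_ON(irqs_per_chip > 32);                         \
              __irq_alloc_domain_generic_chips(d, irqs_per_chip, num_ct,      \
                                               name, handler, clr, set,       \
                                               gcflags);                      \
      })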
  10. 04 Jul 2016, 2 commits
  11. 18 Jun 2016, 1 commit
    • genirq: Add untracked irq handler · edd14cfe
      Authored by Keith Busch
      This adds a software irq handler for controllers that multiplex
      interrupts from multiple devices but don't know which device generated
      the interrupt. For these devices, the demultiplexing irq handler must
      check every action for every software irq using the same h/w irq in order
      to find out which device generated the interrupt. This will inevitably
      trigger the spurious-interrupt detection if the irq is noted on every one
      of those passes.
      
      The new irq handler does not track the handling for spurious-interrupt
      detection. An irq that uses this handler also won't have its stats
      tracked, since it didn't generate the interrupt itself, nor will it feed
      the entropy pool, since these events are not random.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: linux-pci@vger.kernel.org
      Cc: Jon Derrick <jonathan.derrick@intel.com>
      Link: http://lkml.kernel.org/r/1466200821-29159-1-git-send-email-keith.busch@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
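      A condensed sketch of such a flow handler, modeled on the tracked
      handle_simple_irq() flow (locking corner cases trimmed):

      void handle_untracked_irq(struct irq_desc *desc)
      {
              unsigned int flags = 0;

              raw_spin_lock(&desc->lock);
              if (!irq_may_run(desc) || !desc->action ||
                  irqd_irq_disabled(&desc->irq_data)) {
                      raw_spin_unlock(&desc->lock);
                      return;
              }
              irqd_set(&desc->irq_data, IRQD_IRQ_INPROGRESS);
              raw_spin_unlock(&desc->lock);

              /* Runs the actions but leaves note_interrupt(), per-IRQ stats
               * and entropy accounting to the (skipped) tracked caller. */
              __handle_irq_event_percpu(desc, &flags);

              raw_spin_lock(&desc->lock);
              irqd_clear(&desc->irq_data, IRQD_IRQ_INPROGRESS);
              raw_spin_unlock(&desc->lock);
      }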
  12. 13 Jun 2016, 1 commit
  13. 02 May 2016, 1 commit
  14. 25 Feb 2016, 6 commits
  15. 24 Feb 2016, 1 commit
  16. 11 Oct 2015, 1 commit
    • genirq: Add flag to force mask in disable_irq[_nosync]() · e9849777
      Authored by Thomas Gleixner
      If an irq chip does not implement the irq_disable callback, we use a lazy
      approach to disabling the interrupt: the interrupt is marked disabled, but
      the interrupt line is not immediately masked in the interrupt chip. It
      only becomes masked if the interrupt is raised while it is marked
      disabled. We use this to avoid possibly expensive mask/unmask operations
      in the common case.
      
      Unfortunately there are devices which do not allow the interrupt to be
      disabled easily at the device level. They are forced to use
      disable_irq_nosync(), which with the lazy scheme can result in taking each
      interrupt twice.
      
      Instead of enforcing the non-lazy mode on all interrupts of an irq chip,
      provide a settings flag which the driver can set for that particular
      interrupt line.
      Reported-and-tested-by: Duc Dang <dhdang@apm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1510092348370.6097@nanos
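      Driver usage is a one-liner; a sketch, assuming a hypothetical device
      'foo' whose interrupt cannot be masked at the source:

      #include <linux/interrupt.h>
      #include <linux/irq.h>

      static void foo_init_irq(unsigned int irq)
      {
              /* Opt this one line out of lazy disabling, so that
               * disable_irq_nosync() masks it immediately at the chip. */
              irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
      }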
  17. 01 Oct 2015, 3 commits
  18. 16 Sep 2015, 3 commits