1. 09 Apr 2015, 1 commit
    • genirq: Allow the irqchip state of an IRQ to be saved/restored · 1b7047ed
      Authored by Marc Zyngier
      There are a number of cases where a kernel subsystem may want to
      introspect the state of an interrupt at the irqchip level:
      
      - When a peripheral is shared between virtual machines,
        its interrupt state becomes part of the guest's state,
        and must be switched accordingly. KVM on arm/arm64 requires
        this for its guest-visible timer
      - Some GPIO controllers seem to require peeking into the
        interrupt controller they are connected to in order to report
        their internal state
      
      This seems to be a pattern that is common enough for the core code
      to try and support without too many horrible hacks. Introduce
      a pair of accessors (irq_get_irqchip_state/irq_set_irqchip_state)
      to retrieve the bits that can be of interest to another subsystem:
      pending, active, and masked.
      
      - irq_get_irqchip_state() returns the state of the interrupt according
        to a parameter set to IRQCHIP_STATE_PENDING, IRQCHIP_STATE_ACTIVE,
        IRQCHIP_STATE_MASKED or IRQCHIP_STATE_LINE_LEVEL.
      - irq_set_irqchip_state() similarly sets the state of the interrupt
        (see the sketch below).
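      
      A minimal usage sketch, assuming a hypothetical guest timer whose
      pending state is saved and restored around a VM switch (the helper
      names and call sites are invented; only irq_get_irqchip_state(),
      irq_set_irqchip_state() and the IRQCHIP_STATE_* values come from
      this change):
      
        #include <linux/interrupt.h>
        
        static bool vtimer_pending;
        
        /* On VM exit: read the PENDING bit from the irqchip backing host_irq. */
        static void vtimer_save_state(unsigned int host_irq)
        {
                WARN_ON(irq_get_irqchip_state(host_irq, IRQCHIP_STATE_PENDING,
                                              &vtimer_pending));
        }
        
        /* On VM entry: re-inject the saved PENDING bit into the irqchip. */
        static void vtimer_restore_state(unsigned int host_irq)
        {
                WARN_ON(irq_set_irqchip_state(host_irq, IRQCHIP_STATE_PENDING,
                                              vtimer_pending));
        }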
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
      Tested-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Phong Vo <pvo@apm.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Tin Huynh <tnhuynh@apm.com>
      Cc: Y Vo <yvo@apm.com>
      Cc: Toan Le <toanle@apm.com>
      Cc: Bjorn Andersson <bjorn@kryo.se>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Link: http://lkml.kernel.org/r/1426676484-21812-2-git-send-email-marc.zyngier@arm.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1b7047ed
  2. 05 Mar 2015, 1 commit
    • genirq / PM: Add flag for shared NO_SUSPEND interrupt lines · 17f48034
      Authored by Rafael J. Wysocki
      It currently is required that all users of NO_SUSPEND interrupt
      lines pass the IRQF_NO_SUSPEND flag when requesting the IRQ or the
      WARN_ON_ONCE() in irq_pm_install_action() will trigger.  That is
      done to warn about situations in which unprepared interrupt handlers
      may be run unnecessarily for suspended devices and may attempt to
      access those devices by mistake.  However, it may cause drivers
      that have no technical reason for using IRQF_NO_SUSPEND to set
      that flag just because they happen to share the interrupt line
      with something like a timer.
      
      Moreover, the generic handling of wakeup interrupts introduced by
      commit 9ce7a258 (genirq: Simplify wakeup mechanism) only works
      for IRQs without any NO_SUSPEND users, so the drivers of wakeup
      devices needing to use shared NO_SUSPEND interrupt lines for
      signaling system wakeup generally have to detect wakeup in their
      interrupt handlers.  Thus if they happen to share an interrupt line
      with a NO_SUSPEND user, they also need to request that their
      interrupt handlers be run after suspend_device_irqs().
      
      In both cases the reason for using IRQF_NO_SUSPEND is not because
      the driver in question has a genuine need to run its interrupt
      handler after suspend_device_irqs(), but because it happens to
      share the line with some other NO_SUSPEND user.  Otherwise, the
      driver would do without IRQF_NO_SUSPEND just fine.
      
      To make it possible to specify that condition explicitly, introduce
      a new IRQ action handler flag for shared IRQs, IRQF_COND_SUSPEND,
      that, when set, will indicate to the IRQ core that the interrupt
      user is generally fine with suspending the IRQ, but it also can
      tolerate handler invocations after suspend_device_irqs() and, in
      particular, it is capable of detecting system wakeup and triggering
      it as appropriate from its interrupt handler.
      
      That will allow us to work around a problem with a shared timer
      interrupt line on at91 platforms.
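      
      A minimal sketch of how a driver might use the new flag (the device
      handling and names are hypothetical; IRQF_SHARED, IRQF_COND_SUSPEND
      and pm_wakeup_event() are the existing interfaces):
      
        #include <linux/device.h>
        #include <linux/interrupt.h>
        #include <linux/pm_wakeup.h>
        
        /* Hypothetical handler: report a possible wakeup, then do the work. */
        static irqreturn_t foo_irq_handler(int irq, void *dev_id)
        {
                struct device *dev = dev_id;
        
                pm_wakeup_event(dev, 0);
                return IRQ_HANDLED;
        }
        
        static int foo_request_irq(struct device *dev, int irq)
        {
                /*
                 * IRQF_COND_SUSPEND: this user is fine with the line being
                 * suspended, but can also tolerate being invoked after
                 * suspend_device_irqs() when a NO_SUSPEND user shares the line.
                 */
                return request_irq(irq, foo_irq_handler,
                                   IRQF_SHARED | IRQF_COND_SUSPEND, "foo", dev);
        }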
      
      Link: http://marc.info/?l=linux-kernel&m=142252777602084&w=2
      Link: http://marc.info/?t=142252775300011&r=1&w=2
      Link: https://lkml.org/lkml/2014/12/15/552
      Reported-by: Boris Brezillon <boris.brezillon@free-electrons.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      17f48034
  3. 18 Feb 2015, 1 commit
    • genirq: Provide disable_hardirq() · 02cea395
      Authored by Peter Zijlstra
      For things like netpoll there is a need to disable an interrupt from
      atomic context. Currently netpoll uses disable_irq(), which will
      sleep-wait on threaded handlers, and thus forced_irqthreads breaks
      things.
      
      Provide disable_hardirq(), which uses synchronize_hardirq() to only wait
      for active hardirq handlers; also change synchronize_hardirq() to
      return the status of threaded handlers.
      
      This will allow one to try-disable an interrupt from atomic context, or
      in case of request_threaded_irq() to only wait for the hardirq part.
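      
      A hedged sketch of the intended use from atomic context (the adapter
      structure and poll helper are hypothetical; disable_hardirq() and
      enable_irq() are the real interfaces):
      
        #include <linux/interrupt.h>
        
        struct foo_adapter {
                int irq;
        };
        
        static void foo_poll_rings(struct foo_adapter *adap) { /* hypothetical */ }
        
        static void foo_poll_controller(struct foo_adapter *adap)
        {
                /*
                 * Wait only for the hard IRQ part; returns true when the line
                 * is disabled and no threaded handler is currently running.
                 */
                if (disable_hardirq(adap->irq))
                        foo_poll_rings(adap);
        
                enable_irq(adap->irq);
        }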
      Suggested-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: David Miller <davem@davemloft.net>
      Cc: Eyal Perry <eyalpe@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Quentin Lambert <lambert.quentin@gmail.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Link: http://lkml.kernel.org/r/20150205130623.GH5029@twins.programming.kicks-ass.net
      [ Fixed typos and such. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      02cea395
  4. 10 Feb 2015, 1 commit
  5. 23 Jan 2015, 1 commit
    • genirq: Set initial affinity in irq_set_affinity_hint() · e2e64a93
      Authored by Jesse Brandeburg
      Problem:
      The default behavior of the kernel is somewhat undesirable, as all
      requested interrupts end up on CPU0 after registration.  A user can
      run the irqbalance daemon, or can manually configure smp_affinity via
      the proc filesystem, but the default affinity of the interrupts for
      all devices is always CPU zero.  This can cause performance problems,
      or very heavy use of a single core, if it is not noticed and fixed by
      the user.
      
      Solution:
      Enable the setting of the initial affinity directly when the driver
      sets a hint.
      
      This means that kernel drivers can include an initial affinity
      setting for the interrupt, instead of all interrupts starting out
      life on CPU0 (see the usage sketch after the caller list below). Of
      course, if irqbalance is still running, the interrupts will get moved
      as before.
      
      This function is currently called by drivers in the block, crypto,
      infiniband, ethernet and scsi trees, but only by a handful, so these
      will be the devices affected by this change.
      
      Tested on i40e: the default interrupts were spread across the CPUs
      according to the hint.
      
      drivers/block/mtip32xx/mtip32xx.c:3
      drivers/block/nvme-core.c:2
      drivers/crypto/qat/qat_dh895xcc/adf_isr.c:3
      drivers/infiniband/hw/qib/qib_iba7322.c:2
      drivers/net/ethernet/intel/i40e/i40e_main.c:3
      drivers/net/ethernet/intel/i40evf/i40evf_main.c:3
      drivers/net/ethernet/intel/ixgbe/ixgbe_main.c:3
      drivers/net/ethernet/mellanox/mlx4/en_cq.c:2
      drivers/scsi/hpsa.c:3
      drivers/scsi/lpfc/lpfc_init.c:3
      drivers/scsi/megaraid/megaraid_sas_base.c:8
      drivers/soc/ti/knav_qmss_acc.c:1
      drivers/soc/ti/knav_qmss_queue.c:2
      drivers/virtio/virtio_pci_common.c:2
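      
      A usage sketch for the hint (the vector-spreading loop is a
      hypothetical driver pattern; irq_set_affinity_hint(), cpumask_of()
      and num_online_cpus() are the real interfaces):
      
        #include <linux/interrupt.h>
        #include <linux/cpumask.h>
        
        /* Spread the vectors of a hypothetical multi-queue device. */
        static void foo_spread_vectors(unsigned int base_irq, int nr_vectors)
        {
                int v;
        
                for (v = 0; v < nr_vectors; v++)
                        irq_set_affinity_hint(base_irq + v,
                                              cpumask_of(v % num_online_cpus()));
        }
        
        /* The hint must be cleared again before the IRQ is freed: */
        /* irq_set_affinity_hint(irq, NULL); */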
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Cc: netdev@vger.kernel.org
      Link: http://lkml.kernel.org/r/20141219012206.4220.27491.stgit@jbrandeb-cp2.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e2e64a93
  6. 23 Nov 2014, 1 commit
    • genirq: Add IRQ_SET_MASK_OK_DONE to support stacked irqchip · 2cb62547
      Authored by Jiang Liu
      Add IRQ_SET_MASK_OK_DONE in addition to IRQ_SET_MASK_OK and
      IRQ_SET_MASK_OK_NOCOPY to support stacked irqchips. To the irq core,
      IRQ_SET_MASK_OK_DONE is the same as IRQ_SET_MASK_OK. To a stacked
      irqchip it means that the ascendant irqchips have done all the work
      and no more handling is needed in the descendant irqchips.
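      
      A sketch of how a descendant irqchip in a hierarchical setup might
      consume the new value, assuming CONFIG_IRQ_DOMAIN_HIERARCHY (the chip
      and local-route helper are hypothetical; parent_data, the
      irq_set_affinity callback and IRQ_SET_MASK_OK_DONE are the real
      interfaces):
      
        #include <linux/irq.h>
        #include <linux/cpumask.h>
        
        static void foo_update_local_route(struct irq_data *d) { /* hypothetical */ }
        
        static int foo_child_set_affinity(struct irq_data *d,
                                          const struct cpumask *mask, bool force)
        {
                struct irq_data *parent = d->parent_data;
                int ret;
        
                ret = parent->chip->irq_set_affinity(parent, mask, force);
                /* ..._OK_DONE: the parent already did everything needed. */
                if (ret >= 0 && ret != IRQ_SET_MASK_OK_DONE)
                        foo_update_local_route(d);
        
                return ret;
        }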
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
      Cc: Yijing Wang <wangyijing@huawei.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2cb62547
  7. 01 Sep 2014, 2 commits
  8. 01 Aug 2014, 2 commits
  9. 24 Jul 2014, 1 commit
  10. 15 Jul 2014, 1 commit
    • PM / sleep / irq: Do not suspend wakeup interrupts · d709f7bc
      Authored by Rafael J. Wysocki
      If an IRQ has been configured for wakeup via enable_irq_wake(), the
      driver that has done so must be prepared to receive interrupts
      after suspend_device_irqs() has returned, so there is no need to
      "suspend" such IRQs.  Moreover, if drivers using enable_irq_wake()
      actually want to receive interrupts after suspend_device_irqs() has
      returned, they need to add IRQF_NO_SUSPEND to the IRQ flags while
      requesting the IRQs, which shouldn't be necessary (it also goes a bit
      too far, as IRQF_NO_SUSPEND causes the IRQ to be ignored by
      suspend_device_irqs() all the time regardless of whether or not it
      has been configured for signaling wakeup).
      
      For the above reasons, make __disable_irq() ignore IRQ descriptors
      with IRQD_WAKEUP_STATE set when its suspend argument is true which
      effectively causes them to behave like IRQs with IRQF_NO_SUSPEND
      set.
      
      This also allows IRQs configured for wakeup via enable_irq_wake()
      to work as wakeup interrupts for the "freeze" (suspend-to-idle)
      sleep mode automatically just like for any other sleep states.
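      
      A hedged sketch of the resulting driver pattern (the device structure
      and callbacks are hypothetical; enable_irq_wake(), disable_irq_wake()
      and device_may_wakeup() are the real interfaces):
      
        #include <linux/device.h>
        #include <linux/interrupt.h>
        #include <linux/pm_wakeup.h>
        
        struct foo_device {
                struct device *dev;
                int irq;
        };
        
        static int foo_suspend(struct foo_device *foo)
        {
                /* Mark the line as a wakeup source; no IRQF_NO_SUSPEND needed. */
                if (device_may_wakeup(foo->dev))
                        enable_irq_wake(foo->irq);
                return 0;
        }
        
        static int foo_resume(struct foo_device *foo)
        {
                if (device_may_wakeup(foo->dev))
                        disable_irq_wake(foo->irq);
                return 0;
        }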
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Li Aubrey <aubrey.li@linux.intel.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
      Link: http://lkml.kernel.org/r/4679574.kGUnqAuNl9@vostro.rjw.lan
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      d709f7bc
  11. 04 May 2014, 1 commit
    • genirq: Sanitize spurious interrupt detection of threaded irqs · 1e77d0a1
      Authored by Thomas Gleixner
      Till reported that the spurious interrupt detection of threaded
      interrupts is broken in two ways:
      
      - note_interrupt() is called for each action thread of a shared
        interrupt line. That's wrong, as we are only interested in whether
        none of the device drivers felt responsible for the interrupt; by
        calling it multiple times for a single interrupt line we account
        IRQ_NONE even if one of the drivers felt responsible.
      
      - note_interrupt() when called from the thread handler is not
        serialized. That leaves the members of irq_desc which are used for
        the spurious detection unprotected.
      
      To solve this we need to defer the spurious detection of a threaded
      interrupt to the next hardware interrupt context where we have
      implicit serialization.
      
      If note_interrupt is called with action_ret == IRQ_WAKE_THREAD, we
      check whether the previous interrupt requested a deferred check. If
      not, we request a deferred check for the next hardware interrupt and
      return. 
      
      If set, we check whether one of the interrupt threads signaled
      success. Depending on this information we feed the result into the
      spurious detector.
      
      If one primary handler of a shared interrupt returns IRQ_HANDLED we
      disable the deferred check of irq threads on the same line, as we have
      found at least one device driver who cared.
      Reported-by: Till Straumann <strauman@slac.stanford.edu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Austin Schuh <austin@peloton-tech.com>
      Cc: Oliver Hartkopp <socketcan@hartkopp.net>
      Cc: Wolfgang Grandegger <wg@grandegger.com>
      Cc: Pavel Pisa <pisa@cmp.felk.cvut.cz>
      Cc: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: linux-can@vger.kernel.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1303071450130.22263@ionos
      1e77d0a1
  12. 18 Apr 2014, 1 commit
    • genirq: Allow forcing cpu affinity of interrupts · 01f8fa4f
      Authored by Thomas Gleixner
      The current implementation of irq_set_affinity() refuses rightfully to
      route an interrupt to an offline cpu.
      
      But there is a special case, where this is actually desired. Some of
      the ARM SoCs have per cpu timers which require setting the affinity
      during cpu startup where the cpu is not yet in the online mask.
      
      If we can't do that, then the local timer interrupt for the about to
      become online cpu is routed to some random online cpu.
      
      The developers of the affected machines tried to work around that
      issue, but that results in a massive mess in that timer code.
      
      We have a yet unused argument in the set_affinity callbacks of the irq
      chips, which I added back then for a similar reason. It was never
      required, so it went unused. But I'm happy that I never removed it.
      
      That allows us to implement a sane handling of the above scenario. So
      the affected SoC drivers can add the required force handling to their
      interrupt chip, switch the timer code to irq_force_affinity() and
      things just work.
      
      This does not affect any existing user of irq_set_affinity().
      
      Tagged for stable to allow a simple fix of the affected SoC clock
      event drivers.
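      
      A usage sketch (the timer setup function is hypothetical;
      irq_force_affinity() and cpumask_of() are the real interfaces, and
      the irqchip must implement the force handling for this to work):
      
        #include <linux/interrupt.h>
        #include <linux/cpumask.h>
        
        /* Runs on the CPU that is coming up, before it is marked online. */
        static void foo_timer_starting_cpu(unsigned int cpu, unsigned int timer_irq)
        {
                /* irq_set_affinity() would refuse the not-yet-online target. */
                irq_force_affinity(timer_irq, cpumask_of(cpu));
                enable_irq(timer_irq);
        }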
      Reported-and-tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Cc: Tomasz Figa <t.figa@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20140416143315.717251504@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      01f8fa4f
  13. 14 Mar 2014, 1 commit
  14. 12 Mar 2014, 1 commit
  15. 27 Feb 2014, 1 commit
    • genirq: Remove racy waitqueue_active check · c685689f
      Authored by Chuansheng Liu
      We hit one rare case below:
      
      T1 is calling disable_irq(), but hangs at synchronize_irq()
      forever;
      the corresponding irq thread is in the sleeping state;
      and all CPUs are in the idle state.
      
      After analysis, we found one possible scenario which causes T1 to
      wait there forever:
      CPU0                                       CPU1
       synchronize_irq()
        wait_event()
          spin_lock()
                                                 atomic_dec_and_test(&threads_active)
            insert the __wait into queue
          spin_unlock()
                                                 if(waitqueue_active)
          atomic_read(&threads_active)
                                                   wake_up()
      
      Here, after the __wait entry has been inserted into the queue on CPU0,
      and before the queue-empty test on CPU1, there is no barrier, so the
      update may not be visible to CPU1 immediately even though CPU0 has
      already updated the queue list.
      The same applies to CPU0's atomic_read() of threads_active.
      
      So we would need an smp_mb() before waitqueue_active(); but simply
      removing the waitqueue_active() check solves it as well, and it makes
      things simple and clear.
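      
      A simplified sketch of the helper in kernel/irq/manage.c before and
      after the change (the field names follow struct irq_desc; the body is
      illustrative, not the verbatim kernel code):
      
        #include <linux/irqdesc.h>
        #include <linux/wait.h>
        
        static void wake_threads_waitq(struct irq_desc *desc)
        {
                /* Racy variant (removed):
                 *      if (atomic_dec_and_test(&desc->threads_active) &&
                 *          waitqueue_active(&desc->wait_for_threads))
                 *              wake_up(&desc->wait_for_threads);
                 */
        
                /* Fixed variant: wake up unconditionally once the count hits zero. */
                if (atomic_dec_and_test(&desc->threads_active))
                        wake_up(&desc->wait_for_threads);
        }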
      Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
      Cc: Xiaoming Wang <xiaoming.wang@intel.com>
      Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c685689f
  16. 20 Feb 2014, 3 commits
    • b04c644e
    • genirq: Provide irq_wake_thread() · a92444c6
      Authored by Thomas Gleixner
      In the course of the sdhci/sdio discussion with Russell about killing
      the sdio kthread hackery, we discovered the need to be able to wake an
      interrupt thread from software.
      
      The rationale for this is that sdio hardware can lack proper
      interrupt support for certain features. So the driver needs to poll
      the status registers, but at the same time it needs to be woken up by
      a hardware interrupt.
      
      To be able to get rid of the home-brewed kthread construct of sdio we
      need a way to wake an irq thread independent of an actual hardware
      interrupt.
      
      Provide an irq_wake_thread() function which wakes up the thread
      associated with a given dev_id. This allows sdio to invoke the irq
      thread from the hardware irq handler via the IRQ_WAKE_THREAD return
      value, and provides a way to wake it via a timer for the polling
      scenarios. That allows the sdio logic to be simplified
      significantly.
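      
      A hedged sketch of the resulting pattern (the host structure and
      handlers are hypothetical; irq_wake_thread(), request_threaded_irq()
      and IRQ_WAKE_THREAD are the real interfaces):
      
        #include <linux/interrupt.h>
        
        struct foo_host {
                int irq;
        };
        
        /* Hard handler: acknowledge and defer the real work to the thread. */
        static irqreturn_t foo_hardirq(int irq, void *dev_id)
        {
                return IRQ_WAKE_THREAD;
        }
        
        /* Threaded handler: polls the slow status registers. */
        static irqreturn_t foo_thread_fn(int irq, void *dev_id)
        {
                return IRQ_HANDLED;
        }
        
        /*
         * Software-triggered path (e.g. from a poll timer): wake the same
         * thread without a hardware interrupt. dev_id must be the cookie
         * that was passed to request_threaded_irq().
         */
        static void foo_poll(struct foo_host *host)
        {
                irq_wake_thread(host->irq, host);
        }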
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Chris Ball <chris@printf.net>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140215003823.772565780@linutronix.de
      a92444c6
    • genirq: Provide synchronize_hardirq() · 18258f72
      Authored by Thomas Gleixner
      synchronize_irq() waits for hard irq and threaded handlers to complete
      before returning. For some special cases we only need to make sure
      that the hard interrupt part of the irq line is not in progress when
      we disabled the - possibly shared - interrupt at the device level.
      
      A proper use case for this was provided by Russell. The sdhci driver
      requires some irq triggered functions to be run in thread context. The
      current implementation of the thread context is a sdio private kthread
      construct, which has quite some shortcomings. These can be avoided
      when the thread is directly associated to the device interrupt via the
      generic threaded irq infrastructure.
      
      There is, however, a corner case related to runtime power management,
      where one side disables the device interrupts at the device level and
      needs to make sure that an already running hard interrupt handler has
      completed before proceeding further. That hard interrupt handler
      might wake the associated thread, which in turn can request the
      runtime PM to re-enable the device. Using synchronize_irq() leads
      to an immediate deadlock: the irq thread waits for the PM lock, and
      synchronize_irq() waits for the irq thread to complete.
      
      Because it is sufficient for this case to ensure that no hard irq
      handler is executing, a new function which avoids the check for the
      thread is required.
      
      Add a function, which just monitors the hard irq parts and ignores the
      threaded handlers.
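      
      A hedged sketch of such a runtime-PM path (the host structure and the
      device-level mask helper are hypothetical; synchronize_hardirq() is
      the real interface):
      
        #include <linux/interrupt.h>
        
        struct foo_host {
                int irq;
        };
        
        static void foo_mask_device_irqs(struct foo_host *host) { /* hypothetical */ }
        
        static int foo_runtime_suspend(struct foo_host *host)
        {
                foo_mask_device_irqs(host);
                /*
                 * Wait only for a hard handler that may already be running.
                 * synchronize_irq() would also wait for the irq thread, which
                 * may be blocked waiting for this very PM transition: deadlock.
                 */
                synchronize_hardirq(host->irq);
                return 0;
        }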
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Russell King <linux@arm.linux.org.uk>
      Cc: Chris Ball <chris@printf.net>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140215003823.653236081@linutronix.de
      18258f72
  17. 28 Oct 2013, 1 commit
  18. 18 Oct 2013, 1 commit
  19. 28 Jun 2013, 1 commit
  20. 11 Jun 2013, 1 commit
  21. 19 Feb 2013, 1 commit
  22. 08 Feb 2013, 1 commit
  23. 19 Dec 2012, 1 commit
  24. 13 Nov 2012, 1 commit
    • genirq: Always force thread affinity · 04aa530e
      Authored by Thomas Gleixner
      Sankara reported that the genirq core code fails to adjust the
      affinity of an interrupt thread in several cases:
      
       1) On request/setup_irq() the call to setup_affinity() happens before
          the new action is registered, so the new thread is not notified.
      
       2) For secondary shared interrupts nothing notifies the new thread to
          change its affinity.
      
       3) Interrupts which have the IRQ_NO_BALANCING flag set do not move
          the thread either.
      
      Fix this by setting the thread affinity flag right on thread creation
      time. This ensures that under all circumstances the thread moves to
      the right place. Requires a check in irq_thread_check_affinity for an
      existing affinity mask (CONFIG_CPUMASK_OFFSTACK=y).
      Reported-and-tested-by: Sankara Muthukrishnan <sankara.m@gmail.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1209041738200.2754@ionos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      04aa530e
  25. 01 Nov 2012, 2 commits
  26. 25 Jul 2012, 1 commit
  27. 23 Jul 2012, 2 commits
  28. 19 Jul 2012, 1 commit
  29. 01 Jun 2012, 1 commit
  30. 25 May 2012, 1 commit
  31. 24 May 2012, 1 commit
    • genirq: reimplement exit_irq_thread() hook via task_work_add() · 4d1d61a6
      Authored by Oleg Nesterov
      exit_irq_thread() and task->irq_thread are needed to handle the
      unexpected (and unlikely) exit of an irq thread.
      
      We can use task_work instead and make this all private to
      kernel/irq/manage.c; a cleanup plus a micro-optimization.
      
      1. rename exit_irq_thread() to irq_thread_dtor(), make it
         static, and move it up before irq_thread().
      
      2. change irq_thread() to do task_work_add(irq_thread_dtor)
         at the start and task_work_cancel() before return.
      
         tracehook_notify_resume() can never play with kthreads,
         only do_exit()->exit_task_work() can call the callback
         and this is what we want.
      
      3. remove task_struct->irq_thread and the special hook
         in do_exit().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Smith <dsmith@redhat.com>
      Cc: "Frank Ch. Eigler" <fche@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4d1d61a6
  32. 22 May 2012, 1 commit
  33. 19 Apr 2012, 2 commits
    • genirq: Be more informative on irq type mismatch · f5d89470
      Authored by Thomas Gleixner
      We require that shared interrupts agree on a few flag settings. Right
      now we silently return with an error code without giving any hint why
      we reject it.
      
      Make the printout unconditional and actually useful by printing the
      flags of the new and the already registered action.
      
      Convert all printks to pr_* and use a proper prefix while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f5d89470
    • genirq: Reject bogus threaded irq requests · 1c6c6952
      Authored by Thomas Gleixner
      Requesting a threaded interrupt without a primary handler and without
      IRQF_ONESHOT set is dangerous.
      
      The core will use the default primary handler for it, which merely
      wakes the thread. For a level type interrupt this results in an
      interrupt storm, because the interrupt line is re-enabled after the
      primary handler runs. The device still has the line asserted, which
      brings us back into the primary handler.
      
      While this works for edge type interrupts, we play it safe and reject
      unconditionally because we can't say for sure which type this
      interrupt really has. The type flags are unreliable as the underlying
      chip implementation can override them. And we cannot assume that
      developers using that interface know what they are doing.
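      
      A minimal example of a request the core still accepts (the handler
      and name are placeholders; the key point is that a thread-only
      request must carry IRQF_ONESHOT or it is now rejected with -EINVAL):
      
        #include <linux/interrupt.h>
        
        static irqreturn_t foo_thread_fn(int irq, void *dev_id)
        {
                return IRQ_HANDLED;
        }
        
        static int foo_request(unsigned int irq, void *dev)
        {
                /* No primary handler: the line stays masked until the thread ran. */
                return request_threaded_irq(irq, NULL, foo_thread_fn,
                                            IRQF_ONESHOT, "foo", dev);
        }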
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1c6c6952