1. 18 April 2014, 1 commit
    • genirq: Allow forcing cpu affinity of interrupts · 01f8fa4f
      Committed by Thomas Gleixner
      The current implementation of irq_set_affinity() rightfully refuses to
      route an interrupt to an offline cpu.
      
      But there is a special case, where this is actually desired. Some of
      the ARM SoCs have per cpu timers which require setting the affinity
      during cpu startup where the cpu is not yet in the online mask.
      
      If we can't do that, then the local timer interrupt of the cpu which is
      about to come online is routed to some random online cpu.
      
      The developers of the affected machines tried to work around that
      issue, but that results in a massive mess in that timer code.
      
      We have a so far unused argument in the set_affinity callbacks of the irq
      chips, which I added back then for a similar reason. It was never
      required, so it went unused. But I'm happy that I never removed it.
      
      That allows us to implement a sane handling of the above scenario. So
      the affected SoC drivers can add the required force handling to their
      interrupt chip, switch the timer code to irq_force_affinity() and
      things just work.
      
      This does not affect any existing user of irq_set_affinity().
      
      Tagged for stable to allow a simple fix of the affected SoC clock
      event drivers.
      Reported-and-tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Cc: Tomasz Figa <t.figa@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20140416143315.717251504@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      01f8fa4f
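      A minimal sketch of how an affected SoC clock event driver could use the
      new call (the setup function and the evt device are illustrative;
      irq_force_affinity() and cpumask_of() are the real kernel APIs):

      #include <linux/interrupt.h>
      #include <linux/cpumask.h>
      #include <linux/clockchips.h>
      #include <linux/smp.h>

      /* Runs on the CPU that is coming up, before it is set in the online mask. */
      static int my_soc_local_timer_setup(struct clock_event_device *evt)
      {
              /*
               * irq_set_affinity() would refuse this because the CPU is not
               * online yet; irq_force_affinity() succeeds provided the irq
               * chip's .irq_set_affinity callback honours the force flag.
               */
              irq_force_affinity(evt->irq, cpumask_of(smp_processor_id()));
              return 0;
      }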
  2. 14 March 2014, 1 commit
  3. 12 March 2014, 1 commit
  4. 27 February 2014, 1 commit
    • genirq: Remove racy waitqueue_active check · c685689f
      Committed by Chuansheng Liu
      We hit one rare case below:

      T1 calls disable_irq(), but hangs at synchronize_irq()
      forever;
      the corresponding irq thread is in the sleeping state;
      and all CPUs are in the idle state.
      
      After analysis, we found there is one possible scenario which
      causes T1 to wait there forever:
      CPU0                                       CPU1
       synchronize_irq()
        wait_event()
          spin_lock()
                                                 atomic_dec_and_test(&threads_active)
            insert the __wait into queue
          spin_unlock()
                                                 if(waitqueue_active)
          atomic_read(&threads_active)
                                                   wake_up()
      
      Here, after the __wait entry has been inserted into the queue on CPU0
      and before CPU1 tests whether the queue is empty, there is no barrier.
      The queue update may therefore not be visible to CPU1 immediately,
      even though CPU0 has already updated the queue list.
      The same applies to CPU0's atomic_read() of threads_active.
      
      So we would need an smp_mb() before waitqueue_active(), but removing
      the waitqueue_active() check solves it as well and it makes
      things simple and clear.
      Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
      Cc: Xiaoming Wang <xiaoming.wang@intel.com>
      Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c685689f
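      An illustrative sketch of the waker-side pattern after the change (field
      names follow struct irq_desc; this is not the exact diff). wake_up() takes
      the waitqueue lock itself, so an unconditional call cannot miss a waiter
      that has already queued itself on the other CPU:

      #include <linux/wait.h>
      #include <linux/atomic.h>

      struct desc_like {                      /* stand-in for struct irq_desc */
              atomic_t threads_active;
              wait_queue_head_t wait_for_threads;
      };

      static void wake_threads_waitq(struct desc_like *desc)
      {
              /*
               * No waitqueue_active() check: that test is lockless and would
               * need an smp_mb() to pair with the waiter's prepare_to_wait().
               * Waking unconditionally is simple and race-free.
               */
              if (atomic_dec_and_test(&desc->threads_active))
                      wake_up(&desc->wait_for_threads);
      }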
  5. 20 February 2014, 3 commits
    • b04c644e
    • genirq: Provide irq_wake_thread() · a92444c6
      Committed by Thomas Gleixner
      In course of the sdhci/sdio discussion with Russell about killing the
      sdio kthread hackery we discovered the need to be able to wake an
      interrupt thread from software.
      
      The rationale for this is that sdio hardware can lack proper
      interrupt support for certain features. So the driver needs to poll
      the status registers, but at the same time it needs to be woken up by
      a hardware interrupt.
      
      To be able to get rid of the home-brewed kthread construct of sdio we
      need a way to wake an irq thread independent of an actual hardware
      interrupt.
      
      Provide an irq_wake_thread() function which wakes up the thread that
      is associated with a given dev_id. This allows sdio to invoke the irq
      thread from the hardware irq handler via the IRQ_WAKE_THREAD return
      value and provides a possibility to wake it via a timer for the
      polling scenarios. That allows the sdio logic to be simplified
      significantly.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Chris Ball <chris@printf.net>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140215003823.772565780@linutronix.de
      a92444c6
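      A hedged sketch of the intended usage pattern (struct my_card and the
      handler names are made up; irq_wake_thread(), request_threaded_irq() and
      IRQ_WAKE_THREAD are the real kernel API):

      #include <linux/interrupt.h>

      struct my_card {
              int irq;
      };

      static irqreturn_t my_card_hardirq(int irq, void *dev_id)
      {
              /* Real hardware interrupt: hand the work to the irq thread. */
              return IRQ_WAKE_THREAD;
      }

      static irqreturn_t my_card_thread_fn(int irq, void *dev_id)
      {
              /* Poll status registers, process card events, ... */
              return IRQ_HANDLED;
      }

      static void my_card_poll(struct my_card *card)
      {
              /*
               * Feature without hardware interrupt support: wake the same irq
               * thread from software, e.g. from a timer.  dev_id must match
               * the one passed to request_threaded_irq().
               */
              irq_wake_thread(card->irq, card);
      }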
    • genirq: Provide synchronize_hardirq() · 18258f72
      Committed by Thomas Gleixner
      synchronize_irq() waits for hard irq and threaded handlers to complete
      before returning. For some special cases we only need to make sure
      that the hard interrupt part of the irq line is not in progress when
      we have disabled the - possibly shared - interrupt at the device level.
      
      A proper use case for this was provided by Russell. The sdhci driver
      requires some irq-triggered functions to be run in thread context. The
      current implementation of the thread context is an sdio-private kthread
      construct, which has quite some shortcomings. These can be avoided
      when the thread is directly associated with the device interrupt via the
      generic threaded irq infrastructure.
      
      However, there is a corner case related to runtime power management
      where one side disables the device interrupts at the device level and
      needs to make sure that an already running hard interrupt handler has
      completed before proceeding further. That hard interrupt handler might
      wake the associated thread, which in turn can request the runtime PM to
      reenable the device. Using synchronize_irq() then leads to an immediate
      deadlock: the irq thread waits for the PM lock while synchronize_irq()
      waits for the irq thread to complete.
      
      Since it is sufficient for this case to ensure that no hard irq handler
      is executing, a new function which avoids the check for the thread is
      required.
      
      Add a function, which just monitors the hard irq parts and ignores the
      threaded handlers.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Russell King <linux@arm.linux.org.uk>
      Cc: Chris Ball <chris@printf.net>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140215003823.653236081@linutronix.de
      18258f72
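      A sketch of the runtime-PM corner case described above (device structure,
      lock handling and the masking helper are illustrative; synchronize_hardirq()
      is the new function):

      #include <linux/interrupt.h>

      struct my_dev {
              int irq;
      };

      static void my_dev_mask_irqs(struct my_dev *md)
      {
              /* Disable interrupt generation at the device level (illustrative). */
      }

      static int my_dev_runtime_suspend(struct my_dev *md)
      {
              my_dev_mask_irqs(md);

              /*
               * Only wait for a hard handler that may still be running.
               * synchronize_irq() would also wait for the irq thread, which
               * may itself be blocked on the PM lock: instant deadlock.
               */
              synchronize_hardirq(md->irq);

              return 0;
      }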
  6. 28 October 2013, 1 commit
  7. 18 October 2013, 1 commit
  8. 28 June 2013, 1 commit
  9. 11 June 2013, 1 commit
  10. 19 February 2013, 1 commit
  11. 08 February 2013, 1 commit
  12. 19 December 2012, 1 commit
  13. 13 November 2012, 1 commit
    • genirq: Always force thread affinity · 04aa530e
      Committed by Thomas Gleixner
      Sankara reported that the genirq core code fails to adjust the
      affinity of an interrupt thread in several cases:
      
       1) On request/setup_irq() the call to setup_affinity() happens before
          the new action is registered, so the new thread is not notified.
      
       2) For secondary shared interrupts nothing notifies the new thread to
          change its affinity.
      
       3) Interrupts which have the IRQ_NO_BALANCING flag set are not moving
          the thread either.
      
      Fix this by setting the thread affinity flag right at thread creation
      time. This ensures that under all circumstances the thread moves to
      the right place. It requires a check in irq_thread_check_affinity for an
      existing affinity mask (CONFIG_CPUMASK_OFFSTACK=y).
      Reported-and-tested-by: Sankara Muthukrishnan <sankara.m@gmail.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1209041738200.2754@ionos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      04aa530e
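      A condensed sketch of the fix (abridged core code; irqaction.thread_flags
      and the IRQTF_AFFINITY bit are the names used in kernel/irq/, error
      handling is omitted and the helper name is illustrative):

      #include <linux/interrupt.h>
      #include <linux/kthread.h>
      #include <linux/bitops.h>

      extern int irq_thread(void *data);      /* genirq-internal thread function */

      static void setup_irq_thread_sketch(struct irqaction *new, unsigned int irq)
      {
              new->thread = kthread_create(irq_thread, new, "irq/%d-%s",
                                           irq, new->name);
              /*
               * Force the thread to pick up the irq affinity on its first
               * wakeup, regardless of how or when the action was installed.
               */
              set_bit(IRQTF_AFFINITY, &new->thread_flags);
      }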
  14. 01 November 2012, 2 commits
  15. 25 July 2012, 1 commit
  16. 23 July 2012, 2 commits
  17. 19 July 2012, 1 commit
  18. 01 June 2012, 1 commit
  19. 25 May 2012, 1 commit
  20. 24 May 2012, 1 commit
    • genirq: reimplement exit_irq_thread() hook via task_work_add() · 4d1d61a6
      Committed by Oleg Nesterov
      exit_irq_thread() and task->irq_thread are needed to handle the unexpected
      (and unlikely) exit of an irq thread.

      We can use task_work instead and make this all private to
      kernel/irq/manage.c; a cleanup plus a micro-optimization.
      
      1. rename exit_irq_thread() to irq_thread_dtor(), make it
         static, and move it up before irq_thread().
      
      2. change irq_thread() to do task_work_add(irq_thread_dtor)
         at the start and task_work_cancel() before return.
      
         tracehook_notify_resume() can never play with kthreads,
         only do_exit()->exit_task_work() can call the callback
         and this is what we want.
      
      3. remove task_struct->irq_thread and the special hook
         in do_exit().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Smith <dsmith@redhat.com>
      Cc: "Frank Ch. Eigler" <fche@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4d1d61a6
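      A condensed sketch of the resulting structure (cleanup details and the
      handler loop are elided; note that the last parameter of task_work_add()
      and the task_work_cancel() signature changed in much later kernels):

      #include <linux/task_work.h>
      #include <linux/kthread.h>
      #include <linux/sched.h>

      static void irq_thread_dtor(struct callback_head *unused)
      {
              /*
               * Reached via do_exit()->exit_task_work() only if the irq thread
               * dies unexpectedly; release oneshot/threads_active state here.
               */
      }

      static int irq_thread(void *data)
      {
              struct callback_head on_exit_work;

              init_task_work(&on_exit_work, irq_thread_dtor);
              task_work_add(current, &on_exit_work, false);

              while (!kthread_should_stop()) {
                      /* irq_wait_for_interrupt() and handler invocation
                       * elided in this sketch. */
                      schedule_timeout_interruptible(HZ);
              }

              /* Normal exit: make sure the dtor does not run later. */
              task_work_cancel(current, irq_thread_dtor);
              return 0;
      }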
  21. 22 May 2012, 1 commit
  22. 19 April 2012, 2 commits
    • genirq: Be more informative on irq type mismatch · f5d89470
      Committed by Thomas Gleixner
      We require that shared interrupts agree on a few flag settings. Right
      now we silently return with an error code without giving any hint why
      we reject it.
      
      Make the printout unconditional and actually useful by printing the
      flags of the new and of the already registered action.
      
      Convert all printks to pr_* and use a proper prefix while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f5d89470
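      An illustrative sketch of the kind of report this change enables (the
      helper and the exact wording here are made up; pr_fmt()/pr_err() are the
      standard kernel logging macros):

      #define pr_fmt(fmt) "genirq: " fmt

      #include <linux/kernel.h>
      #include <linux/printk.h>

      static void report_flags_mismatch(unsigned int irq,
                                        unsigned long new_flags, const char *new_name,
                                        unsigned long old_flags, const char *old_name)
      {
              /* Printed unconditionally so the rejection is never silent. */
              pr_err("Flags mismatch irq %u. %08lx (%s) vs. %08lx (%s)\n",
                     irq, new_flags, new_name, old_flags, old_name);
      }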
    • genirq: Reject bogus threaded irq requests · 1c6c6952
      Committed by Thomas Gleixner
      Requesting a threaded interrupt without a primary handler and without
      IRQF_ONESHOT set is dangerous.
      
      The core will use the default primary handler for it, which merely
      wakes the thread. For a level type interrupt this results in an
      interrupt storm, because the interrupt line is reenabled after the
      primary handler runs. The device still has the line asserted, which
      brings us back into the primary handler.
      
      While this works for edge type interrupts, we play it safe and reject
      unconditionally because we can't say for sure which type this
      interrupt really has. The type flags are unreliable as the underlying
      chip implementation can override them. And we cannot assume that
      developers using that interface know what they are doing.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1c6c6952
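      For reference, a sketch of the request form the core still accepts
      (device and handler names are illustrative; request_threaded_irq() and
      IRQF_ONESHOT are the real API):

      #include <linux/interrupt.h>

      static irqreturn_t my_thread_fn(int irq, void *dev_id)
      {
              /* Runs in the irq thread context. */
              return IRQ_HANDLED;
      }

      static int my_request(unsigned int irq, void *dev)
      {
              /*
               * No primary handler: the core's default handler only wakes the
               * thread and the line stays enabled.  Without IRQF_ONESHOT such
               * a request is now rejected (-EINVAL) instead of risking an
               * interrupt storm on level triggered lines.
               */
              return request_threaded_irq(irq, NULL, my_thread_fn,
                                          IRQF_ONESHOT, "my-device", dev);
      }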
  23. 29 March 2012, 2 commits
  24. 16 March 2012, 1 commit
  25. 14 March 2012, 1 commit
    • genirq: Flush the irq thread on synchronization · 7140ea19
      Committed by Ido Yariv
      The current implementation does not always flush the threaded handler
      when disabling the irq. In case the irq handler was called, but the
      threaded handler hasn't started running yet, the interrupt will be
      flagged as pending, and the handler will not run. This implementation
      has some issues:
      
      First, if the interrupt is a wake source and flagged as pending, the
      system will not be able to suspend.
      
      Second, when quickly disabling and re-enabling the irq, the threaded
      handler might continue to run after the irq is re-enabled without the
      irq handler being called first. This might be an unexpected behavior.
      
      In addition, it might be counter-intuitive that the threaded handler
      will not be called even though the irq handler was called and returned
      IRQ_WAKE_THREAD.
      
      Fix this by always waiting for the threaded handler to complete in
      synchronize_irq().
      
      [ tglx: Massaged comments, added WARN_ONs and the missing
        	IRQTF_RUNTHREAD check in exit_irq_thread() ]
      Signed-off-by: Ido Yariv <ido@wizery.com>
      Link: http://lkml.kernel.org/r/1322843052-7166-1-git-send-email-ido@wizery.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      7140ea19
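      An abridged sketch of the resulting wait in synchronize_irq() (the hard
      irq part of the synchronization is elided; threads_active and
      wait_for_threads are the struct irq_desc fields used by the threaded irq
      code, the helper name is illustrative):

      #include <linux/irqdesc.h>
      #include <linux/wait.h>
      #include <linux/atomic.h>

      static void synchronize_irq_sketch(struct irq_desc *desc)
      {
              /* ... wait for any in-flight hard handler first (elided) ... */

              /*
               * Then always wait until the threaded handlers have run, so a
               * pending IRQ_WAKE_THREAD cannot outlive disable_irq().
               */
              wait_event(desc->wait_for_threads,
                         !atomic_read(&desc->threads_active));
      }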
  26. 10 March 2012, 4 commits
  27. 07 March 2012, 1 commit
  28. 15 February 2012, 1 commit
  29. 02 December 2011, 1 commit
    • genirq: Fix race condition when stopping the irq thread · 550acb19
      Committed by Ido Yariv
      In irq_wait_for_interrupt(), the should_stop member is verified before
      setting the task's state to TASK_INTERRUPTIBLE and calling schedule().
      In case kthread_stop sets should_stop and wakes up the process after
      should_stop is checked by the irq thread but before the task's state
      is changed, the irq thread might never exit:
      
      kthread_stop                    irq_wait_for_interrupt
      ------------                    ----------------------
      
                                       ...
      ...                              while (!kthread_should_stop()) {
      kthread->should_stop = 1;
      wake_up_process(k);
      wait_for_completion(&kthread->exited);
      ...
                                           set_current_state(TASK_INTERRUPTIBLE);
      
                                           ...
      
                                           schedule();
                                       }
      
      Fix this by checking if the thread should stop after modifying the
      task's state.
      
      [ tglx: Simplified it a bit ]
      Signed-off-by: Ido Yariv <ido@wizery.com>
      Link: http://lkml.kernel.org/r/1322740508-22640-1-git-send-email-ido@wizery.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org
      550acb19
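      A sketch of the corrected wait loop: the stop condition is re-checked only
      after the task state has been set, so a wakeup from kthread_stop() can no
      longer be lost (IRQTF_RUNTHREAD is the genirq-internal flag; the code is
      abridged from kernel/irq/manage.c):

      #include <linux/interrupt.h>
      #include <linux/kthread.h>
      #include <linux/sched.h>

      static int irq_wait_for_interrupt(struct irqaction *action)
      {
              set_current_state(TASK_INTERRUPTIBLE);

              while (!kthread_should_stop()) {
                      if (test_and_clear_bit(IRQTF_RUNTHREAD,
                                             &action->thread_flags)) {
                              __set_current_state(TASK_RUNNING);
                              return 0;
                      }
                      schedule();
                      /* Re-arm the state before re-checking should_stop. */
                      set_current_state(TASK_INTERRUPTIBLE);
              }
              __set_current_state(TASK_RUNNING);
              return -1;
      }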
  30. 18 November 2011, 1 commit
  31. 30 October 2011, 1 commit