- 25 July 2012, 1 commit
By Thomas Gleixner:

Some interrupt chips like MSI are oneshot safe by implementation. For those interrupts we can avoid the mask/unmask sequence for threaded interrupt handlers.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1207132056540.32033@ionos
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Jan Kiszka <jan.kiszka@web.de>
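
A minimal sketch of how such a chip might advertise this to the core; the flag name IRQCHIP_ONESHOT_SAFE and the callback names are assumptions for illustration, not taken from the text above:

```c
#include <linux/irq.h>

/* Placeholder callbacks for illustration only. */
static void my_msi_mask(struct irq_data *d)   { /* set the MSI mask bit */ }
static void my_msi_unmask(struct irq_data *d) { /* clear the MSI mask bit */ }

/*
 * An MSI-style chip that is oneshot safe by construction: flagging it
 * lets the core skip the mask/unmask cycle around threaded handlers.
 */
static struct irq_chip my_msi_chip = {
	.name       = "my-msi",
	.irq_mask   = my_msi_mask,
	.irq_unmask = my_msi_unmask,
	.flags      = IRQCHIP_ONESHOT_SAFE,
};
```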

- 01 June 2012, 1 commit
By Andrew Morton:

Use the module-wide pr_fmt() mechanism rather than open-coding "genirq: " everywhere.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
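
For reference, the pattern looks like this (a sketch; only the prefix string is taken from the text above):

```c
/* At the top of the source file, before any #include that pulls in printk: */
#define pr_fmt(fmt) "genirq: " fmt

#include <linux/kernel.h>

/* pr_err("Flags mismatch irq %d\n", irq); now prints
 * "genirq: Flags mismatch irq 10" without open-coding the prefix. */
```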

- 25 May 2012, 1 commit
By Jiang Liu:

All invocations of chip->irq_set_affinity() are doing the same return value checks. Let them all use a common function.

[ tglx: removed the silly likely while at it ]

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Keping Chen <chenkeping@huawei.com>
Link: http://lkml.kernel.org/r/1333120296-13563-3-git-send-email-jiang.liu@huawei.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
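
The shared pattern looks roughly like this; a sketch only, with assumed names, not the exact helper that was added:

```c
#include <linux/irq.h>

static int do_set_affinity(struct irq_data *data, const struct cpumask *mask)
{
	struct irq_chip *chip = irq_data_get_irq_chip(data);
	int ret;

	if (!chip || !chip->irq_set_affinity)
		return -EINVAL;

	ret = chip->irq_set_affinity(data, mask, false);
	switch (ret) {
	case IRQ_SET_MASK_OK:
		/* chip did not record the mask: the core copies it here */
		/* fall through */
	case IRQ_SET_MASK_OK_NOCOPY:
		/* chip already updated the affinity mask itself */
		ret = 0;
		break;
	}
	return ret;
}
```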

- 24 May 2012, 1 commit
By Oleg Nesterov:

exit_irq_thread() and task->irq_thread are needed to handle the unexpected (and unlikely) exit of the irq thread. We can use task_work instead and make this all private to kernel/irq/manage.c; this is a cleanup plus a micro-optimization.

1. Rename exit_irq_thread() to irq_thread_dtor(), make it static, and move it up before irq_thread().
2. Change irq_thread() to do task_work_add(irq_thread_dtor) at the start and task_work_cancel() before return. tracehook_notify_resume() can never play with kthreads; only do_exit()->exit_task_work() can call the callback, and this is what we want.
3. Remove task_struct->irq_thread and the special hook in do_exit().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alexander Gordeev <agordeev@redhat.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Smith <dsmith@redhat.com>
Cc: "Frank Ch. Eigler" <fche@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 22 May 2012, 1 commit
By Richard Weinberger:

As its only user (UML) no longer needs it, we can get rid of it.

Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

- 19 April 2012, 2 commits
By Thomas Gleixner:

We require that shared interrupts agree on a few flag settings. Right now we silently return with an error code without giving any hint why we reject it. Make the printout unconditional and actually useful by printing the flags of the new and the already registered action. Convert all printks to pr_* and use a proper prefix while at it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Thomas Gleixner:

Requesting a threaded interrupt without a primary handler and without IRQF_ONESHOT set is dangerous. The core will use the default primary handler for it, which merely wakes the thread. For a level type interrupt this results in an interrupt storm, because the interrupt line is reenabled after the primary handler runs. The device still has the line asserted, which brings us back into the primary handler.

While this works for edge type interrupts, we play it safe and reject unconditionally, because we can't say for sure which type this interrupt really has. The type flags are unreliable, as the underlying chip implementation can override them. And we cannot assume that developers using this interface know what they are doing.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
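
The corresponding driver-side pattern, a brief sketch with hypothetical names:

```c
#include <linux/interrupt.h>

static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
	/* Slow-path handling in process context. */
	return IRQ_HANDLED;
}

static int my_request(unsigned int irq, void *dev)
{
	/*
	 * No primary handler: the core installs one that just returns
	 * IRQ_WAKE_THREAD, so IRQF_ONESHOT is mandatory to keep the
	 * line masked until the thread has run.
	 */
	return request_threaded_irq(irq, NULL, my_thread_fn,
				    IRQF_ONESHOT, "my-device", dev);
}
```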

- 29 March 2012, 2 commits
By Prarit Bhargava:

We respect the node affinity of devices already in the irq descriptor allocation, but we ignore it for the initial interrupt affinity setup, so the interrupt might be routed to a different node. Restrict the default affinity mask to the node on which the irq descriptor is allocated.

[ tglx: Massaged changelog ]

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Link: http://lkml.kernel.org/r/1332788538-17425-1-git-send-email-prarit@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
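
A sketch of the idea with an assumed helper name; the real code lives in the descriptor affinity setup:

```c
#include <linux/cpumask.h>
#include <linux/topology.h>

/* Hypothetical helper: start from the online CPUs and, if the irq
 * descriptor has a home node whose CPUs are online, narrow the
 * default affinity down to that node. */
static void restrict_default_affinity(struct cpumask *mask, int node)
{
	cpumask_copy(mask, cpu_online_mask);
	if (node != NUMA_NO_NODE &&
	    cpumask_intersects(mask, cpumask_of_node(node)))
		cpumask_and(mask, mask, cpumask_of_node(node));
}
```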

By Alexander Gordeev:

The only place irq_finalize_oneshot() is called with the force parameter set is the threaded handler error exit path. But IRQTF_RUNTHREAD is dropped at this point and irq_wake_thread() is not going to set it again, since PF_EXITING is set for this thread already. So irq_finalize_oneshot() will drop the thread's bit in threads_oneshot anyway, and hence the force parameter is superfluous.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Link: http://lkml.kernel.org/r/20120321162234.GP24806@dhcp-26-207.brq.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 16 March 2012, 1 commit
By Thomas Gleixner:

Alexander pointed out that the WARN_ONs in the regular exit path are bogus and that the thread_mask one could actually be triggered when __setup_irq() hands out that thread_mask again after __free_irq() dropped irq_desc->lock.

Thinking more about it, neither IRQTF_RUNTHREAD nor the bit in thread_mask can be set, as this is the regular exit path. We come here due to:

    __free_irq()
      remove action from desc
      synchronize_irq()
      kthread_stop()

So synchronize_irq() makes sure that the thread finished running and cleaned up both the threads_active count and the thread_mask. After that point nothing can set IRQTF_RUNTHREAD on this action. So the WARN_ONs and the cleanups are pointless.

Reported-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Ido Yariv <ido@wizery.com>
Link: http://lkml.kernel.org/r/20120315190755.GA6732@dhcp-26-207.brq.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 14 March 2012, 1 commit
By Ido Yariv:

The current implementation does not always flush the threaded handler when disabling the irq. In case the irq handler was called, but the threaded handler hasn't started running yet, the interrupt will be flagged as pending and the handler will not run.

This implementation has some issues: First, if the interrupt is a wake source and flagged as pending, the system will not be able to suspend. Second, when quickly disabling and re-enabling the irq, the threaded handler might continue to run after the irq is re-enabled without the irq handler being called first. This might be unexpected behavior. In addition, it might be counter-intuitive that the threaded handler will not be called even though the irq handler was called and returned IRQ_WAKE_THREAD.

Fix this by always waiting for the threaded handler to complete in synchronize_irq().

[ tglx: Massaged comments, added WARN_ONs and the missing IRQTF_RUNTHREAD check in exit_irq_thread() ]

Signed-off-by: Ido Yariv <ido@wizery.com>
Link: http://lkml.kernel.org/r/1322843052-7166-1-git-send-email-ido@wizery.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 10 March 2012, 4 commits
By Alexander Gordeev:

Currently the IRQTF_DIED flag is set when an IRQ thread handler calls do_exit(). But the PF_EXITING per-process flag also gets set when a thread exits. This fix eliminates the duplicate by using the PF_EXITING flag.

Also, there is a race condition in exit_irq_thread(): in case a thread's bit is cleared in desc->threads_oneshot (and the IRQ line gets unmasked), but before the IRQTF_DIED flag is set, a new interrupt might come in and set the just-cleared bit again, this time forever. This fix throws the IRQTF_DIED flag away, eliminating the race as a result.

[ tglx: Test THREAD_EXITING first as suggested by Oleg ]

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Link: http://lkml.kernel.org/r/20120309135958.GD2114@dhcp-26-207.brq.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Alexander Gordeev:

Since commit 63706172, kthread_stop() is not afraid of dead kernel threads, so there is no need to check if a thread is alive before stopping it. These checks were racy anyway.

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Link: http://lkml.kernel.org/r/20120309135939.GC2114@dhcp-26-207.brq.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Alexander Gordeev:

When a new thread handler is created, an irqaction is passed to it as data. Not only is that irqaction stored in task_struct by the handler for later use, but a structure associated with the kernel thread also keeps this value as long as the thread exists. This fix kicks the irqaction out of task_struct. Yes, I introduce a new bit field, but it not only eliminates the duplicate, it also shrinks the size of task_struct.

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Link: http://lkml.kernel.org/r/20120309135925.GB2114@dhcp-26-207.brq.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Alexander Gordeev:

We do not want a bitwise AND between boolean operands.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20120309135912.GA2114@dhcp-26-207.brq.redhat.com
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
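
The class of bug is easy to illustrate in plain C; this is an illustration only, not the kernel expression that was fixed:

```c
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	int a = 0x2;	/* logically true */
	int b = 0x4;	/* logically true */

	/* Bitwise AND of the raw values: 0x2 & 0x4 == 0, i.e. "false",
	 * and both sides are always evaluated. */
	printf("a & b  = %d\n", a & b);

	/* Logical AND states the intent and short-circuits. */
	printf("a && b = %d\n", a && b);

	return 0;
}
```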

- 07 March 2012, 1 commit
By Thomas Gleixner:

Commit ac563761 (genirq: Unmask oneshot irqs when thread was not woken) fails to unmask when a !IRQ_ONESHOT threaded handler is handled by handle_level_irq. This happens because thread_mask is or'ed unconditionally in irq_wake_thread(), but never cleared for !IRQ_ONESHOT interrupts. So the check for !desc->threads_active fails and keeps the interrupt disabled.

Keep the thread_mask zero for !IRQ_ONESHOT interrupts. Document the thread_mask magic while at it.

Reported-and-tested-by: Sven Joachim <svenjoac@gmx.de>
Reported-and-tested-by: Stefan Lippers-Hollmann <s.l-h@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 15 February 2012, 1 commit
By Thomas Gleixner:

An interrupt might be pending when irq_startup() is called, but the startup code does not invoke the resend logic. In some cases this prevents the device from issuing another interrupt, which renders the device non-functional. Call the resend function in irq_startup() to keep things going.

Reported-and-tested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 02 December 2011, 1 commit
By Ido Yariv:

In irq_wait_for_interrupt(), the should_stop member is verified before setting the task's state to TASK_INTERRUPTIBLE and calling schedule(). In case kthread_stop() sets should_stop and wakes up the process after should_stop is checked by the irq thread, but before the task's state is changed, the irq thread might never exit:

    kthread_stop                              irq_wait_for_interrupt
    ------------                              ----------------------
                                              ...
                                              while (!kthread_should_stop()) {
    kthread->should_stop = 1;
    wake_up_process(k);
    wait_for_completion(&kthread->exited);
                                              ...
                                              set_current_state(TASK_INTERRUPTIBLE);
                                              ...
                                              schedule();
                                              }

Fix this by checking if the thread should stop after modifying the task's state.

[ tglx: Simplified it a bit ]

Signed-off-by: Ido Yariv <ido@wizery.com>
Link: http://lkml.kernel.org/r/1322740508-22640-1-git-send-email-ido@wizery.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@kernel.org
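
The race-free shape of such a wait loop, as a minimal sketch:

```c
#include <linux/kthread.h>
#include <linux/sched.h>

static int wait_loop(void *data)
{
	for (;;) {
		/* Publish the sleeping state first ... */
		set_current_state(TASK_INTERRUPTIBLE);
		/* ... then check: a wake-up from kthread_stop() can no
		 * longer slip in between the check and schedule(). */
		if (kthread_should_stop()) {
			__set_current_state(TASK_RUNNING);
			break;
		}
		schedule();
	}
	return 0;
}
```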

- 18 November 2011, 1 commit
By Marc Zyngier:

The power management functions related to interrupts do not know (yet) about per-cpu interrupts and end up calling the wrong low-level methods to enable/disable interrupts. This leads to all kinds of interesting issues (action taken on one CPU only, updating a refcount which is not used otherwise...).

The workaround for the time being is simply to flag these interrupts with IRQF_NO_SUSPEND. At least on ARM, these interrupts are actually dealt with at the architecture level.

Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1321446459-31409-1-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
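
The workaround on the driver side is a one-flag change; a sketch with placeholder names:

```c
#include <linux/interrupt.h>

static irqreturn_t my_localtimer_handler(int irq, void *dev_id)
{
	/* acknowledge and reprogram this CPU's timer */
	return IRQ_HANDLED;
}

static int my_localtimer_init(unsigned int irq, void *dev)
{
	/* Keep the generic suspend/resume code away from this line;
	 * the architecture handles it itself. */
	return request_irq(irq, my_localtimer_handler,
			   IRQF_NO_SUSPEND, "local-timer", dev);
}
```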

- 30 October 2011, 1 commit
By Javi Merino:

Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

- 05 October 2011, 1 commit
By Thomas Gleixner:

Putting the argument inside the quote does not really help.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 03 October 2011, 2 commits
By Marc Zyngier:

As request_percpu_irq() doesn't allow for a percpu interrupt to have its type configured (it is generally impossible to configure it on all CPUs at once), add a 'type' argument to enable_percpu_irq(). This allows some low-level, board-specific init code to be switched to a generic API.

[ tglx: Added WARN_ON argument ]

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
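
With the added argument, enabling on the local CPU and configuring the trigger collapse into one call (IRQ_TYPE_NONE keeps whatever was already configured); a small sketch:

```c
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Runs on the target CPU, e.g. from CPU bring-up code. */
static void my_cpu_enable_local_irq(unsigned int irq)
{
	enable_percpu_irq(irq, IRQ_TYPE_EDGE_RISING);
}
```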

By Marc Zyngier:

The ARM GIC interrupt controller offers per-CPU interrupts (PPIs), which are usually used to connect local timers to each core. Each CPU has its own private interface to the GIC and only sees the PPIs that are directly connected to it.

While these timers are separate devices and have a separate interrupt line to a core, they all use the same IRQ number. For these devices, request_irq() is not the right API, as it assumes that an IRQ number is visible by a number of CPUs (through the affinity setting), but makes it very awkward to express that an IRQ number can be handled by all CPUs, and yet be a different interrupt line on each CPU, requiring a different dev_id cookie to be passed back to the handler.

The *_percpu_irq() functions are designed to overcome these limitations, by providing a per-cpu dev_id vector:

    int request_percpu_irq(unsigned int irq, irq_handler_t handler, const char *devname, void __percpu *percpu_dev_id);
    void free_percpu_irq(unsigned int, void __percpu *);
    int setup_percpu_irq(unsigned int irq, struct irqaction *new);
    void remove_percpu_irq(unsigned int irq, struct irqaction *act);
    void enable_percpu_irq(unsigned int irq);
    void disable_percpu_irq(unsigned int irq);

The API has a number of limitations:
- no interrupt sharing
- no threading
- common handler across all the CPUs

Once the interrupt is requested using setup_percpu_irq() or request_percpu_irq(), it must be enabled by each core that wishes its local interrupt to be delivered. See the usage sketch below.

Based on an initial patch by Thomas Gleixner.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1316793788-14500-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
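
Putting the pieces together, a condensed usage sketch with hypothetical device names; the per-cpu cookie comes back as dev_id on each CPU:

```c
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>

struct my_timer {
	void __iomem *base;
};

static DEFINE_PER_CPU(struct my_timer, my_timers);

static irqreturn_t my_timer_isr(int irq, void *dev_id)
{
	struct my_timer *t = dev_id;	/* this CPU's instance */

	if (!t)
		return IRQ_NONE;
	/* ... acknowledge via t->base, reprogram the local timer ... */
	return IRQ_HANDLED;
}

static int my_timer_setup(unsigned int ppi)
{
	int err;

	/* Registered once for all CPUs, with the per-cpu cookie. */
	err = request_percpu_irq(ppi, my_timer_isr, "my-timer", &my_timers);
	if (err)
		return err;

	/* Each core must still enable its own copy, typically from
	 * CPU bring-up code running on that core. */
	enable_percpu_irq(ppi, IRQ_TYPE_NONE);
	return 0;
}
```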

- 12 September 2011, 1 commit
By Santosh Shilimkar:

Some irq chips need the irq_set_wake() functionality, but do not require an irq_set_wake() callback. Instead of forcing an empty callback to be implemented, add a flag which notes this fact. Check for the flag in set_irq_wake_real() and return success when it is set.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
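
Assuming the flag in question is IRQCHIP_SKIP_SET_WAKE, a chip declares it like this (sketch only):

```c
#include <linux/irq.h>

/*
 * Wake-up works without chip involvement, so no .irq_set_wake callback
 * is provided; the flag makes irq_set_irq_wake() report success anyway.
 */
static struct irq_chip my_intc_chip = {
	.name  = "my-intc",
	.flags = IRQCHIP_SKIP_SET_WAKE,
};
```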

- 24 August 2011, 1 commit
By Linus Torvalds:

This reverts commit f3637a5f.

It turns out that this breaks several drivers, one example being OMAP boards which use the on-board OMAP UARTs and the omap-serial driver, which will not boot to userspace after the commit. Paul Walmsley reports that enabling CONFIG_DEBUG_SHIRQ reveals 'IRQ handler type mismatch' errors:

    IRQ handler type mismatch for IRQ 74
    current handler: serial idle
    ...

and the reason is that setting IRQF_ONESHOT will now result in those interrupt handlers having different IRQF flags, and thus being unsharable. So the claim in the commit log of the reverted commit:

    "Since it is required for those users and there is no difference
     for others it makes sense to add this flag unconditionally."

is simply not true: there may not be any difference in the actions taken at irq time, but there is a *big* difference as far as the flag checks in the irq management code are concerned (see __setup_irq() in kernel/irq/manage.c).

One solution may be to stop verifying IRQF_ONESHOT in __setup_irq(), but right now the safe course of action is to revert the change. Let's revisit this in a later merge window.

Reported-by: Paul Walmsley <paul@pwsan.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Requested-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 28 July 2011, 2 commits

Interrupt descriptors can be allocated from modules. The interrupts are used by other modules, but we have no refcount on the module which provides the interrupts, and there is no way to establish one at the device level, as the interrupt-using module is agnostic to the fact that the interrupt is provided by a module rather than by some built-in interrupt controller.

To prevent removal of the interrupt-providing module, we can track the owner of the interrupt descriptor, which also provides the relevant irq chip functions in the irq descriptor. request/setup_irq() can now acquire a refcount on the owner module to prevent unloading; free_irq() drops the refcount.

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Link: http://lkml.kernel.org/r/20110711101731.GA13804@Chamillionaire.breakpoint.cc
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

If no primary handler is specified, then a default one is assigned which always returns IRQ_WAKE_THREAD. This handler requires the IRQF_ONESHOT flag on LEVEL / EOI typed irqs, because the source of the interrupt is not disabled. Since it is required for those users and there is no difference for others, it makes sense to add this flag unconditionally.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: http://lkml.kernel.org/r/1310070737-18514-1-git-send-email-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
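
For context, the fallback primary handler referred to here is essentially a one-liner (sketched below):

```c
static irqreturn_t irq_default_primary_handler(int irq, void *dev_id)
{
	/* Nothing to quiesce at the device level; just wake the thread. */
	return IRQ_WAKE_THREAD;
}
```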

- 10 June 2011, 1 commit
By Jesper Juhl:

In kernel/irq/manage.c::irq_set_irq_wake() we call irq_get_desc_buslock(), which may return NULL, but the code dereferences the result unconditionally.

irq_set_irq_wake() has lots of callers. I checked a few and I couldn't find anything that guarantees that they won't call it with some input that will cause irq_get_desc_buslock() to return NULL, so I think it's a good thing to test, and -EINVAL was the most sane error code in this situation that I could think of.

Not all callers test the return value of irq_set_irq_wake(), but those that do take != 0 to mean error as far as I can see, so they should be fine. I guess those that don't test actually should, but that's a different issue.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1106092300360.17868@swampdragon.chaosbits.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
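
On the caller side this means checking the result; a short sketch (enable_irq_wake() is the usual wrapper around irq_set_irq_wake(irq, 1)):

```c
#include <linux/interrupt.h>
#include <linux/printk.h>

static int my_arm_wakeup(unsigned int irq)
{
	int err = enable_irq_wake(irq);

	if (err)
		pr_warn("IRQ %u cannot be used as a wake-up source: %d\n",
			irq, err);
	return err;
}
```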

- 03 June 2011, 1 commit

The detection of spurious interrupts is currently limited to the first-level handler. In force-threaded mode we never notice if the threaded irq does not feel responsible. This patch catches the return value of the threaded handler and forwards it to the spurious detector. If the primary handler returns only IRQ_WAKE_THREAD, then the spurious detector ignores it, because it gets called again from the threaded handler.

[ tglx: Report the erroneous return value early and bail out ]

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Link: http://lkml.kernel.org/r/1306824972-27067-2-git-send-email-sebastian@breakpoint.cc
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 23 April 2011, 1 commit
By Paul Mundt:

This adds support for disabling threading on a per-IRQ basis via the IRQ status instead of the IRQ flow, which is necessary for interrupts that don't follow the natural IRQ flow channels, such as those that are virtually created.

The new APIs added are simply:

    irq_set_thread()
    irq_set_nothread()

which follow the rest of the IRQ status routines. Chained handlers also have IRQ_NOTHREAD set on them automatically, making the lack of threading explicit rather than implicit. Subsequently, the nothread flag can be viewed through the standard genirq debugging facilities.

[ tglx: Fixed cleanup fallout ]

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/%3C20110406210135.GF18426%40linux-sh.org%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 03 April 2011, 1 commit
By Xiaotian Feng:

The allocated cpumask should be freed in __setup_irq().

Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
LKML-Reference: <1301744375-6812-1-git-send-email-dfeng@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
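
The underlying rule is the usual alloc/free pairing; a generic sketch:

```c
#include <linux/cpumask.h>
#include <linux/gfp.h>

static int with_temporary_mask(void)
{
	cpumask_var_t mask;
	int ret = 0;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	/* ... use mask ... */

	/* Every exit path after a successful allocation must free it. */
	free_cpumask_var(mask);
	return ret;
}
```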

- 29 March 2011, 4 commits
By Thomas Gleixner:

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Thomas Gleixner:

The late-night fixup missed converting the data type from irq_desc to irq_data, which results in a harmless but annoying warning.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Thomas Gleixner:

I missed the CONFIG_GENERIC_PENDING_IRQ dependency in the affinity-related functions and the IRQ_LEVEL propagation into the irq_data state. Did not pop up on my main test platforms. :(

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: David Daney <ddaney@caviumnetworks.com>

By Randy Dunlap:

Fix new irq-related kernel-doc warnings in 2.6.38:

    Warning(kernel/irq/manage.c:149): No description found for parameter 'mask'
    Warning(kernel/irq/manage.c:149): Excess function parameter 'cpumask' description in 'irq_set_affinity'
    Warning(include/linux/irq.h:161): No description found for parameter 'state_use_accessors'
    Warning(include/linux/irq.h:161): Excess struct/union/enum/typedef member 'state_use_accessor' description in 'irq_data'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
LKML-Reference: <20110318093356.b939558d.randy.dunlap@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 28 March 2011, 1 commit
By Thomas Gleixner:

We really need these flags for some of the interrupt chips. Move them from the internal state to irq_data and provide proper accessors.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Daney <ddaney@caviumnetworks.com>

- 27 March 2011, 1 commit
By David Daney:

The .irq_cpu_online() and .irq_cpu_offline() functions may need to adjust affinity, but they are called with the descriptor lock held. Create __irq_set_affinity_locked(), which is called with the lock held, and make irq_set_affinity() just a wrapper that acquires the lock.

[ tglx: Changed the argument to irq_data, added a !desc check and moved the !irq_set_affinity check where it belongs ]

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: ralf@linux-mips.org
LKML-Reference: <1301081931-11240-4-git-send-email-ddaney@caviumnetworks.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 17 March 2011, 1 commit
By Dan Carpenter:

goto out_thread is called before we take the lock. It causes a gcc warning:

    kernel/irq/manage.c:858: warning: ‘flags’ may be used uninitialized in this function

[ tglx: Moved unlock before free_cpumask_var() ]

Signed-off-by: Dan Carpenter <error27@gmail.com>
LKML-Reference: <20110317114307.GJ2008@bicker>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 26 February 2011, 2 commits
By Thomas Gleixner:

Add a command-line parameter "threadirqs" which forces all interrupts except those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to allow retrieving better debug data from crashing interrupt handlers. If "threadirqs" is not enabled on the kernel command line, then there is no impact in the interrupt hotpath.

Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after marking the interrupts which can't be threaded IRQF_NO_THREAD. All interrupts which have IRQF_TIMER set are implicitly marked IRQF_NO_THREAD. All PER_CPU interrupts are also excluded.

Forced threading of hard interrupts also forces all soft interrupt handling into thread context. When enabled it might slow things down a bit, but for debugging problems in interrupt code it's a reasonable penalty, as it does not immediately crash and burn the machine when an interrupt handler is buggy.

Some test results on a Core2Duo machine:

    Cache cold run of:
    # time git grep irq_desc

              non-threaded    threaded
    real      1m18.741s       1m19.061s
    user      0m1.874s        0m1.757s
    sys       0m5.843s        0m5.427s

    # iperf -c server
    non-threaded
    [ 3]  0.0-10.0 sec  1.09 GBytes  933 Mbits/sec
    [ 3]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec
    [ 3]  0.0-10.0 sec  1.09 GBytes  933 Mbits/sec
    threaded
    [ 3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
    [ 3]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec
    [ 3]  0.0-10.0 sec  1.09 GBytes  937 Mbits/sec

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.772668648@linutronix.de>
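
To try it, boot with threadirqs on the kernel command line; an interrupt that must stay in hard-irq context opts out explicitly, as in this sketch with placeholder names:

```c
#include <linux/interrupt.h>

static int my_critical_setup(unsigned int irq, irq_handler_t handler)
{
	/* Never run this handler in a thread, even under "threadirqs". */
	return request_irq(irq, handler, IRQF_NO_THREAD,
			   "my-critical", NULL);
}
```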

By Thomas Gleixner:

Support ONESHOT on shared interrupts, if all drivers agree on it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.483640430@linutronix.de>