- 17 Oct 2011, 1 commit
-
-
Submitted by Ian Campbell
This adds a mechanism to resume selected IRQs during syscore_resume instead of dpm_resume_noirq. Under Xen we need to resume IRQs associated with IPIs early enough that the resched IPI is unmasked and we can therefore schedule ourselves out of the stop_machine where the suspend/resume takes place. This issue was introduced by 676dc3cf "xen: Use IRQF_FORCE_RESUME".
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Jeremy Fitzhardinge <Jeremy.Fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Link: http://lkml.kernel.org/r/1318713254.11016.52.camel@dagon.hellion.org.uk
Cc: stable@kernel.org (at least to 2.6.32.y)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
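As an illustration only, a driver opting into this behaviour would presumably combine the no-suspend and early-resume flags roughly as below; the flag name (IRQF_EARLY_RESUME here) and the handler are assumptions, not quoted from the patch.

```c
#include <linux/interrupt.h>

static irqreturn_t resched_ipi_handler(int irq, void *dev_id)
{
	/* Reschedule-IPI handling elided; runs per CPU. */
	return IRQ_HANDLED;
}

static int bind_resched_ipi(unsigned int irq)
{
	/*
	 * IRQF_NO_SUSPEND keeps the line enabled while suspending; the
	 * early-resume flag (assumed name: IRQF_EARLY_RESUME) asks the core
	 * to re-enable it from syscore_resume() instead of dpm_resume_noirq(),
	 * so the resched IPI works inside the resume stop_machine window.
	 */
	return request_irq(irq, resched_ipi_handler,
			   IRQF_NO_SUSPEND | IRQF_EARLY_RESUME | IRQF_PERCPU,
			   "resched", NULL);
}
```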
-
- 03 Oct 2011, 2 commits
-
-
Submitted by Marc Zyngier
As request_percpu_irq() doesn't allow for a percpu interrupt to have its type configured (it is generally impossible to configure it on all CPUs at once), add a 'type' argument to enable_percpu_irq(). This allows some low-level, board specific init code to be switched to a generic API.
[ tglx: Added WARN_ON argument ]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
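A sketch of what the new call might look like in CPU-local init code; the trigger type and the surrounding function are illustrative assumptions:

```c
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Runs on each CPU as it comes online (sketch). */
static void board_local_timer_enable(unsigned int ppi_irq)
{
	/* Configure the trigger type and unmask the PPI on this CPU only. */
	enable_percpu_irq(ppi_irq, IRQ_TYPE_EDGE_RISING);
}
```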
-
Submitted by Marc Zyngier
The ARM GIC interrupt controller offers per CPU interrupts (PPIs), which are usually used to connect local timers to each core. Each CPU has its own private interface to the GIC, and only sees the PPIs that are directly connected to it.
While these timers are separate devices and have a separate interrupt line to a core, they all use the same IRQ number.
For these devices, request_irq() is not the right API as it assumes that an IRQ number is visible by a number of CPUs (through the affinity setting), but makes it very awkward to express that an IRQ number can be handled by all CPUs, and yet be a different interrupt line on each CPU, requiring a different dev_id cookie to be passed back to the handler.
The *_percpu_irq() functions are designed to overcome these limitations, by providing a per-cpu dev_id vector:

    int request_percpu_irq(unsigned int irq, irq_handler_t handler, const char *devname, void __percpu *percpu_dev_id);
    void free_percpu_irq(unsigned int, void __percpu *);
    int setup_percpu_irq(unsigned int irq, struct irqaction *new);
    void remove_percpu_irq(unsigned int irq, struct irqaction *act);
    void enable_percpu_irq(unsigned int irq);
    void disable_percpu_irq(unsigned int irq);

The API has a number of limitations:
- no interrupt sharing
- no threading
- common handler across all the CPUs

Once the interrupt is requested using setup_percpu_irq() or request_percpu_irq(), it must be enabled by each core that wishes its local interrupt to be delivered. A usage sketch follows below.
Based on an initial patch by Thomas Gleixner.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1316793788-14500-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
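A hedged usage sketch built only from the prototypes listed above; the device name, the per-CPU state structure and the setup function are invented for illustration:

```c
#include <linux/interrupt.h>
#include <linux/percpu.h>

struct twd_clock_event {
	int dummy;	/* hypothetical per-CPU timer state */
};
static DEFINE_PER_CPU(struct twd_clock_event, twd_evt);

static irqreturn_t twd_timer_interrupt(int irq, void *dev_id)
{
	struct twd_clock_event *evt = dev_id;	/* this CPU's instance */

	(void)evt;	/* a real driver would program/ack the per-CPU timer here */
	return IRQ_HANDLED;
}

static int __init twd_timer_register(unsigned int ppi_irq)
{
	int err;

	/* One request covers the PPI on all CPUs; dev_id is a per-CPU pointer. */
	err = request_percpu_irq(ppi_irq, twd_timer_interrupt, "twd", &twd_evt);
	if (err)
		return err;

	/* Each CPU must still unmask its own copy (shown for the boot CPU only). */
	enable_percpu_irq(ppi_irq);
	return 0;
}
```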
-
- 27 Jul 2011, 1 commit
-
-
Submitted by Arun Sharma
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.
Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
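For context, atomic_inc_not_zero() is the usual "take a reference unless the object is already dying" helper; a minimal sketch with an invented object type:

```c
#include <linux/atomic.h>

struct my_obj {
	atomic_t refcnt;
	/* ... payload ... */
};

/* Returns the object with an extra reference, or NULL if refcnt already hit 0. */
static struct my_obj *my_obj_tryget(struct my_obj *obj)
{
	return atomic_inc_not_zero(&obj->refcnt) ? obj : NULL;
}
```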
-
- 15 Jun 2011, 1 commit
-
-
Submitted by Shaohua Li
Commit a26ac245 (rcu: move TREE_RCU from softirq to kthread) introduced a performance regression. In an AIM7 test, this commit degraded performance by about 40%.
The commit runs rcu callbacks in a kthread instead of softirq. We observed a high rate of context switching which is caused by this. Our test system has 64 CPUs and HZ is 1000, so we saw more than 64k context switches per second, which is caused by RCU's per-CPU kthread. A trace showed that most of the time the RCU per-CPU kthread doesn't actually handle any callbacks, but instead just does a very small amount of work handling grace periods. This means that RCU's per-CPU kthreads are making the scheduler do quite a bit of work in order to allow a very small amount of RCU-related processing to be done.
Alex Shi's analysis determined that this slowdown is due to lock contention within the scheduler. Unfortunately, as Peter Zijlstra points out, the scheduler's real-time semantics require global action, which means that this contention is inherent in real-time scheduling. (Yes, perhaps someone will come up with a workaround -- otherwise, -rt is not going to do well on large SMP systems -- but this patch will work around this issue in the meantime. And "the meantime" might well be forever.)
This patch therefore re-introduces softirq processing to RCU, but only for core RCU work. RCU callbacks are still executed in kthread context, so that only a small amount of RCU work runs in softirq context in the common case. This should minimize ksoftirqd execution, allowing us to skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Tested-by: "Alex,Shi" <alex.shi@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
- 06 May 2011, 1 commit
-
-
Submitted by Paul E. McKenney
If RCU priority boosting is to be meaningful, callback invocation must be boosted in addition to preempted RCU readers. Otherwise, in the presence of CPU real-time threads, the grace period ends, but the callbacks don't get invoked. If the callbacks don't get invoked, the associated memory doesn't get freed, so the system is still subject to OOM.
But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit moves the callback invocations to a kthread, which can be boosted easily. Also add comments and properly synchronize all accesses to rcu_cpu_kthread_task, as suggested by Lai Jiangshan.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
- 31 Mar 2011, 1 commit
-
-
Submitted by Lucas De Marchi
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
- 30 Mar 2011, 1 commit
-
-
Submitted by Thomas Gleixner
Missed that one in the big compat removal patch.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 26 Feb 2011, 3 commits
-
-
Submitted by Thomas Gleixner
Add a commandline parameter "threadirqs" which forces all interrupts except those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to allow retrieving better debug data from crashing interrupt handlers. If "threadirqs" is not enabled on the kernel command line, then there is no impact in the interrupt hotpath.
Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after marking the interrupts which can't be threaded IRQF_NO_THREAD. All interrupts which have IRQF_TIMER set are implicitly marked IRQF_NO_THREAD. Also all PER_CPU interrupts are excluded.
Forced threading of hard interrupts also forces all soft interrupt handling into thread context. When enabled it might slow down things a bit, but for debugging problems in interrupt code it's a reasonable penalty as it does not immediately crash and burn the machine when an interrupt handler is buggy.
Some test results on a Core2Duo machine:

Cache cold run of: # time git grep irq_desc

            non-threaded    threaded
    real    1m18.741s       1m19.061s
    user    0m1.874s        0m1.757s
    sys     0m5.843s        0m5.427s

# iperf -c server

non-threaded
    [ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 934 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
threaded
    [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 934 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 937 Mbits/sec

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.772668648@linutronix.de>
-
Submitted by Thomas Gleixner
Some low level interrupts cannot be threaded even when we force thread all interrupt handlers. Add a flag to annotate such interrupts. Add all timer interrupts to this category by default.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.578893460@linutronix.de>
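A sketch of how a low-level interrupt might be exempted from forced threading; the IRQ number, handler and device name are made up for illustration:

```c
#include <linux/interrupt.h>

static irqreturn_t ipi_demux_handler(int irq, void *dev_id)
{
	/* Must run in hard-IRQ context even when "threadirqs" is set. */
	return IRQ_HANDLED;
}

static int __init ipi_demux_init(void)
{
	/* 42 is a placeholder IRQ number; IRQF_NO_THREAD opts out of threading. */
	return request_irq(42, ipi_demux_handler,
			   IRQF_NO_THREAD | IRQF_PERCPU, "ipi-demux", NULL);
}
```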
-
Submitted by Thomas Gleixner
For level type interrupts we need to track how many threads are in flight to avoid useless interrupt storms when not all thread handlers have finished yet. Keep track of the woken threads and only unmask when there are no more threads in flight.
Yes, I'm lazy and using a bitfield. But not only because I'm lazy, the main reason is that it's way simpler than using a refcount. A refcount based solution would need to keep track of various things like crashing the irq thread, spurious interrupts coming in, disables/enables, free_irq() and some more. The bitfield keeps the tracking simple and makes things just work. It's also nicely confined to the thread code paths and does not require additional checks all over the place.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.388095876@linutronix.de>
-
- 19 Feb 2011, 3 commits
-
-
Submitted by Thomas Gleixner
All archs implement show_interrupts() in more or less the same way. That's tons of duplicated code with different bugs and no value. Implement a generic version and deprecate show_interrupts(). Unfortunately we need some ifdeffery for !GENERIC_HARDIRQ archs.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Thomas Gleixner
Solely used in core code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Thomas Gleixner
The irq namespace has become quite convoluted. My bad. Clean it up and deprecate the old functions. All new functions follow the scheme:

irq number based:
    irq_set/get/xxx/_xxx(unsigned int irq, ...)
irq_data based:
    irq_data_set/get/xxx/_xxx(struct irq_data *d, ....)
irq_desc based:
    irq_desc_get_xxx(struct irq_desc *desc)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
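For orientation, a few accessors that follow the new scheme; the specific names shown here are assumed examples of the pattern, not a list taken from the patch:

```c
#include <linux/irq.h>

static struct irq_chip my_chip;		/* hypothetical controller */

/* irq_data based: typically used from inside irq_chip callbacks. */
static void my_chip_mask(struct irq_data *d)
{
	void *regs = irq_data_get_irq_chip_data(d);
	/* ... use 'regs' to mask the line described by 'd' ... */
	(void)regs;
}

/* irq number based: used from board/driver setup code. */
static void my_chip_map_one(unsigned int irq, void *regs)
{
	irq_set_chip_data(irq, regs);
	irq_set_chip_and_handler(irq, &my_chip, handle_level_irq);
}
```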
-
- 08 Feb 2011, 1 commit
-
-
Submitted by Thomas Gleixner
Xen needs to reenable interrupts which are marked IRQF_NO_SUSPEND in the resume path. Add a flag to force the reenabling in the resume code.
Tested-and-acked-by: Ian Campbell <Ian.Campbell@eu.citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
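Putting the two flags together, a Xen-style event channel request would look roughly like this; the handler and names are invented for the sketch:

```c
#include <linux/interrupt.h>

static irqreturn_t evtchn_interrupt(int irq, void *dev_id)
{
	return IRQ_HANDLED;	/* event-channel demux elided */
}

static int bind_evtchn_irq(unsigned int irq, void *dev)
{
	/*
	 * IRQF_NO_SUSPEND keeps the line enabled across suspend;
	 * IRQF_FORCE_RESUME makes the resume path re-enable it anyway,
	 * which is what Xen needs after a save/restore cycle.
	 */
	return request_irq(irq, evtchn_interrupt,
			   IRQF_NO_SUSPEND | IRQF_FORCE_RESUME,
			   "xen-evtchn", dev);
}
```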
-
- 26 Jan 2011, 1 commit
-
-
Submitted by Venkatesh Pallipadi
Cleanup patch, freeing up PF_KSOFTIRQD and use per_cpu ksoftirqd pointer instead, as suggested by Eric Dumazet.
Tested-by: Shaun Ruffell <sruffell@digium.com>
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1292980144-28796-2-git-send-email-venki@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Jan 2011, 1 commit
-
-
Submitted by Ben Hutchings
When initiating I/O on a multiqueue and multi-IRQ device, we may want to select a queue for which the response will be handled on the same or a nearby CPU. This requires a reverse-map of IRQ affinity. Add a notification mechanism to support this.
This is based closely on work by Thomas Gleixner <tglx@linutronix.de>.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Cc: linux-net-drivers@solarflare.com
Cc: Tom Herbert <therbert@google.com>
Cc: David Miller <davem@davemloft.net>
LKML-Reference: <1295470904.11126.84.camel@bwh-desktop>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
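A hedged sketch of the driver side; the structure and registration call follow the shape described above, but treat the exact field and function names as assumptions:

```c
#include <linux/interrupt.h>

struct my_queue {
	struct irq_affinity_notify affinity_notify;
	/* ... queue state ... */
};

static void my_queue_affinity_changed(struct irq_affinity_notify *notify,
				      const cpumask_t *mask)
{
	struct my_queue *q = container_of(notify, struct my_queue, affinity_notify);

	/* Rebuild the CPU -> queue reverse map so completions land nearby. */
	(void)q;
	(void)mask;
}

static void my_queue_affinity_release(struct kref *ref)
{
	/* Nothing dynamically allocated in this sketch. */
}

static int my_queue_register_notifier(struct my_queue *q, unsigned int irq)
{
	q->affinity_notify.notify  = my_queue_affinity_changed;
	q->affinity_notify.release = my_queue_affinity_release;
	return irq_set_affinity_notifier(irq, &q->affinity_notify);
}
```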
-
- 10 Nov 2010, 1 commit
-
-
Submitted by Eric Dumazet
We currently use the kmalloc-96 slab for struct irqaction allocations on 64-bit arches. This is unfortunate because of possible false sharing and two cache line accesses. Move the 'name' and 'dir' fields to the end of the structure, and force a suitable alignment. Hot path fields now use one cache line on x86_64.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Reviewed-by: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1288865628.2659.69.camel@edumazet-laptop>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
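The idea, roughly: hot fields packed first, /proc-only fields pushed to the end, the whole struct cache-line aligned. The sketch below abridges the field set and assumes the alignment macro; it is not the exact upstream layout.

```c
struct irqaction {
	irq_handler_t		handler;	/* hot: touched on every interrupt */
	void			*dev_id;
	struct irqaction	*next;
	unsigned long		flags;
	unsigned int		irq;
	irq_handler_t		thread_fn;
	struct task_struct	*thread;
	unsigned long		thread_flags;
	const char		*name;		/* cold: setup and /proc only */
	struct proc_dir_entry	*dir;		/* cold */
} ____cacheline_internodealigned_in_smp;
```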
-
- 21 Oct 2010, 1 commit
-
-
Submitted by Thomas Gleixner
With the addition of trace_softirq_raise() the softirq tracepoint got even more convoluted. Why the tracepoints take two pointers to assign an integer is beyond my comprehension. But adding an extra case which treats the first pointer as an unsigned long when the second pointer is NULL including the back and forth type casting is just horrible.
Convert the softirq tracepoints to take a single unsigned int argument for the softirq vector number and fix the call sites.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <alpine.LFD.2.00.1010191428560.6815@localhost6.localdomain6>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: mathieu.desnoyers@efficios.com
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
-
- 12 Oct 2010, 1 commit
-
-
Submitted by Thomas Gleixner
This function should have not been there in the first place.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
-
- 22 Sep 2010, 1 commit
-
-
Submitted by Thomas Gleixner
No users outside of kernel/softirq.c.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 07 Sep 2010, 1 commit
-
-
Submitted by Lai Jiangshan
Add a tracepoint for tracing when softirq action is raised. This and the existing tracepoints complete softirq's tracepoints: softirq_raise, softirq_entry and softirq_exit. And when this tracepoint is used in combination with the softirq_entry tracepoint we can determine the softirq raise latency.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Cc: David Miller <davem@davemloft.net>
Cc: Kaneshige Kenji <kaneshige.kenji@jp.fujitsu.com>
Cc: Izumo Taku <izumi.taku@jp.fujitsu.com>
Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Scott Mcmillan <scott.a.mcmillan@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
LKML-Reference: <4C724298.4050509@jp.fujitsu.com>
[ factorize softirq events with DECLARE_EVENT_CLASS ]
Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 29 Jul 2010, 1 commit
-
-
Submitted by Ian Campbell
A small number of users of IRQF_TIMER are using it for the implied no suspend behaviour on interrupts which are not timer interrupts. Therefore add a new IRQF_NO_SUSPEND flag, rename IRQF_TIMER to __IRQF_TIMER and redefine IRQF_TIMER in terms of these new flags.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: xen-devel@lists.xensource.com
Cc: linux-input@vger.kernel.org
Cc: linuxppc-dev@ozlabs.org
Cc: devicetree-discuss@lists.ozlabs.org
LKML-Reference: <1280398595-29708-1-git-send-email-ian.campbell@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
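For illustration, a driver whose interrupt must keep firing while the system suspends would now state that intent explicitly; the device and names are hypothetical:

```c
#include <linux/interrupt.h>

static irqreturn_t pmu_event_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;	/* power-controller event handling elided */
}

static int pmu_request_irq(unsigned int irq)
{
	/* Not a timer, but must stay enabled while device IRQs are suspended. */
	return request_irq(irq, pmu_event_handler, IRQF_NO_SUSPEND,
			   "pmu-event", NULL);
}
```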
-
- 22 May 2010, 1 commit
-
-
Submitted by Randy Dunlap
Fix kernel-doc fatal error: /** beginning a non-kernel-doc comment block. (That alone does not kill kernel-doc, but the 'enum' was totally confusing to it.)

    Error(/lnx/src/TMP/linux-2.6.34-git6//include/linux/interrupt.h:88): cannot understand prototype: 'enum '
    make[2]: *** [Documentation/DocBook/genericirq.xml] Error 1

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May 2010, 1 commit
-
-
Submitted by Peter P Waskiewicz Jr
This patch adds a cpumask affinity hint to the irq_desc structure, along with a registration function and a read-only proc entry for each interrupt. This affinity_hint handle for each interrupt can be used by underlying drivers that need a better mechanism to control interrupt affinity. The underlying driver can register a cpumask for the interrupt, which will allow the driver to provide the CPU mask for the interrupt to anything that requests it. The intent is to extend the userspace daemon, irqbalance, to help hint to it a preferred CPU mask to balance the interrupt into.
[ tglx: Fixed compile warnings, added WARN_ON, made SMP only ]
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Cc: davem@davemloft.net
Cc: arjan@linux.jf.intel.com
Cc: bhutchings@solarflare.com
LKML-Reference: <20100430214445.3992.41647.stgit@ppwaskie-hc2.jf.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
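Assuming the registration function is irq_set_affinity_hint(), a multiqueue driver might publish its preferred placement like this (sketch; driver names invented):

```c
#include <linux/interrupt.h>
#include <linux/cpumask.h>

/* Suggest (but do not force) that queue i's vector be balanced onto CPU i. */
static void my_driver_set_hints(const unsigned int *queue_irqs, int nr_queues)
{
	int i;

	for (i = 0; i < nr_queues; i++)
		irq_set_affinity_hint(queue_irqs[i], cpumask_of(i % nr_cpu_ids));
	/* irqbalance can then read the hint via /proc/irq/<n>/affinity_hint. */
}
```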
-
- 13 Apr 2010, 2 commits
-
-
Submitted by Thomas Gleixner
Remove all code which is related to IRQF_DISABLED from the core kernel code. IRQF_DISABLED still exists as a flag, but becomes a NOOP and will be removed after a grace period. That way we can easily revert to the previous behaviour by just restoring the core code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Linus Torvalds <torvalds@osdl.org>
LKML-Reference: <20100326000405.991244690@linutronix.de>
-
Submitted by Marc Zyngier
Now that we enjoy threaded interrupts, we're starting to see irq_chip implementations (wm831x, pca953x) that make use of threaded interrupts for the controller, and nested interrupts for the client interrupt. It all works very well, with one drawback: Drivers requesting an IRQ must now know whether the handler will run in a thread context or not, and call request_threaded_irq() or request_irq() accordingly.
The problem is that the requesting driver sometimes doesn't know about the nature of the interrupt, especially when the interrupt controller is a discrete chip (typically a GPIO expander connected over I2C) that can be connected to a wide variety of otherwise perfectly supported hardware.
This patch introduces the request_any_context_irq() function that mostly mimics the usual request_irq(), except that it checks whether the irq level is configured as nested or not, and calls the right backend. On success, it also returns either IRQC_IS_HARDIRQ or IRQC_IS_NESTED.
[ tglx: Made return value an enum, simplified code and made the export of request_any_context_irq GPL ]
Signed-off-by: Marc Zyngier <maz@misterjones.org>
Cc: <joachim.eastwood@jotron.com>
LKML-Reference: <927ea285bd0c68934ddae1a47e44a9ba@localhost>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
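A usage sketch for a driver that cannot know whether its parent interrupt controller is nested/threaded; the device and names are invented:

```c
#include <linux/interrupt.h>

static irqreturn_t keypad_event(int irq, void *dev_id)
{
	/* May run in hard-IRQ or thread context, depending on the parent chip. */
	return IRQ_HANDLED;
}

static int keypad_request_irq(unsigned int irq, void *dev)
{
	int ret;

	ret = request_any_context_irq(irq, keypad_event,
				      IRQF_TRIGGER_FALLING, "keypad", dev);
	if (ret < 0)
		return ret;

	/* On success, ret is IRQC_IS_HARDIRQ or IRQC_IS_NESTED. */
	return 0;
}
```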
-
- 04 Nov 2009, 1 commit
-
-
Submitted by Thomas Gleixner
Commit 74296a8e added this function for debug purposes, but it was never used for anything. Remove it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 12 Oct 2009, 1 commit
-
-
Submitted by Alexey Dobriyan
Now that m68k's task_thread_info() no longer refers to current, it's possible to remove sched.h from interrupt.h and not break m68k! Many thanks to Heiko Carstens for allowing this.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
-
- 24 Sep 2009, 2 commits
-
-
Submitted by Nobuhiro Iwamatsu
In commit 7be23e278f, the mask field was deleted from irqaction. However, it was not deleted from the comment.
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Submitted by Rusty Russell
Up until 1.1.83, the primitive human tribes used struct sigaction for interrupts. The sa_mask field was overloaded to hold a pointer to the name. When someone created the new "struct irqaction" they carried across the "mask" field as a kind of ancestor worship: the fact that it was unused makes clear its spiritual significance.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 11 Sep 2009, 1 commit
-
-
Submitted by Jens Axboe
This borrows some code from NAPI and implements a polled completion mode for block devices. The idea is the same as NAPI - instead of doing the command completion when the irq occurs, schedule a dedicated softirq in the hopes that we will complete more IO when the iopoll handler is invoked. Devices have a budget of commands assigned, and will stay in polled mode as long as they continue to consume their budget from the iopoll softirq handler. If they do not, the device is set back to interrupt completion mode.
This patch holds the core bits for blk-iopoll, device driver support sold separately.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
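Very roughly, the driver side would look like the sketch below; the blk_iopoll_* names and signatures are assumptions based on the description (a softirq poll handler with a per-iteration budget), not quoted from the patch:

```c
#include <linux/blk-iopoll.h>

static struct blk_iopoll mydev_iop;

/* Runs from the iopoll softirq; consume up to 'budget' completed commands. */
static int mydev_iopoll(struct blk_iopoll *iop, int budget)
{
	int done = 0;

	/* ... reap up to 'budget' completions, counting them in 'done' ... */
	if (done < budget)
		blk_iopoll_complete(iop);	/* budget not used up: back to IRQ mode */
	return done;
}

static void mydev_iopoll_setup(void)
{
	blk_iopoll_init(&mydev_iop, 32 /* budget per iteration */, mydev_iopoll);
	blk_iopoll_enable(&mydev_iop);
	/* The interrupt handler then defers completions to the iopoll softirq
	 * instead of completing commands inline. */
}
```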
-
- 17 Aug 2009, 1 commit
-
-
Submitted by Thomas Gleixner
For threaded interrupt handlers we expect the hard interrupt handler part to mask the interrupt on the originating device. The interrupt line itself is reenabled after the hard interrupt handler has executed.
This requires access to the originating device from hard interrupt context which is not always possible. There are devices which can only be accessed via a bus (i2c, spi, ...). The bus access requires thread context. For such devices we need to keep the interrupt line masked until the threaded handler has executed.
Add a new flag IRQF_ONESHOT which allows drivers to request that the interrupt is not unmasked after the hard interrupt context handler has been executed and the thread has been woken. The interrupt line is unmasked after the thread handler function has been executed.
Note that for now IRQF_ONESHOT cannot be used with IRQF_SHARED to avoid complex accounting mechanisms.
For oneshot interrupts the primary handler simply returns IRQ_WAKE_THREAD and does nothing else. A generic implementation irq_default_primary_handler() is provided to avoid useless copies all over the place. It is automatically installed when request_threaded_irq() is called with handler=NULL and thread_fn!=NULL.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Trilok Soni <soni.trilok@gmail.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Brian Swetland <swetland@google.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: m.szyprowski@samsung.com
Cc: t.fujak@samsung.com
Cc: kyungmin.park@samsung.com
Cc: David Brownell <david-b@pacbell.net>
Cc: Daniel Ribeiro <drwyrm@gmail.com>
Cc: arve@android.com
Cc: Barry Song <21cnbao@gmail.com>
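A sketch of the oneshot pattern for a bus-attached (e.g. I2C) device; the driver names are invented:

```c
#include <linux/interrupt.h>

/* Thread function: may sleep, so it can talk to the chip over I2C/SPI. */
static irqreturn_t codec_irq_thread(int irq, void *dev_id)
{
	/* ... read and clear the interrupt status over the bus ... */
	return IRQ_HANDLED;	/* the line is unmasked only after we return */
}

static int codec_request_irq(unsigned int irq, void *codec)
{
	/*
	 * handler == NULL installs irq_default_primary_handler(), which just
	 * returns IRQ_WAKE_THREAD; IRQF_ONESHOT keeps the line masked until
	 * codec_irq_thread() has finished.
	 */
	return request_threaded_irq(irq, NULL, codec_irq_thread,
				    IRQF_ONESHOT | IRQF_TRIGGER_LOW,
				    "codec", codec);
}
```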
-
- 22 Jul 2009, 1 commit
-
-
Submitted by Peter Zijlstra
Commit ca109491 (hrtimer: removing all ur callback modes) moved all hrtimer callbacks into hard interrupt context when high resolution timers are active. That breaks code which relied on the assumption that the callback happens in softirq context.
Provide a generic infrastructure which combines tasklets and hrtimers together to provide an in-softirq hrtimer experience.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: torvalds@linux-foundation.org
Cc: kaber@trash.net
Cc: David Miller <davem@davemloft.net>
LKML-Reference: <1248265724.27058.1366.camel@twins>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
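A sketch of the resulting API, assuming the tasklet_hrtimer_init()/tasklet_hrtimer_start() names; the timer's purpose and delay are invented:

```c
#include <linux/interrupt.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct tasklet_hrtimer watchdog_th;

/* Runs from tasklet (softirq) context, as the old hrtimer callbacks did. */
static enum hrtimer_restart watchdog_fire(struct hrtimer *timer)
{
	/* ... softirq-safe work ... */
	return HRTIMER_NORESTART;
}

static void watchdog_arm(unsigned long delay_ns)
{
	tasklet_hrtimer_init(&watchdog_th, watchdog_fire,
			     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	tasklet_hrtimer_start(&watchdog_th, ns_to_ktime(delay_ns),
			      HRTIMER_MODE_REL);
}
```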
-
- 21 Jul 2009, 1 commit
-
-
Submitted by Thomas Gleixner
irq_set_thread_affinity() calls set_cpus_allowed_ptr() which might sleep, but irq_set_thread_affinity() is called with desc->lock held and can be called from hard interrupt context as well. The code has another bug as it does not hold a ref on the task struct as required by set_cpus_allowed_ptr().
Just set the IRQTF_AFFINITY bit in action->thread_flags. The next time the thread runs it migrates itself. Solves all of the above problems nicely. Add kerneldoc to irq_set_thread_affinity() while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
-
- 13 Jun 2009, 2 commits
-
-
Submitted by Vegard Nossum
Rationale: kmemcheck needs to be able to schedule a tasklet without touching any dynamically allocated memory _at_ _all_ (since that would lead to a recursive page fault). This tasklet is used for writing the error reports to the kernel log.
The new scheduling function avoids touching any other tasklets by inserting the new tasklet as the head of the "tasklet_hi" list instead of on the tail. Also don't wake up the softirq thread lest the scheduler access some tracked memory and we go down with a recursive page fault. In this case, we'd better just wait for the maximum time of 1/HZ for the message to appear.
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Submitted by Heiko Carstens
git commit 0a0c5168 "PM: Introduce functions for suspending and resuming device interrupts" introduced some helper functions. However these functions are only available for architectures which support GENERIC_HARDIRQS. Other architectures will see this build error:

    drivers/built-in.o: In function `sysdev_suspend':
    (.text+0x15138): undefined reference to `check_wakeup_irqs'
    drivers/built-in.o: In function `device_power_up':
    (.text+0x1cb66): undefined reference to `resume_device_irqs'
    drivers/built-in.o: In function `device_power_down':
    (.text+0x1cb92): undefined reference to `suspend_device_irqs'

To fix this add some empty inline functions for !GENERIC_HARDIRQS.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
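The fix boils down to no-op inline stubs along these lines (a sketch of the pattern, not the exact hunk):

```c
#ifdef CONFIG_GENERIC_HARDIRQS
extern void suspend_device_irqs(void);
extern void resume_device_irqs(void);
extern int check_wakeup_irqs(void);
#else
static inline void suspend_device_irqs(void) { }
static inline void resume_device_irqs(void) { }
static inline int check_wakeup_irqs(void) { return 0; }
#endif
```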
-
- 28 Apr 2009, 1 commit
-
-
Submitted by Yinghai Lu
This simplifies the node awareness of the code. All our allocators only deal with a NUMA node ID locality, not with CPU ids anyway - so there's no need to maintain (and transform) a CPU id all across the IRQ layer.
v2: keep move_irq_desc related
[ Impact: cleanup, prepare IRQ code to be NUMA-aware ]
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
LKML-Reference: <49F65536.2020300@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 Mar 2009, 2 commits
-
-
Submitted by Peter Zijlstra
It appears I inadvertently introduced rq->lock recursion to the hrtimer_start() path when I delegated running already expired timers to softirq context.
This patch fixes it by introducing a __hrtimer_start_range_ns() method that will not use raise_softirq_irqoff() but __raise_softirq_irqoff(), which avoids the wakeup. It then also changes schedule() to check for pending softirqs and do the wakeup then. I'm not quite sure I like this last bit, nor am I convinced it's really needed.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus@samba.org
LKML-Reference: <20090313112301.096138802@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Rafael J. Wysocki
Introduce helper functions allowing us to prevent device drivers from getting any interrupts (without disabling interrupts on the CPU) during suspend (or hibernation) and to make them start to receive interrupts again during the subsequent resume. These functions make it possible to keep timer interrupts enabled while the "late" suspend and "early" resume callbacks provided by device drivers are being executed. In turn, this allows device drivers' "late" suspend and "early" resume callbacks to sleep, execute ACPI callbacks etc.
The functions introduced here will be used to rework the handling of interrupts during suspend (hibernation) and resume. Namely, interrupts will only be disabled on the CPU right before suspending sysdevs, while device drivers will be prevented from receiving interrupts, with the help of the new helper function, before their "late" suspend callbacks run (and analogously during resume).
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Ingo Molnar <mingo@elte.hu>
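In terms of call ordering, the intended use looks roughly like the sketch below; run_late_suspend_callbacks() is a hypothetical stand-in for the device PM core's late-suspend step:

```c
#include <linux/interrupt.h>

static int run_late_suspend_callbacks(void);	/* hypothetical PM-core step */

static int my_platform_suspend(void)
{
	int error;

	/* Drivers stop receiving interrupts, but timer IRQs keep running. */
	suspend_device_irqs();

	error = run_late_suspend_callbacks();	/* may sleep, may call ACPI */
	if (error) {
		resume_device_irqs();
		return error;
	}

	local_irq_disable();	/* only now, right before sysdev suspend */
	/* ... suspend sysdevs and enter the sleep state ... */
	local_irq_enable();

	resume_device_irqs();	/* drivers see interrupts again */
	return 0;
}
```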
-