- 09 Apr 2015, 1 commit
-
-
Committed by Marc Zyngier

There are a number of cases where a kernel subsystem may want to introspect the state of an interrupt at the irqchip level:
- When a peripheral is shared between virtual machines, its interrupt state becomes part of the guest's state and must be switched accordingly. KVM on arm/arm64 requires this for its guest-visible timer.
- Some GPIO controllers seem to require peeking into the interrupt controller they are connected to in order to report their internal state.
This seems to be a pattern that is common enough for the core code to try and support it without too many horrible hacks. Introduce a pair of accessors (irq_get_irqchip_state/irq_set_irqchip_state) to retrieve the bits that can be of interest to another subsystem: pending, active, and masked.
- irq_get_irqchip_state returns the state of the interrupt according to a parameter set to IRQCHIP_STATE_PENDING, IRQCHIP_STATE_ACTIVE, IRQCHIP_STATE_MASKED or IRQCHIP_STATE_LINE_LEVEL.
- irq_set_irqchip_state similarly sets the state of the interrupt.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
Tested-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Phong Vo <pvo@apm.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Tin Huynh <tnhuynh@apm.com>
Cc: Y Vo <yvo@apm.com>
Cc: Toan Le <toanle@apm.com>
Cc: Bjorn Andersson <bjorn@kryo.se>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: http://lkml.kernel.org/r/1426676484-21812-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
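Below is a minimal sketch of how another subsystem might use the new accessors. The helper functions and their context are hypothetical; only irq_get_irqchip_state()/irq_set_irqchip_state() and the IRQCHIP_STATE_* values come from the change described above.

```c
#include <linux/interrupt.h>

/*
 * Hypothetical save/restore helpers: query and reprogram the ACTIVE
 * state of a line at the irqchip level, e.g. around a VM switch.
 * Both accessors return 0 on success or a negative errno when the
 * underlying irqchip does not implement the operation.
 */
static int example_save_line_state(unsigned int irq, bool *active)
{
	return irq_get_irqchip_state(irq, IRQCHIP_STATE_ACTIVE, active);
}

static int example_restore_line_state(unsigned int irq, bool active)
{
	return irq_set_irqchip_state(irq, IRQCHIP_STATE_ACTIVE, active);
}
```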
-
- 05 Mar 2015, 1 commit
-
-
Committed by Rafael J. Wysocki

It currently is required that all users of NO_SUSPEND interrupt lines pass the IRQF_NO_SUSPEND flag when requesting the IRQ, or the WARN_ON_ONCE() in irq_pm_install_action() will trigger. That is done to warn about situations in which unprepared interrupt handlers may be run unnecessarily for suspended devices and may attempt to access those devices by mistake. However, it may cause drivers that have no technical reason for using IRQF_NO_SUSPEND to set that flag just because they happen to share the interrupt line with something like a timer.
Moreover, the generic handling of wakeup interrupts introduced by commit 9ce7a258 (genirq: Simplify wakeup mechanism) only works for IRQs without any NO_SUSPEND users, so the drivers of wakeup devices needing to use shared NO_SUSPEND interrupt lines for signaling system wakeup generally have to detect wakeup in their interrupt handlers. Thus if they happen to share an interrupt line with a NO_SUSPEND user, they also need to request that their interrupt handlers be run after suspend_device_irqs().
In both cases the reason for using IRQF_NO_SUSPEND is not that the driver in question has a genuine need to run its interrupt handler after suspend_device_irqs(), but that it happens to share the line with some other NO_SUSPEND user. Otherwise, the driver would do without IRQF_NO_SUSPEND just fine.
To make it possible to specify that condition explicitly, introduce a new IRQ action handler flag for shared IRQs, IRQF_COND_SUSPEND, that, when set, will indicate to the IRQ core that the interrupt user is generally fine with suspending the IRQ, but it also can tolerate handler invocations after suspend_device_irqs() and, in particular, it is capable of detecting system wakeup and triggering it as appropriate from its interrupt handler. That will allow us to work around a problem with a shared timer interrupt line on at91 platforms.
Link: http://marc.info/?l=linux-kernel&m=142252777602084&w=2
Link: http://marc.info/?t=142252775300011&r=1&w=2
Link: https://lkml.org/lkml/2014/12/15/552
Reported-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
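Below is a minimal sketch of how a driver sharing a line with a NO_SUSPEND user might use the new flag. The handler and device names are hypothetical; only IRQF_COND_SUSPEND and the standard request_irq() interface come from the text above.

```c
#include <linux/interrupt.h>

/*
 * Hypothetical wakeup-capable device sharing its line with a timer that
 * is requested with IRQF_NO_SUSPEND.  IRQF_COND_SUSPEND tells the core
 * this handler tolerates being invoked after suspend_device_irqs() and
 * will detect and report system wakeup itself, so the shared-line
 * NO_SUSPEND mismatch warning does not fire.
 */
static irqreturn_t example_handler(int irq, void *dev_id)
{
	/* ... check the hardware; report a wakeup event if suspended ... */
	return IRQ_HANDLED;
}

static int example_request(unsigned int irq, void *dev)
{
	return request_irq(irq, example_handler,
			   IRQF_SHARED | IRQF_COND_SUSPEND,
			   "example-dev", dev);
}
```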
-
- 18 Feb 2015, 1 commit
-
-
Committed by Peter Zijlstra

For things like netpoll there is a need to disable an interrupt from atomic context. Currently netpoll uses disable_irq(), which will sleep-wait on threaded handlers and thus forced_irqthreads breaks things.
Provide disable_hardirq(), which uses synchronize_hardirq() to only wait for active hardirq handlers; also change synchronize_hardirq() to return the status of threaded handlers. This will allow one to try-disable an interrupt from atomic context, or in the case of request_threaded_irq() to only wait for the hardirq part.
Suggested-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Miller <davem@davemloft.net>
Cc: Eyal Perry <eyalpe@mellanox.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Quentin Lambert <lambert.quentin@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Link: http://lkml.kernel.org/r/20150205130623.GH5029@twins.programming.kicks-ass.net
[ Fixed typos and such. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
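Below is a minimal sketch of a netpoll-style poll controller built on the new call. The handler and wrapper are hypothetical; only disable_hardirq(), enable_irq() and their described semantics come from the change above.

```c
#include <linux/interrupt.h>

/* Hypothetical hard interrupt handler of the polled device. */
static irqreturn_t example_intr(int irq, void *dev_id)
{
	/* ... normal hard interrupt handling for the device ... */
	return IRQ_HANDLED;
}

/*
 * Disable the line from atomic context, waiting only for the hardirq
 * part.  disable_hardirq() returns true when no threaded handler is
 * active, in which case the handler can safely be invoked by hand.
 */
static void example_poll_controller(unsigned int irq, void *dev_id)
{
	if (disable_hardirq(irq))
		example_intr(irq, dev_id);

	enable_irq(irq);
}
```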
-
- 10 Feb 2015, 1 commit
-
-
Committed by Jesse Brandeburg

The recent set_affinity commit by me introduced some NULL pointer dereferences on driver unload, because some drivers call this function with a NULL argument. This fixes the issue by just checking for NULL before setting the affinity mask.
Fixes: e2e64a93 ("genirq: Set initial affinity in irq_set_affinity_hint()")
Reported-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/20150128185739.9689.84588.stgit@jbrandeb-cp2.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 23 Jan 2015, 1 commit
-
-
Committed by Jesse Brandeburg

Problem: The default behavior of the kernel is somewhat undesirable, as all requested interrupts end up on CPU0 after registration. A user can run the irqbalance daemon, or can manually configure smp_affinity via the proc filesystem, but the default affinity of the interrupts for all devices is always CPU zero. This can cause performance problems or very heavy CPU use of only one core if not noticed and fixed by the user.
Solution: Enable setting the initial affinity directly when the driver sets a hint. This means that kernel drivers can include an initial affinity setting for the interrupt, instead of all interrupts starting out life on CPU0. Of course, if irqbalance is still running then the interrupts will get moved as before.
This function is currently called by drivers in the block, crypto, infiniband, ethernet and scsi trees, but only a handful, so these will be the devices affected by this change. Tested on i40e, and the default interrupts were spread across the CPUs according to the hint.
drivers/block/mtip32xx/mtip32xx.c:3
drivers/block/nvme-core.c:2
drivers/crypto/qat/qat_dh895xcc/adf_isr.c:3
drivers/infiniband/hw/qib/qib_iba7322.c:2
drivers/net/ethernet/intel/i40e/i40e_main.c:3
drivers/net/ethernet/intel/i40evf/i40evf_main.c:3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c:3
drivers/net/ethernet/mellanox/mlx4/en_cq.c:2
drivers/scsi/hpsa.c:3
drivers/scsi/lpfc/lpfc_init.c:3
drivers/scsi/megaraid/megaraid_sas_base.c:8
drivers/soc/ti/knav_qmss_acc.c:1
drivers/soc/ti/knav_qmss_queue.c:2
drivers/virtio/virtio_pci_common.c:2
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/20141219012206.4220.27491.stgit@jbrandeb-cp2.jf.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
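Below is a minimal sketch of the hint usage pattern described above. The helper names and the round-robin spreading policy are hypothetical; only irq_set_affinity_hint() itself comes from the text, including the convention of passing NULL to clear the hint on teardown (the case the follow-up fix above guards).

```c
#include <linux/interrupt.h>
#include <linux/cpumask.h>

/*
 * Hypothetical multi-queue driver: give each vector a per-CPU hint at
 * request time.  With this change the hint also programs the initial
 * affinity, so the vectors no longer all start out on CPU0.
 */
static void example_spread_vectors(unsigned int *irqs, int nvec)
{
	int i;

	for (i = 0; i < nvec; i++)
		irq_set_affinity_hint(irqs[i],
				      cpumask_of(i % num_online_cpus()));
}

/* Passing NULL clears the hint again when the vectors are freed. */
static void example_release_vectors(unsigned int *irqs, int nvec)
{
	int i;

	for (i = 0; i < nvec; i++)
		irq_set_affinity_hint(irqs[i], NULL);
}
```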
-
- 23 Nov 2014, 1 commit
-
-
Committed by Jiang Liu

Add IRQ_SET_MASK_OK_DONE in addition to IRQ_SET_MASK_OK and IRQ_SET_MASK_OK_NOCOPY to support stacked irqchips. To the irq core, IRQ_SET_MASK_OK_DONE is the same as IRQ_SET_MASK_OK. To a stacked irqchip, it means that the ascendant irqchips have done all the work and no more handling is needed in the descendant irqchips.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
Cc: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
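Below is a minimal sketch of how the new return code fits a stacked setup. The chips are hypothetical, and irq_chip_set_affinity_parent() is the generic delegation helper assumed to be available in a hierarchical irqdomain build.

```c
#include <linux/irq.h>
#include <linux/cpumask.h>

/*
 * Hypothetical parent chip: it programs the routing hardware itself and
 * returns IRQ_SET_MASK_OK_DONE, which the core treats like
 * IRQ_SET_MASK_OK while signalling that descendant chips have nothing
 * left to do.
 */
static int example_parent_set_affinity(struct irq_data *d,
				       const struct cpumask *mask,
				       bool force)
{
	/* ... write the affinity/routing registers from "mask" ... */
	return IRQ_SET_MASK_OK_DONE;
}

static struct irq_chip example_parent_chip = {
	.name			= "example-parent",
	.irq_set_affinity	= example_parent_set_affinity,
};

/* A child chip in the stack can simply forward the request upwards. */
static struct irq_chip example_child_chip = {
	.name			= "example-child",
	.irq_set_affinity	= irq_chip_set_affinity_parent,
};
```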
-
- 01 Sep 2014, 2 commits
-
-
Committed by Thomas Gleixner

Account the IRQF_NO_SUSPEND and IRQF_RESUME_EARLY actions on shared interrupt lines and yell loudly if there is a mismatch.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Thomas Gleixner
No functional change. Preparatory patch for cleaning up the suspend abort functionality. Update the comments while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 01 Aug 2014, 2 commits
-
-
Committed by Thomas Gleixner

This reverts commit 4fae4e76. Undo because it breaks working systems.
Requested-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

This reverts commit d709f7bc. Undo, because it might break existing functionality.
Requested-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 24 Jul 2014, 1 commit
-
-
Committed by Peter Zijlstra

When suspend_device_irqs() iterates all descriptors, it's pointless if one has NO_SUSPEND set while another has not. Validate on request_irq() that the NO_SUSPEND state matches for SHARED interrupts.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Link: http://lkml.kernel.org/r/20140724133921.GY6758@twins.programming.kicks-ass.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 Jul 2014, 1 commit
-
-
Committed by Rafael J. Wysocki

If an IRQ has been configured for wakeup via enable_irq_wake(), the driver who has done that must be prepared to receive interrupts after suspend_device_irqs() has returned, so there is no need to "suspend" such IRQs. Moreover, if drivers using enable_irq_wake() actually want to receive interrupts after suspend_device_irqs() has returned, they need to add IRQF_NO_SUSPEND to the IRQ flags while requesting the IRQs, which shouldn't be necessary (it also goes a bit too far, as IRQF_NO_SUSPEND causes the IRQ to be ignored by suspend_device_irqs() all the time, regardless of whether or not it has been configured for signaling wakeup).
For the above reasons, make __disable_irq() ignore IRQ descriptors with IRQD_WAKEUP_STATE set when its suspend argument is true, which effectively causes them to behave like IRQs with IRQF_NO_SUSPEND set. This also allows IRQs configured for wakeup via enable_irq_wake() to work as wakeup interrupts for the "freeze" (suspend-to-idle) sleep mode automatically, just like for any other sleep state.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Li Aubrey <aubrey.li@linux.intel.com>
Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
Link: http://lkml.kernel.org/r/4679574.kGUnqAuNl9@vostro.rjw.lan
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
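Below is a minimal sketch of the pattern this enables. The suspend/resume callbacks and driver data layout are hypothetical; only enable_irq_wake()/disable_irq_wake() and the resulting IRQD_WAKEUP_STATE behaviour come from the text.

```c
#include <linux/device.h>
#include <linux/interrupt.h>

/* Hypothetical driver data holding the device's interrupt line. */
struct example_dev {
	unsigned int irq;
};

/*
 * Mark the line as a wakeup source instead of requesting it with
 * IRQF_NO_SUSPEND.  With IRQD_WAKEUP_STATE set, __disable_irq() now
 * leaves the line alone during suspend_device_irqs(), so it can wake
 * the system, including from suspend-to-idle ("freeze").
 */
static int example_suspend(struct device *dev)
{
	struct example_dev *ed = dev_get_drvdata(dev);

	enable_irq_wake(ed->irq);
	return 0;
}

static int example_resume(struct device *dev)
{
	struct example_dev *ed = dev_get_drvdata(dev);

	disable_irq_wake(ed->irq);
	return 0;
}
```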
-
- 04 May 2014, 1 commit
-
-
Committed by Thomas Gleixner

Till reported that the spurious interrupt detection of threaded interrupts is broken in two ways:
- note_interrupt() is called for each action thread of a shared interrupt line. That's wrong, as we are only interested in whether none of the device drivers felt responsible for the interrupt, but by calling it multiple times for a single interrupt line we account IRQ_NONE even if one of the drivers felt responsible.
- note_interrupt(), when called from the thread handler, is not serialized. That leaves the members of irq_desc which are used for the spurious detection unprotected.
To solve this we need to defer the spurious detection of a threaded interrupt to the next hardware interrupt context, where we have implicit serialization. If note_interrupt is called with action_ret == IRQ_WAKE_THREAD, we check whether the previous interrupt requested a deferred check. If not, we request a deferred check for the next hardware interrupt and return. If set, we check whether one of the interrupt threads signaled success. Depending on this information we feed the result into the spurious detector. If one primary handler of a shared interrupt returns IRQ_HANDLED, we disable the deferred check of irq threads on the same line, as we have found at least one device driver who cared.
Reported-by: Till Straumann <strauman@slac.stanford.edu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Austin Schuh <austin@peloton-tech.com>
Cc: Oliver Hartkopp <socketcan@hartkopp.net>
Cc: Wolfgang Grandegger <wg@grandegger.com>
Cc: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: linux-can@vger.kernel.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1303071450130.22263@ionos
-
- 18 Apr 2014, 1 commit
-
-
Committed by Thomas Gleixner

The current implementation of irq_set_affinity() rightfully refuses to route an interrupt to an offline cpu. But there is a special case where this is actually desired. Some of the ARM SoCs have per-cpu timers which require setting the affinity during cpu startup, where the cpu is not yet in the online mask. If we can't do that, then the local timer interrupt for the about-to-become-online cpu is routed to some random online cpu.
The developers of the affected machines tried to work around that issue, but that results in a massive mess in that timer code. We have a yet unused argument in the set_affinity callbacks of the irq chips, which I added back then for a similar reason. It was never required so it did not get used. But I'm happy that I never removed it. That allows us to implement a sane handling of the above scenario. So the affected SoC drivers can add the required force handling to their interrupt chip, switch the timer code to irq_force_affinity() and things just work.
This does not affect any existing user of irq_set_affinity(). Tagged for stable to allow a simple fix of the affected SoC clock event drivers.
Reported-and-tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20140416143315.717251504@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
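Below is a minimal sketch of the timer-code side of this change. The setup function is hypothetical; only irq_force_affinity() and the CPU-startup context come from the text above.

```c
#include <linux/interrupt.h>
#include <linux/cpumask.h>

/*
 * Hypothetical per-cpu clockevent setup, running while the CPU is being
 * brought up and therefore not yet in the online mask.
 * irq_set_affinity() would refuse this; irq_force_affinity() passes
 * force=true down to the irqchip so the SoC driver can still route the
 * timer interrupt to the target CPU.
 */
static void example_local_timer_setup(unsigned int irq, unsigned int cpu)
{
	irq_force_affinity(irq, cpumask_of(cpu));
}
```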
-
- 14 Mar 2014, 1 commit
-
-
Committed by Thomas Gleixner

The flag is necessary for interrupt chips which require an ACK/EOI after the handler has run. In the case of threaded handlers this needs to happen after the threaded handler has completed, before the unmask of the interrupt.
The flag is only useful in combination with the handle_fasteoi_irq flow control handler. It can be combined with the flag IRQCHIP_EOI_IF_HANDLED, so the EOI is not issued when the interrupt is disabled or in progress.
Tested-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-sunxi@googlegroups.com
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Link: http://lkml.kernel.org/r/1394733834-26839-2-git-send-email-hdegoede@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
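Below is a minimal sketch of an irqchip relying on this behaviour. The chip is hypothetical, and it assumes the flag being described is the IRQCHIP_EOI_THREADED chip flag; its combination with IRQCHIP_EOI_IF_HANDLED and handle_fasteoi_irq follows the text.

```c
#include <linux/irq.h>

/* Hypothetical EOI callback writing the controller's EOI register. */
static void example_chip_eoi(struct irq_data *d)
{
	/* ... acknowledge the interrupt in hardware ... */
}

/*
 * With these flags (IRQCHIP_EOI_THREADED assumed to be the flag added
 * here) the core issues the EOI only after the threaded handler has
 * completed, right before the unmask, and skips it for disabled or
 * in-progress interrupts.  Meant for the handle_fasteoi_irq flow.
 */
static struct irq_chip example_chip = {
	.name		= "example",
	.irq_eoi	= example_chip_eoi,
	.flags		= IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED,
};
```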
-
- 12 Mar 2014, 1 commit
-
-
Committed by Thomas Gleixner

For certain irq types, e.g. gpios, it's necessary to request resources before starting up the irq. This might fail, so we cannot use the irq_startup() callback, because we might call the irq_set_type() callback before that, which does not make sense when the resource is not available. Calling irq_startup() before irq_set_type() can lead to spurious interrupts, which is not desired either.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jean-Jacques Hiblot <jjhiblot@traphandler.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1403080857160.18573@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
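Below is a minimal sketch of a GPIO irqchip using resource callbacks for this purpose. The chip is hypothetical, and it assumes the hooks in question are the irq_request_resources()/irq_release_resources() callbacks of struct irq_chip.

```c
#include <linux/irq.h>

/*
 * Hypothetical GPIO irqchip: claim the pad as an interrupt line before
 * irq_set_type()/irq_startup() run.  The request callback may fail, so
 * it returns an errno; the release callback undoes it on free.
 */
static int example_irq_request_resources(struct irq_data *d)
{
	/* e.g. lock the GPIO line for IRQ use; return -EBUSY on conflict */
	return 0;
}

static void example_irq_release_resources(struct irq_data *d)
{
	/* e.g. hand the GPIO line back to the pinmux */
}

static struct irq_chip example_gpio_chip = {
	.name			= "example-gpio",
	.irq_request_resources	= example_irq_request_resources,
	.irq_release_resources	= example_irq_release_resources,
};
```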
-
- 27 Feb 2014, 1 commit
-
-
Committed by Chuansheng Liu

We hit one rare case below: T1 is calling disable_irq(), but always hangs at synchronize_irq(); the corresponding irq thread is in the sleeping state; and all CPUs are in the idle state.
After analysis, we found one possible scenario which causes T1 to wait there forever:

    CPU0                                    CPU1
     synchronize_irq()
      wait_event()
       spin_lock()
                                            atomic_dec_and_test(&threads_active)
       insert the __wait into queue
       spin_unlock()
                                            if(waitqueue_active)
       atomic_read(&threads_active)
                                             wake_up()

Here, after the __wait has been inserted into the queue on CPU0, and before the check of whether the queue is empty on CPU1, there is no barrier, so the insertion may not be visible to CPU1 immediately, although CPU0 has updated the queue list. The same applies to CPU0's atomic_read() of threads_active. So we'd need an smp_mb() before waitqueue_active(), but removing the waitqueue_active() check solves it as well and makes things simple and clear.
Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
Cc: Xiaoming Wang <xiaoming.wang@intel.com>
Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 20 Feb 2014, 3 commits
-
-
Committed by Chuansheng Liu

Change the comment "chasnge" to "change".
Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
Link: http://lkml.kernel.org/r/1392020037-5484-2-git-send-email-chuansheng.liu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

In the course of the sdhci/sdio discussion with Russell about killing the sdio kthread hackery, we discovered the need to be able to wake an interrupt thread from software. The rationale for this is that sdio hardware can lack proper interrupt support for certain features. So the driver needs to poll the status registers, but at the same time it needs to be woken up by a hardware interrupt.
To be able to get rid of the home brewn kthread construct of sdio, we need a way to wake an irq thread independent of an actual hardware interrupt. Provide an irq_wake_thread() function which wakes up the thread that is associated to a given dev_id. This allows sdio to invoke the irq thread from the hardware irq handler via the IRQ_WAKE_THREAD return value and provides a possibility to wake it via a timer for the polling scenarios. That allows to simplify the sdio logic significantly.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Chris Ball <chris@printf.net>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140215003823.772565780@linutronix.de
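Below is a minimal sketch of the two wake paths described above. The device structure, handlers and polling hook are hypothetical; irq_wake_thread(), IRQ_WAKE_THREAD and request_threaded_irq() are the interfaces named in the text or long established.

```c
#include <linux/interrupt.h>

/* Hypothetical device state; "dev" below is the dev_id passed at request time. */
struct example_dev {
	unsigned int irq;
};

static irqreturn_t example_hardirq(int irq, void *dev_id)
{
	/* Hardware path: let the core wake the thread. */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t example_thread_fn(int irq, void *dev_id)
{
	/* Read status registers and process events (may sleep). */
	return IRQ_HANDLED;
}

/* Polling path, e.g. called from a timer when the hardware cannot interrupt. */
static void example_poll(struct example_dev *dev)
{
	irq_wake_thread(dev->irq, dev);	/* dev_id must match the requested one */
}

static int example_setup(struct example_dev *dev)
{
	return request_threaded_irq(dev->irq, example_hardirq,
				    example_thread_fn, 0, "example", dev);
}
```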
-
Committed by Thomas Gleixner

synchronize_irq() waits for hard irq and threaded handlers to complete before returning. For some special cases we only need to make sure that the hard interrupt part of the irq line is not in progress when we disabled the - possibly shared - interrupt at the device level.
A proper use case for this was provided by Russell. The sdhci driver requires some irq-triggered functions to be run in thread context. The current implementation of the thread context is an sdio-private kthread construct, which has quite some shortcomings. These can be avoided when the thread is directly associated to the device interrupt via the generic threaded irq infrastructure.
Though there is a corner case related to runtime power management, where one side disables the device interrupts at the device level and needs to make sure that an already running hard interrupt handler has completed before proceeding further. That hard interrupt handler might wake the associated thread, which in turn can request the runtime PM to reenable the device. Using synchronize_irq() leads to an immediate deadlock of the irq thread waiting for the PM lock and the synchronize_irq() waiting for the irq thread to complete.
Due to the fact that it is sufficient for this case to ensure that no hard irq handler is executing, a new function which avoids the check for the thread is required. Add a function which just monitors the hard irq parts and ignores the threaded handlers.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Russell King <linux@arm.linux.org.uk>
Cc: Chris Ball <chris@printf.net>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140215003823.653236081@linutronix.de
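Below is a minimal sketch of the runtime-PM corner case this addresses. The quiesce helper is hypothetical; only synchronize_hardirq() and the deadlock reasoning come from the text.

```c
#include <linux/interrupt.h>

/*
 * Hypothetical runtime-suspend path: interrupts have already been
 * masked in the device registers, so we only need to know that no hard
 * irq handler is still running.  synchronize_irq() would also wait for
 * the irq thread, which may itself be blocked on the PM lock we hold,
 * producing the deadlock described above.
 */
static void example_runtime_suspend_quiesce(unsigned int irq)
{
	synchronize_hardirq(irq);	/* waits for hardirq handlers only */
}
```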
-
- 28 Oct 2013, 1 commit
-
-
Committed by Thomas Pfaff

In commit ee238713 ("genirq: Set irq thread to RT priority on creation") we moved the assignment of the thread's priority from the thread's function into __setup_irq(). That function may run in user context, for instance if the user opens a UART node and the driver then requests the irq in its ->open() callback. That user may not have CAP_SYS_NICE and so the irq thread won't run with the SCHED_OTHER policy.
This patch uses sched_setscheduler_nocheck() so we omit the CAP_SYS_NICE check which is otherwise required for the SCHED_OTHER policy.
[bigeasy: Rewrite the changelog]
Signed-off-by: Thomas Pfaff <tpfaff@pcs.com>
Cc: Ivo Sieben <meltedpianoman@gmail.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1381489240-29626-1-git-send-email-bigeasy@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 18 Oct 2013, 1 commit
-
-
Committed by Xie XiuQi

Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
[jkosina@suse.cz: fix 'explicitly', noticed by Randy Dunlap]
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 28 Jun 2013, 1 commit
-
-
Committed by Ben Hutchings

Commit 02725e74 ('genirq: Use irq_get/put functions') inadvertently changed can_request_irq() to return 0 for IRQs that have no action. This causes pcibios_lookup_irq() to select only IRQs that already have an action with IRQF_SHARED set, or to fail if there are none. Change can_request_irq() to return 1 for IRQs that have no action (if the first two conditions are met).
Reported-by: Bjarni Ingi Gislason <bjarniig@rhi.hi.is>
Tested-by: Bjarni Ingi Gislason <bjarniig@rhi.hi.is> (against 3.2)
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: 709647@bugs.debian.org
Cc: stable@vger.kernel.org # 2.6.39+
Link: http://bugs.debian.org/709647
Link: http://lkml.kernel.org/r/1372383630.23847.40.camel@deadeye.wl.decadent.org.uk
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 Jun 2013, 1 commit
-
-
Committed by Ivo Sieben

When a threaded irq handler is installed, the irq thread is initially created with normal scheduling priority. Only after the irq thread is woken up does it set its priority to RT_FIFO MAX_USER_RT_PRIO/2 itself. This means that interrupts occurring directly after the irq handler is installed will be handled at normal scheduling priority instead of the realtime priority one would expect. Fix this by setting the RT priority on creation of the irq_thread.
Signed-off-by: Ivo Sieben <meltedpianoman@gmail.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1370254322-17240-1-git-send-email-meltedpianoman@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Feb 2013, 1 commit
-
-
Committed by Chris Metcalf

These functions are used by the tilegx onchip network driver, and it's useful to be able to load that driver as a module.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Link: http://lkml.kernel.org/r/201302012043.r11KhNZF024371@farm-0021.internal.tilera.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 08 Feb 2013, 1 commit
-
-
Committed by Clark Williams

Move the rt scheduler definitions out of include/linux/sched.h into the new file include/linux/sched/rt.h.
Signed-off-by: Clark Williams <williams@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20130207094707.7b9f825f@riff.lan
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 19 Dec 2012, 1 commit
-
-
Committed by Alan Cox

The array check is useless, so remove it.
[akpm@linux-foundation.org: remove comment, per David]
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 Nov 2012, 1 commit
-
-
Committed by Thomas Gleixner

Sankara reported that the genirq core code fails to adjust the affinity of an interrupt thread in several cases:
1) On request/setup_irq() the call to setup_affinity() happens before the new action is registered, so the new thread is not notified.
2) For secondary shared interrupts nothing notifies the new thread to change its affinity.
3) Interrupts which have the IRQ_NO_BALANCE flag set are not moving the thread either.
Fix this by setting the thread affinity flag right at thread creation time. This ensures that under all circumstances the thread moves to the right place. Requires a check in irq_thread_check_affinity for an existing affinity mask (CONFIG_CPU_MASK_OFFSTACK=y).
Reported-and-tested-by: Sankara Muthukrishnan <sankara.m@gmail.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1209041738200.2754@ionos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 01 Nov 2012, 2 commits
-
-
Committed by Sankara Muthukrishnan

As irq_thread_check_affinity is called ONLY inside the while loop in the irq thread, the core affinity is set only when an interrupt occurs. This patch sets the core affinity right after the irq thread is created and before it waits for interrupts. In real-time targets that do not typically change the core affinity of irqs during run-time, this patch will save the additional latency of an irq thread setting the core affinity during the first interrupt occurrence for that irq.
Signed-off-by: Sankara S Muthukrishnan <sankara.m@ni.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/CAFQPvXeVZ858WFYimEU5uvLNxLDd6bJMmqWihFmbCf3ntokz0A@mail.gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner
Attempts to retrigger nested threaded IRQs currently fail because they have no primary handler. In order to support retrigger of nested IRQs, the parent IRQ needs to be retriggered. To fix, when an IRQ needs to be resent, if the interrupt has a parent IRQ and runs in the context of the parent IRQ, then resend the parent.
Also, handle_nested_irq() needs to clear the replay flag like the other handlers, otherwise check_irq_resend() will set it and it will never be cleared. Without clearing, it results in the first resend working fine, but check_irq_resend() returning early on subsequent resends because the replay flag is still set.
Problem discovered on ARM/OMAP platforms where a nested IRQ that's also a wakeup IRQ happens late in suspend and needed to be retriggered during the resume process.
[khilman@ti.com: changelog edits, clear IRQS_REPLAY in handle_nested_irq()]
Reported-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1350425269-11489-1-git-send-email-khilman@deeprootsystems.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 25 Jul 2012, 1 commit
-
-
Committed by Thomas Gleixner

Some interrupt chips, like MSI, are oneshot safe by implementation. For those interrupts we can avoid the mask/unmask sequence for threaded interrupt handlers.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1207132056540.32033@ionos
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Jan Kiszka <jan.kiszka@web.de>
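Below is a minimal sketch of how a chip would declare this property. The chip is hypothetical, and it assumes the mechanism introduced here is the IRQCHIP_ONESHOT_SAFE chip flag.

```c
#include <linux/irq.h>

/*
 * Hypothetical MSI-style chip: marking it IRQCHIP_ONESHOT_SAFE (assumed
 * to be the flag this change introduces) lets the core skip the
 * mask/unmask sequence that IRQF_ONESHOT normally imposes around
 * threaded handlers, because the interrupt cannot re-fire on its own.
 */
static struct irq_chip example_msi_chip = {
	.name	= "example-msi",
	.flags	= IRQCHIP_ONESHOT_SAFE,
};
```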
-
- 23 Jul 2012, 2 commits
-
-
Committed by Al Viro

task_work and rcu_head are identical now; merge them (calling the result struct callback_head, with rcu_head #define'd to it) and kill the separate allocation in security/keys, since we can just use cred->rcu now.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
get rid of the only user of ->data; this is _not_ the final variant - in the end we'll have task_work and rcu_head identical and just use cred->rcu, at which point the separate allocation will be gone completely.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 19 Jul 2012, 1 commit
-
-
Committed by Theodore Ts'o

With the new interrupt sampling system, we are no longer using the timer_rand_state structure in the irq descriptor, so we can stop initializing it now.
[ Merged in fixes from Sedat to find some last missing references to rand_initialize_irq() ]
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
-
- 01 Jun 2012, 1 commit
-
-
Committed by Andrew Morton

Use the module-wide pr_fmt() mechanism rather than open-coding "genirq: " everywhere.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
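Below is a minimal sketch of the pr_fmt() mechanism being referred to. The reporting function is hypothetical; the define-before-include pattern is the standard kernel convention.

```c
/* Must be defined before the printk headers are pulled in. */
#define pr_fmt(fmt) "genirq: " fmt

#include <linux/printk.h>

static void example_report_mismatch(unsigned int irq)
{
	/* Prints "genirq: Flags mismatch irq N" without open-coding the prefix. */
	pr_err("Flags mismatch irq %u\n", irq);
}
```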
-
- 25 May 2012, 1 commit
-
-
Committed by Jiang Liu

All invocations of chip->irq_set_affinity() are doing the same return value checks. Let them all use a common function.
[ tglx: removed the silly likely while at it ]
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Keping Chen <chenkeping@huawei.com>
Link: http://lkml.kernel.org/r/1333120296-13563-3-git-send-email-jiang.liu@huawei.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 24 May 2012, 1 commit
-
-
Committed by Oleg Nesterov

exit_irq_thread() and task->irq_thread are needed to handle the unexpected (and unlikely) exit of the irq thread. We can use task_work instead and make this all private to kernel/irq/manage.c; a cleanup plus a micro-optimization.
1. Rename exit_irq_thread() to irq_thread_dtor(), make it static, and move it up before irq_thread().
2. Change irq_thread() to do task_work_add(irq_thread_dtor) at the start and task_work_cancel() before return. tracehook_notify_resume() can never play with kthreads; only do_exit()->exit_task_work() can call the callback, and this is what we want.
3. Remove task_struct->irq_thread and the special hook in do_exit().
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alexander Gordeev <agordeev@redhat.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Smith <dsmith@redhat.com>
Cc: "Frank Ch. Eigler" <fche@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 22 May 2012, 1 commit
-
-
Committed by Richard Weinberger

As its only user (UML) no longer needs it, we can get rid of it.
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Apr 2012, 2 commits
-
-
Committed by Thomas Gleixner

We require that shared interrupts agree on a few flag settings. Right now we silently return with an error code without giving any hint why we reject the request. Make the printout unconditional and actually useful by printing the flags of the new and the already registered action. Convert all printks to pr_* and use a proper prefix while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

Requesting a threaded interrupt without a primary handler and without IRQF_ONESHOT set is dangerous. The core will use the default primary handler for it, which merely wakes the thread. For a level-type interrupt this results in an interrupt storm, because the interrupt line is reenabled after the primary handler runs. The device still has the line asserted, which brings us back into the primary handler.
While this works for edge-type interrupts, we play it safe and reject unconditionally, because we can't say for sure which type this interrupt really has. The type flags are unreliable, as the underlying chip implementation can override them. And we cannot assume that developers using that interface know what they are doing.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
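Below is a minimal sketch of the request pattern this check enforces. The thread function and device are hypothetical; request_threaded_irq() and IRQF_ONESHOT are the established interface.

```c
#include <linux/interrupt.h>

/* Hypothetical threaded handler; may sleep and talk to the device. */
static irqreturn_t example_thread_fn(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

/*
 * With no primary handler the core installs its default one, which only
 * wakes the thread, so IRQF_ONESHOT is mandatory to keep a
 * level-triggered line masked until the thread has run.  Omitting it
 * now makes the request fail instead of risking an interrupt storm.
 */
static int example_request(unsigned int irq, void *dev)
{
	return request_threaded_irq(irq, NULL, example_thread_fn,
				    IRQF_ONESHOT, "example", dev);
}
```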
-