- 16 July 2014, 6 commits
-
-
Submitted by Lars-Peter Clausen
The field is completely unused, remove it. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Lars-Peter Clausen
The dmac_reset() callback of the pl330_info struct is always set to NULL, so remove it. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Lars-Peter Clausen
The pl330_chanstatus struct is completely unused, so remove it. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Lars-Peter Clausen
The settings for destination and source cache control are exactly the same. This patch removes the duplicated enum and uses the same one for both destination and source cache control. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Lars-Peter Clausen
The pl330 driver has the custom pl330_reqtype enum, which has the same possible settings as the generic dma_transfer_direction enum. Switching over to the generic enum internally makes it possible to initialize it directly from the transfer request direction. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Wolfram Sang
To be able to see debug messages during boot, enable the debug settings from Kconfig also for drivers in subdirectories. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 15 July 2014, 5 commits
-
-
Submitted by Joe Perches
Use the zeroing function instead of dma_alloc_coherent & memset(,0,). Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
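For illustration, a minimal sketch of the kind of conversion this describes (the helper names and parameters are placeholders, not taken from the patch):

```c
#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Before the cleanup: allocate, then zero by hand. */
static void *alloc_desc_ring_old(struct device *dev, size_t size,
				 dma_addr_t *handle)
{
	void *buf = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);

	if (buf)
		memset(buf, 0, size);
	return buf;
}

/* After the cleanup: one call returns already-zeroed memory. */
static void *alloc_desc_ring_new(struct device *dev, size_t size,
				 dma_addr_t *handle)
{
	return dma_zalloc_coherent(dev, size, handle, GFP_KERNEL);
}
```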
-
由 Andy Gross 提交于
This patch adds support for end of transaction (EOT) and notify when done (NWD) hardware descriptor flags. The EOT flag requests that the peripheral assert an end of transaction interrupt when that descriptor is complete. It also results in special signaling protocol that is used between the attached peripheral and the core using the DMA controller. Clients will specify DMA_PREP_INTERRUPT to enable this flag. The NWD flag requests that the peripheral wait until the data has been fully processed by the peripheral before moving on to the next descriptor. Clients will specify DMA_PREP_FENCE to enable this flag. Signed-off-by: NAndy Gross <agross@codeaurora.org> Signed-off-by: NVinod Koul <vinod.koul@intel.com>
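A hypothetical client-side sketch of how these flags are requested through the dmaengine API (the channel, scatterlist and direction are placeholders):

```c
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Queue a slave transfer asking for EOT (DMA_PREP_INTERRUPT) and
 * NWD (DMA_PREP_FENCE) behaviour as described above. */
static int queue_tx(struct dma_chan *chan, struct scatterlist *sgl,
		    unsigned int sg_len)
{
	struct dma_async_tx_descriptor *txd;
	dma_cookie_t cookie;

	txd = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
				      DMA_PREP_INTERRUPT | DMA_PREP_FENCE);
	if (!txd)
		return -ENOMEM;

	cookie = dmaengine_submit(txd);
	dma_async_issue_pending(chan);
	return dma_submit_error(cookie);
}
```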
-
Submitted by Hongbo Zhang
Fix a potential risk when the NET_DMA and ASYNC_TX configs are enabled. The current descriptor release process lacks async_tx support: all descriptors are released regardless of whether async_tx has acked them, so there is a potential race condition when the DMA engine is used by other clients (e.g. when NET_DMA is enabled to offload TCP). In our case, a race condition arises when both talitos and dmaengine are used to offload xor, because the NAPI scheduler syncs all pending requests in the DMA channels; this disrupts raid operations because the ack flag is not checked in fsl-dma. A not-yet-acked descriptor that was just submitted is freed; as a dependent tx, this freed descriptor triggers BUG_ON(async_tx_test_ack(depend_tx)) in async_tx_submit().
TASK = ee1a94a0[1390] 'md0_raid5' THREAD: ecf40000 CPU: 0
GPR00: 00000001 ecf41ca0 ee1a94a0 0000003f 00000001 c00593e4 00000000 00000001
GPR08: 00000000 a7a7a7a7 00000001 00000002 42028042 100a38d4 ed576d98 00000000
GPR16: ed5a11b0 00000000 2b162000 00000200 00000000 2d555000 ed3015e8 c15a7aa0
GPR24: 00000000 c155fc40 00000000 ecb63220 ecf41d28 ef640bb0 ef640c30 ecf41ca0
NIP [c02b048c] async_tx_submit+0x6c/0x2b4
LR [c02b068c] async_tx_submit+0x26c/0x2b4
Call Trace:
[ecf41ca0] [c02b068c] async_tx_submit+0x26c/0x2b4 (unreliable)
[ecf41cd0] [c02b0a4c] async_memcpy+0x240/0x25c
[ecf41d20] [c0421064] async_copy_data+0xa0/0x17c
[ecf41d70] [c0421cf4] __raid_run_ops+0x874/0xe10
[ecf41df0] [c0426ee4] handle_stripe+0x820/0x25e8
[ecf41e90] [c0429080] raid5d+0x3d4/0x5b4
[ecf41f40] [c04329b8] md_thread+0x138/0x16c
[ecf41f90] [c008277c] kthread+0x8c/0x90
[ecf41ff0] [c0011630] kernel_thread+0x4c/0x68
Another modification in this patch changes the handling of completed descriptors; there is a potential risk caused by exception interrupts. All descriptors in the ld_running list are treated as completed when an interrupt is raised. This works fine under normal conditions, but if an exception occurs it does not work as expected. The hardware state should not be inferred from a software list; the right way is to read the current descriptor address register to find the last completed descriptor. If an interrupt is raised by an error, the descriptors in ld_running must not be treated as finished, otherwise these unfinished descriptors will be released wrongly. A simple way to reproduce: enable dmatest first, then insert some bad descriptors that trigger Programming Error interrupts before the good descriptors. The good descriptors will then be freed before they are processed because of the exception interrupt. Note: the bad descriptors are only for simulating an exception interrupt; this case illustrates the potential risk in the current fsl-dma driver very well. Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com> Signed-off-by: Qiang Liu <qiang.liu@freescale.com> Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Hongbo Zhang
This patch adds suspend and resume functions for the Freescale DMA driver. Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Hongbo Zhang
The usage of spin_lock_irqsave() is a stronger locking mechanism than is required throughout the driver; the minimum locking required should be used instead. spin_lock_irqsave() turns interrupts off and saves the interrupt context, which is unnecessary here. This patch changes all instances of spin_lock_irqsave() to spin_lock_bh(). All manipulation of the protected fields is done in tasklet context or weaker, which makes spin_lock_bh() the correct choice. Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com> Signed-off-by: Qiang Liu <qiang.liu@freescale.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
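A sketch of the conversion (the channel type and lock name are illustrative stand-ins for the driver's own):

```c
#include <linux/spinlock.h>

struct my_chan {			/* hypothetical stand-in for the driver's channel */
	spinlock_t desc_lock;
	/* ... descriptor lists ... */
};

/* Before: disables interrupts and saves flags, stronger than needed. */
static void touch_lists_old(struct my_chan *chan)
{
	unsigned long flags;

	spin_lock_irqsave(&chan->desc_lock, flags);
	/* ... manipulate descriptor lists ... */
	spin_unlock_irqrestore(&chan->desc_lock, flags);
}

/* After: bottom-half locking is enough because all users run in
 * tasklet context or weaker. */
static void touch_lists_new(struct my_chan *chan)
{
	spin_lock_bh(&chan->desc_lock);
	/* ... manipulate descriptor lists ... */
	spin_unlock_bh(&chan->desc_lock);
}
```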
-
- 01 July 2014, 2 commits
-
-
Submitted by Russell King - ARM Linux
I received a report this morning from one of the Novena developers that the behaviour of the iMX6 ASoC codec driver (using imx-pcm-dma.c) was sub-optimal under high system load. While issues relating to system load remain, reviewing the ASoC imx-pcm-dma.c driver showed that it is not using the residue support, because SDMA doesn't provide it. This has the effect that SDMA has to make multiple calls into the ASoC and ALSA code, one for each period. Since ALSA's snd_pcm_period_elapsed() does not need to be called multiple times, and it is entirely sufficient to call it once and update ALSA with the current buffer position via the pointer method, we can do better here. We can also avoid stopping the DMA entirely, just like real cyclic DMA implementations behave. While this means that we replay some old samples, this is nicer behaviour than having audio stop and restart. The changes to achieve this are relatively minor: imx-sdma.c can track where the DMA is to the nearest descriptor boundary, as it already does when deciding how many callbacks to issue. In doing this, buf_tail always points at the descriptor which will complete next. The residue is defined by the bytes remaining to the end of the buffer, when the buffer is viewed as a single block of memory [start...end]. So, when we start out, there's a full buffer worth of residue, and this counts down as we approach the end of the buffer, eventually becoming zero at the end, before returning to the full buffer worth when we wrap back to the start. Moving the walking of the descriptors into the interrupt handler means that we can update the BD_DONE flag at interrupt time, thus avoiding a delayed tasklet stopping the cyclic DMA. This means that the residue can be calculated from (total descriptors - buf_tail) * descriptor size. This is what the change below does. We update imx-pcm-dma.c to remove the NO_RESIDUE flag since we now provide the residue. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
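A minimal sketch of the residue arithmetic described above (the parameter names are illustrative, not quoted from the patch): buf_tail always points at the descriptor that completes next, so the bytes remaining to the end of the buffer are simply the remaining descriptors times the period size.

```c
#include <linux/types.h>

/* residue = (total descriptors - buf_tail) * descriptor (period) size */
static size_t cyclic_residue(unsigned int num_bd, unsigned int buf_tail,
			     size_t period_len)
{
	return (num_bd - buf_tail) * period_len;
}
```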
-
Submitted by Daniel Mack
When a 0-length packet is received on the bus, desc->pd0 yields 1, which confuses the driver's users. This information is clearly wrong and not in accordance with the datasheet, but it has been observed on an AM335x board and is very reproducible. Fix this by looking at bit 19 in PD2 of the completed packet. This bit tells us whether a zero-length packet was received on a queue. If it's set, ignore the value in PD0 and report a total length of 0 instead. Signed-off-by: Daniel Mack <zonque@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
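A sketch of the length check described above; the descriptor layout and the PD0 length mask are placeholders, only the "bit 19 in PD2 marks a zero-length packet" part comes from the commit text:

```c
#include <linux/bitops.h>
#include <linux/types.h>

struct cppi_desc_sketch {		/* hypothetical view of the HW descriptor */
	u32 pd0;
	u32 pd2;
};

#define PD0_LEN_MASK	GENMASK(21, 0)	/* placeholder length field */

static u32 completed_len(const struct cppi_desc_sketch *d)
{
	if (d->pd2 & BIT(19))		/* zero-length packet received */
		return 0;
	return d->pd0 & PD0_LEN_MASK;	/* otherwise trust the PD0 length */
}
```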
-
- 03 June 2014, 3 commits
-
-
Submitted by Vinod Koul
Dynamic stack allocation in the kernel is considered bad, as the kernel stack is small, and we get warnings on a few archs, as reported by the kbuild test robot: >> drivers/dma/sh/shdma-base.c:671:32: sparse: Variable length array is used. >> drivers/dma/sh/shdma-base.c:701:1: warning: 'shdma_prep_dma_cyclic' uses >> dynamic stack allocation [enabled by default] Fix this by making a static array of 32, which should be sufficient for shdma_prep_dma_cyclic: its only in-kernel user is audio, and 32 periods seems quite sufficient for audio at the moment. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
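A sketch of the pattern behind the fix; the constant name and the surrounding function are illustrative, only the fixed bound of 32 comes from the text:

```c
#include <linux/scatterlist.h>
#include <linux/errno.h>

#define CYCLIC_MAX_SG	32	/* 32 periods is plenty for the audio use case */

static int prep_cyclic_sketch(size_t buf_len, size_t period_len)
{
	unsigned int sg_len = buf_len / period_len;
	struct scatterlist sgl[CYCLIC_MAX_SG];	/* fixed size, no VLA */

	if (!sg_len || sg_len > CYCLIC_MAX_SG)
		return -EINVAL;

	sg_init_table(sgl, sg_len);
	/* ... fill sgl[] and hand it to the slave_sg preparation path ... */
	return 0;
}
```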
-
Submitted by Vinod Koul
As documented in Documentation/printk-formats.txt, we should use the %zu/%zx specifiers for size_t variables for the code to compile on different architectures. This was uncovered as COMPILE_TEST has recently been enabled for this driver: drivers/dma/sh/shdma-base.c: In function 'shdma_prep_dma_cyclic': >> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of >> type 'int', but argument 4 has type 'size_t' [-Wformat=] __func__, buf_len, period_len, slave_id); >> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of >> type 'int', but argument 5 has type 'size_t' [-Wformat=] Reported-by: kbuild test robot <fengguang.wu@intel.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
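A small illustration of the format-specifier rule (the message itself is made up for the example):

```c
#include <linux/device.h>
#include <linux/types.h>

static void log_lengths(struct device *dev, size_t buf_len, size_t period_len)
{
	/* "%d" would warn on 64-bit builds; "%zu" matches size_t everywhere. */
	dev_warn(dev, "%s: buf_len=%zu period_len=%zu\n",
		 __func__, buf_len, period_len);
}
```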
-
Submitted by Vinod Koul
The kbuild test robot reports that shdma_prep_dma_cyclic should be static, since the symbol is not declared; a quick check reveals that this is the case: >> drivers/dma/sh/shdma-base.c:660:32: sparse: symbol 'shdma_prep_dma_cyclic' >> was not declared. Should it be static? Reported-by: kbuild test robot <fengguang.wu@intel.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 02 June 2014, 10 commits
-
-
Submitted by Fabio Estevam
The APBX-DMA block is also found on MX6Q/MX6DL chips. Update the help text accordingly. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
This helps increase build-testing coverage. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Acked-by: Simon Horman <horms@verge.net.au> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
linux/err.h isn't implicitly included by the current headers on all platforms, resulting in compilation failures due to implicit declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
linux/err.h isn't implicitly included by the current headers on all platforms, resulting in compilation failures due to implicit declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
This helps detect duplicate includes. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
linux/err.h isn't implicitly included by the current headers on all platforms, resulting in compilation failures due to implicit declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Laurent Pinchart
This helps detect duplicate includes. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Vasily Khoruzhick
Many audio interface drivers require support for cyclic transfers to work correctly, for example the Samsung ASoC DMA driver. This patch adds support for cyclic transfers to the s3c24xx-dma driver. Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
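A hypothetical audio-style client of such cyclic support, using the standard dmaengine API (channel, buffer address and lengths are placeholders):

```c
#include <linux/dmaengine.h>
#include <linux/errno.h>

static int start_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
			size_t buf_len, size_t period_len)
{
	struct dma_async_tx_descriptor *txd;

	/* One interrupt per completed period, wrapping forever. */
	txd = dmaengine_prep_dma_cyclic(chan, buf_addr, buf_len, period_len,
					DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!txd)
		return -EINVAL;

	dmaengine_submit(txd);
	dma_async_issue_pending(chan);
	return 0;
}
```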
-
Submitted by Vasily Khoruzhick
Due to a redundant 'break' in the loop, the driver processed only the first chunk. Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Jiada Wang
In the cyclic DMA tx handler sdma_handle_channel_loop(), the SDMA channel status is set to either DMA_ERROR or DMA_IN_PROGRESS based on each period's status. This has the following issues: 1) If one period's status is BD_RROR, the channel status will be set to DMA_ERROR, but it will be overwritten to DMA_IN_PROGRESS if the following periods are OK. 2) A DMA client may call sdma_control(DMA_TERMINATE_ALL) to stop the cyclic DMA operation, setting the channel status to DMA_ERROR; but if this handler runs afterwards, the channel status will again be overwritten to DMA_IN_PROGRESS, and subsequent dmaengine_prep_dma_cyclic() calls will always fail because the channel status is DMA_IN_PROGRESS. Since in cyclic DMA tx the channel status is initially set to DMA_IN_PROGRESS, the driver only needs to change it to DMA_ERROR when something goes wrong (one period's status is wrong, or the client stops the channel explicitly). Signed-off-by: Jiada Wang <jiada_wang@mentor.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
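A simplified sketch of the status handling described above (the channel struct and the per-period error flag are illustrative). The key point: the status starts as DMA_IN_PROGRESS and is only ever changed to DMA_ERROR here, never set back to DMA_IN_PROGRESS.

```c
#include <linux/dmaengine.h>
#include <linux/types.h>

struct cyclic_chan_sketch {		/* hypothetical channel state */
	enum dma_status status;		/* DMA_IN_PROGRESS when the cyclic tx starts */
};

static void handle_period(struct cyclic_chan_sketch *c, bool period_had_error)
{
	if (period_had_error)
		c->status = DMA_ERROR;
	/*
	 * No else branch: a good period must not overwrite a previous error,
	 * nor a DMA_ERROR set by the client via DMA_TERMINATE_ALL.
	 */
}
```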
-
- 22 May 2014, 8 commits
-
-
Submitted by Vinod Koul
Commit 4828b493 introduced COMPILE_TEST for this driver, and this causes a compile failure on alpha, as kzalloc wasn't available for this arch through the included headers, so explicitly include slab.h. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Andy Shevchenko
dma_async_device_register() may return a non-zero error code. In such a case we have to follow the error path. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
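A sketch of the error handling this implies; the label name and the cleanup being undone are placeholders for the driver's own probe steps:

```c
#include <linux/dmaengine.h>

static int probe_tail(struct dma_device *ddev)
{
	int ret;

	ret = dma_async_device_register(ddev);	/* previously the return value was ignored */
	if (ret)
		goto err_cleanup;

	return 0;

err_cleanup:
	/* undo whatever the earlier probe steps set up (clocks, irqs, ...) */
	return ret;
}
```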
-
Submitted by Andy Shevchenko
The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted the probe function to use devm_* helpers and simultaneously introduced a regression. We have to 1) call clk_disable_unprepare() on the error path, and 2) check the error code of clk_prepare_enable(). The first part was done in the original code; the second one is an update. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
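A sketch of both points; the helper and the "rest of probe" step are hypothetical stand-ins:

```c
#include <linux/clk.h>
#include <linux/device.h>

static int do_rest_of_probe(struct device *dev)
{
	/* stand-in for the remaining probe steps */
	return 0;
}

static int probe_clk_sketch(struct device *dev, struct clk *clk)
{
	int err;

	err = clk_prepare_enable(clk);	/* (1) the return value is now checked */
	if (err)
		return err;

	err = do_rest_of_probe(dev);
	if (err)
		goto err_clk;

	return 0;

err_clk:
	clk_disable_unprepare(clk);	/* (2) undo the enable on the error path */
	return err;
}
```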
-
Submitted by Andy Shevchenko
The hclk signal is a bus clock, which means we have to have it enabled during access to the DMA controller. This patch makes sure that we enable the clock before accessing the device, though it currently works on Intel hardware. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Jean Delvare
The pch_dma driver is for a companion chip to the Intel Atom E600 series processors. These are 32-bit x86 processors, so the driver is only needed on X86_32. Add COMPILE_TEST as an alternative, so that the driver can still be build-tested elsewhere. Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Alexander Popov
Introduce support for slave s/g transfer preparation and the associated device control callback in the MPC512x DMA controller driver, which adds support for data transfers between memory and peripheral I/O to the previously supported mem-to-mem transfers. Signed-off-by: Alexander Popov <a13xp0p0v88@gmail.com> [fixed subsystem name] Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Xuelin Shi
The count which is used by get_unmap_data may not be the same as the count computed in dmaengine_unmap(), which causes data to be freed into the wrong pool. This patch fixes the issue by keeping the map count in the unmap_data structure and using this count to pick the pool. Cc: <stable@vger.kernel.org> Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
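A sketch of the idea, assuming a mempool-per-size-class layout: record the requested count in the unmap structure at allocation time and reuse that same count when picking the pool to free into, instead of recomputing it. All names and thresholds below are illustrative.

```c
#include <linux/mempool.h>
#include <linux/types.h>

struct unmap_sketch {
	u8 map_cnt;			/* count recorded when the data was allocated */
	/* ... mapped addresses and lengths ... */
};

static mempool_t *pools[4];		/* hypothetical pools, indexed by size class */

static mempool_t *pool_for(u8 nr)
{
	if (nr <= 2)
		return pools[0];
	if (nr <= 16)
		return pools[1];
	if (nr <= 128)
		return pools[2];
	return pools[3];
}

static void free_unmap(struct unmap_sketch *u)
{
	/* free into the pool chosen by the recorded count, not a recomputed one */
	mempool_free(u, pool_for(u->map_cnt));
}
```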
-
Submitted by Ezequiel Garcia
We need to use writel() instead of writel_relaxed() when starting a channel, to ensure all the descriptors have been flushed before the activation. While at it, remove the unneeded read-modify-write and make the code simpler. Cc: <stable@vger.kernel.org> Signed-off-by: Lior Amsalem <alior@marvell.com> Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
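A sketch of the activation change; the register offset and start bit are placeholders. The point is that writel() carries the barrier guaranteeing that descriptors written beforehand are visible to the engine before the start bit is set.

```c
#include <linux/io.h>
#include <linux/bitops.h>

#define XOR_ACTIVATION	0x20	/* hypothetical register offset */

static void start_channel(void __iomem *base)
{
	/* before: readl_relaxed() + OR in the start bit + writel_relaxed() */
	/* after: a single ordered write of the start bit */
	writel(BIT(0), base + XOR_ACTIVATION);
}
```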
-
- 21 May 2014, 2 commits
-
-
Submitted by Arnd Bergmann
The sa11x0_dma_pm_ops structure unconditionally references sa11x0_dma_resume and sa11x0_dma_suspend, which currently breaks if CONFIG_PM_SLEEP is disabled. There is probably a better way to remove the reference in this case, but the safe choice is to always build the suspend/resume code into the driver. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: dmaengine@vger.kernel.org Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Jingoo Han
Don't use the DEFINE_PCI_DEVICE_TABLE macro, because this macro is deprecated. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
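For reference, the preferred form is a plain const pci_device_id table (the vendor/device ID below is a made-up example):

```c
#include <linux/pci.h>
#include <linux/module.h>

static const struct pci_device_id example_pci_ids[] = {
	{ PCI_VDEVICE(INTEL, 0x1234) },		/* hypothetical device ID */
	{ }					/* terminating entry */
};
MODULE_DEVICE_TABLE(pci, example_pci_ids);
```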
-
- 07 May 2014, 4 commits
-
-
Submitted by Andy Shevchenko
The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted the probe function to use devm_* helpers and simultaneously introduced a regression. We need to ensure the irq is disabled and that no more tasklets can be scheduled; only then is it safe to use tasklet_kill(). free_irq() will ensure that the irq is disabled and will also wait until all running interrupt handlers have finished by invoking synchronize_irq(). So tasklet_kill() should only be called after invoking free_irq(). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: stable@vger.kernel.org # v3.11+ Signed-off-by: Vinod Koul <vinod.koul@intel.com>
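A sketch of the remove-path ordering described above (the private-data layout is illustrative):

```c
#include <linux/interrupt.h>

struct dw_sketch {
	int irq;
	struct tasklet_struct tasklet;
};

static void remove_sketch(struct dw_sketch *dw)
{
	/*
	 * free_irq() disables the irq and waits (via synchronize_irq()) for
	 * any running handler, so after it returns no new tasklets can be
	 * scheduled by the interrupt handler.
	 */
	free_irq(dw->irq, dw);

	/* only now is it safe to kill the tasklet for good */
	tasklet_kill(&dw->tasklet);
}
```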
-
Submitted by Ulf Hansson
Clients may still be active in the early phase of system PM, so we need to move the suspend operations to the late system PM phase. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
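A sketch of one way this is commonly expressed; the callback names are placeholders and the patch may wire things differently. SET_LATE_SYSTEM_SLEEP_PM_OPS() hooks the callbacks into .suspend_late/.resume_early instead of .suspend/.resume, so clients that are still active early in system suspend are not cut off.

```c
#include <linux/pm.h>
#include <linux/device.h>

static int my_dma_suspend_late(struct device *dev)
{
	/* quiesce the controller once clients have suspended */
	return 0;
}

static int my_dma_resume_early(struct device *dev)
{
	/* bring the controller back before clients resume */
	return 0;
}

static const struct dev_pm_ops my_dma_pm_ops = {
	SET_LATE_SYSTEM_SLEEP_PM_OPS(my_dma_suspend_late, my_dma_resume_early)
};
```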
-
Submitted by Daniel Mack
A channel can accommodate more than one transaction, each consisting of multiple descriptors, the last of which has the DCMD_ENDIRQEN bit set. In order to report the channel's residue, we hence have to walk the list of running descriptors, look for those which match the cookie, and then try to find the descriptor which defines the upper and lower boundaries that embrace the current transport pointer. Once it is found, walk forward until we find the descriptor that tells us about the end of a transaction via a set DCMD_ENDIRQEN bit. Signed-off-by: Daniel Mack <zonque@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Ulf Hansson
Make sure to handle register context save/restore when needed from the system PM callbacks. Previously we solely trusted the device to reside in an inactive state while the system suspend callbacks were invoked, which is just too optimistic. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-