- 08 June 2012, 1 commit
-
-
Committed by Laxman Dewangan
DMA controllers such as NVIDIA's Tegra DMA controller support a different slave requestor ID for each slave. This needs to be configured in the DMA controller so that it can handle the requests properly. Add the slave-id to the slave configuration so that this information can be passed from the client when configuring a slave channel. Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
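A minimal sketch of how a client might pass the new slave-id through dma_slave_config; the requestor ID and FIFO address values are hypothetical, and the exact field set depends on the kernel version.

```c
#include <linux/dmaengine.h>

/* Hypothetical client setup: tell the DMA controller which slave
 * requestor line this channel should service. */
static int example_config_slave_id(struct dma_chan *chan)
{
	struct dma_slave_config cfg = {
		.direction      = DMA_MEM_TO_DEV,
		.dst_addr       = 0x70002000,			/* hypothetical device FIFO address */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst   = 4,
		.slave_id       = 2,				/* hypothetical requestor id */
	};

	/* dmaengine_slave_config() forwards the struct to the DMA driver. */
	return dmaengine_slave_config(chan, &cfg);
}
```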
-
- 01 June 2012, 1 commit
-
-
Committed by Alexandre Bounine
Adds DMA Engine framework support to the RapidIO subsystem. Uses the DMA Engine DMA_SLAVE interface to generate data transfers to/from remote RapidIO target devices. Introduces a RapidIO-specific wrapper for the prep_slave_sg() interface with an extra parameter to pass target-specific information. Uses a scatterlist to describe the local data buffer and addresses a flat data buffer on the remote side. Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com> Cc: Dan Williams <dan.j.williams@intel.com> Acked-by: Vinod Koul <vinod.koul@linux.intel.com> Cc: Li Yang <leoli@freescale.com> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 May 2012, 1 commit
-
-
Committed by Kuninori Morimoto
dmaengine_prep_slave_single() is a helper function which is supposed to be used to prepare a transfer of a single contiguous buffer. Currently the function takes a pointer to such a buffer, from which it builds a scatterlist and passes it on to device_prep_slave_sg. The dmaengine framework requires that any scatterlist passed to device_prep_slave_sg is mapped, and it may not be unmapped until the DMA operation has completed. This is not the case here, so any use of dmaengine_prep_slave_single() will lead to undefined behaviour (most likely a system crash). This patch changes dmaengine_prep_slave_single() to take a dma_addr_t instead of a pointer to a buffer and moves the responsibility of mapping and unmapping the buffer up to the caller. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
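A sketch of the resulting calling convention, where the client maps the buffer itself and hands the dma_addr_t to the helper; the device, buffer and flag choices here are illustrative.

```c
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Hypothetical client: map the buffer, then prepare the slave transfer.
 * The caller now owns the mapping for the lifetime of the transfer. */
static struct dma_async_tx_descriptor *
example_prep_single(struct dma_chan *chan, struct device *dev,
		    void *buf, size_t len)
{
	dma_addr_t dma_buf;

	dma_buf = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_buf))
		return NULL;

	return dmaengine_prep_slave_single(chan, dma_buf, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
}
```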
-
- 06 April 2012, 1 commit
-
-
Committed by Dave Jiang
This is the fallout from adding a memcpy alignment workaround for certain IOATDMA hardware. NetDMA will only use a DMA engine that can handle byte-aligned operations. Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 21 March 2012, 2 commits
-
-
Committed by Alexandre Bounine
Add a context parameter to the device_prep_slave_sg() and device_prep_dma_cyclic() interfaces to allow passing client/target-specific information associated with the data transfer. Modify all affected DMA engine drivers. Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
Committed by Alexandre Bounine
Add inline wrappers for the device_prep_slave_sg() and device_prep_dma_cyclic() interfaces to hide the new parameter from current users of the affected interfaces, and convert current users to the new wrappers instead of direct calls. Suggested by Russell King [https://lkml.org/lkml/2012/2/3/269]. Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
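A sketch of what such an inline wrapper looks like, hiding the context argument from ordinary slave clients by passing NULL; the exact signature varies slightly between kernel versions, so treat this as an approximation rather than the literal header contents.

```c
/* Approximate shape of the wrapper added to linux/dmaengine.h: ordinary
 * slave users keep the old calling convention, while the driver-facing
 * device_prep_slave_sg() op gains a context pointer. */
static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
	enum dma_transfer_direction dir, unsigned long flags)
{
	return chan->device->device_prep_slave_sg(chan, sgl, sg_len,
						  dir, flags, NULL);
}
```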
-
- 13 March 2012, 2 commits
-
-
Committed by Russell King - ARM Linux
Add a local private header file to contain definitions and declarations which should only be used by DMA engine drivers. We also fix linux/dmaengine.h to use LINUX_DMAENGINE_H to guard against multiple inclusion. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> [imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
Committed by Russell King - ARM Linux
Every DMA engine implementation declares a last-completed DMA cookie in its private DMA channel structure. This is pointless and forces driver-specific code. Move this out into the common dma_chan structure. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> [imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
- 05 March 2012, 1 commit
-
-
Committed by Paul Gortmaker
If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any other BUG variant in a static inline (i.e. not in a #define) then that header really should be including <linux/bug.h> and not just expecting it to be implicitly present. We can make this change risk-free, since if the files using these headers didn't have exposure to linux/bug.h already, they would have been causing compile failures/warnings. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 22 February 2012, 1 commit
-
-
Committed by Viresh Kumar
The flow controller is programmable on a few DMA controllers, and there are intelligent peripherals, such as the Synopsys JPEG controller, that need to be the flow controller of DMA transfers on the destination side. Currently two drivers, pl08x and dw_dmac, support a flow controller passed from platform data. This should arguably be part of struct dma_slave_config, so this patch adds another field, device_fc, to that structure. Client drivers must pass this as true if they want the device to be the flow controller for certain transfers. Signed-off-by: Viresh Kumar <viresh.kumar@st.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
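A minimal sketch, assuming a client whose peripheral should act as the flow controller on the destination side; the peripheral, FIFO address and burst values are hypothetical.

```c
#include <linux/dmaengine.h>

/* Hypothetical setup for a JPEG-style peripheral that controls the flow
 * of the transfer on the destination side. */
static int example_config_flow_controller(struct dma_chan *chan,
					  dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction      = DMA_MEM_TO_DEV,
		.dst_addr       = fifo_addr,
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst   = 16,
		.device_fc      = true,	/* the peripheral is the flow controller */
	};

	return dmaengine_slave_config(chan, &cfg);
}
```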
-
- 24 December 2011, 1 commit
-
-
Committed by Shawn Guo
Before dma_transfer_direction was introduced to replace dma_data_direction, some dmaengine devices used DMA_NONE from dma_data_direction in their communication with client drivers. The mxs-dma driver and its clients mxs-mmc and gpmi-nand are such a case. This patch adds DMA_TRANS_NONE to dma_transfer_direction and migrates the DMA_NONE use in mxs-dma to it. It also fixes the compile warning below. CC drivers/dma/mxs-dma.o drivers/dma/mxs-dma.c: In function ‘mxs_dma_prep_slave_sg’: drivers/dma/mxs-dma.c:420:16: warning: comparison between ‘enum dma_transfer_direction’ and ‘enum dma_data_direction’ Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
- 18 November 2011, 1 commit
-
-
Committed by Jassi Brar
Define a new API that can be used for fancy data transfers such as interleaved-to-contiguous copies and vice versa. Traditional SG-list based transfers tend to be very inefficient in such cases, where the interleave and chunk are only a few bytes, which calls for a very condensed API to convey the pattern of the transfer. This API supports all four variants of scatter-gather and contiguous transfer. Of course, this API cannot help transfers that don't lend themselves to DMA by nature, i.e. scattered tiny reads/writes with no periodic pattern. Also, since we now support SLAVE channels that might not provide the device_prep_slave_sg callback but do provide device_prep_interleaved_dma, remove the BUG_ON check. Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org> Acked-by: Barry Song <Baohua.Song@csr.com> [renamed dmaxfer_template to dma_interleaved_template, did fixup after the enum dma_transfer_direction change] Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
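A sketch of how a client might describe an interleaved-to-contiguous copy with the new template; the field names follow the mainline dma_interleaved_template, but the allocation pattern, values and the assumption that the template may be freed once prep returns are illustrative rather than definitive.

```c
#include <linux/dmaengine.h>
#include <linux/slab.h>

/* Hypothetical: copy 'numf' frames of 'chunk' bytes each, skipping 'icg'
 * bytes of inter-chunk gap on the source side (interleaved -> contiguous). */
static struct dma_async_tx_descriptor *
example_prep_interleaved(struct dma_chan *chan, dma_addr_t src, dma_addr_t dst,
			 size_t numf, size_t chunk, size_t icg)
{
	struct dma_interleaved_template *xt;
	struct dma_async_tx_descriptor *desc;

	xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
	if (!xt)
		return NULL;

	xt->src_start   = src;
	xt->dst_start   = dst;
	xt->dir         = DMA_MEM_TO_MEM;
	xt->src_inc     = true;
	xt->dst_inc     = true;
	xt->src_sgl     = true;		/* honour the inter-chunk gap on the source */
	xt->dst_sgl     = false;	/* destination is contiguous */
	xt->numf        = numf;
	xt->frame_size  = 1;		/* one chunk per frame in this example */
	xt->sgl[0].size = chunk;
	xt->sgl[0].icg  = icg;

	desc = chan->device->device_prep_interleaved_dma(chan, xt,
							 DMA_PREP_INTERRUPT);
	kfree(xt);	/* assumes the driver copies what it needs during prep */
	return desc;
}
```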
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
The implicit presence of module.h and all its sub-includes was masking these implicit header usages: include/linux/dmaengine.h:684: warning: 'struct page' declared inside parameter list include/linux/dmaengine.h:684: warning: its scope is only this definition or declaration, which is probably not what you want include/linux/dmaengine.h:687: warning: 'struct page' declared inside parameter list include/linux/dmaengine.h:736:2: error: implicit declaration of function 'bitmap_zero' With input from Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 27 October 2011, 1 commit
-
-
Committed by Vinod Koul
This new enum removes the usage of dma_data_direction for DMA direction. The new enum cleanly conveys the DMA direction and mode. This further paves the way for merging the dmaengine _prep operations and for interleaved DMA. Suggested-by: Jassi Brar <jaswinder.singh@linaro.org> Reviewed-by: Barry Song <Baohua.Song@csr.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
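For reference, a sketch of the direction values this enum ended up carrying in mainline; DMA_TRANS_NONE was added by a later commit (shown further up this log), so treat the list as indicative rather than the exact contents at this point.

```c
/* Approximate shape of the new enum in linux/dmaengine.h */
enum dma_transfer_direction {
	DMA_MEM_TO_MEM,		/* memory-to-memory copy */
	DMA_MEM_TO_DEV,		/* slave write: memory to peripheral */
	DMA_DEV_TO_MEM,		/* slave read: peripheral to memory */
	DMA_DEV_TO_DEV,		/* peripheral to peripheral */
	DMA_TRANS_NONE,		/* added later for drivers needing a "none" value */
};
```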
-
- 16 August 2011, 1 commit
-
-
Committed by Vinod Koul
Commit 90b44f8f introduced the dmaengine_prep_slave_single() API, which added scatterlist.h to dmaengine.h, so declaring struct scatterlist is no longer required. Signed-off-by: Vinod Koul <vinod.koul@intel.com> Acked-by: Dan Williams <dan.j.williams@intel.com>
-
- 08 August 2011, 1 commit
-
-
Committed by Vinod Koul
For clients which require a single slave transfer and don't want to be bothered with the scatterlist API, this helper provides a simple API for the transfer and creates a single scatterlist for the DMA API. Idea from Russell King. Signed-off-by: Vinod Koul <vinod.koul@intel.com>
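A sketch of what such a helper plausibly looks like at this point: it wraps the buffer in a one-entry scatterlist and forwards it to the driver's device_prep_slave_sg() callback. This is an approximation; the 11 May 2012 entry earlier in this log changes the buffer argument to a dma_addr_t and moves mapping to the caller.

```c
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Approximate sketch of the helper as first introduced: build a single-entry
 * scatterlist around the buffer and hand it to the slave prep callback. */
static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
	struct dma_chan *chan, void *buf, size_t len,
	enum dma_data_direction dir, unsigned long flags)
{
	struct scatterlist sg;

	sg_init_one(&sg, buf, len);

	return chan->device->device_prep_slave_sg(chan, &sg, 1, dir, flags);
}
```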
-
- 22 June 2011, 1 commit
-
-
Committed by Alexey Dobriyan
Remove the linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually). To prevent mm.h inclusion via other channels, also extract the "enum dma_data_direction" definition into a separate header. This tiny piece is what glues netdevice.h to mm.h via "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h". Removal of mm.h from scatterlist.h was tried and found not feasible on most archs, so the link was cut off earlier. Hope people are OK with the tiny include file. Note that mm_types.h is still dragged in, but that is a separate story. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 March 2011, 1 commit
-
-
Committed by Lucas De Marchi
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
- 15 January 2011, 1 commit
-
-
Committed by Russell King - ARM Linux
desc->tx_submit's return type is dma_cookie_t, not int. Therefore, dmaengine_submit() should match this return type as it's just wrapping this detail. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 03 January 2011, 1 commit
-
-
Committed by Guennadi Liakhovetski
This lets drivers, optionally using the dmaengine, build with DMA_ENGINE unselected. Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 08 October 2010, 3 commits
-
-
Committed by Dan Williams
The majority of drivers in drivers/dma/ will never establish cross channel operation chains and do not need the extra overhead in struct dma_async_tx_descriptor. Make channel switching opt-in by default. Cc: Anatolij Gustschin <agust@denx.de> Cc: Ira Snyder <iws@ovro.caltech.edu> Cc: Linus Walleij <linus.walleij@stericsson.com> Cc: Saeed Bishara <saeed@marvell.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ira Snyder
Now that the generic DMAEngine API has support for scatterlist to scatterlist copying, the device_prep_slave_sg() portion of the DMA_SLAVE API is no longer necessary and has been removed. However, the device_control() portion of the DMA_SLAVE API is still useful to control device specific parameters, such as externally controlled DMA transfers and maximum burst length. A special dma_ctrl_cmd has been added to enable externally controlled DMA transfers. This is currently specific to the Freescale DMA controller, but can easily be made generic when another user is found. Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ira Snyder
This adds support for scatterlist to scatterlist DMA transfers. A similar interface is exposed by the fsldma driver (through the DMA_SLAVE API) and by the ste_dma40 driver (through an exported function). This patch paves the way for making this type of copy operation a part of the generic DMAEngine API. Further patches will add support in individual drivers. Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
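A sketch of the kind of operation this enables; the device_prep_dma_sg() callback name and signature reflect what eventually landed in mainline (and was later removed again), so treat the exact interface here as an assumption.

```c
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Hypothetical client: queue a scatterlist-to-scatterlist copy on a channel
 * whose driver provides the capability. */
static struct dma_async_tx_descriptor *
example_prep_sg_copy(struct dma_chan *chan,
		     struct scatterlist *dst_sg, unsigned int dst_nents,
		     struct scatterlist *src_sg, unsigned int src_nents)
{
	if (!chan->device->device_prep_dma_sg)
		return NULL;	/* capability not provided by this driver */

	return chan->device->device_prep_dma_sg(chan, dst_sg, dst_nents,
						src_sg, src_nents,
						DMA_PREP_INTERRUPT);
}
```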
-
- 06 October 2010, 2 commits
-
-
Committed by Sascha Hauer
Add wrapper functions around the dma_device->device_control function to bring back type safety. Also, add a wrapper function around dma_async_tx_descriptor->tx_submit. This is named dmaengine_submit instead of dmaengine_tx_submit to get rid of the confusing 'tx' in the function name. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
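A sketch of a client built on the new type-safe wrappers; dmaengine_prep_slave_sg() is the later inline wrapper from the 21 March 2012 entry above, so this reflects the API after all the commits in this log rather than the state at this exact point, and the direction/flag choices are illustrative.

```c
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

/* Hypothetical usage of the wrappers instead of raw device_control /
 * tx_submit calls. */
static int example_start_transfer(struct dma_chan *chan,
				  struct dma_slave_config *cfg,
				  struct scatterlist *sgl, unsigned int nents)
{
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;
	int ret;

	ret = dmaengine_slave_config(chan, cfg);	/* wraps DMA_SLAVE_CONFIG */
	if (ret)
		return ret;

	desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	cookie = dmaengine_submit(desc);		/* wraps desc->tx_submit() */
	if (dma_submit_error(cookie)) {
		dmaengine_terminate_all(chan);		/* wraps DMA_TERMINATE_ALL */
		return -EIO;
	}

	dma_async_issue_pending(chan);
	return 0;
}
```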
-
Committed by Sascha Hauer
Cyclic transfers are useful for audio, where a single buffer divided into periods has to be transferred endlessly until stopped. After being prepared, the transfer is started using the dma_async_descriptor->tx_submit function. dma_async_descriptor->callback is called after each period. The transfer is stopped using the DMA_TERMINATE_ALL callback. While being used for cyclic transfers, the channel cannot be used for other transfer types. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
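A sketch of a cyclic setup as an audio-style client might use it; the five-argument device_prep_dma_cyclic() signature (direction, no flags) roughly matches this point in time and changes in later kernels, so take it as an approximation.

```c
#include <linux/dmaengine.h>
#include <linux/errno.h>

/* Hypothetical audio-style client: one buffer split into periods, replayed
 * endlessly until terminated; the driver invokes the callback per period. */
static int example_start_cyclic(struct dma_chan *chan, dma_addr_t buf,
				size_t buf_len, size_t period_len,
				void (*period_cb)(void *), void *cb_arg)
{
	struct dma_async_tx_descriptor *desc;

	desc = chan->device->device_prep_dma_cyclic(chan, buf, buf_len,
						    period_len, DMA_TO_DEVICE);
	if (!desc)
		return -EBUSY;

	desc->callback = period_cb;	/* called after every period */
	desc->callback_param = cb_arg;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;	/* stop later via the DMA_TERMINATE_ALL control */
}
```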
-
- 23 September 2010, 1 commit
-
-
Committed by Mathieu Lacage
Add a missing inline keyword for a static function in linux/dmaengine.h to avoid duplicate symbol definitions. Signed-off-by: Mathieu Lacage <mathieu.lacage@sophia.inria.fr> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 05 August 2010, 1 commit
-
-
Committed by Linus Walleij
This adds an interface to the DMAengine to make it possible to reconfigure a slave channel at runtime. We add a few foreseen config parameters to the passed struct, with a void * pointer for custom per-device or per-platform runtime slave data. Signed-off-by: Linus Walleij <linus.walleij@stericsson.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
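A sketch of the kind of configuration struct this introduces; the field list has grown over time, so treat this as an illustrative subset rather than the verbatim header contents.

```c
/* Illustrative subset of the new slave configuration struct; fields added
 * by the newer commits further up this log (device_fc, slave_id) are
 * omitted, and the direction field later switched to dma_transfer_direction. */
struct dma_slave_config {
	enum dma_data_direction direction;
	dma_addr_t src_addr;			/* device FIFO address for dev-to-mem */
	dma_addr_t dst_addr;			/* device FIFO address for mem-to-dev */
	enum dma_slave_buswidth src_addr_width;
	enum dma_slave_buswidth dst_addr_width;
	u32 src_maxburst;
	u32 dst_maxburst;
};
```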
-
- 18 May 2010, 2 commits
-
-
Committed by Linus Walleij
This adds an argument to the DMAengine control function, so that we can later provide control commands that need some external data passed in through an argument akin to the ioctl() operation prototype. [dan.j.williams@intel.com: fix up some missed conversions] Signed-off-by: Linus Walleij <linus.walleij@stericsson.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Saves 24 bytes per descriptor (64-bit) when the channel-switching capabilities of async_tx are not required. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 27 March 2010, 3 commits
-
-
Committed by Dan Williams
Simple conditional struct filler to cut out some duplicated code. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Linus Walleij
Convert the device_is_tx_complete() operation on the DMA engine to a generic device_tx_status() operation which can return three states: DMA_TX_RUNNING, DMA_TX_COMPLETE, DMA_TX_PAUSED. [dan.j.williams@intel.com: update for timberdale] Signed-off-by: Linus Walleij <linus.walleij@stericsson.com> Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Maciej Sosnowski <maciej.sosnowski@intel.com> Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Li Yang <leoli@freescale.com> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Cc: Magnus Damm <damm@opensource.se> Cc: Liam Girdwood <lrg@slimlogic.co.uk> Cc: Joe Perches <joe@perches.com> Cc: Roland Dreier <rdreier@cisco.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
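A sketch of how a client might poll the new status interface; in mainline the helper is dma_async_is_tx_complete() and the states come from enum dma_status (DMA_IN_PROGRESS, DMA_PAUSED, etc.) rather than the DMA_TX_* names in the message, so the names here are an assumption about the merged form.

```c
#include <linux/dmaengine.h>

/* Hypothetical poll of a previously submitted cookie. */
static bool example_transfer_done(struct dma_chan *chan, dma_cookie_t cookie)
{
	enum dma_status status;

	status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);

	/* DMA_IN_PROGRESS / DMA_PAUSED mean the transfer is still live. */
	return status != DMA_IN_PROGRESS && status != DMA_PAUSED;
}
```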
-
Committed by Linus Walleij
Convert the device_terminate_all() operation on the DMA engine to a generic device_control() operation which can now optionally support also pausing and resuming DMA on a certain channel. Implemented for the COH 901 318 DMAC as an example. [dan.j.williams@intel.com: update for timberdale] Signed-off-by: Linus Walleij <linus.walleij@stericsson.com> Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Maciej Sosnowski <maciej.sosnowski@intel.com> Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Li Yang <leoli@freescale.com> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Cc: Magnus Damm <damm@opensource.se> Cc: Liam Girdwood <lrg@slimlogic.co.uk> Cc: Joe Perches <joe@perches.com> Cc: Roland Dreier <rdreier@cisco.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 01 March 2010, 1 commit
-
-
Committed by Steven J. Magnani
fsl_dma_update_completed_cookie() appears to calculate the last completed cookie incorrectly in the corner case where DMA on cookie 1 is in progress just following a cookie wrap. Signed-off-by: Steven J. Magnani <steve@digidescorp.com> Acked-by: Ira W. Snyder <iws@ovro.caltech.edu> [dan.j.williams@intel.com: fix an integer overflow warning with INT_MAX] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 17 February 2010, 1 commit
-
-
Committed by Tejun Heo
Add __percpu sparse annotations to places which didn't make it in one of the previous patches. All conversions are trivial. These annotations are to make sparse consider percpu variables to be in a different address space and warn if accessed without going through percpu accessors. This patch doesn't affect normal builds. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Borislav Petkov <borislav.petkov@amd.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Huang Ying <ying.huang@intel.com> Cc: Len Brown <lenb@kernel.org> Cc: Neil Brown <neilb@suse.de>
-
- 11 December 2009, 1 commit
-
-
Committed by Guennadi Liakhovetski
DMA_CTRL_ACK's description applies to its clear state, not to its set state. Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 09 September 2009, 5 commits
-
-
Committed by Dan Williams
The tx_list attribute of struct dma_async_tx_descriptor is common to most, but not all dma driver implementations. None of the upper level code (dmaengine/async_tx) uses it, so allow drivers to implement it locally if they need it. This saves sizeof(struct list_head) bytes for drivers that do not manage descriptors with a linked list (e.g.: ioatdma v2,3). Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Some engines have transfer size and address alignment restrictions. Add a per-operation alignment property to struct dma_device that the async routines and dmatest can use to check alignment capabilities. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
No drivers currently implement these operation types, so they can be deleted. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Channel switching is problematic for some dmaengine drivers as the architecture precludes separating the ->prep from ->submit. In these cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify the async_tx allocator to only return channels that support all of the required asynchronous operations. For example MD_RAID456=y selects support for asynchronous xor, xor validate, pq, pq validate, and memcpy. When ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to quickly locate compatible channels with the guarantee that dependency chains will remain on one channel. When ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select channels that lead to operation chains that need to cross channel boundaries using the async_tx channel switch capability. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Some engines optimize operation by reading ahead in the descriptor chain such that descriptor2 may start execution before descriptor1 completes. If descriptor2 depends on the result from descriptor1 then a fence is required (on descriptor2) to disable this optimization. The async_tx api could implicitly identify dependencies via the 'depend_tx' parameter, but that would constrain cases where the dependency chain only specifies a completion order rather than a data dependency. So, provide an ASYNC_TX_FENCE to explicitly identify data dependencies. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-