- 20 November 2012, 7 commits
Committed by Thomas Petazzoni
The mv_xor_device structure embeds a 'struct dma_device' named 'common', which is not a very meaningful name. Rename it to 'dmadev', which will help avoid confusion later as we merge the mv_xor_device and mv_xor_chan structures together.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
The mv_xor_chan structure embeds a 'struct dma_chan' named 'common', which is not a very meaningful name. Rename it to 'dmachan', which will help avoid confusion later as we merge the mv_xor_device and mv_xor_chan structures together.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
It was only used in places where the 'struct device *' pointer could be obtained in a different way.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
The word 'shared' no longer makes sense in a number of places, as the 'mv_xor_shared' driver has been renamed to 'mv_xor'.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
Extend the XOR engine driver (currently called "mv_xor_shared") so that XOR channels can be passed in the platform_data structure and registered from there. This will allow users of the driver to be converted to the single platform_driver variant of the mv_xor driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Committed by Thomas Petazzoni
The driver currently pokes into the platform_data structure during normal operation to get the pool_size value. Poking into platform_data is not nice when moving to the Device Tree, so this commit adds a new pool_size field to the mv_xor_device structure, initialized at ->probe() time. The driver then uses this field instead of the platform_data.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
- 09 May 2012, 1 commit
Committed by Andrew Lunn
Some Orion platforms can gate the XOR driver's clock. If the clock exists, enable/disable it as appropriate.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Jamie Lentin <jm@lentin.co.uk>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
-
- 13 March 2012, 2 commits
Committed by Russell King - ARM Linux
Every DMA engine implementation declares a last completed dma cookie in its private dma channel structure. This is pointless and forces driver-specific code. Move it out into the common dma_chan structure.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
Committed by Russell King - ARM Linux
mv_xor's is_complete_cookie is only ever written to, never read. This is silly; remove the write-only structure member.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
- 09 September 2009, 1 commit
Committed by Dan Williams
Drop mv_xor's use of tx_list from struct dma_async_tx_descriptor in preparation for the removal of this field.
Cc: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 09 July 2008, 1 commit
Committed by Saeed Bishara
The XOR engine found in Marvell's SoCs and system controllers provides XOR and DMA operations, iSCSI CRC32C calculation, memory initialization, and memory ECC error cleanup support. This driver implements the DMA engine API and supports the following capabilities:
- memcpy
- xor
- memset
The XOR engine can be used by DMA engine clients implemented in the kernel; one such client is the RAID module. In that case, I observed a 20% improvement in raid5 write throughput and a 40% decrease in CPU utilization when doing array construction; these results were obtained on a 5182 running at 500MHz. When enabling the NET DMA client the performance decreased, so for now it is recommended to keep this client off.
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>