- March 26, 2009 (1 commit)
Submitted by Dan Williams
Centralize this common initialization (and one case where ipu_idmac is duplicating ->chan initialization).
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- March 5, 2009 (1 commit)
Submitted by Roel Kluin
iop_adma_zero_sum_self_test has the brackets in the wrong place for the setup-failure deallocation path. This error was duplicated in mv_xor_xor_self_test.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
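For readers unfamiliar with this bug class: the failure path frees the test pages already allocated, and a misplaced brace can pull the early return inside the freeing loop. A hedged sketch of the corrected shape, with illustrative names and counts (not the driver's actual self-test code):

```c
#include <linux/gfp.h>

#define NUM_SRC_TEST 4			/* illustrative, not the driver's constant */

static int alloc_test_pages(struct page **srcs)
{
	int i;

	for (i = 0; i < NUM_SRC_TEST; i++) {
		srcs[i] = alloc_page(GFP_KERNEL);
		if (!srcs[i]) {
			while (i--)		/* free what was already allocated */
				__free_page(srcs[i]);
			return -ENOMEM;		/* return after the loop, not inside it */
		}
	}
	return 0;
}
```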
-
- March 4, 2009 (1 commit)
Submitted by Russell King
Fixes references from persistent sections into discarded .devexit.text code:
`iop_adma_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv_xor_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_unmap_regs' referenced in section `.devinit.text' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`orion_nand_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`pxafb_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
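These warnings come from resident data (a driver's .remove pointer) referring to code placed in the discarded .devexit.text section. The customary remedy in kernels of that era was to wrap such references in __devexit_p(), which degrades to NULL when the exit text is discarded. A minimal sketch, using the name from the first warning above and stub bodies:

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int __devinit iop_adma_probe(struct platform_device *pdev)  { return 0; }
static int __devexit iop_adma_remove(struct platform_device *pdev) { return 0; }

static struct platform_driver iop_adma_driver = {
	.probe	= iop_adma_probe,
	/* __devexit_p() avoids a resident pointer into discarded .devexit.text */
	.remove	= __devexit_p(iop_adma_remove),
	.driver	= {
		.owner	= THIS_MODULE,
		.name	= "iop-adma",
	},
};
```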
-
- January 7, 2009 (5 commits)
Submitted by Dan Williams
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
This BUG_ON caught problems in early development, but now it just gets in the way: it triggers spuriously when trying to remove the module.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
No need to free resources that the devm infrastructure already takes care of.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
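A hedged sketch of the pattern behind this cleanup, with illustrative names rather than the driver's actual fields: memory obtained through the managed devm_* allocators is released automatically when the device is unbound, so explicit kfree() calls in error paths and in the remove hook become dead weight.

```c
#include <linux/platform_device.h>
#include <linux/slab.h>

struct example_state {
	void __iomem *regs;
};

static int example_probe(struct platform_device *pdev)
{
	struct example_state *st;

	/* managed allocation: freed by the driver core when the device goes away */
	st = devm_kzalloc(&pdev->dev, sizeof(*st), GFP_KERNEL);
	if (!st)
		return -ENOMEM;

	platform_set_drvdata(pdev, st);
	return 0;	/* later error paths simply return; no kfree(st) required */
}
```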
-
Submitted by Dan Williams
Reference counting is done at the module level, so clients need not worry that a channel will leave while they are actively using dmaengine.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
All users have been converted to either the general-purpose allocator, dma_find_channel, or dma_request_channel.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
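For reference, a minimal sketch of the exclusive-use interface named above as a client would call it; the filter function and its accept-anything policy are illustrative. dma_find_channel() is the cheap, shared per-CPU lookup for opportunistic offload, while dma_request_channel() hands a channel to one client for exclusive use.

```c
#include <linux/dmaengine.h>

/* Accept whatever channel the core offers; real filters inspect the channel. */
static bool any_chan_filter(struct dma_chan *chan, void *param)
{
	return true;
}

static struct dma_chan *get_copy_channel(void)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* exclusive use; pair with dma_release_channel() when finished */
	return dma_request_channel(mask, any_chan_filter, NULL);
}
```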
-
- January 6, 2009 (1 commit)
Submitted by Dan Williams
async_tx.ko is a consumer of dma channels. A circular dependency arises if modules in drivers/dma rely on common code in async_tx.ko; it prevents either module from being unloaded. Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o, where they should have been from the beginning.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- December 9, 2008 (1 commit)
Submitted by Dan Williams
Mapping the destination multiple times is a misuse of the dma-api. Since the destination may be reused as a source, ensure that it is mapped only once and that it is mapped bidirectionally. This appears to add ugliness on the unmap side in that it always reads back the destination address from the descriptor, but gcc can determine that dma_unmap is a nop and not emit the code that calculates its arguments.
Cc: <stable@kernel.org>
Cc: Saeed Bishara <saeed@marvell.com>
Acked-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
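A sketch of the mapping rule being enforced, with illustrative device and helper names: the destination gets a single DMA_BIDIRECTIONAL mapping instead of being mapped once as a destination and again as a source.

```c
#include <linux/dma-mapping.h>

static dma_addr_t map_xor_dest(struct device *dev, struct page *dest, size_t len)
{
	/* one mapping covers both roles: the engine writes the result here,
	 * and a later operation may read the same buffer back as a source */
	return dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
}

static void unmap_xor_dest(struct device *dev, dma_addr_t dest_dma, size_t len)
{
	dma_unmap_page(dev, dest_dma, len, DMA_BIDIRECTIONAL);
}
```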
-
- November 12, 2008 (2 commits)
Submitted by Dan Williams
Now that the critical read-back that flushes the next-descriptor address is fixed, we can downgrade some BUG_ONs that only need to be enabled when testing changes to the driver.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
The current dummy read references the wrong address, allowing the next-descriptor address update to linger in the store buffer and get passed by an 'append' event. This issue was uncovered by the change from strongly-ordered to device memory for the adma registers.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
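A sketch of the ordering idiom involved; the register offsets are illustrative, not the real mv_xor layout. With a Device-memory mapping, the write of the next-descriptor address can still be sitting in the write buffer when the append doorbell fires, so the driver reads back the very register it just wrote; a dummy read of some other address gives no such guarantee.

```c
#include <linux/io.h>

#define XOR_NEXT_DESC_REG	0x00	/* illustrative offsets */
#define XOR_ACTIVATE_REG	0x04

static void append_descriptor(void __iomem *base, u32 next_desc_dma)
{
	writel(next_desc_dma, base + XOR_NEXT_DESC_REG);
	/* read back the same register to force the posted write to complete */
	readl(base + XOR_NEXT_DESC_REG);
	writel(1, base + XOR_ACTIVATE_REG);	/* 'append' event */
}
```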
-
- August 7, 2008 (1 commit)
Submitted by Russell King
This just leaves include/asm-arm/plat-* to deal with.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- July 18, 2008 (2 commits)
Submitted by Dan Williams
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
Force callers that trigger an "out of descriptors" condition to run the cleanup loop directly. This removes the requirement that soft-irqs be enabled when polling for a descriptor in async_xor.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- July 9, 2008 (3 commits)
Submitted by Dan Williams
In some cases client code may need the dma driver to skip the unmap of source and/or destination buffers. Setting these flags tells the driver to skip the unmap step. In this regard async_xor is currently broken in that it allows the destination buffer to be unmapped while an operation is still in progress, i.e. when the number of sources exceeds the hardware channel's maximum (fixed in a subsequent patch).
Acked-by: Saeed Bishara <saeed@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
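A hedged sketch of how a client of that era would use the flags this entry introduces, DMA_COMPL_SKIP_SRC_UNMAP and DMA_COMPL_SKIP_DEST_UNMAP (much later kernels dropped these flags again when unmap handling moved into the core); the helper name and call shape are illustrative.

```c
#include <linux/dmaengine.h>

/* The caller keeps ownership of both mappings, so the driver must not
 * unmap them on completion. */
static struct dma_async_tx_descriptor *
prep_copy_keep_mappings(struct dma_chan *chan, dma_addr_t dest,
			dma_addr_t src, size_t len)
{
	unsigned long flags = DMA_CTRL_ACK |
			      DMA_COMPL_SKIP_SRC_UNMAP |
			      DMA_COMPL_SKIP_DEST_UNMAP;

	return chan->device->device_prep_dma_memcpy(chan, dest, src, len, flags);
}
```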
-
Submitted by Haavard Skinnemoen
A DMA controller capable of doing slave transfers may need to know a few things about the slave when preparing the channel. We don't want to add this information to struct dma_channel, since the channel hasn't yet been bound to a client at this point. Instead, pass a reference to the client requesting the channel to the driver's device_alloc_chan_resources hook so that it can pick the necessary information from the dma_client struct by itself.
[dan.j.williams@intel.com: fixed up fsldma and mv_xor]
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Kay Sievers
Since 43cc71ee, the platform modalias has been prefixed with "platform:". Add MODULE_ALIAS() to most of the hotpluggable platform drivers to re-enable auto-loading.
Cc: <stable@kernel.org>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
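The alias has to match the platform device name with the "platform:" prefix so udev can map the uevent's MODALIAS value to the module. A one-line sketch for the driver this log belongs to, assuming its platform device is registered as "iop-adma":

```c
#include <linux/module.h>

/* matches MODALIAS=platform:iop-adma emitted by the platform bus uevent */
MODULE_ALIAS("platform:iop-adma");
```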
-
- May 21, 2008 (1 commit)
Submitted by Christophe Jaillet
1) Remove an explicit memset(..., 0, ...) of a variable allocated with kzalloc (i.e. 'dest').
2) Allocate 'src' with kmalloc instead of kzalloc, as all elements of the 'src' buffer are initialized in a for() loop immediately afterwards.
3) Remove the useless 'sizeof(u8)', which always evaluates to 1, when computing the size of the memory to be allocated.
Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
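Taken together, the three points boil down to an allocation shape like the following sketch; the size constant and function name are illustrative, not the driver's self-test code.

```c
#include <linux/slab.h>

#define TEST_SIZE 2000				/* illustrative */

static int alloc_selftest_buffers(u8 **destp, u8 **srcp)
{
	u8 *dest = kzalloc(TEST_SIZE, GFP_KERNEL);	/* zeroed once, no extra memset */
	u8 *src = kmalloc(TEST_SIZE, GFP_KERNEL);	/* fully overwritten below */
	int i;

	if (!dest || !src) {
		kfree(dest);
		kfree(src);
		return -ENOMEM;
	}
	for (i = 0; i < TEST_SIZE; i++)
		src[i] = (u8)i;				/* every byte initialized */

	*destp = dest;
	*srcp = src;
	return 0;
}
```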
-
- April 18, 2008 (4 commits)
Submitted by Dan Williams
'ack' is currently a simple integer that flags whether or not a client is done touching fields in the given descriptor. It is effectively just a single bit of information. Converting this to a flags parameter allows the other bits to be put to use to control completion actions, like dma-unmap, and to capture results, like xor-zero-sum == 0. The changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack and async_tx_test_ack
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
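A sketch of what change 1/ looks like at a call site, with illustrative helper names: async_tx_ack() sets DMA_CTRL_ACK in tx->flags and async_tx_test_ack() reads it back, replacing the old open-coded ->ack integer.

```c
#include <linux/dmaengine.h>

/* client side: mark the descriptor as no longer needed once submitted */
static void client_done_with(struct dma_async_tx_descriptor *tx)
{
	async_tx_ack(tx);		/* was: tx->ack = 1; */
}

/* driver side: only recycle descriptors the client has acked */
static bool driver_may_recycle(struct dma_async_tx_descriptor *tx)
{
	return async_tx_test_ack(tx);	/* was: tx->ack */
}
```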
-
Submitted by Dan Williams
This workaround was covering up the dependency-submission bug in async_tx.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
DMA drivers no longer need to be notified of dependency-submission events, as async_tx_run_dependencies and async_tx_channel_switch handle the scheduling and execution of dependent operations.
[sfr@canb.auug.org.au: extend this for fsldma]
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
Shrink struct dma_async_tx_descriptor and introduce async_tx_channel_switch to properly inject a channel-switch interrupt into the descriptor stream. This simplifies the locking model, as drivers no longer need to handle dma_async_tx_descriptor.lock.
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- March 14, 2008 (1 commit)
Submitted by Harvey Harrison
__FUNCTION__ is gcc-specific; use __func__.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- February 7, 2008 (3 commits)
Submitted by Dan Williams
Pass a full set of flags to drivers' per-operation 'prep' routines. Currently the only flag passed is DMA_PREP_INTERRUPT. The expectation is that arch-specific async_tx_find_channel() implementations can exploit this capability to find the best channel for an operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
-
Submitted by Dan Williams
The tx_set_src and tx_set_dest methods were originally implemented to allow an array of addresses to be passed down from async_xor to the dmaengine driver while minimizing stack overhead. Removing these methods allows drivers to have all transaction parameters available at 'prep' time, saves two function pointers in struct dma_async_tx_descriptor, and reduces the number of indirect branches. A consequence of moving this data to the 'prep' routine is that multi-source routines like async_xor need temporary storage to convert an array of linear addresses into an array of dma addresses. To keep the same stack footprint as the previous implementation, the input array is reused as storage for the dma addresses. This requires that sizeof(dma_addr_t) be less than or equal to sizeof(void *); as a consequence, CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G. It also requires that drivers be able to make descriptor resources available when the 'prep' routine is polled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
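A hedged sketch of the storage-reuse trick described above; the helper and parameter names are illustrative, not the async_xor code itself. The caller's array of struct page pointers is overwritten in place with bus addresses before being handed to the driver's prep hook, which only works because sizeof(dma_addr_t) <= sizeof(void *), hence the new !CONFIG_HIGHMEM64G dependency.

```c
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

static struct dma_async_tx_descriptor *
prep_xor_in_place(struct device *dev, struct dma_chan *chan,
		  struct page *dest, struct page **src_list,
		  unsigned int src_cnt, size_t len)
{
	/* reuse the page-pointer array as the dma_addr_t array */
	dma_addr_t *dma_src = (dma_addr_t *)src_list;
	dma_addr_t dma_dest;
	unsigned int i;

	dma_dest = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
	for (i = 0; i < src_cnt; i++)
		/* src_list[i] is read before the slot is overwritten */
		dma_src[i] = dma_map_page(dev, src_list[i], 0, len, DMA_TO_DEVICE);

	return chan->device->device_prep_dma_xor(chan, dma_dest, dma_src,
						 src_cnt, len, 0);
}
```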
-
Submitted by Denis Cheng
These three list_heads are all local variables, so they can use LIST_HEAD instead.
Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
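The change is purely declarative; a small sketch with illustrative types:

```c
#include <linux/list.h>

struct pending_desc {
	struct list_head node;
};

static void collect(struct pending_desc *desc)
{
	LIST_HEAD(chain);	/* declares and initializes in one step */

	/* previously:
	 *	struct list_head chain;
	 *	INIT_LIST_HEAD(&chain);
	 */
	list_add_tail(&desc->node, &chain);
}
```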
-
- October 17, 2007 (1 commit)
Submitted by Rusty Russell
Adrian Bunk points out that "unsafe" was used to mark modules touched by the deprecated MOD_INC_USE_COUNT interface, which is long gone. It's time to remove the member from the module structure as well. If you want a module that can't unload, don't register an exit function. (Vlad Yasevich says SCTP is now safe to unload, so just remove the __unsafe there.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Cc: Sridhar Samudrala <sri@us.ibm.com>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- July 13, 2007 (1 commit)
Submitted by Dan Williams
The Intel(R) IOP series of i/o processors integrate an XScale core with raid acceleration engines. The capabilities per platform are:
iop219: (2) copy engines
iop321: (2) copy engines, (1) xor and block fill engine
iop33x: (2) copy and crc32c engines, (1) xor, xor zero sum, pq, pq zero sum, and block fill engine
iop34x (iop13xx): (2) copy, crc32c, xor, xor zero sum, and block fill engines, (1) copy, crc32c, xor, xor zero sum, pq, pq zero sum, and block fill engine
The driver supports the features of the async_tx api:
* asynchronous notification of operation completion
* implicit (interrupt-triggered) handling of inter-channel transaction dependencies
The driver adapts to the platform it is running on by two methods:
1/ #include <asm/arch/adma.h>, which defines the hardware-specific iop_chan_* and iop_desc_* routines as a series of static inline functions
2/ the private platform data attached to the platform_device, which defines the capabilities of the channels
20070626: Callbacks are run in a tasklet. Given the recent discussion on LKML about killing tasklets in favor of workqueues I did a quick conversion of the driver. Raid5 resync performance dropped from 50MB/s to 30MB/s, so the tasklet implementation remains until a generic softirq interface is available.
Changelog:
* fixed a slot allocation bug in do_iop13xx_adma_xor that caused too few slots to be requested, eventually leading to data corruption
* enabled the slot allocation routine to attempt to free slots before returning -ENOMEM
* switched the cleanup routine to solely use the software chain and the status register to determine if a descriptor is complete; this is necessary to support other IOP engines that do not have status writeback capability
* make the driver iop generic
* modified the allocation routines to understand allocating a group of slots for a single operation
* added a null xor initialization operation for the xor-only channel on iop3xx
* support xor operations on buffers larger than the hardware maximum
* split the do_* routines into separate prep, src/dest set, and submit stages
* added async_tx support (dependent operations initiation at cleanup time)
* simplified group handling
* added interrupt support (callbacks via tasklets)
* brought the pending depth in line with ioat (i.e. 4 descriptors)
* drop dma mapping methods, suggested by Chris Leech
* don't use inline in C files, Adrian Bunk
* remove static tasklet declarations
* make iop_adma_alloc_slots easier to read and remove chances for a corrupted descriptor chain
* fix locking bug in iop_adma_alloc_chan_resources, Benjamin Herrenschmidt
* convert capabilities over to dma_cap_mask_t
* fixup sparse warnings
* add descriptor flush before iop_chan_enable
* checkpatch.pl fixes
* gpl v2 only correction
* move set_src, set_dest, submit to async_tx methods
* move group_list and phys to async_tx
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
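As a companion to the capability list above, a hedged sketch of how a dmaengine driver of this kind advertises what its channels can do; the function name is illustrative and the prep hooks are assumed to be wired up elsewhere.

```c
#include <linux/dmaengine.h>

static int register_adma_device(struct dma_device *dma_dev)
{
	/* capabilities are expressed as a dma_cap_mask_t on the device */
	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
	dma_cap_set(DMA_XOR, dma_dev->cap_mask);
	dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);

	/* the device_prep_dma_* hooks must be filled in before registering */
	return dma_async_device_register(dma_dev);
}
```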
-