- 07 Jan 2009, 14 commits
-
-
Committed by Dan Williams
Unregistering services should only happen at "remove" time. This prevents the device from being unregistered while dmaengine clients are still active. Also, the comment on ioat_remove is stale, since removal is prevented while a channel may be in use. Reported-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
This BUG_ON caught problems in early development, but now it gets in the way: it triggers spuriously when trying to remove the module. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
No need to free resources that the devm infrastructure will take care of. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
DMA_NAK is now useless; we can just use a bool instead. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Reference counting is done at the module level, so clients need not worry that a channel will disappear while they are actively using dmaengine. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
All users have been converted to either the general-purpose allocator, dma_find_channel, or dma_request_channel. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Now that clients no longer need to be notified of channel arrival, dma_async_client_register can simply increment the dmaengine_ref_count. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
dma_request_channel provides an exclusive channel, so we no longer need to pass slave data through dmaengine. Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Replace the client registration infrastructure with a custom loop that polls for channels. Once dma_request_channel returns NULL, stop asking for channels. A userspace-visible side effect of this change is that loading the dmatest module before loading a dma driver now results in no channels being found; previously dmatest would get a callback. To facilitate testing in the built-in case, dmatest_init is marked as a late_initcall. Another side effect is that channels under test cannot be used for any other purpose. Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
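A sketch of the polling style described above, not the verbatim dmatest code; dmatest_add_channel() stands in for the driver's own per-channel bookkeeping and is assumed here:

```c
#include <linux/dmaengine.h>

static int dmatest_add_channel(struct dma_chan *chan);	/* assumed helper */

/* Keep calling dma_request_channel() until it returns NULL, claiming each
 * channel for testing along the way. */
static void dmatest_grab_channels(void)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	for (;;) {
		chan = dma_request_channel(mask, NULL, NULL);
		if (!chan)
			break;			/* no more free, matching channels */
		if (dmatest_add_channel(chan)) {
			dma_release_channel(chan);
			break;
		}
	}
}
```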
-
Committed by Dan Williams
This interface is primarily for device-to-memory clients which need to search for dma channels with platform-specific characteristics. The prototype is: struct dma_chan *dma_request_channel(dma_cap_mask_t mask, dma_filter_fn filter_fn, void *filter_param); When the optional 'filter_fn' parameter is NULL, dma_request_channel simply returns the first channel that satisfies the capability mask. Otherwise, when the mask alone is insufficient for specifying the necessary channel, the filter_fn routine can be used to select among the available channels in the system. The filter_fn routine is called once for each free channel in the system; upon seeing a suitable channel, filter_fn returns DMA_ACK, which flags that channel as the return value from dma_request_channel. A channel allocated via this interface is exclusive to the caller until dma_release_channel() is called. To ensure that not all channels are consumed by the general-purpose allocator, the DMA_PRIVATE capability is provided to exclude a dma_device from general-purpose (memory-to-memory) consideration. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
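A minimal usage sketch of this interface. It assumes the post-series filter signature (returning bool, per the "DMA_NAK is now useless" patch above); the device-match criterion and the helper name are illustrative only:

```c
#include <linux/dmaengine.h>

/* Illustrative filter: accept only channels served by a particular device. */
static bool my_filter(struct dma_chan *chan, void *param)
{
	struct device *wanted = param;

	return chan->device->dev == wanted;
}

static struct dma_chan *my_get_slave_channel(struct device *wanted)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* Exclusive to the caller until dma_release_channel() is called;
	 * returns NULL if no free channel satisfies mask + filter. */
	return dma_request_channel(mask, my_filter, wanted);
}
```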
-
Committed by Dan Williams
async_tx and net_dma each have open-coded versions of issue_pending_all, so provide a common routine in dmaengine. The implementation needs to walk the global device list, so use RCU to allow dma_issue_pending_all to run locklessly. Clients protect themselves from channel-removal events by holding a dmaengine reference. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
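A sketch of the lockless walk being described, not the verbatim kernel code; the list and field names (dma_device_list, global_node, channels, client_count) are assumed from the dmaengine structures of this era:

```c
#include <linux/dmaengine.h>
#include <linux/rculist.h>

static LIST_HEAD(dma_device_list);	/* dmaengine-internal device list (assumed) */

/* Flush pending descriptors on every public channel without taking a lock;
 * clients hold a dmaengine reference, so channels cannot vanish mid-walk. */
static void issue_pending_all_sketch(void)
{
	struct dma_device *device;
	struct dma_chan *chan;

	rcu_read_lock();
	list_for_each_entry_rcu(device, &dma_device_list, global_node) {
		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
			continue;	/* private channels are not shared */
		list_for_each_entry(chan, &device->channels, device_node)
			if (chan->client_count)
				device->device_issue_pending(chan);
	}
	rcu_read_unlock();
}
```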
-
Committed by Dan Williams
Allowing multiple clients to each define their own channel allocation scheme quickly leads to a pathological situation. For memory-to-memory offload, all clients can share a central allocator. This simply moves the existing async_tx allocator to dmaengine with minimal fixups: * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance * split out common code from async_tx.c:__async_tx_find_channel --> dma_find_channel Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
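For a memory-to-memory client, the resulting call pattern is roughly the following sketch (it assumes the client already holds a dmaengine reference; dma_async_memcpy_buf_to_buf was the mem-to-mem helper of this era):

```c
#include <linux/dmaengine.h>
#include <linux/string.h>

/* Offload a copy on whichever DMA_MEMCPY channel the shared allocator has
 * balanced onto this CPU; fall back to memcpy() when none is available. */
static dma_cookie_t offload_copy(void *dst, void *src, size_t len)
{
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	if (!chan) {
		memcpy(dst, src, len);
		return 0;		/* no DMA cookie issued */
	}

	return dma_async_memcpy_buf_to_buf(chan, dst, src, len);
}
```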
-
Committed by Dan Williams
Simply put: if a client wants any dmaengine channel, prevent all dmaengine modules from being removed; once the clients are done, re-enable module removal. Why, beyond reducing complication? 1/ Tracking reference counts per-transaction in an efficient manner, as is currently done, requires a complicated scheme to avoid cache-line bouncing effects. 2/ Per-transaction ref-counting gives the false impression that a dma driver can be gracefully removed ahead of its user (net, md, or dma-slave). 3/ None of the in-tree dma drivers talk to hot-pluggable hardware, but even if such an engine were built one day we still would not need to notify clients of remove events: the driver can simply return NULL to a ->prep() request, something that is much easier for a client to handle. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
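A sketch of the module-pinning step this implies (the real logic lives in dmaengine's channel-get path; the helper name and error handling here are assumptions):

```c
#include <linux/module.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>

/* Pin the dma driver's module while a client holds the channel, so the
 * driver cannot be unloaded out from under an active user. */
static int pin_channel(struct dma_chan *chan)
{
	struct module *owner = chan->device->dev->driver->owner;

	if (!try_module_get(owner))
		return -ENODEV;		/* driver is already being removed */

	chan->client_count++;		/* dropped again alongside module_put() */
	return 0;
}
```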
-
- 06 Jan 2009, 1 commit
-
-
Committed by Dan Williams
async_tx.ko is a consumer of dma channels. A circular dependency arises if modules in drivers/dma rely on common code in async_tx.ko; it prevents either module from being unloaded. Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o, where they should have been from the beginning. Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 09 Dec 2008, 1 commit
-
-
Committed by Dan Williams
Mapping the destination multiple times is a misuse of the dma-api. Since the destination may be reused as a source, ensure that it is mapped only once, and that it is mapped bidirectionally. This appears to add ugliness on the unmap side in that it always reads back the destination address from the descriptor, but gcc can determine that dma_unmap is a nop and not emit the code that calculates its arguments. Cc: <stable@kernel.org> Cc: Saeed Bishara <saeed@marvell.com> Acked-by: Yuri Tikhonov <yur@emcraft.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
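A hedged sketch of the mapping rule this enforces, using the generic dma-api rather than the driver's exact code (unwinding of a partially successful mapping is omitted for brevity):

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Map a copy's buffers for the offload engine: the destination is mapped
 * once, bidirectionally, because a later operation in the chain may read it
 * back as a source; the source is only ever read by the device. */
static int map_for_copy(struct device *dev, struct page *dst_pg,
			struct page *src_pg, size_t len,
			dma_addr_t *dst, dma_addr_t *src)
{
	*dst = dma_map_page(dev, dst_pg, 0, len, DMA_BIDIRECTIONAL);
	*src = dma_map_page(dev, src_pg, 0, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, *dst) || dma_mapping_error(dev, *src))
		return -ENOMEM;

	return 0;
}
```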
-
- 04 Dec 2008, 2 commits
-
-
Committed by Dan Williams
It is possible for two devices to be registered with the same id. Cc: <stable@kernel.org> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
As part of its self-test, ioat_dma performs a printk from a completion callback. Depending on the system console configuration, this output can take longer than a millisecond, causing the self-test to fail. Introduce a completion with a generous timeout to mitigate this failure. Cc: <stable@kernel.org> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
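A sketch of the pattern being introduced, with illustrative names: the DMA callback completes a completion that the self-test waits on with a generous timeout, instead of giving up after a millisecond:

```c
#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static DECLARE_COMPLETION(selftest_cmp);	/* completed from the DMA callback */

static void selftest_callback(void *arg)
{
	complete(arg);		/* arg == &selftest_cmp via the tx callback_param */
}

static int selftest_wait(void)
{
	/* A console printk in the callback path can easily exceed 1 ms, so
	 * allow a few seconds before declaring the self-test a failure. */
	if (!wait_for_completion_timeout(&selftest_cmp, msecs_to_jiffies(3000)))
		return -ETIMEDOUT;

	return 0;
}
```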
-
- 12 Nov 2008, 3 commits
-
-
Committed by Kay Sievers
Acked-by: Greg Kroah-Hartman <gregkh@suse.de> Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
Now that the critical read-back to flush the next descriptor address is fixed, we can downgrade some BUG_ONs that only need to be enabled when testing changes to the driver. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
The current dummy read references the wrong address, allowing the next descriptor address update to linger in the store buffer and get passed by an 'append' event. This issue was uncovered by the change from strongly-ordered to device memory for the adma registers. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 11 Nov 2008, 3 commits
-
-
Committed by Maciej Sosnowski
async_tx.callback should be checked for the first, not the last, descriptor in the chain. Cc: <stable@kernel.org> Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maciej Sosnowski
Error handling needs to be modified in dma_pin_iovec_pages(). It should return NULL instead of an ERR_PTR (pinned_list is checked against NULL in tcp_recvmsg() to determine whether the iovec pages have been successfully pinned down). In the error case for the first iovec, local_list->nr_iovecs also needs to be initialized. Cc: <stable@kernel.org> Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
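A compact sketch of the convention being restored (the struct is an illustrative stand-in, not the real net/core definition): every failure path returns NULL, because tcp_recvmsg() only tests the result against NULL, and nr_iovecs is initialised before the first iovec is touched:

```c
#include <linux/slab.h>
#include <linux/uio.h>

struct pinned_list_sketch {
	int nr_iovecs;
};

static struct pinned_list_sketch *pin_iovec_pages_sketch(const struct iovec *iov,
							 size_t len)
{
	struct pinned_list_sketch *list;

	if (!iov || !len)
		return NULL;		/* never ERR_PTR(): callers check for NULL */

	list = kzalloc(sizeof(*list), GFP_KERNEL);
	if (!list)
		return NULL;

	list->nr_iovecs = 0;		/* valid even if pinning the first iovec fails */
	return list;
}
```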
-
Committed by Maciej Sosnowski
If the ioatdma driver is loaded but not used, it does not allocate descriptors. Before it frees channel resources it should first make sure that they have actually been allocated. Cc: <stable@kernel.org> Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Tested-by: Tom Picard <tom.s.picard@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Oct 2008, 2 commits
-
-
Committed by Venki Pallipadi
When the I7300_idle driver is not configured, there is a compile-time warning about IDLE_IOAT_CHANNEL not being defined. Fix it. Reported-by: Suresh Siddha <suresh.b.siddha@intel.com> Reported-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Venki Pallipadi
Based on input from Andi Kleen: share the platform detection code with ioat_dma, and disable the channel in the dma engine only for specific platforms. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
-
- 22 Oct 2008, 1 commit
-
-
Committed by Andy Henroid
The Intel 7300 Memory Controller supports dynamic throttling of memory, which can be used to save power when the system is idle. This driver performs the memory throttling when all CPUs are idle on such a system. Refer to the "Intel 7300 Memory Controller Hub (MCH)" datasheet for the config space description. Signed-off-by: Andy Henroid <andrew.d.henroid@intel.com> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
-
- 04 Oct 2008, 1 commit
-
-
Committed by Haavard Skinnemoen
The tasklet checks RAW.BLOCK twice and does not check RAW.XFER. This is obviously wrong and could theoretically cause the driver to hang. Reported-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 Sep 2008, 1 commit
-
-
Committed by Timur Tabi
Modify the Freescale Elo / Elo Plus DMA driver so that it can be compiled as a module. The primary change is to stop treating the DMA controller as a bus and the DMA channels as devices on that bus. This is because the Open Firmware (OF) kernel code does not allow busses to be removed, so although we can call of_platform_bus_probe() to probe the DMA channels, there is no of_platform_bus_remove(). Instead, the DMA channels are manually probed, similar to what fsl_elbc_nand.c does. Cc: Scott Wood <scottwood@freescale.com> Acked-by: Li Yang <leoli@freescale.com> Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 24 Sep 2008, 1 commit
-
-
Committed by Timur Tabi
The Freescale Elo DMA driver runs an internal self-test before registering the channels with the DMA engine. This self-test has a fundamental flaw: it calls the DMA engine's callback functions directly before registration, but registration is what initializes some of the variables those callbacks use, namely the device struct. The code works today because there are two device structs, one created by the DMA engine and one created by the Open Firmware (OF) subsystem, and the self-test currently uses the one created by OF. In the future, however, some of the device structs created by OF will be eliminated, so the self-test will only have access to the device struct created by the DMA engine. That device struct is not initialized when the self-test runs, and this causes a kernel panic. Since there is already a DMA test module (dmatest), the internal self-test code is not useful anyway. It is extremely unlikely that the test would fail in normal usage; it may have been helpful during development, but not any more. Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Li Yang <leoli@freescale.com> Cc: Scott Wood <scottwood@freescale.com> Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 19 Sep 2008, 2 commits
-
-
Committed by Andrew Morton
It was needlessly using the unreliable GFP_ATOMIC. Cc: Timur Tabi <timur@freescale.com> Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Timur Tabi
Update the dmatest driver so that it handles duplicate DMA channels properly. When a DMA client is notified of an available DMA channel, it must check whether it has already allocated resources for that channel; if so, it should return DMA_DUP. This can happen, for example, if a DMA driver calls dma_async_device_register() more than once. Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 14 Sep 2008, 1 commit
-
-
Committed by Julia Lawall
The break after the return serves no purpose. Signed-off-by: Julia Lawall <julia@diku.dk> Reviewed-by: Richard Genoud <richard.genoud@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 09 Aug 2008, 1 commit
-
-
Committed by Lennert Buytenhek
This patch performs the equivalent include-directory shuffle for plat-orion, and fixes up all users. Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
-
- 07 Aug 2008, 2 commits
-
-
Committed by Luis R. Rodriguez
If you are using linked lists as queues, list_splice() will not do what you expect, even if you pass the elements reversed. We need to handle these cases differently, so add list_splice_tail() and list_splice_tail_init(). Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
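A minimal illustration of the difference (the names are illustrative; only the list helpers are real):

```c
#include <linux/list.h>

static LIST_HEAD(queue);	/* FIFO: dequeue from the head */

/* Append a freshly built batch to the queue. list_splice() would put the
 * batch at the head (breaking FIFO order); list_splice_tail_init() appends
 * it at the tail and reinitialises 'batch' so it can be reused. */
static void queue_append_batch(struct list_head *batch)
{
	list_splice_tail_init(batch, &queue);
}
```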
-
Committed by Russell King
This just leaves include/asm-arm/plat-* to deal with. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 Jul 2008, 3 commits
-
-
Committed by Maciej Sosnowski
This patch adds support to the ioatdma and dca modules for the Intel I/OAT DMA engine ver.3 (aka the CB3 device). The main features of I/OAT ver.3 are: * 8 single-channel DMA devices (8 channels total) * 8 DCA providers, each able to accept 2 requesters * 8-bit TAG values and 32-bit extended APIC IDs Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Maciej Sosnowski
I/OAT DMA performance tuning showed different optimal values of tcp_dma_copybreak for different I/OAT versions (4096 for 1.2 and 2048 for 2.0). This patch lets the ioatdma driver set the tcp_dma_copybreak value according to these results. [dan.j.williams@intel.com: remove some ifdefs] Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
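A sketch of applying the tuning result; the copybreak values come from the changelog above, while the helper name and the plain integer version argument are assumptions (the driver uses its own IOAT_VER_* defines):

```c
/* sysctl_tcp_dma_copybreak is the net/ipv4 knob below which tcp_recvmsg()
 * skips DMA offload; the extern normally comes from the networking headers. */
extern int sysctl_tcp_dma_copybreak;

static void ioat_set_tcp_copybreak(int ioat_major_version)
{
	if (ioat_major_version == 1)		/* I/OAT 1.2: 4096 measured best */
		sysctl_tcp_dma_copybreak = 4096;
	else if (ioat_major_version == 2)	/* I/OAT 2.0: 2048 measured best */
		sysctl_tcp_dma_copybreak = 2048;
}
```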
-
Committed by Maciej Sosnowski
Due to occasional DMA channel hangs observed for I/OAT versions 1.2 and 2.0, a watchdog has been introduced to check every 2 seconds whether all channels are progressing normally. If a stuck channel is detected, the driver resets it. The reset is done in two parts: the second part is scheduled by the first to reinitialize the channel after the restart. Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
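A hedged sketch of the watchdog shape described here; the real driver's progress checks and two-phase reset are more involved, and the identifiers below are assumptions:

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define WATCHDOG_DELAY	(2 * HZ)	/* check channel progress every 2 seconds */

static void ioat_chan_watchdog(struct work_struct *work);
static DECLARE_DELAYED_WORK(chan_watchdog, ioat_chan_watchdog);

static void ioat_chan_watchdog(struct work_struct *work)
{
	/*
	 * Walk the channels here: if a channel's completion address has not
	 * advanced since the previous pass while descriptors are still
	 * queued, kick off the two-part reset (part two reinitialises the
	 * channel after the restart).
	 */

	schedule_delayed_work(&chan_watchdog, WATCHDOG_DELAY);	/* re-arm */
}
```

The driver would arm this from its probe path and cancel it on remove.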
-
- 18 Jul 2008, 1 commit
-
-
Committed by Dan Williams
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-