- 14 Aug 2013, 7 commits
-
Committed by Daniel Mack

The DMA_SLAVE capability is currently set twice; drop the duplicate assignment.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
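For context, a minimal sketch of the kind of duplicate this removes; the surrounding probe code and field names are assumptions, not quoted from the patch:

    /* in probe: advertise the engine's capabilities */
    dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);
    dma_cap_set(DMA_MEMCPY, pdev->device.cap_mask);
    dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);  /* duplicate, dropped */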
-
Committed by Daniel Mack

That helps check the provided runtime information.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Committed by Daniel Mack

This patch enables the mmp_pdma controller to provide DMA resources in DT
environments by supplying a DMA xlate function. of_dma_simple_xlate() isn't
used here because it fails to handle multiple different DMA engines or
several instances of the same controller. Instead, a private implementation
is provided that makes use of the newly introduced dma_get_slave_channel()
call.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
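A sketch of what such a private xlate can look like, assuming a single DT cell carrying the DRCMR requestor number and a per-instance channel list; the names mirror the driver's style but are illustrative:

    #include <linux/dmaengine.h>
    #include <linux/of_dma.h>

    static struct dma_chan *mmp_pdma_dma_xlate(struct of_phandle_args *dma_spec,
                                               struct of_dma *ofdma)
    {
        struct mmp_pdma_device *d = ofdma->of_dma_data;
        struct dma_chan *chan, *candidate;

        /* walk only this instance's channels, so several controllers
         * (or engines) can coexist, unlike of_dma_simple_xlate() */
        list_for_each_entry(candidate, &d->device.channels, device_node) {
            if (candidate->client_count)
                continue;

            /* dma_get_slave_channel() returns NULL if we lost a race
             * for this channel; try the next one in that case */
            chan = dma_get_slave_channel(candidate);
            if (chan) {
                to_mmp_pdma_chan(chan)->drcmr = dma_spec->args[0];
                return chan;
            }
        }

        return NULL;
    }

The function would be registered at probe time with of_dma_controller_register(), passing the device instance as the xlate data.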
-
Committed by Daniel Mack

PXA peripherals need to obtain specific DMA request IDs, which are
eventually stored in the DRCMR register. Currently, clients are expected to
store that number in the slave config block as slave_id, which is
unfortunately incompatible with the way DMA resources are handled in DT
environments. This patch adds a filter function which stores the filter
parameter passed in by of-dma.c into the channel's drcmr register. For
backward compatibility, cfg->slave_id is still used if set to a non-zero
value.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
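A sketch of such a filter function; mmp_pdma_driver and to_mmp_pdma_chan() are assumed to exist in the driver:

    #include <linux/dmaengine.h>

    static bool mmp_pdma_filter_fn(struct dma_chan *chan, void *param)
    {
        /* only accept channels that belong to this driver */
        if (chan->device->dev->driver != &mmp_pdma_driver.driver)
            return false;

        /* the filter parameter is the DRCMR requestor number */
        to_mmp_pdma_chan(chan)->drcmr = *(unsigned int *)param;
        return true;
    }

In the slave-config path, a non-zero cfg->slave_id would still overwrite the drcmr value, preserving the old contract for legacy board code.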
-
Committed by Daniel Mack

There's no reason to limit the maximum transfer length to 0x1000. Use the
actual bit mask instead; the PDMA is able to transfer chunks of up to
SZ_8K - 1 bytes.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
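The shape of the change, sketched; 0x1fff is SZ_8K - 1, i.e. the full length field of the descriptor command register, but the exact macro names here are assumptions:

    /* before: an artificial cap well below what the hardware supports
     *   #define PDMA_MAX_DESC_BYTES  0x1000
     * after: derive the limit from the length bit mask itself */
    #define DCMD_LENGTH             0x01fff /* transfer length mask, SZ_8K - 1 */
    #define PDMA_MAX_DESC_BYTES     DCMD_LENGTH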
-
Committed by Daniel Mack

As suggested by Ezequiel García, release the spinlock only at the end of
the function and use a goto for the control flow. Just a minor cleanup.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
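The resulting flow, sketched on a hypothetical physical-channel lookup; function and field names follow the driver's style but are assumptions:

    static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan *pchan)
    {
        struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
        struct mmp_pdma_phy *phy, *found = NULL;
        unsigned long flags;
        int i;

        spin_lock_irqsave(&pdev->phy_lock, flags);

        for (i = 0; i < pdev->dma_channels; i++) {
            phy = &pdev->phy[i];
            if (!phy->vchan) {
                phy->vchan = pchan;
                found = phy;
                /* single unlock site at the end of the function */
                goto out_unlock;
            }
        }

    out_unlock:
        spin_unlock_irqrestore(&pdev->phy_lock, flags);
        return found;
    }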
-
Committed by Daniel Mack

The exact same calculation is done twice, so factor it out into a macro.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
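For example, the repeated expression here is the DRCMR register offset; a macro in the PXA style keeps both call sites in sync (treat the exact constants as illustrative):

    /* offset of the DRCMR register for request line n: lines 0-63 live at
     * 0x0100, lines 64 and up at 0x1100, four bytes per register */
    #define DRCMR(n)    ((((n) < 64) ? 0x0100 : 0x1100) + (((n) & 0x3f) << 2))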
-
- 13 Aug 2013, 1 commit
-
Committed by Jingoo Han

mmp_pdma_alloc_descriptor() is used only in this file. Fix the following
sparse warning:

drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
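The fix is a one-word storage-class change:

    /* before: external linkage, which sparse flags because there is no
     * declaration in any header */
    struct mmp_pdma_desc_sw *mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan);

    /* after: file-local, matching its single-file use */
    static struct mmp_pdma_desc_sw *mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan);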
-
- 05 Aug 2013, 3 commits
-
Committed by Xiang Wang

In mmp pdma, physical channels are allocated and freed dynamically. The
mapping from a DMA request to a DMA channel number in DRCMR should be
cleared when a physical channel is freed. Otherwise, conflicts arise:

1. A is using channel 2 and frees it when finished, but A's DRCMR still
   maps to channel 2.
2. B then gets channel 2, so B's DRCMR maps to channel 2 as well.

The datasheet warns: "Do not map two active requests to the same channel
since it produces unpredictable results", and we observed exactly that
during testing.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
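A sketch of the fix: the free path now clears the requestor-to-channel mapping before recycling the channel (the DRCMR() helper and field names are assumptions in the driver's style):

    static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
    {
        u32 reg;

        if (!pchan->phy)
            return;

        /* clear the DRCMR mapping so the old request line can no longer
         * target this soon-to-be-reused physical channel */
        reg = DRCMR(pchan->drcmr);
        writel(0, pchan->phy->base + reg);

        pchan->phy->vchan = NULL;
        pchan->phy = NULL;
    }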
-
Committed by Xiang Wang

In mmp pdma, physical channels are allocated and freed dynamically and
frequently, but no proper protection is in place: conflicts arise when
multiple users request physical channels at the same time. Use a spinlock
for protection.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Committed by Andy Shevchenko

According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
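After this change the status hook can collapse to the helper alone; a minimal sketch:

    static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
                                              dma_cookie_t cookie,
                                              struct dma_tx_state *txstate)
    {
        /* dma_cookie_status() only reads the cookie counters, which is
         * documented as safe without taking the channel lock */
        return dma_cookie_status(dchan, cookie, txstate);
    }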
-
- 26 Jan 2013, 1 commit
-
Committed by Thierry Reding

Convert all uses of devm_request_and_ioremap() to the newly introduced
devm_ioremap_resource(), which provides more consistent error handling.
devm_ioremap_resource() prints its own error messages, so all explicit
error messages can be removed from the failure code paths.

Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
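The conversion pattern, sketched for a generic probe (foo_probe is a placeholder):

    #include <linux/err.h>
    #include <linux/platform_device.h>

    static int foo_probe(struct platform_device *pdev)
    {
        struct resource *res;
        void __iomem *base;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(base))
            return PTR_ERR(base);   /* the helper already logged why */

        return 0;
    }

The old devm_request_and_ioremap() returned NULL on failure, forcing every caller to print its own message and pick an errno; the ERR_PTR-based helper folds both into one place.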
-
- 20 Jan 2013, 1 commit
-
Committed by Cong Ding

The pointer cfg is dereferenced in line 594, so there is no reason to
check it for NULL again in line 620.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
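The pattern being removed, reduced to its essence (hypothetical function, real struct):

    #include <linux/dmaengine.h>

    static u32 get_burst(struct dma_slave_config *cfg)
    {
        u32 burst = cfg->src_maxburst;  /* dereference: cfg must be valid */

        if (!cfg)                       /* dead check, deleted by the patch */
            return 0;

        return burst;
    }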
-
- 04 Jan 2013, 1 commit
-
Committed by Greg Kroah-Hartman

CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
markings need to be removed. This change removes the use of __devinit,
__devexit_p, __devinitconst, and __devexit from these drivers. Based on
patches originally written by Bill Pemberton, but redone by me in order to
handle some of the coding style issues better, by hand.

Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Viresh Kumar <viresh.linux@gmail.com>
Cc: Dan Williams <djbw@fb.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <baohua.song@csr.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
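Mechanically, the annotations simply disappear from the driver boilerplate; with hypothetical foo_* names:

    #include <linux/platform_device.h>

    /* the declarations previously carried __devinit/__devexit markings */
    static int foo_probe(struct platform_device *pdev);    /* was: __devinit */
    static int foo_remove(struct platform_device *pdev);   /* was: __devexit */

    static struct platform_driver foo_driver = {
        .driver = { .name = "foo" },
        .probe  = foo_probe,
        .remove = foo_remove,   /* was: __devexit_p(foo_remove) */
    };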
-
- 29 Nov 2012, 2 commits
-
Committed by Bill Pemberton

CONFIG_HOTPLUG is going away as an option, so __devinit is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Bill Pemberton

CONFIG_HOTPLUG is going away as an option, so __devexit_p is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Acked-by: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 14 Sep 2012, 1 commit
-
Committed by Zhangfei Gao

1. Virtual channel vs. physical channel
   Virtual channels are managed by the dmaengine core; physical channels
   own the hardware resources, such as the irq. A physical channel is
   allocated dynamically in descending priority order and freed as soon as
   its irq completes, so the highest-priority physical channel available is
   always the one handed out.
   Flow: issue pending list -> allocate the highest-priority physical
   channel available -> DMA done -> free the physical channel.

2. Lists: running list and pending list
   submit: desc list -> pending list
   issue_pending_list: if IDLE, pending list -> running list, then free the
   pending list (RUN)
   irq: free the running list (IDLE); check the pending list, then pending
   list -> running list and free the pending list (RUN)

3. irq
   Each list generates one irq, which invokes the callback. One list may
   contain several descriptor chains; in that case, make sure only the last
   descriptor in the list generates the irq.

4. Async
   Submit adds a descriptor chain to the pending list and may be called
   multiple times. If multiple descriptor chains are submitted, only the
   last descriptor generates an irq and triggers the callback. If IDLE,
   issue_pending_list starts the pending list, turning it into the running
   list. If RUN, the irq handler starts the pending list.

5. Test
   5.1 pxa3xx_nand on pxa910
   5.2 insmod dmatest.ko (threads_per_chan=y)
   By default, drivers/dma/dmatest.c tests every channel, exercising memcpy
   with one thread per channel.

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
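A condensed sketch of the pending-to-running handover described in points 2 and 4; the names follow the driver's style but are illustrative:

    /* called with chan->desc_lock held */
    static void start_pending_queue(struct mmp_pdma_chan *chan)
    {
        /* RUN: the irq handler will restart the queue when it drains */
        if (!chan->idle)
            return;

        /* IDLE: hand the whole pending list over and kick the hardware */
        if (list_empty(&chan->chain_pending))
            return;

        list_splice_tail_init(&chan->chain_pending, &chan->chain_running);
        chan->idle = false;
        /* ...program the first hardware descriptor and start the channel... */
    }

    static void mmp_pdma_issue_pending(struct dma_chan *dchan)
    {
        struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
        unsigned long flags;

        spin_lock_irqsave(&chan->desc_lock, flags);
        start_pending_queue(chan);
        spin_unlock_irqrestore(&chan->desc_lock, flags);
    }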
-