- 30 July 2012, 3 commits
-
-
Committed by Marek Szyprowski
This patch adds support for the dma_get_sgtable() function, which is required to let drivers share buffers allocated by the DMA-mapping subsystem. The generic implementation based on virt_to_page() is not suitable for the ARM dma-mapping subsystem.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
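As a rough illustration of the driver-facing side, here is a minimal sketch of how an exporting driver might use dma_get_sgtable() to describe a coherent allocation to another device; the device pointer, buffer size and error handling are placeholders, not part of this patch.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Sketch only: 'dev' and 'buf_size' are placeholders. */
static int export_dma_buffer(struct device *dev, size_t buf_size)
{
	struct sg_table sgt;
	dma_addr_t dma_handle;
	void *cpu_addr;
	int ret;

	cpu_addr = dma_alloc_coherent(dev, buf_size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* Build a scatterlist describing the coherent buffer so another
	 * driver (e.g. via dma-buf) can map it for its own device. */
	ret = dma_get_sgtable(dev, &sgt, cpu_addr, dma_handle, buf_size);
	if (ret) {
		dma_free_coherent(dev, buf_size, cpu_addr, dma_handle);
		return ret;
	}

	/* ... hand 'sgt' to the importer; later sg_free_table(&sgt) ... */
	return 0;
}
```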
-
Committed by Marek Szyprowski
Commit 9adc5374 ('common: dma-mapping: introduce mmap method') added a generic method for implementing the mmap user call to the dma_map_ops structure. This patch converts the ARM and PowerPC architectures (the only providers of dma_mmap_coherent/dma_mmap_writecombine calls) to use this generic dma_map_ops based call, and adds generic cross-architecture definitions for the dma_mmap_attrs, dma_mmap_coherent and dma_mmap_writecombine functions. A generic virt_to_page-based fallback implementation is provided for architectures which don't supply their own implementation of the mmap method.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
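A minimal sketch of what the generic call looks like from a driver's mmap file operation; the my_dev structure and its fields are hypothetical and only illustrate the parameters dma_mmap_coherent() expects.

```c
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical driver state; not part of the patch. */
struct my_dev {
	struct device *dev;
	void *cpu_addr;
	dma_addr_t dma_handle;
	size_t size;
};

static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *md = file->private_data;

	/* Let the generic dma_map_ops mmap method map the coherent
	 * buffer into the calling process's address space. */
	return dma_mmap_coherent(md->dev, vma, md->cpu_addr,
				 md->dma_handle, md->size);
}
```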
-
Committed by Marek Szyprowski
This patch changes the dma-mapping subsystem to use generic vmalloc areas for all consistent DMA allocations. This increases the total size limit of consistent allocations and removes platform hacks and a lot of duplicated code. Atomic allocations are served from a special pool preallocated at boot, because vmalloc areas cannot be reliably created in atomic context.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
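A small sketch of the caller-side distinction that motivates the atomic pool: allocations made from atomic context must pass GFP_ATOMIC, and with this patch such requests are served from the boot-time pool rather than from a freshly created vmalloc area. The helper name and parameters are illustrative.

```c
#include <linux/dma-mapping.h>

/* Sketch: an allocation made from interrupt context must use
 * GFP_ATOMIC; vmalloc areas cannot be set up in atomic context,
 * so these requests come out of the pool preallocated at boot.
 * 'dev' and 'len' are placeholders. */
static void *alloc_in_irq_context(struct device *dev, size_t len,
				  dma_addr_t *handle)
{
	return dma_alloc_coherent(dev, len, handle, GFP_ATOMIC);
}
```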
-
- 21 May 2012, 5 commits
-
-
Committed by Marek Szyprowski
This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
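A hedged sketch of the write-combine allocation path as seen from a driver, assuming the historical dma_alloc_writecombine() wrapper; the framebuffer use case and the parameters are illustrative only.

```c
#include <linux/dma-mapping.h>

/* Sketch: allocate a write-combined buffer (e.g. for a framebuffer).
 * With this patch the call is routed through the generic alloc method,
 * with the DMA_ATTR_WRITE_COMBINE attribute set internally.
 * 'dev' and 'fb_size' are placeholders. */
static void *alloc_fb(struct device *dev, size_t fb_size, dma_addr_t *handle)
{
	return dma_alloc_writecombine(dev, fb_size, handle, GFP_KERNEL);
}
```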
-
Committed by Marek Szyprowski
This patch removes the dma bounce hooks from the common DMA mapping implementation on the ARM architecture and creates a separate set of dma_map_ops for dma bounce devices.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Committed by Marek Szyprowski
This patch modifies the dma-mapping implementation on the ARM architecture to use the common dma_map_ops structure and the asm-generic/dma-mapping-common.h helpers.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Committed by Marek Szyprowski
This patch removes the need for the offset parameter in the dma bounce functions. This is required to let the dma-mapping framework on the ARM architecture use the common, generic dma_map_ops based dma-mapping helpers.

Background and more detailed explanation: the dma_*_range_* functions have been available since the early days of the DMA mapping API. They are the correct way of doing partial syncs on a buffer (usually used by network device drivers). This patch changes only the internal implementation of the dma bounce functions to let them tunnel through the dma_map_ops structure. The driver API stays unchanged, so drivers are still obliged to call the dma_*_range_* functions to keep the code clean and easy to understand.

The only drawback of this patch is reduced detection of DMA API abuse. Consider the following code:

    dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
    dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);

Without the patch such code fails, because the dma bounce code is unable to find the bounce buffer for the given dma_address. After the patch the above sync call is equivalent to:

    dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

which succeeds. I don't consider this a real problem, because DMA API abuse should be caught by the debug_dma_* function family. This patch lets us simplify the internal low-level implementation without changing the driver-visible API.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
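The partial-sync pattern described above, written out as a small sketch; the sizes and offsets mirror the example in the text, while the device pointer and error handling are placeholders.

```c
#include <linux/dma-mapping.h>

/* Sketch of the partial-sync pattern: map a whole buffer once, then
 * sync only the sub-range the CPU actually touches.  'dev' and 'ptr'
 * are placeholders; the sizes follow the example above. */
static void partial_sync_example(struct device *dev, void *ptr)
{
	dma_addr_t dma_addr;

	dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr))
		return;

	/* Hand ownership of bytes 16..47 back to the CPU: base handle
	 * plus an explicit offset, as the DMA API expects. */
	dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

	/* ... CPU updates the data, then returns it to the device ... */
	dma_sync_single_range_for_device(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

	dma_unmap_single(dev, dma_addr, 64, DMA_TO_DEVICE);
}
```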
-
Committed by Marek Szyprowski
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code easier to read.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
- 23 August 2011, 1 commit
-
-
Committed by Catalin Marinas
This is to avoid a compiler warning when invoking the __bus_to_virt() macro. The dma_to_virt() function gets addresses within the 32-bit range.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 August 2011, 1 commit
-
-
Committed by Jon Medhurst
This function can be called during boot to increase the size of the consistent DMA region above its default value of 2MB. It must be called before the memory allocator is initialised, i.e. before any core_initcall.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
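A sketch of how a machine might raise the limit at boot, assuming the function introduced here is init_consistent_dma_size(); the 16MB value and the early-init hook it is called from are illustrative assumptions, not taken from the patch.

```c
#include <linux/init.h>
#include <linux/dma-mapping.h>

/* Sketch, under the assumptions stated above: called from a machine's
 * early init, before the consistent-memory allocator is set up
 * (i.e. before any core_initcall). */
static void __init my_machine_init_early(void)
{
	init_consistent_dma_size(16 << 20);	/* 16 MB, example value */
}
```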
-
- 12 July 2011, 1 commit
-
-
Committed by Russell King
ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get rid of it from ARM by replacing it with arm_dma_zone_mask. Move dma_supported() and dma_set_mask() out of line, and have dma_supported() check this new variable instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 July 2011, 1 commit
-
-
Committed by Russell King
Simplify the dmabounce-specific code in dma_set_mask(). We can just omit setting the DMA mask if dmabounce is enabled (in that case we will already have set the DMA mask via callbacks when the device was created).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 July 2011, 2 commits
-
-
Committed by Russell King
Pass the device-type-specific needs_bounce function in at dmabounce register time, avoiding the need for a platform-specific global function to do this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
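A hedged sketch of the registration call after this change, assuming dmabounce_register_dev() now takes the needs_bounce callback as its final argument; the pool sizes and the 32-bit addressing limit in the callback are made-up example values.

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Sketch only: a hypothetical per-device decision about whether a
 * given DMA address range must be bounced. */
static int my_needs_bounce(struct device *dev, dma_addr_t dma_addr,
			   size_t size)
{
	return (dma_addr + size) > (dma_addr_t)0xffffffff;
}

static int my_platform_setup(struct device *dev)
{
	/* Small/large bounce-buffer pool sizes are illustrative values;
	 * the callback replaces the old global dma_needs_bounce(). */
	return dmabounce_register_dev(dev, 2048, 4096, my_needs_bounce);
}
```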
-
Committed by Russell King
Use the dma_map_page()/dma_unmap_page() internals to handle dma_map_single() and dma_unmap_single().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 January 2011, 1 commit
-
-
Committed by Russell King
Add ARM support for the DMA debug infrastructure, which allows DMA API usage to be debugged.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 January 2011, 1 commit
-
-
Committed by Russell King
Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine(). Build tested on Versatile, OMAP1, IOP13xx and KS8695.
Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 September 2010, 1 commit
-
-
Committed by Russell King
This reverts commit 4fa5518c, which causes a compilation regression for IXP4xx platforms.
Reported-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 August 2010, 2 commits
-
-
Committed by FUJITA Tomonori
Architectures implement dma_is_consistent() in different ways (some misinterpret the definition of the API in DMA-API.txt), so it hasn't been very useful for drivers. We have only one user of the API in tree, and out-of-tree drivers are unlikely to use it. Even if we fixed dma_is_consistent() in some architectures, it doesn't look useful at all. It was invented long ago for some old systems that couldn't allocate coherent memory at all. It's better to export only APIs that are definitely necessary for drivers, so let's remove this API.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by FUJITA Tomonori
dma_get_cache_alignment() returns the minimum DMA alignment. Architectures define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we can unify the dma_get_cache_alignment implementations. Note that some architectures implement dma_get_cache_alignment wrongly: since it should return the minimum DMA alignment, fully coherent architectures should return 1. This patch also fixes that issue.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
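A small sketch of the usual driver-side use of the now-unified helper: rounding a buffer length up to the minimum DMA alignment so it does not share cache lines with unrelated data. The helper name and length parameter are illustrative.

```c
#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/slab.h>

/* Sketch: allocate a driver-private buffer whose size is a multiple
 * of the minimum DMA alignment.  'len' is a placeholder. */
static void *alloc_dma_safe(size_t len)
{
	size_t aligned = ALIGN(len, dma_get_cache_alignment());

	return kmalloc(aligned, GFP_KERNEL);
}
```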
-
- 05 August 2010, 1 commit
-
-
Committed by Eric Miao
With a correct dev->dma_mask set before calling dmabounce_register_dev(), dma_needs_bounce() is not necessary. The sa1111, though, is a bit complicated: until it is fully understood and fixed, dma_needs_bounce() for sa1111 is kept when CONFIG_SA1111 is enabled, with no side effects (guarded by the machine_is_* condition). Thanks to Mike Rapoport for fixing an error in the original version of the patch and getting this tested.
Acked-by: Mike Rapoport <mike@compulab.co.il>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
- 13 March 2010, 1 commit
-
-
Committed by FUJITA Tomonori
This converts ARM to the generic pci_set_dma_mask and pci_set_consistent_dma_mask (and removes HAVE_ARCH_PCI_SET_DMA_MASK for dmabounce).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Looked-over-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Greg KH <greg@kroah.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
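For reference, a sketch of the driver-side calls that now route through the generic implementation on ARM as well; the probe helper and the 32-bit mask are illustrative.

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Sketch: set the streaming and coherent DMA masks from a PCI driver's
 * probe path.  'pdev' would come from the probe callback. */
static int my_probe_set_masks(struct pci_dev *pdev)
{
	int err;

	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
	if (err)
		return err;

	return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
}
```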
-
- 15 February 2010, 2 commits
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Committed by Russell King
The DMA API has the notion of buffer ownership; make it explicit in the ARM implementation of this API. This gives us a set of hooks to allow us to deal with CPU cache issues arising from non-cache-coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
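A sketch of the ownership rules this patch makes explicit, using a streaming DMA_FROM_DEVICE mapping; the device pointer, buffer and length are placeholders.

```c
#include <linux/dma-mapping.h>

/* Sketch: the device owns the buffer between map/sync_for_device and
 * sync_for_cpu/unmap; the CPU must not touch it while the device owns
 * it.  'dev', 'buf' and 'len' are placeholders. */
static void ownership_example(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return;
	/* Device owns the buffer: start the transfer and wait for it. */

	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	/* CPU owns the buffer: it may now read the received data. */

	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
	/* Device owns it again for the next transfer. */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
}
```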
-
- 23 November 2009, 3 commits
-
-
Committed by Russell King
We will need to treat dma_unmap_page() differently from dma_unmap_single().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
-
Committed by Russell King
The non-highmem() and the __pfn_to_bus() based page_to_dma() both compile to the same code, so it's pointless having these two different approaches. Use the __pfn_to_bus() based version.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 March 2009, 2 commits
-
-
Committed by Nicolas Pitre
If a machine class has a custom __virt_to_bus() implementation then it must provide a __arch_page_to_dma() implementation as well which is _not_ based on page_address() in order to support highmem. This patch fixes the existing __arch_page_to_dma() implementations and provides a default implementation otherwise. The default implementation for highmem is based on __pfn_to_bus(), which is defined only when no custom __virt_to_bus() is provided by the machine class. That leaves only ebsa110 and footbridge, which cannot support highmem until they provide their own __arch_page_to_dma() implementation. But highmem support on those legacy platforms with limited memory is certainly not a priority.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
Committed by Nicolas Pitre
This is a helper to be used by the DMA mapping API to handle cache maintenance for memory identified by a page structure instead of a virtual address. Those pages may or may not be highmem pages, and when they are highmem pages, they may or may not be virtually mapped. When they are not mapped there is no L1 cache to worry about, but even in that case the L2 cache must be processed, since unmapped highmem pages can still be L2 cached.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
- 13 December 2008, 1 commit
-
-
Committed by Russell King
dma_supported() is supposed to indicate whether the system can support the DMA mask it was passed, which depends on the maximal address that can be returned for DMA allocations. If the mask is smaller than that, we are unable to guarantee that the driver can reliably obtain suitable memory.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 November 2008, 1 commit
-
-
Committed by Russell King
Fix the following compiler warning:

    arch/arm/mm/dma-mapping.c: In function `dma_sync_sg_for_cpu':
    arch/arm/mm/dma-mapping.c:588: warning: statement with no effect

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 September 2008, 2 commits
-
-
Committed by Russell King
... to prevent people being misled.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
As per the dma_unmap_* calls, we don't touch the cache when a DMA buffer transitions from device to CPU ownership. Presently, no problems have been identified with speculative cache prefetching, which in itself is a new feature in later architectures. We may have to revisit the DMA API later for these architectures anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 September 2008, 5 commits
-
-
Committed by Russell King
Validate the direction argument like x86 does. In addition, validate the dma_unmap_* parameters against those passed to dma_map_* when using the DMA bounce code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The dmabounce dma_sync_xxx() implementations have been broken for quite some time: they all copy data between the DMA buffer and the CPU-visible buffer irrespective of the change of ownership. (IOW, a DMA_FROM_DEVICE mapping copies data from the DMA buffer to the CPU buffer during a call to dma_sync_single_for_device().) Fix it by getting rid of sync_single(), moving its contents into the recently created dmabounce_sync_for_xxx() functions and adjusting appropriately. This also makes it possible to properly support the DMA range sync functions.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 September 2008, 2 commits
-
-
Committed by Russell King
We can translate a struct page directly to a DMA address using page_to_dma(); there is no need to use page_address() followed by virt_to_dma().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Update the ARM DMA scatter-gather APIs for the scatterlist changes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
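A hedged sketch of a driver mapping a scatterlist and walking the mapped segments with the updated API; the descriptor-programming step is represented by a debug print and all names are placeholders.

```c
#include <linux/dma-mapping.h>
#include <linux/printk.h>
#include <linux/scatterlist.h>

/* Sketch: map a scatterlist for DMA and iterate over the segments the
 * mapping actually produced.  'dev', 'sgl' and 'nents' are placeholders. */
static int map_sg_example(struct device *dev, struct scatterlist *sgl,
			  int nents)
{
	struct scatterlist *sg;
	int i, mapped;

	mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;

	for_each_sg(sgl, sg, mapped, i) {
		/* Program one hardware descriptor per mapped segment. */
		pr_debug("seg %d: addr=%llx len=%u\n", i,
			 (unsigned long long)sg_dma_address(sg),
			 sg_dma_len(sg));
	}

	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
	return 0;
}
```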
-
- 10 August 2008, 1 commit
-
-
Committed by Russell King
Convert the existing dma_sync_single_for_* APIs to the new range-based APIs, and make the dma_sync_single_for_* API a superset of them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-