- 31 October 2013, 1 commit
-
-
By Santosh Shilimkar

Most of the kernel code assumes that max*pfn is the maximum pfn because the physical start of memory is expected to be PFN 0. Since this assumption is not true on ARM architectures, the meaning of max*pfn there is the number of memory pages. This was done to keep drivers happy which use these variables to calculate the DMA bounce limit from dma_mask. Now that we have an architecture override for the maximum DMAable pfn, let's make max*pfn mean the maximum pfn on ARM as well.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
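
The distinction matters on machines whose RAM does not start at physical address zero; a hedged sketch (hypothetical helper and numbers, not part of the patch):

    #include <linux/types.h>
    #include <linux/bootmem.h>      /* max_pfn */
    #include <asm/page.h>           /* PAGE_SHIFT */

    /*
     * Hedged sketch: with RAM starting at 0x80000000 and 512MB present,
     * the "number of pages" is 0x20000 but the maximum pfn is 0xa0000.
     * A driver deriving a DMA bounce limit needs the latter, so that
     * the limit below comes out as 0xa0000000 (the top of RAM).
     */
    static inline phys_addr_t example_dma_bounce_limit(void)
    {
            return (phys_addr_t)max_pfn << PAGE_SHIFT;
    }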
-
- 04 December 2012, 1 commit
-
-
By Ming Lei

Without this patch, a warning like the one below is dumped when DMA-API debugging is enabled:

    [   11.069763] ------------[ cut here ]------------
    [   11.074645] WARNING: at lib/dma-debug.c:948 check_unmap+0x770/0x860()
    [   11.081420] ehci-omap ehci-omap.0: DMA-API: device driver failed to check map error [device address=0x00000000adb78e80] [size=8 bytes] [mapped as single]
    [   11.095611] Modules linked in:

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
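
The warning fires when a driver never tests the handle returned by dma_map_single()/dma_map_page(). A hedged sketch of the pattern the checker expects (hypothetical driver function):

    #include <linux/dma-mapping.h>

    /*
     * Hedged sketch: dma_mapping_error() both detects a failed mapping
     * and marks the mapping as "checked" for lib/dma-debug.c.
     */
    static int example_map_buffer(struct device *dev, void *buf,
                                  size_t len, dma_addr_t *handle)
    {
            *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, *handle))
                    return -ENOMEM;         /* do not use *handle */
            return 0;
    }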
-
- 22 November 2012, 1 commit
-
-
By Gregory CLEMENT

Expose another DMA operations function: arm_dma_set_mask. This function will be added to a custom set of DMA operations for Armada 370/XP. Depending on its configuration, Armada 370/XP can be set up as a "nearly" coherent architecture. In that case its DMA operations are made of:

- functions specific to this architecture
- already exposed ARM DMA-related functions
- arm_dma_set_mask, which was not exposed until now

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
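
A hedged sketch of the resulting ops table (field layout as in the v3.7-era struct dma_map_ops; table name hypothetical):

    #include <linux/dma-mapping.h>
    #include <asm/dma-mapping.h>

    /*
     * Hedged sketch: a platform's "nearly coherent" ops table can now
     * reuse the exported arm_dma_set_mask next to its own callbacks.
     */
    static struct dma_map_ops example_nearly_coherent_dma_ops = {
            /* ... architecture-specific map/unmap/sync callbacks ... */
            .set_dma_mask = arm_dma_set_mask,       /* newly exported */
    };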
-
- 13 November 2012, 1 commit
-
-
By Marek Szyprowski

Since commit e9da6e99 ("ARM: dma-mapping: remove custom consistent dma region"), setting the consistent DMA memory size is no longer required. All calls to this function have already been removed, so the init_consistent_dma_size() stub can go as well.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 26 October 2012, 1 commit
-
-
By Marek Szyprowski

This reverts commit 871ae57a, which is scheduled for v3.8 and accidentally got into the v3.7-rc series.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 23 October 2012, 1 commit
-
-
By Ming Lei

Without this patch, a warning like the one below is dumped when DMA-API debugging is enabled:

    [   11.069763] ------------[ cut here ]------------
    [   11.074645] WARNING: at lib/dma-debug.c:948 check_unmap+0x770/0x860()
    [   11.081420] ehci-omap ehci-omap.0: DMA-API: device driver failed to check map error [device address=0x00000000adb78e80] [size=8 bytes] [mapped as single]
    [   11.095611] Modules linked in:

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 02 October 2012, 1 commit
-
-
By Rob Herring

arch_is_coherent is problematic as it is a global symbol. This doesn't work for multi-platform kernels or platforms which can support per-device coherent DMA. This adds arm_coherent_dma_ops to be used for devices which are connected coherently (i.e. to the ACP port on a Cortex-A9 or A15). The arm_dma_ops are modified at boot when arch_is_coherent is true.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
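
A hedged sketch of what per-device selection looks like (hypothetical setup hook; set_dma_ops() is the ARM helper for installing per-device ops):

    #include <linux/device.h>
    #include <asm/dma-mapping.h>

    /*
     * Hedged sketch: a coherently connected device (e.g. via an ACP
     * port) gets arm_coherent_dma_ops, while other devices in the same
     * kernel image keep the default arm_dma_ops.
     */
    static void example_setup_device(struct device *dev, bool coherent)
    {
            set_dma_ops(dev, coherent ? &arm_coherent_dma_ops
                                      : &arm_dma_ops);
    }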
-
- 29 August 2012, 1 commit
-
-
By Marek Szyprowski

Some platforms might need to increase the atomic coherent pool to make sure that their devices will be able to allocate all their buffers from atomic context. This function can also be used to decrease the atomic coherent pool size if coherent allocations are not used on the given sub-platform.

Suggested-by: Josh Coombs <josh.coombs@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
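
A hedged sketch of a call site (hypothetical board hook; the resize must happen early in boot, before the atomic pool is allocated):

    #include <asm/dma-mapping.h>

    /*
     * Hedged sketch: run from an early machine hook, e.g. .init_early,
     * to grow the atomic coherent pool above its default.
     */
    static void __init example_board_init_early(void)
    {
            init_dma_coherent_pool_size(4 << 20);   /* 4MB */
    }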
-
- 30 July 2012, 3 commits
-
-
By Marek Szyprowski

This patch adds support for the dma_get_sgtable() function, which is required to let drivers share buffers allocated by the DMA-mapping subsystem. The generic implementation based on virt_to_page() is not suitable for the ARM dma-mapping subsystem.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
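
A hedged sketch of the driver-side use (hypothetical exporter function; cpu_addr/handle come from an earlier dma_alloc_coherent() of size bytes):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    /*
     * Hedged sketch: describe a coherent buffer as a scatterlist table
     * so another device or subsystem (e.g. a dma-buf importer) can map
     * it. The caller frees sgt with sg_free_table() when done.
     */
    static int example_export(struct device *dev, struct sg_table *sgt,
                              void *cpu_addr, dma_addr_t handle,
                              size_t size)
    {
            return dma_get_sgtable(dev, sgt, cpu_addr, handle, size);
    }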
-
By Marek Szyprowski

Commit 9adc5374 ("common: dma-mapping: introduce mmap method") added a generic method for implementing the mmap user call to the dma_map_ops structure. This patch converts the ARM and PowerPC architectures (the only providers of the dma_mmap_coherent/dma_mmap_writecombine calls) to use this generic dma_map_ops based call, and adds a generic cross-architecture definition for the dma_mmap_attrs, dma_mmap_coherent and dma_mmap_writecombine functions. A generic virt_to_page-based mmap fallback implementation is provided for architectures which don't supply their own mmap method.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
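
A hedged sketch of a driver's mmap file operation using the now cross-architecture call (struct and field names hypothetical):

    #include <linux/dma-mapping.h>
    #include <linux/fs.h>
    #include <linux/mm.h>

    struct example_dev {            /* hypothetical per-device state */
            struct device *dev;
            void *cpu_addr;         /* from dma_alloc_coherent() */
            dma_addr_t handle;
            size_t size;
    };

    /* Hedged sketch: map the coherent buffer into the calling process. */
    static int example_mmap(struct file *file, struct vm_area_struct *vma)
    {
            struct example_dev *ed = file->private_data;

            return dma_mmap_coherent(ed->dev, vma, ed->cpu_addr,
                                     ed->handle, ed->size);
    }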
-
By Marek Szyprowski

This patch changes the dma-mapping subsystem to use generic vmalloc areas for all consistent dma allocations. This increases the total size limit of consistent allocations and removes platform hacks and a lot of duplicated code. Atomic allocations are served from a special pool preallocated at boot, because vmalloc areas cannot be reliably created in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
-
- 21 May 2012, 5 commits
-
-
By Marek Szyprowski

This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
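
A hedged sketch of how a writecombine allocation travels through the generic attrs path after this change (v3.5-era dma-attrs API; wrapper name hypothetical):

    #include <linux/dma-attrs.h>
    #include <linux/dma-mapping.h>

    /*
     * Hedged sketch: dma_alloc_writecombine() now amounts to an
     * attrs-based allocation carrying DMA_ATTR_WRITE_COMBINE.
     */
    static void *example_alloc_wc(struct device *dev, size_t size,
                                  dma_addr_t *handle)
    {
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }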
-
By Marek Szyprowski

This patch removes the dma bounce hooks from the common dma-mapping implementation on the ARM architecture and creates a separate set of dma_map_ops for dma bounce devices.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski

This patch modifies the dma-mapping implementation on the ARM architecture to use the common dma_map_ops structure and the asm-generic/dma-mapping-common.h helpers.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski

This patch removes the need for the offset parameter in the dma bounce functions. This is required to let the dma-mapping framework on the ARM architecture use the common, generic dma_map_ops based dma-mapping helpers.

Background and more detailed explanation:

The dma_*_range_* functions have been available since the early days of the dma mapping api. They are the correct way of doing partial syncs on a buffer (usually used by network device drivers). This patch changes only the internal implementation of the dma bounce functions to let them tunnel through the dma_map_ops structure. The driver api stays unchanged, so drivers are obliged to call the dma_*_range_* functions to keep the code clean and easy to understand.

The only drawback of this patch is reduced detection of dma api abuse. Consider the following code:

    dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
    dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);

Without the patch such code fails, because the dma bounce code is unable to find the bounce buffer for the given dma_address. After the patch the above sync call is equivalent to:

    dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

which succeeds. I don't consider this a real problem, because dma api abuse should be caught by the debug_dma_* function family. This patch lets us simplify the internal low-level implementation without changing the driver-visible API.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski

Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code easier to read.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
- 23 August 2011, 1 commit
-
-
By Catalin Marinas

This is to avoid a compiler warning when invoking the __bus_to_virt() macro. The dma_to_virt() function gets addresses within the 32-bit range.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 August 2011, 1 commit
-
-
By Jon Medhurst

This function can be called during boot to increase the size of the consistent DMA region above its default value of 2MB. It must be called before the memory allocator is initialised, i.e. before any core_initcall.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
- 12 July 2011, 1 commit
-
-
By Russell King

ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get rid of it from ARM by replacing it with arm_dma_zone_mask. Move dma_supported() and dma_set_mask() out of line, and have dma_supported() check this new variable instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
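
The driver-facing side is unchanged; a hedged sketch of the calls that now consult arm_dma_zone_mask internally (hypothetical probe helper):

    #include <linux/dma-mapping.h>

    /* Hedged sketch: ask whether 32-bit DMA addressing is supported. */
    static int example_setup_dma(struct device *dev)
    {
            return dma_set_mask(dev, DMA_BIT_MASK(32));
    }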
-
- 08 July 2011, 1 commit
-
-
By Russell King

Simplify the dmabounce-specific code in dma_set_mask(). We can just omit setting the dma mask if dmabounce is enabled (in that case we will have already set the dma mask via callbacks when the device was created).

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 July 2011, 2 commits
-
-
By Russell King

Pass the device type specific needs_bounce function in at dmabounce register time, avoiding the need for a platform specific global function to do this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
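
A hedged sketch of the registration shape after this change (prototype modelled on the v3.0-era dmabounce code; platform names and DMA window hypothetical):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /*
     * Hedged sketch: per-transfer bounce decision, now passed at
     * registration instead of being a global dma_needs_bounce().
     * Here: bounce anything beyond a hypothetical 64MB DMA window.
     */
    static int example_needs_bounce(struct device *dev, dma_addr_t addr,
                                    size_t size)
    {
            return (addr + size - 1) > DMA_BIT_MASK(26);
    }

    static int example_register(struct device *dev)
    {
            return dmabounce_register_dev(dev, 2048, 65536,
                                          example_needs_bounce);
    }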
-
By Russell King

Use dma_map_page()/dma_unmap_page() internals to handle dma_map_single() and dma_unmap_single().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 January 2011, 1 commit
-
-
By Russell King

Add ARM support for the DMA debug infrastructure, which allows the DMA API usage to be debugged.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
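
For reference, the checker being wired up is the generic one in lib/dma-debug.c; a hedged sketch of turning it on (standard config symbol plus the documented boot parameter for sizing the tracking table):

    CONFIG_DMA_API_DEBUG=y

The number of tracked mappings can be tuned on the kernel command line with dma_debug_entries=<number>.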
-
- 03 January 2011, 1 commit
-
-
By Russell King

Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine(). Build tested on Versatile, OMAP1, IOP13xx and KS8695.

Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
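
A hedged sketch of the new helpers' shape (modelled on the ARM header of this era; __pfn_to_bus()/__bus_to_pfn() are the underlying mach-level primitives, and the unused dev parameter is kept for API symmetry):

    /*
     * Hedged sketch: pure pfn arithmetic, so no struct page is needed
     * for the memory being translated.
     */
    static inline dma_addr_t pfn_to_dma(struct device *dev,
                                        unsigned long pfn)
    {
            return (dma_addr_t)__pfn_to_bus(pfn);
    }

    static inline unsigned long dma_to_pfn(struct device *dev,
                                           dma_addr_t addr)
    {
            return __bus_to_pfn(addr);
    }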
-
- 08 September 2010, 1 commit
-
-
By Russell King

This reverts commit 4fa5518c, which causes a compilation regression for IXP4xx platforms.

Reported-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 August 2010, 2 commits
-
-
By FUJITA Tomonori

Architectures implement dma_is_consistent() in different ways (some misinterpret the definition of the API in DMA-API.txt), so it hasn't been very useful for drivers. We have only one user of the API in tree, and out-of-tree drivers are unlikely to use it. Even if we fixed dma_is_consistent() in some architectures, it still wouldn't be useful: it was invented long ago for some old systems that couldn't allocate coherent memory at all. It's better to export only APIs that are definitely necessary for drivers, so let's remove this API.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By FUJITA Tomonori

dma_get_cache_alignment() returns the minimum DMA alignment. Architectures define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we can unify the dma_get_cache_alignment implementations. Note that some architectures implement dma_get_cache_alignment wrongly: since it should return the minimum DMA alignment, fully coherent architectures should return 1. This patch also fixes that issue.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
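
A hedged sketch of the typical driver-side use of the helper (hypothetical allocator wrapper):

    #include <linux/dma-mapping.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>

    /*
     * Hedged sketch: size a streaming-DMA buffer so it never shares a
     * cache line with unrelated data; on a fully coherent architecture
     * dma_get_cache_alignment() may simply return 1.
     */
    static void *example_alloc_dma_safe(size_t len)
    {
            return kmalloc(ALIGN(len, dma_get_cache_alignment()),
                           GFP_KERNEL);
    }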
-
- 05 August 2010, 1 commit
-
-
By Eric Miao

With a correct dev->dma_mask in place before calling dmabounce_register_dev(), dma_needs_bounce() is not necessary. The sa1111, though, is a bit complicated: until it is fully understood and fixed, dma_needs_bounce() for sa1111 is kept when CONFIG_SA1111 is enabled, with no side effects (guarded by the machine_is_* condition). Thanks to Mike Rapoport for fixing one error in the original version of the patch and getting this tested.

Acked-by: Mike Rapoport <mike@compulab.co.il>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
- 13 March 2010, 1 commit
-
-
By FUJITA Tomonori

This converts ARM to the generic pci_set_dma_mask and pci_set_consistent_dma_mask (and removes HAVE_ARCH_PCI_SET_DMA_MASK, used for dmabounce).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Looked-over-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Greg KH <greg@kroah.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
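
For reference, a hedged sketch of the unchanged driver-facing calls that now route through the generic implementation (hypothetical probe helper):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    /* Hedged sketch: a PCI driver declaring 32-bit DMA capability. */
    static int example_pci_dma_setup(struct pci_dev *pdev)
    {
            int ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));

            if (ret)
                    return ret;
            return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
    }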
-
- 15 February 2010, 2 commits
-
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
By Russell King

The DMA API has a notion of buffer ownership; make it explicit in the ARM implementation of this API. This gives us a set of hooks to allow us to deal with CPU cache issues arising from non-cache-coherent DMA.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
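
A hedged sketch of the ownership model these hooks act on (hypothetical receive path; error checking of the mapping elided for brevity):

    #include <linux/dma-mapping.h>

    /*
     * Hedged sketch: between map/sync_for_device and sync_for_cpu the
     * device owns the buffer and the CPU must not touch it; the sync
     * calls are the explicit ownership transfers that trigger cache
     * maintenance on non-coherent ARM systems.
     */
    static void example_rx(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t handle = dma_map_single(dev, buf, len,
                                               DMA_FROM_DEVICE);

            /* ... device DMAs into buf (device owns the buffer) ... */

            dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
            /* CPU owns the buffer: safe to read buf here. */

            dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
            /* Device owns it again for the next transfer. */

            dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
    }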
-
- 23 November 2009, 3 commits
-
-
By Russell King

We will need to treat dma_unmap_page() differently from dma_unmap_single().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
-
By Russell King

The non-highmem() and the __pfn_to_bus() based page_to_dma() implementations both compile to the same code, so it's pointless having these two different approaches. Use the __pfn_to_bus() based version.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 March 2009, 2 commits
-
-
By Nicolas Pitre

If a machine class has a custom __virt_to_bus() implementation then it must provide a __arch_page_to_dma() implementation as well which is _not_ based on page_address() in order to support highmem. This patch fixes the existing __arch_page_to_dma() implementations and provides a default implementation otherwise. The default implementation for highmem is based on __pfn_to_bus(), which is defined only when no custom __virt_to_bus() is provided by the machine class. That leaves only ebsa110 and footbridge, which cannot support highmem until they provide their own __arch_page_to_dma() implementation. But highmem support on those legacy platforms with limited memory is certainly not a priority.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
By Nicolas Pitre

This is a helper to be used by the DMA mapping API to handle cache maintenance for memory identified by a page structure instead of a virtual address. Those pages may or may not be highmem pages, and when they're highmem pages, they may or may not be virtually mapped. When they're not mapped then there is no L1 cache to worry about. But even in that case the L2 cache must be processed, since unmapped highmem pages can still be L2 cached.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
- 13 December 2008, 1 commit
-
-
By Russell King

dma_supported() is supposed to indicate whether the system can support the DMA mask it was passed, which depends on the maximal address which can be returned for DMA allocations. If the mask is smaller than that, we are unable to guarantee that the driver can reliably obtain suitable memory.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 November 2008, 1 commit
-
-
By Russell King

This fixes the following compiler warning:

    arch/arm/mm/dma-mapping.c: In function `dma_sync_sg_for_cpu':
    arch/arm/mm/dma-mapping.c:588: warning: statement with no effect

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 September 2008, 2 commits
-
-
By Russell King

... to prevent people being misled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

As per the dma_unmap_* calls, we don't touch the cache when a DMA buffer transitions from device to CPU ownership. Presently, no problems have been identified with speculative cache prefetching, which is itself a new feature in later architectures. We may have to revisit the DMA API later for these architectures anyway.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-