- 11 June 2012, 1 commit
-
-
By Sachin Kamat
Fixes the following sparse warnings:

arch/arm/mm/dma-mapping.c:231:15: warning: symbol 'consistent_base' was not declared. Should it be static?
arch/arm/mm/dma-mapping.c:326:8: warning: symbol 'coherent_pool_size' was not declared. Should it be static?

Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
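A minimal sketch of the class of fix these warnings ask for; the symbol names come from the warning text, while the exact types are assumptions:

    /* file-scope objects used only inside dma-mapping.c get internal
     * linkage, which silences the sparse warnings (types assumed) */
    static unsigned long consistent_base;
    static size_t coherent_pool_size;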
-
- 04 June 2012, 1 commit
-
-
By Marek Szyprowski
CMA has been enabled unconditionally on all ARMv6+ systems to solve the long-standing issue of double kernel mappings for all DMA coherent buffers. This however created a dependency on CONFIG_EXPERIMENTAL for the whole ARM architecture, which should really be avoided. This patch removes that dependency and lets one use the old, well-tested dma-mapping implementation on ARMv6+ systems without having to enable EXPERIMENTAL options.

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 21 May 2012, 12 commits
-
-
By Vitaly Andrianov
The dma_contiguous_remap() function clears existing section maps using the wrong size (PGDIR_SIZE instead of PMD_SIZE). This is a bug which does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same. On LPAE systems, however, it causes the kernel to hang at this point. The fix has been tested on both LPAE and non-LPAE kernel builds.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
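An illustrative sketch of the kind of teardown loop being fixed; the variable and function names are assumed rather than taken from the patch (pmd_off_k() is ARM's mm-internal kernel-pmd lookup helper). The step must be PMD_SIZE, which only coincides with PGDIR_SIZE on non-LPAE 2-level page tables.

    #include <asm/pgtable.h>

    static void sketch_clear_section_maps(unsigned long start, unsigned long end)
    {
        unsigned long addr;

        /* tear down existing section mappings one PMD (2 MiB) at a time;
         * on LPAE, PGDIR_SIZE covers 1 GiB and would skip most entries */
        for (addr = start; addr < end; addr += PMD_SIZE)
            pmd_clear(pmd_off_k(addr));
    }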
-
By Marek Szyprowski
This patch adds support for CMA to the dma-mapping subsystem for the ARM architecture. By default a global CMA area is used, but specific devices are allowed to have their own private memory areas if required (they can be created with the dma_declare_contiguous() function during board initialisation).

Contiguous memory areas reserved for DMA are remapped with 2-level page tables on boot. Once a buffer is requested, the low memory kernel mapping is updated to match the requested memory access type. GFP_ATOMIC allocations are performed from a special pool which is created early during boot; this way remapping page attributes is not needed at allocation time.

CMA has been enabled unconditionally for ARMv6+ systems.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
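A hedged sketch of the board-initialisation step mentioned in this entry, using the dma_declare_contiguous() helper from this series; the device, area size and placement limit below are placeholders rather than values from the patch.

    #include <linux/init.h>
    #include <linux/platform_device.h>
    #include <linux/dma-contiguous.h>

    extern struct platform_device my_camera_device;    /* hypothetical device */

    static void __init my_board_reserve(void)          /* hypothetical board hook */
    {
        /* give the camera a private 16 MiB CMA area, placed anywhere
         * below the 512 MiB physical address limit */
        dma_declare_contiguous(&my_camera_device.dev,
                               16 * 1024 * 1024, 0, 512 * 1024 * 1024);
    }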
-
By Marek Szyprowski
This patch adds a complete implementation of the DMA-mapping API for devices which have IOMMU support.

This implementation tries to optimize dma address space usage by remapping all possible physical memory chunks into a single dma address space chunk. The dma address space is managed on top of a bitmap stored in the dma_iommu_mapping structure kept in device->archdata. Platform setup code has to initialize the parameters of the dma address space (base address, size, allocation precision order) with the arm_iommu_create_mapping() function. To reduce the size of the bitmap, all allocations are aligned to the specified order of base 4 KiB pages.

dma_alloc_* functions allocate physical memory in chunks, each with the alloc_pages() function, to avoid failing if physical memory gets fragmented. In the worst case the allocated buffer is composed of 4 KiB page chunks.

dma_map_sg() minimizes the total number of dma address space chunks by merging physical memory chunks into one larger dma address space chunk. If the requested chunk (scatter list entry) boundaries match physical page boundaries, most dma_map_sg() requests will result in creating only one chunk in the dma address space.

dma_map_page() simply creates a mapping for the given page(s) in the dma address space. All dma functions also perform the required cache operations, like their counterparts in the arm linear physical memory mapping version.

This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>
- Hiroshi DOYU <hdoyu@nvidia.com>

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
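A hedged sketch of the platform-setup step described above, assuming the arm_iommu_create_mapping()/arm_iommu_attach_device() helpers introduced by this series (asm/dma-iommu.h); the base address, size and order are placeholder values, and the hook name is made up.

    #include <linux/err.h>
    #include <linux/platform_device.h>
    #include <asm/dma-iommu.h>

    static int my_iommu_setup(struct device *dev)   /* hypothetical platform hook */
    {
        struct dma_iommu_mapping *mapping;

        /* 128 MiB of IOMMU-translated DMA address space at 0x80000000,
         * managed by a bitmap with order-0 (4 KiB) granularity */
        mapping = arm_iommu_create_mapping(&platform_bus_type,
                                           0x80000000, 128 * 1024 * 1024, 0);
        if (IS_ERR(mapping))
            return PTR_ERR(mapping);

        /* from now on, dev allocates and maps through the IOMMU-aware ops */
        return arm_iommu_attach_device(dev, mapping);
    }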
-
By Marek Szyprowski
This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
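A hedged sketch of how a driver could request a write-combined buffer through the attribute path that the dma_alloc_writecombine() wrapper is now built on (dma_alloc_attrs() plus DMA_ATTR_WRITE_COMBINE); the helper name is made up for the example.

    #include <linux/dma-mapping.h>
    #include <linux/dma-attrs.h>

    static void *alloc_wc_buffer(struct device *dev, size_t size,
                                 dma_addr_t *handle)
    {
        DEFINE_DMA_ATTRS(attrs);

        /* ask for a write-combined CPU mapping of the coherent buffer */
        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }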
-
By Marek Szyprowski
This patch performs a global cleanup of the DMA mapping implementation for the ARM architecture. Some of the tiny helper functions have been moved to the caller code, others have been merged together.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
This patch removes the dma bounce hooks from the common dma mapping implementation on the ARM architecture and creates a separate set of dma_map_ops for dma bounce devices.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
This patch converts all dma_sg methods to be generic (independent of the current DMA mapping implementation for the ARM architecture). All dma sg operations are now implemented on top of the respective dma_map_page/dma_sync_single_for_* operations from the dma_map_ops structure.

Before this patch there were custom methods for all scatter/gather related operations. They iterated over the whole scatter list and called cache related operations directly (which in turn checked whether the dma bounce code is in use and called the respective version). This patch changes them not to use such a shortcut. Instead it provides a similar loop over the scatter list and calls the methods from the device's dma_map_ops structure. This enables us to use device-dependent implementations of cache related operations (direct linear or dma bounce) depending on the provided dma_map_ops structure.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
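A hedged sketch of the loop shape described above (error unwinding and dma length bookkeeping omitted): the generic sg methods simply walk the scatter list and delegate to the per-page methods of the device's dma_map_ops.

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int sketch_map_sg(struct device *dev, struct scatterlist *sg,
                             int nents, enum dma_data_direction dir,
                             struct dma_attrs *attrs)
    {
        struct dma_map_ops *ops = get_dma_ops(dev);
        struct scatterlist *s;
        int i;

        for_each_sg(sg, s, nents, i)
            s->dma_address = ops->map_page(dev, sg_page(s), s->offset,
                                           s->length, dir, attrs);
        return nents;
    }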
-
By Marek Szyprowski
This patch modifies the dma-mapping implementation on the ARM architecture to use the common dma_map_ops structure and the asm-generic/dma-mapping-common.h helpers.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
This patch removes the need for the offset parameter in the dma bounce functions. This is required to let the dma-mapping framework on the ARM architecture use the common, generic dma_map_ops based dma-mapping helpers.

Background and more detailed explanation:

The dma_*_range_* functions have been available since the early days of the dma mapping api. They are the correct way of doing partial syncs on a buffer (usually used by network device drivers). This patch changes only the internal implementation of the dma bounce functions to let them tunnel through the dma_map_ops structure. The driver api stays unchanged, so drivers are still obliged to call the dma_*_range_* functions to keep the code clean and easy to understand.

The only drawback of this patch is reduced detection of dma api abuse. Let us consider the following code:

	dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
	dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);

Without the patch such code fails, because the dma bounce code is unable to find the bounce buffer for the given dma_address. After the patch the above sync call will be equivalent to:

	dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

which succeeds. I don't consider this a real problem, because DMA API abuse should be caught by the debug_dma_* function family. This patch lets us simplify the internal low-level implementation without changing the driver-visible API.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code easier to read.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
Replace all calls to printk with the pr_* function family.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
By Marek Szyprowski
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 23 January 2012, 1 commit
-
-
By Russell King
Add a new seq_file for reporting coherent DMA allocations. This contains the address range, size and the function which was used to allocate each region, allowing these allocations to be viewed in much the same way as /proc/vmallocinfo. The DMA coherent region has limited space, so this allows allocation failures to be inspected, as well as showing how much space is being used.

Make sure this file is only readable by root - the same as vmallocinfo - to prevent information leakage.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 November 2011, 1 commit
-
-
By Sumit Bhattacharya
dma_alloc_coherent wants to split pages after allocation in order to reduce the memory footprint. This does not work well with __GFP_COMP pages, so drop this flag before allocation. This patch is ported from arch/avr32 (commit 3611553e).

[swarren: s/HUGETLB_PAGE/HUGETLBFS/ in comment, minor comment cleanup]

Signed-off-by: Sumit Bhattacharya <sumitb@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
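A minimal sketch of the fix, with the wrapper function invented for illustration: the compound-page hint is cleared before the underlying page allocation so the buffer can later be split page by page.

    #include <linux/gfp.h>

    static struct page *sketch_alloc_dma_pages(gfp_t gfp, unsigned int order)
    {
        /* splitting the block afterwards is not allowed for compound
         * pages, so drop __GFP_COMP up front */
        gfp &= ~__GFP_COMP;
        return alloc_pages(gfp, order);
    }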
-
- 21 November 2011, 1 commit
-
-
By Catalin Marinas
Commit 99d1717d (ARM: Add init_consistent_dma_size()) introduced dynamic allocation of the consistent_pte array. The number of PTEs should be calculated based on the number of PMD entries rather than PGD ones, hence the PMD_SHIFT.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
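An illustrative fragment of the sizing rule (variable names assumed, not taken from the patch): one consistent_pte entry covers one PMD-sized chunk of the consistent region, so the count must be derived with PMD_SHIFT.

    num_ptes = (CONSISTENT_END - consistent_base) >> PMD_SHIFT;   /* not PGDIR_SHIFT */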
-
- 26 September 2011, 1 commit
-
-
By Russell King
If the attempt to map a page for DMA fails (e.g. because we're out of mapping space) then we must not hold on to the page we allocated for DMA - doing so will result in a memory leak.

Cc: <stable@kernel.org>
Reported-by: Bryan Phillippe <bp@darkforest.org>
Tested-by: Bryan Phillippe <bp@darkforest.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 August 2011, 1 commit
-
-
By Catalin Marinas
PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have the same value (21). This patch converts the PGDIR_* uses in the kernel to the PMD_* equivalents so that LPAE builds can reuse the same code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 August 2011, 2 commits
-
-
By Jon Medhurst
There are now no platforms which set this macro.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
-
By Jon Medhurst
This function can be called during boot to increase the size of the consistent DMA region above its default value of 2MB. It must be called before the memory allocator is initialised, i.e. before any core_initcall.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
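A hedged sketch of a typical caller; the machine hook name is made up, the header providing the prototype is assumed, and 16MB is just an example size.

    #include <linux/init.h>
    #include <linux/dma-mapping.h>

    static void __init my_machine_init_early(void)   /* hypothetical machine hook */
    {
        /* grow the consistent DMA region from its default 2MB to 16MB;
         * this must happen before any core_initcall runs */
        init_consistent_dma_size(16 << 20);
    }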
-
- 12 July 2011, 1 commit
-
-
By Russell King
ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get rid of it from ARM by replacing it with arm_dma_zone_mask. Move dma_supported() and dma_set_mask() out of line, and have dma_supported() check this new variable instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 February 2011, 1 commit
-
-
By Russell King
Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h. This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
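For reference, a sketch of the page table walk with the extra pud level in place; with pgtable-nopud.h the pud simply folds onto the pgd, so the same code works for both configurations.

    #include <linux/mm.h>
    #include <asm/pgtable.h>

    static pte_t *sketch_walk(struct mm_struct *mm, unsigned long addr)
    {
        pgd_t *pgd = pgd_offset(mm, addr);
        pud_t *pud = pud_offset(pgd, addr);   /* folds away with pgtable-nopud.h */
        pmd_t *pmd = pmd_offset(pud, addr);

        return pte_offset_map(pmd, addr);
    }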
-
- 13 January 2011, 1 commit
-
-
By Linus Walleij
The kerneldoc for this function is at odds with the DMA-API document, which takes precedence, so fix it.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 January 2011, 1 commit
-
-
By Russell King
Add ARM support for the DMA debug infrastructure, which allows DMA API usage to be debugged.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
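A hedged sketch of how an architecture typically hooks into the DMA debug core; the preallocation count and the initcall level here are assumptions for the example, not details taken from the patch.

    #include <linux/dma-debug.h>
    #include <linux/init.h>

    #define SKETCH_DMA_DEBUG_ENTRIES 4096   /* value assumed for the sketch */

    static int __init sketch_dma_debug_init(void)
    {
        dma_debug_init(SKETCH_DMA_DEBUG_ENTRIES);   /* preallocate tracking entries */
        return 0;
    }
    fs_initcall(sketch_dma_debug_init);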
-
- 03 January 2011, 1 commit
-
-
By Russell King
Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine().

Build tested on Versatile, OMAP1, IOP13xx and KS8695.

Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
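A hedged sketch of the PFN-based conversion this change switches to; the wrapper is invented for illustration.

    #include <linux/mm.h>
    #include <linux/dma-mapping.h>

    static dma_addr_t sketch_page_to_bus(struct device *dev, struct page *page)
    {
        /* pfn_to_dma()/dma_to_pfn() replace page_to_dma()/dma_to_page(),
         * so memory without struct page backing can be converted too */
        return pfn_to_dma(dev, page_to_pfn(page));
    }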
-
- 20 December 2010, 1 commit
-
-
By Nicolas Pitre
Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is no longer necessary to carry an ad hoc version of kmap_atomic() added in commit 7e5a69e8 "ARM: 6007/1: fix highmem with VIPT cache and DMA" to cope with reentrancy. In fact, it is now actively wrong to rely on fixed kmap type indices (namely KM_L1_CACHE) as kmap_atomic() totally ignores them now and a concurrent instance of it may reuse any slot for any purpose.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
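A small sketch of the calling convention after the stack-based kmap_atomic() conversion: no fixed KM_* slot is passed any more, and nesting is handled by the core. The wrapper is invented for illustration.

    #include <linux/highmem.h>

    static void sketch_maint_highmem_page(struct page *page)
    {
        void *vaddr = kmap_atomic(page);   /* no KM_L1_CACHE-style slot argument */

        /* ... perform the cache maintenance on vaddr ... */

        kunmap_atomic(vaddr);
    }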
-
- 08 November 2010, 1 commit
-
-
By Russell King
An off-by-one bug meant that the DMA coherent allocator was aligning to one more bit than it should, causing it to run out of available memory quicker. Fix this.

Reported-by: Petr Štetiar <ynezz@true.cz>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 September 2010, 1 commit
-
-
By Catalin Marinas
There are places in Linux where writes to newly allocated page cache pages happen without a subsequent call to flush_dcache_page() (several PIO drivers including USB HCD). This patch changes the meaning of PG_arch_1 to be PG_dcache_clean and always flushes the D-cache for a newly mapped page in update_mmu_cache(). The patch also sets the PG_arch_1 bit in the DMA cache maintenance function to avoid additional cache flushing in update_mmu_cache().

Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 September 2010, 1 commit
-
-
By Russell King
Dave Hylands reports:

| We've observed a problem with dma_alloc_writecombine when the system
| is under heavy load (heavy bus traffic). We've managed to reduce the
| problem to the following snippet, which is run from a kthread in a
| continuous loop:
|
|   void *virtAddr;
|   dma_addr_t physAddr;
|   unsigned int numBytes = 256;
|
|   for (;;) {
|       virtAddr = dma_alloc_writecombine(NULL,
|                       numBytes, &physAddr, GFP_KERNEL);
|       if (virtAddr == NULL) {
|           printk(KERN_ERR "Running out of memory\n");
|           break;
|       }
|
|       /* access DMA memory allocated */
|       tmp = virtAddr;
|       *tmp = 0x77;
|
|       /* free DMA memory */
|       dma_free_writecombine(NULL,
|                       numBytes, virtAddr, physAddr);
|
|       ...sleep here...
|   }
|
| By itself, the code will run forever with no issues. However, as we
| increase our bus traffic (typically using DMA) then the *tmp = 0x77
| line will eventually cause a page fault. If we add a small delay (a
| few microseconds) before the *tmp = 0x77, then we don't see a page
| fault, even under heavy load.

A dsb() is required after modifying the PTE entries to ensure that they will always be visible. Add this dsb().

Reported-by: Dave Hylands <dhylands@gmail.com>
Tested-by: Dave Hylands <dhylands@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 July 2010, 1 commit
-
-
By Russell King
The DMA coherent remap area is used to provide an uncached mapping of memory for coherency with DMA engines. Currently, we look for any free hole into which our allocation will fit with page alignment. However, this can lead to fragmentation of the area, and allows small allocations to cross L1 entry boundaries. This is undesirable as we want to move towards allocating sections of memory.

Align allocations according to the size, limiting the alignment to the range between the page and section sizes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
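An illustrative sketch of the alignment rule described above, not the patch's exact code; SECTION_ORDER stands in for the 1 MiB ARM section, and size is assumed to be non-zero.

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define SECTION_ORDER 20   /* 1 MiB section; constant assumed for the sketch */

    static unsigned long sketch_dma_align(size_t size)
    {
        unsigned int bit = fls(size - 1);   /* smallest power of two >= size */

        if (bit > SECTION_ORDER)            /* but never align beyond a section */
            bit = SECTION_ORDER;
        return 1UL << bit;
    }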
-
- 01 July 2010, 1 commit
-
-
By Catalin Marinas
This macro is not defined when !CONFIG_MMU, so this patch moves the CONSISTENT_* definitions into the CONFIG_MMU section.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 April 2010, 1 commit
-
-
By Nicolas Pitre
The VIVT cache of a highmem page is always flushed before the page is unmapped. This cache flush is explicit through flush_cache_kmaps() in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in kunmap_atomic(). There is also an implicit flush of those highmem pages that were part of a process that just terminated, making those pages free, as the whole VIVT cache has to be flushed on every task switch. Hence unmapped highmem pages need no cache maintenance in that case.

However, unmapped pages may still be cached with a VIPT cache because the cache is tagged with physical addresses. There is no need for a whole cache flush during task switching for that reason, and despite the explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(), some highmem pages that were mapped in user space end up still cached even when they become unmapped.

So we do have to perform cache maintenance on those unmapped highmem pages in the context of DMA when using a VIPT cache. Unfortunately, it is not possible to perform that cache maintenance using physical addresses, as all the L1 cache maintenance coprocessor functions accept virtual addresses only. Therefore we have no choice but to set up a temporary virtual mapping for that purpose.

And of course the explicit cache flushing when unmapping a highmem page on a system with a VIPT cache can now go, which should increase performance.

While at it, because the code in __flush_dcache_page() has to be modified anyway, let's also make sure the mapped highmem pages are pinned with kmap_high_get() for the duration of the cache maintenance operation. Because kunmap() unmaps highmem pages lazily, it was reported by Gary King <GKing@nvidia.com> that those pages ended up being unmapped during cache maintenance on SMP, causing segmentation faults.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 March 2010, 1 commit
-
-
By Tejun Heo
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h.

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of the conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
-
- 16 February 2010, 1 commit
-
-
By Fenkart/Bostandzhyan
Adds the DMA area to the 'virtual memory map' startup message.

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 February 2010, 5 commits
-
-
By Russell King
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes DMA cache coherency handling slightly more interesting. Rather than being able to rely on the CPU not accessing the DMA buffer until DMA has completed, we must now expect that the cache could be loaded with possibly stale data from the DMA buffer.

Where DMA involves data being transferred to the device, we clean the cache before handing it over for DMA; otherwise we invalidate the buffer to get rid of potential writebacks. On DMA completion, if data was transferred from the device, we invalidate the buffer to get rid of any stale speculative prefetches.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
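A hedged pseudocode sketch of the ownership rules described above; clean_range() and invalidate_range() are stand-ins for the real per-CPU cache primitives, not kernel APIs.

    #include <linux/dma-mapping.h>

    /* placeholders for the CPU cache maintenance primitives */
    static inline void clean_range(const void *kaddr, size_t size) { }
    static inline void invalidate_range(const void *kaddr, size_t size) { }

    /* CPU hands the buffer to the device */
    static void sketch_map_area(const void *kaddr, size_t size,
                                enum dma_data_direction dir)
    {
        if (dir == DMA_FROM_DEVICE)
            invalidate_range(kaddr, size);  /* drop lines that could be written back over DMA data */
        else
            clean_range(kaddr, size);       /* push CPU writes out for the device */
    }

    /* device hands the buffer back to the CPU */
    static void sketch_unmap_area(const void *kaddr, size_t size,
                                  enum dma_data_direction dir)
    {
        if (dir != DMA_TO_DEVICE)
            invalidate_range(kaddr, size);  /* discard stale speculative prefetches */
    }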
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
By Russell King
dma_cache_maint_contiguous is now simple enough to live inside dma_cache_maint_page, so move it there.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
-