- 04 July 2011, 9 commits
-
-
Committed by Russell King
Pass the device-type-specific needs_bounce function in at dmabounce register time, avoiding the need for a platform-specific global function to do this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
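For context, a minimal sketch of what registration looks like after this change, assuming the callback receives the device, DMA address and transfer size; the bounce policy and pool sizes shown are purely illustrative, not taken from any real platform:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hypothetical bounce policy: bounce anything ending above the first 1MB. */
static int my_needs_bounce(struct device *dev, dma_addr_t addr, size_t size)
{
	return (addr + size) > 0x100000;
}

static int my_probe(struct device *dev)
{
	/* 2048-byte small-buffer pool, 65536-byte large pool (illustrative) */
	return dmabounce_register_dev(dev, 2048, 65536, my_needs_bounce);
}
```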
-
Committed by Russell King
DMA addresses should not be cast to void * for printing. Fix that to be consistent with the rest of the file. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
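The underlying issue is that dma_addr_t may be wider than a pointer on 32-bit systems, so a void * cast can truncate the value. A short sketch of the safe alternatives:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

static void report_mapping(struct device *dev, dma_addr_t dma_addr)
{
	/* Avoid: dev_dbg(dev, "mapped at %p\n", (void *)dma_addr); */
	dev_dbg(dev, "mapped at %llx\n", (unsigned long long)dma_addr);

	/* Later kernels also provide the dedicated %pad specifier,
	 * which takes a pointer to the dma_addr_t: */
	dev_dbg(dev, "mapped at %pad\n", &dma_addr);
}
```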
-
Committed by Russell King
Pointers should be checked against NULL rather than 0; otherwise we get sparse warnings. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
We already check that dev != NULL, so this won't be reached. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Move the decision whether to bounce into __dma_map_page(), before the check for high pages. This avoids triggering the high-page check for devices which aren't using dmabounce. Fix the unmap path to cope too. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Move the decision to perform DMA bouncing out of map_single() into its own stand-alone function. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
This check is done at the DMA API level, so there's no point repeating it here. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Use the dma_map_page()/dma_unmap_page() internals to handle dma_map_single() and dma_unmap_single(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
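A sketch of the layering this enables (helper name hypothetical): a "single" mapping is just a page mapping taken at the buffer's offset within its page, so one implementation can serve both entry points:

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static inline dma_addr_t sketch_dma_map_single(struct device *dev,
					       void *cpu_addr, size_t size,
					       enum dma_data_direction dir)
{
	/* Map the page containing the buffer, at the buffer's page offset. */
	return dma_map_page(dev, virt_to_page(cpu_addr),
			    offset_in_page(cpu_addr), size, dir);
}
```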
-
Committed by Russell King
When map_single() is unable to obtain a safe buffer, we must return the dma_addr_t error value, which is ~0 rather than 0. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
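The rationale: 0 can be a perfectly valid bus address, so ~0 serves as the error cookie; callers should only test the result through dma_mapping_error(). A minimal sketch (using the modern two-argument form of dma_mapping_error()):

```c
#include <linux/dma-mapping.h>

static int map_or_fail(struct device *dev, void *ptr, size_t size)
{
	dma_addr_t dma = dma_map_single(dev, ptr, size, DMA_TO_DEVICE);

	/* ~0, not 0, signals failure; 0 may be a real bus address. */
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	return 0;
}
```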
-
- 07 January 2011, 1 commit
-
-
Committed by Russell King
Add ARM support for the DMA debug infrastructure, which allows DMA API usage to be debugged. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 February 2010, 1 commit
-
-
Committed by Russell King
The DMA API has the notion of buffer ownership; make it explicit in the ARM implementation of this API. This gives us a set of hooks to allow us to deal with CPU cache issues arising from non-cache-coherent DMA. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Jamie Iles <jamie@jamieiles.com>
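A sketch of the ownership discipline these hooks make explicit: between the map (or sync_for_device) and the matching sync_for_cpu, the device owns the buffer and the CPU must not touch it:

```c
#include <linux/dma-mapping.h>

static void rx_cycle(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* The device now owns the buffer; the CPU must not touch it. */

	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	/* The CPU owns the buffer: safe to read the received data here. */

	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
	/* Ownership handed back to the device for the next transfer. */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
}
```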
-
- 22 December 2009, 1 commit
-
-
Committed by Mike Rapoport
Commit f74f7e57 (ARM: use flush_kernel_dcache_area() for dmabounce) broke the dmabounce build:
  CC      arch/arm/common/dmabounce.o
  arch/arm/common/dmabounce.c: In function 'unmap_single':
  arch/arm/common/dmabounce.c:315: error: implicit declaration of function '__cpuc_flush_kernel_dcache_area'
  make[2]: *** [arch/arm/common/dmabounce.o] Error 1
Fix it. Signed-off-by: Mike Rapoport <mike@compulab.co.il> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 December 2009, 1 commit
-
-
Committed by Russell King
After copying data from the bounce buffer to the real buffer, use flush_kernel_dcache_page() to ensure that the data is written back in a manner coherent with future userspace mappings. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
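A hedged sketch of the unmap path in question (function and parameter names hypothetical): copy the bounced data back, then flush so dirty kernel-side cache lines cannot shadow a later user mapping of the same page:

```c
#include <linux/highmem.h>
#include <linux/string.h>

static void copy_back_and_flush(struct page *page, void *kaddr,
				const void *safe, size_t size)
{
	memcpy(kaddr, safe, size);	/* bounce buffer -> real buffer */
	flush_kernel_dcache_page(page);	/* keep future user mappings coherent */
}
```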
-
- 23 November 2009, 1 commit
-
-
Committed by Russell King
We will need to treat dma_unmap_page() differently from dma_unmap_single(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Jamie Iles <jamie@jamieiles.com>
-
- 16 March 2009, 1 commit
-
-
Committed by Nicolas Pitre
If a machine class has a custom __virt_to_bus() implementation, then it must provide a __arch_page_to_dma() implementation as well which is _not_ based on page_address(), in order to support highmem. This patch fixes the existing __arch_page_to_dma() implementations and provides a default implementation otherwise. The default implementation for highmem is based on __pfn_to_bus(), which is defined only when no custom __virt_to_bus() is provided by the machine class. That leaves only ebsa110 and footbridge, which cannot support highmem until they provide their own __arch_page_to_dma() implementation. But highmem support on those legacy platforms with limited memory is certainly not a priority. Signed-off-by: Nicolas Pitre <nico@marvell.com>
-
- 29 September 2008, 5 commits
-
-
Committed by Russell King
Validate the direction argument like x86 does. In addition, validate the dma_unmap_* parameters against those passed to dma_map_* when using the DMA bounce code. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
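The direction check itself is a one-liner; a sketch using the generic helper (valid_dma_direction() accepts DMA_TO_DEVICE, DMA_FROM_DEVICE and DMA_BIDIRECTIONAL, rejecting DMA_NONE and out-of-range values):

```c
#include <linux/dma-mapping.h>

static dma_addr_t checked_map(struct device *dev, void *ptr, size_t size,
			      enum dma_data_direction dir)
{
	BUG_ON(!valid_dma_direction(dir));	/* rejects DMA_NONE and garbage */
	return dma_map_single(dev, ptr, size, dir);
}
```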
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The dmabounce dma_sync_xxx() implementations have been broken for quite some time; they all copy data between the DMA buffer and the CPU-visible buffer irrespective of the change of ownership. (IOW, a DMA_FROM_DEVICE mapping copies data from the DMA buffer to the CPU buffer during a call to dma_sync_single_for_device().) Fix it by getting rid of sync_single(), moving the contents into the recently created dmabounce_sync_for_xxx() functions and adjusting appropriately. This also makes it possible to properly support the DMA range sync functions. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
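To make the ownership rule concrete, a sketch of the copy decision (struct and field names hypothetical): data moves toward the device only on sync_for_device, and toward the CPU only on sync_for_cpu:

```c
#include <linux/dma-mapping.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical bounce record: 'ptr' is the driver's buffer, 'safe' the copy. */
struct bounce_buf {
	void *ptr;
	void *safe;
};

static void bounce_sync(struct bounce_buf *buf, size_t off, size_t len,
			enum dma_data_direction dir, bool for_device)
{
	if (for_device && dir != DMA_FROM_DEVICE)
		memcpy(buf->safe + off, buf->ptr + off, len);	/* CPU -> device */
	else if (!for_device && dir != DMA_TO_DEVICE)
		memcpy(buf->ptr + off, buf->safe + off, len);	/* device -> CPU */
	/* sync_for_device(FROM_DEVICE) and sync_for_cpu(TO_DEVICE) copy nothing. */
}
```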
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 September 2008, 3 commits
-
-
Committed by Russell King
No point having two of these; dma_map_page() can do all the work for us. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
We can translate a struct page directly to a DMA address using page_to_dma(). There is no need to use page_address() followed by virt_to_dma(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Update the ARM DMA scatter-gather APIs for the scatterlist changes. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 August 2008, 2 commits
-
-
Committed by Russell King
Convert the existing dma_sync_single_for_* APIs to the new range-based APIs, and make the dma_sync_single_for_* API a superset of it. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
OMAP at least gets the return type(s) for the DMA translation functions wrong, which can lead to subtle errors. Avoid this by moving the DMA translation functions to asm/dma-mapping.h and converting them to inline functions. Fix the OMAP DMA translation macros to use the correct argument and result types. Also, remove the unnecessary casts in dmabounce.c. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
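Roughly the shape of the change, sketched: typed inline helpers catch exactly the argument/result mismatches that macros silently accept. The __virt_to_bus()/__bus_to_virt() primitives come from the machine class:

```c
#include <linux/types.h>

struct device;

static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
{
	return (dma_addr_t)__virt_to_bus((unsigned long)addr);
}

static inline void *dma_to_virt(struct device *dev, dma_addr_t addr)
{
	return (void *)__bus_to_virt((unsigned long)addr);
}
```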
-
- 27 July 2008, 1 commit
-
-
Committed by FUJITA Tomonori
Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER architecture does. This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would also be helpful for KVM people to support PCI passthrough, but Andi thinks that this makes it difficult to support PCI passthrough (see the above thread). So I CC'ed this to the KVM camp. Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it. If it's NULL, the system-wide dma_ops pointer is used as before.

If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new PCI (or DMA-capable) device is created (it works with hot plugging). It enables IOMMUs to set up appropriate dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the device, unlike the other DMA operations, so x86 can't have dma_mapping_ops per device. Note that all the POWER IOMMUs use the same dma_mapping_error function, so this is not a problem for POWER, but x86 IOMMUs use different dma_mapping_error functions. The first patch adds the device argument to dma_mapping_error. The patch is trivial but large, since it touches lots of drivers and dma-mapping.h in all the architectures.

This patch: dma_mapping_error() doesn't take a pointer to the device, unlike the other DMA operations, so we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device, but all the POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use the device argument.

[akpm@linux-foundation.org: fix sge] [akpm@linux-foundation.org: fix svc_rdma] [akpm@linux-foundation.org: build fix] [akpm@linux-foundation.org: fix bnx2x] [akpm@linux-foundation.org: fix s2io] [akpm@linux-foundation.org: fix pasemi_mac] [akpm@linux-foundation.org: fix sdhci] [akpm@linux-foundation.org: build fix] [akpm@linux-foundation.org: fix sparc] [akpm@linux-foundation.org: fix ibmvscsi] Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: Avi Kivity <avi@qumranet.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
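The dispatch mechanism described above, sketched (names approximate the x86 code of the time; the dma_ops member of dev_archdata is the per-device hook this patch series introduces):

```c
#include <linux/device.h>

struct dma_mapping_ops;			/* table of map/unmap/sync callbacks */
extern struct dma_mapping_ops *dma_ops;	/* system-wide default */

static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
{
	/* Per-device ops win when present; otherwise fall back to the
	 * system-wide table, preserving the old behaviour. */
	if (dev && dev->archdata.dma_ops)
		return dev->archdata.dma_ops;
	return dma_ops;
}
```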
-
- 22 July 2008, 1 commit
-
-
Committed by Greg Kroah-Hartman
We have the dev_printk() variants for this kind of thing; use them instead of directly trying to access the bus_id field of struct device. This is done in order to remove bus_id entirely. Cc: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
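An illustrative before/after (the message text is hypothetical); dev_err() and friends prefix the output with the device name themselves:

```c
#include <linux/device.h>

static void report(struct device *dev)
{
	/* Before: printk(KERN_ERR "%s: could not allocate safe buffer\n",
	 *                dev->bus_id); */
	dev_err(dev, "could not allocate safe buffer\n");
}
```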
-
- 22 June 2008, 1 commit
-
-
Committed by Russell King
Noticed by Martin Michlmayr: this missing export prevents IEEE1394 from building with: ERROR: "dma_sync_sg_for_device" [drivers/ieee1394/ieee1394.ko] undefined! Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 October 2007, 2 commits
-
-
Committed by FUJITA Tomonori
Fix the dmabounce build:
  arch/arm/common/dmabounce.c: In function 'dma_map_sg':
  arch/arm/common/dmabounce.c:445: error: implicit declaration of function 'sg_page'
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 13 October 2007, 1 commit
-
-
Committed by Russell King
consistent_sync() is used to handle the cache maintenance issues with DMA operations. Since we've now removed the misuse of this function from the two MTD drivers, rename it to prevent future misuse. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 May 2007, 1 commit
-
-
Committed by Simon Arlott
Spelling fixes in arch/arm/. Signed-off-by: Simon Arlott <simon@fire.lp0.eu> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 February 2007, 2 commits
-
-
Committed by Russell King
Rather than printk'ing the dmabounce statistics occasionally to the kernel log, provide a sysfs file to allow this information to be read periodically. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
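A sketch of such a read-only attribute; the dmabounce_device_info fields and the archdata hookup shown here are assumptions for illustration (the attachment of the structure to struct device is what the following commit introduces):

```c
#include <linux/device.h>

struct dmabounce_device_info {	/* hypothetical stats fields */
	unsigned long small_allocs;
	unsigned long large_allocs;
};

static ssize_t dmabounce_stats_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	struct dmabounce_device_info *info = dev->archdata.dmabounce;

	return sprintf(buf, "small: %lu  large: %lu\n",
		       info->small_allocs, info->large_allocs);
}

static DEVICE_ATTR(dmabounce_stats, 0400, dmabounce_stats_show, NULL);
```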
-
Committed by Russell King
dmabounce keeps a per-device structure, and finds the correct structure by walking a list. Since architectures can now add fields to struct device, we can attach this structure directly to the struct device, thereby eliminating the code to search the list. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 February 2007, 3 commits
-
-
Committed by Russell King
The DMA cache handling functions take virtual addresses, but in the form of unsigned long arguments. This leads to a little confusion about what exactly they take. So, convert them to take const void * instead. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas
The outer cache can be L2, as on the RealView/EB MPCore platform, or even L3 or further out on ARMv7 cores. This patch adds generic support for flushing the outer cache in the DMA operations. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
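Sketched, the extra step this adds (function names here are hypothetical wrappers): the outer_*_range() operations take physical addresses and run after the inner (L1) maintenance:

```c
#include <linux/mm.h>
#include <asm/cacheflush.h>	/* outer_*_range() on ARM kernels of this era */

static void dma_clean_outer(const void *start, size_t size)
{
	unsigned long paddr = __pa(start);

	/* Inner (L1) maintenance happens first, then the outer cache: */
	outer_clean_range(paddr, paddr + size);	/* write back before device reads */
}

static void dma_inv_outer(const void *start, size_t size)
{
	unsigned long paddr = __pa(start);

	outer_inv_range(paddr, paddr + size);	/* discard before CPU reads */
}
```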
-
Committed by Russell King
Memory allocated by the coherent memory allocators will be marked uncacheable, which means it's pointless calling consistent_sync() to perform cache maintenance on this memory; it's just a waste of CPU cycles. Moreover, with the (subsequent) merge of outer cache support, it actually breaks things to call consistent_sync() on anything but direct-mapped memory. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 November 2006, 1 commit
-
-
Committed by Kevin Hilman
dma_sync_single is no more (and is to be removed in 2.7), so this export should be dma_sync_single_for_cpu. Also export dma_sync_single_for_device. Signed-off-by: Kevin Hilman <khilman@mvista.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 August 2006, 1 commit
-
-
Committed by Kevin Hilman
Patch from Kevin Hilman. The previous locking changes to dmabounce incorrectly returned non-NULL even when the buffer was not found. Fix it up. Signed-off-by: Kevin Hilman <khilman@mvista.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 June 2006, 1 commit
-
-
Committed by Kevin Hilman
Patch from Kevin Hilman. This time with IRQ versions of the locks. The rework also enables compatibility with the realtime-preemption patch: with the old locking via bare interrupt disabling, under RT, potentially sleeping functions can be called with interrupts disabled. Signed-off-by: Kevin Hilman <khilman@mvista.com> Signed-off-by: Deepak Saxena <dsaxena@plexity.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
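A sketch of the reworked lookup under an IRQ-safe per-device lock, assuming hypothetical bookkeeping structures; the point is that a named lock taken with the irqsave variants can be adapted by RT (unlike bare local_irq_save()):

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/dma-mapping.h>

struct safe_buffer {			/* hypothetical bookkeeping record */
	struct list_head node;
	dma_addr_t	 safe_dma_addr;
};

struct dmabounce_device_info {		/* hypothetical per-device state */
	rwlock_t	 lock;
	struct list_head safe_buffers;
};

static struct safe_buffer *find_safe_buffer(struct dmabounce_device_info *info,
					    dma_addr_t safe_dma_addr)
{
	struct safe_buffer *b, *found = NULL;
	unsigned long flags;

	read_lock_irqsave(&info->lock, flags);
	list_for_each_entry(b, &info->safe_buffers, node) {
		if (b->safe_dma_addr == safe_dma_addr) {
			found = b;
			break;
		}
	}
	read_unlock_irqrestore(&info->lock, flags);

	return found;
}
```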
-