- 03 November 2018, 1 commit
-
Committed by Christoph Hellwig
__swiotlb_get_sgtable_page and __swiotlb_mmap_pfn are not only misnamed but also only used if CONFIG_IOMMU_DMA is set. Just add a simple ifdef for now, given that we plan to remove them entirely for the next merge window.

Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
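A minimal sketch of the change (function bodies elided, signatures abbreviated from the arm64 file of that era): the two helpers are simply compiled out unless CONFIG_IOMMU_DMA is set.

    #ifdef CONFIG_IOMMU_DMA
    static int __swiotlb_get_sgtable_page(struct sg_table *sgt,
                                          struct page *page, size_t size)
    {
            /* ... existing helper body, unchanged ... */
    }

    static int __swiotlb_mmap_pfn(struct vm_area_struct *vma,
                                  unsigned long pfn, size_t size)
    {
            /* ... existing helper body, unchanged ... */
    }
    #endif /* CONFIG_IOMMU_DMA */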
-
- 31 October 2018, 1 commit
-
Committed by Mike Rapoport
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below, followed by semi-automated removal of duplicated '#include <linux/memblock.h>' lines:

    @@
    @@
    - #include <linux/bootmem.h>
    + #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 October 2018, 3 commits
-
Committed by Christoph Hellwig
Now that the generic swiotlb code supports non-coherent DMA we can switch to it for arm64. For that we need to refactor the existing alloc/free/mmap/pgprot helpers to be used as the architecture hooks, and implement the standard arch_sync_dma_for_{device,cpu} hooks for cache maintenance in the streaming dma hooks, which also implies using the generic dma_coherent flag in struct device. Note that we need to keep the old is_device_dma_coherent function around for now, so that the shared arm/arm64 Xen code keeps working.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
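A minimal sketch of what the new architecture hooks look like, assuming the arm64 low-level cache maintenance routines __dma_map_area/__dma_unmap_area already present in the file (this is an illustration, not the verbatim patch):

    void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                                  size_t size, enum dma_data_direction dir)
    {
            __dma_map_area(phys_to_virt(paddr), size, dir);
    }

    void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                               size_t size, enum dma_data_direction dir)
    {
            __dma_unmap_area(phys_to_virt(paddr), size, dir);
    }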
-
Committed by Christoph Hellwig
All architectures that support swiotlb also have a zone that backs up these less-than-full addressing allocations (usually ZONE_DMA32). Because of that it is rather pointless to fall back to the global swiotlb buffer if the normal dma direct allocation failed - the only thing this will do is to eat up bounce buffers that would be more useful to serve streaming mappings.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Christoph Hellwig
Like all other dma mapping drivers, just return an error code instead of an actual memory buffer. The reason for the overflow buffer was that at the time swiotlb was invented there was no way to check for dma mapping errors, but this has long been fixed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
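For driver authors, the error-checking path the message refers to looks like this (a hedged sketch; example_map is a hypothetical caller):

    static int example_map(struct device *dev, struct page *page, size_t len)
    {
            dma_addr_t addr = dma_map_page(dev, page, 0, len, DMA_TO_DEVICE);

            /* No overflow buffer: a failed mapping is reported here. */
            if (dma_mapping_error(dev, addr))
                    return -ENOMEM;

            /* ... hand 'addr' to the device ... */
            return 0;
    }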
-
- 26 September 2018, 1 commit
-
Committed by Christoph Hellwig
This reverts commit 46053c73. This change breaks architectures setting up dma_ops in their own magic way and not using arch_setup_dma_ops, so revert it.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 25 September 2018, 1 commit
-
Committed by Robin Murphy
Whilst the symmetry of deferring to the existing sync callback in __iommu_map_page() is nice, taking a round trip through iommu_iova_to_phys() is a pretty heavyweight way to get an address we can trivially compute from the page we already have. Tweaking it to just perform the cache maintenance directly when appropriate doesn't really make the code any more complicated, and the runtime efficiency gain can only be a benefit. Furthermore, the sync operations themselves know they can only be invoked on a managed DMA ops domain, so can use the fast specific-domain lookup to avoid excessive manipulation of the group refcount (particularly in the scatterlist cases).

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
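Sketched against the arm64 map-page path (names assumed from the surrounding code of that era; treat this as an illustration, not the verbatim diff), the tweak replaces the iommu_iova_to_phys() round trip with direct maintenance on the page in hand:

    static dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size,
                                       enum dma_data_direction dir,
                                       unsigned long attrs)
    {
            bool coherent = is_device_dma_coherent(dev);
            int prot = dma_info_to_prot(dir, coherent, attrs);

            /* Before: a sync callback that went IOVA -> phys via
             * iommu_iova_to_phys(). After: use the page we already have. */
            if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                    __dma_map_area(page_address(page) + offset, size, dir);

            return iommu_dma_map_page(dev, page, offset, size, prot);
    }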
-
- 08 September 2018, 1 commit
-
Committed by Christoph Hellwig
There is no reason to leave the per-device dma_ops around when deconfiguring a device, so move this code from arm64 into the common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 18 August 2018, 1 commit
-
Committed by Marek Szyprowski
The CMA memory allocator doesn't support standard gfp flags for memory allocation, so there is no point having them as a parameter of the dma_alloc_from_contiguous() function. Replace it with a boolean no_warn argument, which covers everything the underlying cma_alloc() function supports. This will help to avoid giving the false impression that this function supports standard gfp flags and that callers can pass __GFP_ZERO to get a zeroed buffer, which has already been an issue: see commit dd65a941 ("arm64: dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").

Link: http://lkml.kernel.org/r/20180709122020eucas1p21a71b092975cb4a3b9954ffc63f699d1~-sqUFoa-h2939329393eucas1p2Y@eucas1p2.samsung.com
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Michał Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
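The resulting interface, sketched from the description above; callers typically derive no_warn from __GFP_NOWARN:

    struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
                                           unsigned int align, bool no_warn);

    /* Typical call site after the conversion: */
    page = dma_alloc_from_contiguous(dev, count, get_order(size),
                                     gfp & __GFP_NOWARN);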
-
- 19 June 2018, 1 commit
-
Committed by Marek Szyprowski
dma_alloc_*() buffers might be exposed to userspace via an mmap() call, so they should be cleared on allocation. In case of the IOMMU-based dma-mapping implementation, such buffer clearing was missing in the code path for DMA_ATTR_FORCE_CONTIGUOUS flag handling, because dma_alloc_from_contiguous() doesn't honor the __GFP_ZERO flag. This patch fixes this issue. For more information on clearing buffers allocated by dma_alloc_* functions, see commit 6829e274 ("arm64: dma-mapping: always clear allocated buffers").

Fixes: 44176bb3 ("arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
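A hedged sketch of the fix (helper name hypothetical; dma_alloc_from_contiguous() still took gfp flags at the time): since CMA ignores __GFP_ZERO, the buffer is cleared by hand after allocation.

    static void *alloc_cleared_contiguous(struct device *dev, size_t size,
                                          gfp_t gfp)
    {
            size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
            struct page *page;

            page = dma_alloc_from_contiguous(dev, count, get_order(size), gfp);
            if (!page)
                    return NULL;

            memset(page_address(page), 0, size);        /* the missing step */
            __dma_flush_area(page_address(page), size); /* visible to device */
            return page_address(page);
    }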
-
- 15 May 2018, 1 commit
-
Committed by Catalin Marinas
This patch increases ARCH_DMA_MINALIGN to 128 so that it covers the currently known Cache Writeback Granule (CTR_EL0.CWG) on arm64, and moves the fallback in cache_line_size() from L1_CACHE_BYTES to this constant. In addition, it warns (and taints) if the CWG is larger than ARCH_DMA_MINALIGN, as this is not safe with non-coherent DMA.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
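The warn-and-taint check described above, as a sketch (the wrapper name is hypothetical and the exact call site differs; WARN_TAINT and TAINT_CPU_OUT_OF_SPEC are the stock kernel facilities):

    #define ARCH_DMA_MINALIGN (128)

    static void check_dma_minalign(void)
    {
            WARN_TAINT(ARCH_DMA_MINALIGN < cache_line_size(),
                       TAINT_CPU_OUT_OF_SPEC,
                       "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
                       ARCH_DMA_MINALIGN, cache_line_size());
    }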
-
- 08 May 2018, 1 commit
-
Committed by Christoph Hellwig
Most mainstream architectures are using 65536 entries, so let's stick to that. If someone is really desperate to override it, that can still be done through <asm/dma-mapping.h>, but I'd rather see a really good rationale for that. dma_debug_init is now called as a core_initcall, which for many architectures means much earlier, and provides dma-debug functionality earlier in the boot process. This should be safe as it only relies on the memory allocator already being available.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
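A rough rendering of the shape this takes in the dma-debug core (not the verbatim patch):

    #ifndef PREALLOC_DMA_DEBUG_ENTRIES
    #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)   /* 65536 entries */
    #endif

    static int dma_debug_do_init(void)
    {
            dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
            return 0;
    }
    core_initcall(dma_debug_do_init);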
-
- 27 March 2018, 1 commit
-
Committed by Will Deacon
This reverts commit 1f85b42a. The internal dma-direct.h API has changed in -next, which collides with us trying to use it to manage non-coherent DMA devices on systems with unreasonably large cache writeback granules. This isn't at all trivial to resolve, so revert our changes for now and we can revisit this after the merge window. Effectively, this just restores our behaviour back to that of 4.16.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 March 2018, 1 commit
-
Committed by Catalin Marinas
Commit 97303480 ("arm64: Increase the max granular size") increased the cache line size to 128 to match Cavium ThunderX, apparently for some performance benefit which could not be confirmed. This change, however, has an impact on network packet allocation in certain circumstances, requiring slightly over a 4K page with a significant performance degradation. This patch reverts L1_CACHE_SHIFT back to 6 (64-byte cache line) while keeping ARCH_DMA_MINALIGN at 128. The cache_line_size() function was changed to default to ARCH_DMA_MINALIGN in the absence of a meaningful CTR_EL0.CWG bit field. In addition, if a system with ARCH_DMA_MINALIGN < CTR_EL0.CWG is detected, the kernel will force swiotlb bounce buffering for all non-coherent devices, since DMA cache maintenance on sub-CWG ranges is not safe and leads to data corruption.

Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Cc: Timur Tabi <timur@codeaurora.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
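The resulting arm64 definitions, sketched from the description above (cache_type_cwg() reads the CTR_EL0.CWG field; treat this as an approximation of the header change):

    #define L1_CACHE_SHIFT    6
    #define L1_CACHE_BYTES    (1 << L1_CACHE_SHIFT)
    #define ARCH_DMA_MINALIGN (128)

    static inline int cache_line_size(void)
    {
            u32 cwg = cache_type_cwg();     /* CTR_EL0.CWG field */
            return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
    }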
-
- 15 January 2018, 3 commits
-
Committed by Christoph Hellwig
The generic swiotlb_alloc and swiotlb_free routines already take care of CMA allocations and adding GFP_DMA32 where needed, so use them instead of the arm64-specific helpers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Committed by Christoph Hellwig
arm64 uses ZONE_DMA for allocations below 32 bits. These days we name the zone for that ZONE_DMA32, which will allow us to use the dma-direct and generic swiotlb code as-is, so rename it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Committed by Christoph Hellwig
We'll need that name for a generic implementation soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 10 January 2018, 1 commit
-
Committed by Christoph Hellwig
phys_to_dma, dma_to_phys and dma_capable are helpers published by architecture code for use by swiotlb and xen-swiotlb only. Drivers are not supposed to use these directly, but use the DMA API instead. Move these to a new asm/dma-direct.h helper, included by a linux/dma-direct.h wrapper that provides the default linear mapping unless the architecture wants to override it. In the MIPS case the existing dma-coherent.h is reused for now, as untangling it will take a bit of work.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
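The default linear mapping the wrapper provides, sketched (close to the original header; dma_pfn_offset is the per-device linear offset):

    /* include/linux/dma-direct.h, roughly: */
    #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
    #include <asm/dma-direct.h>
    #else
    static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
    {
            return (dma_addr_t)paddr -
                   ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
    }

    static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t addr)
    {
            return (phys_addr_t)addr +
                   ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
    }
    #endif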
-
- 04 October 2017, 1 commit
-
Committed by Matthieu CASTET
For example on an arm64 board, this adds caller info to the "user" entries in vmallocinfo.

Before:
[...]
0xffffff8008997000 0xffffff80089d8000 266240 user
[...]

After:
[...]
0xffffff8008997000 0xffffff80089d8000 266240 atomic_pool_init+0x0/0x1d8 user
[...]

This helps to debug mapping issues, and is consistent with other entries (ioremap, vmalloc, ...) that already provide the caller.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matthieu CASTET <matthieu.castet@parrot.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 October 2017, 1 commit
-
Committed by Thomas Meyer
Use the vma_pages function on the vma object instead of explicit computation. Found by coccinelle spatch "api/vma_pages.cocci".

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
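The transformation in miniature (example_count is a hypothetical wrapper for illustration):

    static unsigned long example_count(struct vm_area_struct *vma)
    {
            /* Before: open-coded size arithmetic. */
            /* return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; */

            /* After: the dedicated helper from <linux/mm.h>. */
            return vma_pages(vma);
    }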
-
- 21 August 2017, 2 commits
-
Committed by Vladimir Murzin
atomic_pool is set up once during the init stage and never changed after that, so it is a good candidate for __ro_after_init.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
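The change in miniature (essentially the declaration this commit touches):

    /* Written only during init, then effectively read-only. */
    static struct gen_pool *atomic_pool __ro_after_init;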
-
Committed by Vladimir Murzin
gen_pool_first_fit_order_align() does not make use of additional data, so pass plain NULL there.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
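Before/after, sketched (the old dummy argument is assumed here to have been (void *)PAGE_SHIFT):

    /* Before: */
    gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align,
                      (void *)PAGE_SHIFT);

    /* After: the algorithm never dereferences its data argument. */
    gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align, NULL);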
-
- 20 July 2017, 1 commit
-
Committed by Vladimir Murzin
Christoph noticed [1] that the default DMA pool in its current form overloads the DMA coherent infrastructure. In reply, Robin suggested [2] splitting the per-device vs. global pool interfaces, so allocation/release from the default DMA pool is driven by the dma ops implementation. This patch implements Robin's idea and provides an interface to allocate/release/mmap the default (aka global) DMA pool. To make it clear that the existing *_from_coherent routines work on a per-device pool, rename them to *_from_dev_coherent.

[1] https://lkml.org/lkml/2017/7/7/370
[2] https://lkml.org/lkml/2017/7/7/431

Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andras Szemzo <sza@esh.hu>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
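A hedged sketch of how a dma_ops implementation consumes the renamed per-device interface (example_alloc is hypothetical):

    static void *example_alloc(struct device *dev, size_t size,
                               dma_addr_t *dma_handle, gfp_t gfp)
    {
            void *vaddr;

            /* Returns nonzero if the per-device pool handled the request;
             * vaddr may still be NULL if the pool was exhausted. */
            if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &vaddr))
                    return vaddr;

            /* ... otherwise fall through to the real allocator ... */
            return NULL;
    }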
-
- 20 June 2017, 1 commit
-
Committed by Christoph Hellwig
The dma alloc interface returns an error by returning NULL, and the mapping interfaces rely on the mapping_error method, which the dummy ops already implement correctly. Thus remove the DMA_ERROR_CODE define.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 15 June 2017, 1 commit
-
Committed by Olav Haugan
The current null-pointer check in __dma_alloc_coherent and __dma_free_coherent is not needed anymore, since the __dma_alloc/__dma_free functions won't be called if !dev (the dummy ops will be called instead).

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 May 2017, 1 commit
-
Committed by Catalin Marinas
While honouring the DMA_ATTR_FORCE_CONTIGUOUS attribute on arm64 (commit 44176bb3: "arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU"), the existing uses of dma_mmap_attrs() and dma_get_sgtable() have been broken by passing a physically contiguous vm_struct with an invalid pages pointer through the common iommu API. Since the coherent allocation with DMA_ATTR_FORCE_CONTIGUOUS uses CMA, this patch simply reuses the existing swiotlb logic for mmap and get_sgtable. Note that the current implementation of get_sgtable (both swiotlb and iommu) is broken if dma_declare_coherent_memory() is used, since such memory does not have a corresponding struct page. To be addressed in a subsequent patch.

Fixes: 44176bb3 ("arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU")
Reported-by: Andrzej Hajda <a.hajda@samsung.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 02 May 2017, 1 commit
-
Committed by Stefano Stabellini
The following commit:

    commit 815dd187
    Author: Bart Van Assche <bart.vanassche@sandisk.com>
    Date:   Fri Jan 20 13:04:04 2017 -0800

        treewide: Consolidate get_dma_ops() implementations

rearranges get_dma_ops in a way that xen_dma_ops are no longer returned when running on Xen; dev->dma_ops is returned instead (see arch/arm/include/asm/dma-mapping.h:get_arch_dma_ops and include/linux/dma-mapping.h:get_dma_ops). Fix the problem by storing dev->dma_ops in dev_archdata and setting dev->dma_ops to xen_dma_ops. This way, xen_dma_ops is returned naturally by get_dma_ops. The Xen code can retrieve the original dev->dma_ops from dev_archdata when needed. It also allows us to remove __generic_dma_ops from the common headers.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Julien Grall <julien.grall@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> [4.11+]
CC: linux@armlinux.org.uk
CC: catalin.marinas@arm.com
CC: will.deacon@arm.com
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: Julien Grall <julien.grall@arm.com>
-
- 29 April 2017, 1 commit
-
Committed by Joerg Roedel
The include file does not need any PCI specifics, so remove that include. Also fix the places that relied on it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 20 April 2017, 1 commit
-
Committed by Sricharan R
With arch_setup_dma_ops now being called late during the device's probe, after the device's iommu is probed, the notifier trick required to handle the early setup of dma_ops before the iommu group gets created is no longer required. So remove the notifiers here.

Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
[rm: clean up even more]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 23 March 2017, 1 commit
-
Committed by Geert Uytterhoeven
Add support for allocating physically contiguous DMA buffers on arm64 systems with an IOMMU. This can be useful when two or more devices with different memory requirements are involved in buffer sharing. Note that as this uses the CMA allocator, setting the DMA_ATTR_FORCE_CONTIGUOUS attribute has a runtime dependency on CONFIG_DMA_CMA, just like on arm32.

For arm64 systems using swiotlb, no changes are needed to support the allocation of physically contiguous DMA buffers:
- swiotlb always uses physically contiguous buffers (up to IO_TLB_SEGSIZE = 128 pages),
- arm64's __dma_alloc_coherent() already calls dma_alloc_from_contiguous() when CMA is available.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
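A hedged usage sketch from a driver's point of view (alloc_contig_dma is a hypothetical wrapper; requires CONFIG_DMA_CMA when the device sits behind an IOMMU):

    static void *alloc_contig_dma(struct device *dev, size_t size,
                                  dma_addr_t *dma)
    {
            return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
                                   DMA_ATTR_FORCE_CONTIGUOUS);
    }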
-
- 25 February 2017, 1 commit
-
Committed by Lucas Stach
The callers of the DMA alloc functions already provide the proper context GFP flags. Make sure to pass them through to the CMA allocator, to make the CMA compaction context aware.

Link: http://lkml.kernel.org/r/20170127172328.18574-3-l.stach@pengutronix.de
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alexander Graf <agraf@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 February 2017, 1 commit
-
Committed by Robin Murphy
Back when this was first written, dma_supported() was somewhat of a murky mess, with subtly different interpretations being relied upon in various places. The "does device X support DMA to address range Y?" uses, assuming Y to be physical addresses, which motivated the current iommu_dma_supported() implementation and are alluded to in the comment therein, have since been cleaned up, leaving only the far less ambiguous "can device X drive address bits Y" usage internal to DMA API mask setting. As such, there is no reason to keep a slightly misleading callback which does nothing but duplicate the current default behaviour; we already constrain IOVA allocations to the iommu_domain aperture where necessary, so let's leave DMA mask business to architecture-specific code where it belongs.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 26 January 2017, 1 commit
-
Committed by Robin Murphy
When bypassing SWIOTLB on small-memory systems, we need to avoid calling into swiotlb_dma_mapping_error() in exactly the same way as we avoid swiotlb_dma_supported(), because the former also relies on SWIOTLB state being initialised. Under the assumptions for which we skip SWIOTLB, dma_map_{single,page}() will only ever return the DMA-offset-adjusted physical address of the page passed in, thus we can report success unconditionally.

Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
Cc: stable@vger.kernel.org
Cc: Jisheng Zhang <jszhang@marvell.com>
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
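The fix in sketch form, assuming the arm64 'swiotlb' flag that records whether SWIOTLB was initialised:

    static int __swiotlb_dma_mapping_error(struct device *hwdev,
                                           dma_addr_t addr)
    {
            if (swiotlb)
                    return swiotlb_dma_mapping_error(hwdev, addr);

            /* SWIOTLB bypassed: mappings are plain offsets, always valid. */
            return 0;
    }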
-
- 25 January 2017, 2 commits
-
Committed by Bart Van Assche
Some but not all architectures provide set_dma_ops(). Move dma_ops from struct dev_archdata into struct device such that it becomes possible on all architectures to configure dma_ops per device.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
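An abridged sketch of the result (struct device shortened to the relevant field; not the full definition):

    struct device {
            /* ... */
            const struct dma_map_ops *dma_ops; /* NULL: use bus/arch default */
            /* ... */
    };

    static inline void set_dma_ops(struct device *dev,
                                   const struct dma_map_ops *dma_ops)
    {
            dev->dma_ops = dma_ops;
    }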
-
Committed by Bart Van Assche
Most dma_map_ops structures are never modified. Constify these structures such that these can be write-protected. This patch has been generated as follows:

    git grep -l 'struct dma_map_ops' | xargs -d\\n sed -i \
      -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
      -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
      -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
      -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
    sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops intel_dma_ops');
    sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
    sed -i \
      -e '/^struct vmd_dev {$/,/^};$/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
      -e '/^static void vmd_setup_dma_ops/,/^}$/ s/const \(struct dma_map_ops \*dest\)/\1/' \
      -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' \
      drivers/pci/host/*.c
    sed -i -e '/^void __init pci_iommu_alloc(void)$/,/^}$/ s/dma_ops->/intel_dma_ops./' \
      arch/ia64/kernel/pci-dma.c
    sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' \
      arch/ia64/sn/pci/pci_dma.c
    sed -i -e 's/(const struct dma_map_ops \*)//' drivers/misc/mic/bus/vop_bus.c

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 23 January 2017, 1 commit
-
Committed by Will Deacon
The arm64 DMA-mapping implementation sets the DMA ops to the IOMMU DMA ops if we detect that an IOMMU is present for the master and the DMA ranges are valid. In the case when the IOMMU domain for the device is not of type IOMMU_DOMAIN_DMA, then we have no business swizzling the ops, since we're not in control of the underlying address space. This patch leaves the DMA ops alone for masters attached to non-DMA IOMMU domains.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
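The guard, sketched (helper name hypothetical; iommu_get_domain_for_dev() is the stock IOMMU API):

    static bool can_swizzle_dma_ops(struct device *dev)
    {
            struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

            /* Non-DMA domain: the address space is not ours to manage. */
            return domain && domain->type == IOMMU_DOMAIN_DMA;
    }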
-
- 19 January 2017, 1 commit
-
Committed by Mitchel Humpherys
The newly added DMA_ATTR_PRIVILEGED is useful for creating mappings that are only accessible to privileged DMA engines. Implement it in dma-iommu.c so that the ARM64 DMA IOMMU mapper can make use of it.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
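A sketch of the attribute-to-prot translation in dma-iommu.c (close to the real helper as described above; DMA_ATTR_PRIVILEGED maps to IOMMU_PRIV):

    int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
                         unsigned long attrs)
    {
            int prot = coherent ? IOMMU_CACHE : 0;

            if (attrs & DMA_ATTR_PRIVILEGED)
                    prot |= IOMMU_PRIV;    /* privileged DMA engines only */

            switch (dir) {
            case DMA_BIDIRECTIONAL:
                    return prot | IOMMU_READ | IOMMU_WRITE;
            case DMA_TO_DEVICE:
                    return prot | IOMMU_READ;
            case DMA_FROM_DEVICE:
                    return prot | IOMMU_WRITE;
            default:
                    return 0;
            }
    }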
-
- 12 January 2017, 1 commit
-
Committed by Takeshi Kihara
This patch adds support for the DMA_ATTR_SKIP_CPU_SYNC attribute to the dma_{un}map_{page,sg} function family of swiotlb. DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of the CPU cache for the given buffer, assuming that it has already been transferred to the 'device' domain. Ported from the IOMMU .{un}map_{sg,page} ops.

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Takeshi Kihara <takeshi.kihara.df@renesas.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will.deacon@arm.com>
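A hedged sketch of the map-page path with the new attribute check, modelled on the arm64 swiotlb wrappers of that era (names assumed, not the verbatim diff):

    static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
                                         unsigned long offset, size_t size,
                                         enum dma_data_direction dir,
                                         unsigned long attrs)
    {
            dma_addr_t dev_addr;

            dev_addr = swiotlb_map_page(dev, page, offset, size, dir, attrs);

            /* Skip cache maintenance when the caller says the buffer is
             * already in the device domain. */
            if (!is_device_dma_coherent(dev) &&
                !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                    __dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)),
                                   size, dir);

            return dev_addr;
    }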
-
- 19 December 2016, 1 commit
-
Committed by Geert Uytterhoeven
Convert the flag swiotlb_force from an int to an enum, to prepare for the advent of more possible values.

Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
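The conversion, sketched (the initial enumerators are assumed here; more values were added later):

    enum swiotlb_force {
            SWIOTLB_NORMAL,     /* previously swiotlb_force == 0 */
            SWIOTLB_FORCE,      /* previously swiotlb_force != 0 */
    };

    extern enum swiotlb_force swiotlb_force;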
-
- 14 November 2016, 1 commit
-
Committed by Robin Murphy
With no coherency to worry about, just plug 'em straight in.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-