- 22 September 2022, 1 commit

Submitted by Will Deacon

arch_dma_prep_coherent() is called when preparing a non-cacheable region for a consistent DMA buffer allocation. Since the buffer pages may previously have been written via a cacheable mapping and consequently allocated as dirty cachelines, the purpose of this function is to remove these dirty lines from the cache, writing them back so that the non-coherent device is able to see them. On arm64, this operation can be achieved with a clean to the point of coherency; a subsequent invalidation is not required and serves little purpose in the presence of a cacheable alias (e.g. the linear map), since clean lines can be speculatively fetched back into the cache after the invalidation operation has completed. Relax the cache maintenance in arch_dma_prep_coherent() so that only a clean, and not a clean-and-invalidate, operation is performed.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220823122111.17439-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
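A minimal sketch of what the relaxed arm64 hook looks like after this change, assuming the dcache_clean_poc() helper from <asm/cacheflush.h> (clean to the point of coherency, no invalidate); the surrounding file layout is illustrative:

```c
#include <linux/dma-map-ops.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>

/* Prepare pages for use through a non-cacheable alias: write back any
 * dirty lines so a non-coherent device observes the CPU's data. */
void arch_dma_prep_coherent(struct page *page, size_t size)
{
        unsigned long start = (unsigned long)page_address(page);

        /* Clean to the point of coherency only; an invalidate adds little,
         * since the cacheable linear-map alias can refetch clean lines. */
        dcache_clean_poc(start, start + size);
}
```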

- 05 July 2022, 1 commit

Submitted by Will Deacon

Remove the __dma_{flush,map,unmap}_area assembly wrappers and call the appropriate cache maintenance functions directly from the DMA mapping callbacks.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220610151228.4562-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
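A rough sketch of the direction of this change, assuming the streaming-DMA callbacks end up calling the C cache-maintenance helpers dcache_clean_poc()/dcache_inval_poc() directly instead of going through assembly wrappers:

```c
#include <linux/dma-map-ops.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>

/* Clean the CPU caches before the device reads the buffer. */
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                              enum dma_data_direction dir)
{
        unsigned long start = (unsigned long)phys_to_virt(paddr);

        dcache_clean_poc(start, start + size);
}

/* Invalidate before the CPU reads data the device has written. */
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
                           enum dma_data_direction dir)
{
        unsigned long start = (unsigned long)phys_to_virt(paddr);

        if (dir == DMA_TO_DEVICE)
                return;

        dcache_inval_poc(start, start + size);
}
```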

- 06 June 2022, 1 commit

Submitted by Oleksandr Tyshchenko

This patch introduces a new helper and places it in a new header. The helper's purpose is to assign any Xen-specific DMA ops in a single place. For now, we deal with xen-swiotlb DMA ops only. One of the subsequent commits in the current series will add the xen-grant DMA ops case. Also re-use the xen_swiotlb_detect() check on Arm32.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
[For arm64]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1654197833-25362-2-git-send-email-olekstysh@gmail.com
Signed-off-by: Juergen Gross <jgross@suse.com>
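A plausible shape for such a helper, shown as a sketch rather than the exact upstream header; it assumes the existing xen_swiotlb_detect() check and xen_swiotlb_dma_ops:

```c
#include <linux/device.h>
#include <xen/swiotlb-xen.h>

/* Single place that wires up Xen-specific DMA ops for a device. */
static inline void xen_setup_dma_ops(struct device *dev)
{
        if (xen_swiotlb_detect())
                dev->dma_ops = &xen_swiotlb_dma_ops;
        /* A later patch in the series adds the xen-grant DMA ops case here. */
}
```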

- 25 June 2021, 1 commit

Submitted by Jean-Philippe Brucker

Passing a 64-bit address width to iommu_setup_dma_ops() is valid on virtual platforms, but isn't currently possible. The overflow check in iommu_dma_init_domain() prevents this even when @dma_base isn't 0. Pass a limit address instead of a size, so callers don't have to fake a size to work around the check. The base and limit parameters are being phased out, because:
* they are redundant for x86 callers. dma-iommu already reserves the first page, and the upper limit is already in domain->geometry.
* they can now be obtained from dev->dma_range_map on Arm.
But removing them on Arm isn't completely straightforward, so that is left for future work. As an intermediate step, simplify the x86 callers by passing dummy limits.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210618152059.1194210-5-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
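A hypothetical call-site sketch of the conversion: a caller that used to pass a size now passes an inclusive limit, so a full 64-bit aperture no longer trips the overflow check. The wrapper name and the U64_MAX convention are illustrative, and the prototype is assumed to come from <linux/dma-iommu.h>:

```c
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/dma-iommu.h>

/* Illustrative helper showing the (base, size) -> (base, limit) change. */
static void example_setup_iommu_dma(struct device *dev, u64 dma_base, u64 size)
{
        /* size == 0 stands in for "the whole 64-bit address space". */
        u64 dma_limit = size ? dma_base + size - 1 : U64_MAX;

        iommu_setup_dma_ops(dev, dma_base, dma_limit);
}
```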

- 23 April 2021, 1 commit

Submitted by Stefano Stabellini

Newer Xen versions expose two Xen feature flags to tell us if the domain is directly mapped or not. Only when a domain is directly mapped does it make sense to enable swiotlb-xen on ARM. Introduce a function on ARM to check the new Xen feature flags and also to deal with the legacy case. Call the function xen_swiotlb_detect.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210319200140.12512-1-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
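A sketch of the detection logic described here, assuming the two feature flags are XENFEAT_direct_mapped and XENFEAT_not_direct_mapped, with the legacy fallback treating the initial domain on older Xen as direct-mapped:

```c
#include <xen/xen.h>
#include <xen/features.h>

/* Returns nonzero when swiotlb-xen makes sense for this domain. */
static inline int xen_swiotlb_detect(void)
{
        if (!xen_domain())
                return 0;
        if (xen_feature(XENFEAT_direct_mapped))
                return 1;
        /* Legacy case: older Xen exposes neither flag, so assume the
         * initial domain is direct-mapped. */
        if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
                return 1;
        return 0;
}
```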

- 06 October 2020, 2 commits

Submitted by Christoph Hellwig

Move more nitty-gritty DMA implementation details into the common internal header.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Submitted by Christoph Hellwig

Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header, so that they don't get pulled into all the drivers. That also means the architecture-specific <asm/dma-mapping.h> is no longer pulled in by <linux/dma-mapping.h>, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
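Roughly how the split plays out for consumers; this is a sketch of the intent rather than an exhaustive list of what moved:

```c
/* Ordinary drivers that only map and sync buffers keep including the
 * consumer-facing API ... */
#include <linux/dma-mapping.h>

/* ... while code that implements or installs dma_map_ops (arch code,
 * IOMMU glue, bus code) now includes the ops-facing header explicitly,
 * since <linux/dma-mapping.h> no longer drags it in. */
#include <linux/dma-map-ops.h>
```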

- 21 November 2019, 1 commit

Submitted by Christoph Hellwig

These are pure cache maintenance routines, so drop the unused struct device argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
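A sketch of the resulting interface change for one of the arch sync hooks; the same drop applies to the rest of this family of cache maintenance helpers:

```c
#include <linux/types.h>
#include <linux/dma-direction.h>

/* Before this change the hook carried a device pointer no implementation used:
 *   void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
 *                                 size_t size, enum dma_data_direction dir);
 * After it, the prototype is purely physical-address based: */
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                              enum dma_data_direction dir);
```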

- 11 September 2019, 2 commits

Submitted by Christoph Hellwig

Now that the Xen special cases are gone, nothing worth mentioning is left in the arm64 <asm/dma-mapping.h> file, so switch to using the asm-generic version instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Submitted by Christoph Hellwig

arm and arm64 can just use xen_swiotlb_dma_ops directly like x86; there is no need for a pointer indirection.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

- 29 August 2019, 2 commits

Submitted by Christoph Hellwig

The memory allocated for the atomic pool needs to have the same mapping attributes that we use for remapping, so use pgprot_dmacoherent instead of open-coding it. Also deduce a suitable zone to allocate the memory from based on the presence of the DMA zones.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Submitted by Christoph Hellwig

arch_dma_mmap_pgprot is used for two things:
1) to override the "normal" uncached page attributes for mapping memory coherent to devices that can't snoop the CPU caches
2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older arm systems and some mips platforms
Replace the first with the pgprot_dmacoherent macro that is already provided by arm and is much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch opt-in.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Acked-by: Paul Burton <paul.burton@mips.com> # mips
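A sketch of where the common code ends up once the coherent-device check, pgprot_dmacoherent, and the write-combine opt-in are all in place. The example_dma_pgprot name is illustrative, the exact config symbol is an assumption, and the header locations for these helpers have moved across the releases covered by this log:

```c
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/dma-map-ops.h>

/* Pick the vm_page_prot for mmap()ing a DMA buffer to userspace. */
static pgprot_t example_dma_pgprot(struct device *dev, pgprot_t prot,
                                   unsigned long attrs)
{
        /* Coherent devices keep normal cacheable attributes. */
        if (dev_is_dma_coherent(dev))
                return prot;
#ifdef CONFIG_ARCH_HAS_DMA_WRITE_COMBINE
        /* Arch opt-in for the legacy write-combine semantics. */
        if (attrs & DMA_ATTR_WRITE_COMBINE)
                return pgprot_writecombine(prot);
#endif
        return pgprot_dmacoherent(prot);
}
```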

- 11 August 2019, 1 commit

Submitted by Christoph Hellwig

All the way back to introducing dma_common_mmap we've defaulted to marking the pages as uncached. But this is wrong for DMA coherent devices. Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment, as that flag is only treated specially on the alloc side for non-coherent devices. Introduce a new dma_pgprot helper that deals with the check for coherent devices, so that only the remapping cases ever reach arch_dma_mmap_pgprot and we thus ensure no aliasing of page attributes happens, which makes the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the remaining ones. Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but we'll phase it out soon.

Fixes: 64ccc9c0 ("common: dma-mapping: add support for generic dma_mmap_* calls")
Reported-by: Shawn Anastasio <shawn@anastas.io>
Reported-by: Gavin Li <git@thegavinli.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64

- 19 June 2019, 1 commit

Submitted by Thomas Gleixner

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 17 June 2019, 1 commit

Submitted by Masayoshi Mizuma

If the cache line size is greater than ARCH_DMA_MINALIGN (128), a warning is shown and the kernel is tainted as TAINT_CPU_OUT_OF_SPEC. However, this is not good because, as discussed in the thread [1], the CPU cache line size is only a problem for non-coherent devices. Since the coherent flag has already been introduced to struct device, show the warning only if the device is non-coherent and ARCH_DMA_MINALIGN is smaller than the CPU cache size.

[1] https://lore.kernel.org/linux-arm-kernel/20180514145703.celnlobzn3uh5tc2@localhost/

Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Tested-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
[catalin.marinas@arm.com: removed 'if' block for WARN_TAINT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
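A sketch of the relaxed check as it would sit in the arm64 arch_setup_dma_ops() path, assuming the WARN_TAINT form mentioned in the bracketed note; the wrapper function is illustrative:

```c
#include <linux/bug.h>
#include <linux/kernel.h>
#include <asm/cache.h>

/* Only complain when a non-coherent device actually depends on
 * ARCH_DMA_MINALIGN covering the Cache Writeback Granule. */
static void example_check_cwg(bool coherent)
{
        WARN_TAINT(!coherent && ARCH_DMA_MINALIGN < cache_line_size(),
                   TAINT_CPU_OUT_OF_SPEC,
                   "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
                   ARCH_DMA_MINALIGN, cache_line_size());
}
```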

- 27 May 2019, 4 commits

Submitted by Christoph Hellwig

With most of the previous functionality now elsewhere, a lot of the headers included in this file are not needed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Submitted by Christoph Hellwig

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Submitted by Christoph Hellwig

There is nothing really arm64-specific in the iommu_dma_ops implementation, so move it to dma-iommu.c and keep a lot of symbols self-contained. Note the implementation does depend on the DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the DMA_IOMMU support depend on it, but this will be relaxed soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Submitted by Christoph Hellwig

We now have an arch_dma_prep_coherent architecture hook that is used for the generic DMA remap allocator, and we should use the same interface for the dma-iommu code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 07 May 2019, 1 commit

Submitted by Christoph Hellwig

DMA allocations that can't sleep may return non-remapped addresses, but we do not properly handle them in the mmap and get_sgtable methods. Resolve non-vmalloc addresses using virt_to_page to handle this corner case.

Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
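A sketch of the page lookup the mmap/get_sgtable paths need once both remapped and non-remapped allocations are possible; the helper name is illustrative:

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Find the struct page backing a coherent allocation, whether it came from
 * the vmalloc remap path or straight from the page allocator/atomic pool. */
static struct page *example_dma_cpu_addr_to_page(void *cpu_addr)
{
        if (is_vmalloc_addr(cpu_addr))
                return vmalloc_to_page(cpu_addr);
        return virt_to_page(cpu_addr);
}
```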

- 24 January 2019, 1 commit

Submitted by Christoph Hellwig

Xen-swiotlb hooks into the arm/arm64 arch code through a copy of the DMA mapping operations stored in the struct device arch data. Switching arm64 to use direct calls for the merged DMA direct / swiotlb code broke this scheme. Replace the indirect calls with direct calls in xen-swiotlb as well to fix this problem.

Fixes: 356da6d0 ("dma-mapping: bypass indirect calls for dma-direct")
Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

- 14 December 2018, 3 commits

Submitted by Christoph Hellwig

Avoid expensive indirect calls in the fast-path DMA mapping operations by directly calling the dma_direct_* ops if we are using the directly mapped DMA operations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
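A sketch of what the fast-path dispatch looks like in the common mapping code after this change. Treating a NULL ops pointer as "direct mapping" is how this series implements the bypass, but the example function name here is illustrative:

```c
#include <linux/dma-mapping.h>
#include <linux/dma-direct.h>

/* Map a page either through dma-direct or through the installed ops. */
static dma_addr_t example_map_page(struct device *dev, struct page *page,
                                   unsigned long offset, size_t size,
                                   enum dma_data_direction dir,
                                   unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (likely(!ops))       /* direct mapping: no indirect call */
                return dma_direct_map_page(dev, page, offset, size, dir, attrs);

        return ops->map_page(dev, page, offset, size, dir, attrs);
}
```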
Submitted by Christoph Hellwig

While the dma-direct code is (relatively) clean and simple, we actually have to use the swiotlb ops for the mapping on many architectures due to devices with addressing limits. Instead of keeping two implementations around, this commit allows the dma-direct implementation to call the swiotlb bounce buffering functions and thus share the guts of the mapping implementation. This also simplified the dma-mapping setup on a few architectures where we don't have to differentiate which implementation to use.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Submitted by Robin Murphy

The dummy DMA ops are currently used by arm64 for any device which has an invalid ACPI description and is thus barred from using DMA due to not knowing whether it is cache-coherent or not. Factor these out into general dma-mapping code so that they can be referenced from other common code paths. In the process, we can prune all the optional callbacks which just do the same thing as the default behaviour, and fill in .map_resource for completeness.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: moved to a separate source file]
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 11 December 2018, 1 commit

Submitted by Robin Murphy

We need to invalidate the caches *before* clearing the buffer via the non-cacheable alias, else in the worst case __dma_flush_area() may write back dirty lines over the top of our nice new zeros.

Fixes: dd65a941 ("arm64: dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag")
Cc: <stable@vger.kernel.org> # 4.18.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 06 December 2018, 2 commits

Submitted by Christoph Hellwig

Return DMA_MAPPING_ERROR instead of 0 on a DMA mapping failure and let the core dma-mapping code handle the rest.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Submitted by Christoph Hellwig

Just return DMA_MAPPING_ERROR from __dummy_map_page and let the core dma-mapping code handle the rest.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
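A sketch of the simplified dummy callback after this change; the surrounding dummy-ops structure is omitted:

```c
#include <linux/dma-mapping.h>

/* The dummy mapping always fails; callers detect this with
 * dma_mapping_error() in the core dma-mapping code. */
static dma_addr_t __dummy_map_page(struct device *dev, struct page *page,
                                   unsigned long offset, size_t size,
                                   enum dma_data_direction dir,
                                   unsigned long attrs)
{
        return DMA_MAPPING_ERROR;
}
```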

- 02 December 2018, 1 commit

Submitted by Christoph Hellwig

The arm64 code implementing coherent DMA allocation for architectures with non-coherent DMA is a good starting point for a generic implementation, given that it uses the generic remap helpers, provides the atomic pool for allocations that can't sleep, and is still relatively simple and well tested. Move it to kernel/dma and allow architectures to opt into it using a config symbol. Architectures just need to provide a new arch_dma_prep_coherent helper to write back and invalidate the caches for any memory that gets remapped for uncached access.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>

- 03 November 2018, 1 commit

Submitted by Christoph Hellwig

__swiotlb_get_sgtable_page and __swiotlb_mmap_pfn are not only misnamed but also only used if CONFIG_IOMMU_DMA is set. Just add a simple ifdef for now, given that we plan to remove them entirely for the next merge window.

Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

- 31 October 2018, 1 commit

Submitted by Mike Rapoport

Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below and then semi-automated removal of duplicated '#include <linux/memblock.h>'.

@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 19 October 2018, 3 commits

Submitted by Christoph Hellwig

Now that the generic swiotlb code supports non-coherent DMA, we can switch to it for arm64. For that we need to refactor the existing alloc/free/mmap/pgprot helpers to be used as the architecture hooks, and implement the standard arch_sync_dma_for_{device,cpu} hooks for cache maintenance in the streaming dma hooks, which also implies using the generic dma_coherent flag in struct device. Note that we need to keep the old is_device_dma_coherent function around for now, so that the shared arm/arm64 Xen code keeps working.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Submitted by Christoph Hellwig

All architectures that support swiotlb also have a zone that backs up these less-than-full addressing allocations (usually ZONE_DMA32). Because of that, it is rather pointless to fall back to the global swiotlb buffer if the normal dma direct allocation failed - the only thing this will do is eat up bounce buffers that would be more useful to serve streaming mappings.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Submitted by Christoph Hellwig

Like all other dma mapping drivers, just return an error code instead of an actual memory buffer. The reason for the overflow buffer was that at the time swiotlb was invented there was no way to check for dma mapping errors, but this has long been fixed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

- 26 September 2018, 1 commit

Submitted by Christoph Hellwig

This reverts commit 46053c73. This change breaks architectures setting up dma_ops in their own magic way and not using arch_setup_dma_ops, so revert it.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 25 September 2018, 1 commit

Submitted by Robin Murphy

Whilst the symmetry of deferring to the existing sync callback in __iommu_map_page() is nice, taking a round trip through iommu_iova_to_phys() is a pretty heavyweight way to get an address we can trivially compute from the page we already have. Tweaking it to just perform the cache maintenance directly when appropriate doesn't really make the code any more complicated, and the runtime efficiency gain can only be a benefit. Furthermore, the sync operations themselves know they can only be invoked on a managed DMA ops domain, so they can use the fast specific-domain lookup to avoid excessive manipulation of the group refcount (particularly in the scatterlist cases).

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 08 September 2018, 1 commit

Submitted by Christoph Hellwig

There is no reason to leave the per-device dma_ops around when deconfiguring a device, so move this code from arm64 into the common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>

- 18 August 2018, 1 commit

Submitted by Marek Szyprowski

The CMA memory allocator doesn't support standard gfp flags for memory allocation, so there is no point having them as a parameter of the dma_alloc_from_contiguous() function. Replace it with a boolean no_warn argument, which covers all that the underlying cma_alloc() function supports. This will help to avoid giving the false impression that this function supports standard gfp flags and that callers can pass __GFP_ZERO to get a zeroed buffer, which has already been an issue: see commit dd65a941 ("arm64: dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").

Link: http://lkml.kernel.org/r/20180709122020eucas1p21a71b092975cb4a3b9954ffc63f699d1~-sqUFoa-h2939329393eucas1p2Y@eucas1p2.samsung.com
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Michał Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
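The resulting interface change, sketched as before/after prototypes; the "before" form is reconstructed from the description, so treat the exact parameter names as an assumption:

```c
#include <linux/types.h>

struct device;
struct page;

/* Before: a gfp_t parameter that the underlying cma_alloc() largely ignored:
 *   struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 *                                          unsigned int order, gfp_t gfp_mask);
 * After: an explicit no_warn flag, matching what cma_alloc() supports. */
struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
                                       unsigned int order, bool no_warn);
```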

- 19 June 2018, 1 commit

Submitted by Marek Szyprowski

dma_alloc_*() buffers might be exposed to userspace via an mmap() call, so they should be cleared on allocation. In the case of the IOMMU-based dma-mapping implementation, such buffer clearing was missing in the code path for DMA_ATTR_FORCE_CONTIGUOUS flag handling, because dma_alloc_from_contiguous() doesn't honor the __GFP_ZERO flag. This patch fixes this issue. For more information on clearing buffers allocated by dma_alloc_* functions, see commit 6829e274 ("arm64: dma-mapping: always clear allocated buffers").

Fixes: 44176bb3 ("arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 15 May 2018, 1 commit

Submitted by Catalin Marinas

This patch increases ARCH_DMA_MINALIGN to 128 so that it covers the currently known Cache Writeback Granule (CTR_EL0.CWG) on arm64, and moves the fallback in cache_line_size() from L1_CACHE_BYTES to this constant. In addition, it warns (and taints) if the CWG is larger than ARCH_DMA_MINALIGN, as this is not safe with non-coherent DMA.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
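A sketch of what the arm64 definitions settle on after this change, assuming the cache_type_cwg() accessor for CTR_EL0.CWG; the example function name is illustrative, not the exact diff:

```c
#include <linux/types.h>
#include <asm/cache.h>  /* provides ARCH_DMA_MINALIGN (now 128) and cache_type_cwg() */

/* New fallback: if CTR_EL0.CWG is not reported, assume the worst-case
 * DMA alignment rather than L1_CACHE_BYTES. */
static inline int example_cache_line_size(void)
{
        u32 cwg = cache_type_cwg();

        return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;      /* granule in bytes */
}
```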

- 08 May 2018, 1 commit

Submitted by Christoph Hellwig

Most mainstream architectures are using 65536 entries, so let's stick to that. If someone is really desperate to override it, that can still be done through <asm/dma-mapping.h>, but I'd rather see a really good rationale for that. dma_debug_init is now called as a core_initcall, which for many architectures means much earlier, and provides dma-debug functionality earlier in the boot process. This should be safe as it only relies on the memory allocator already being available.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
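A rough sketch of the shared default and the earlier registration this describes; the function name is hypothetical and the allocation details are elided rather than quoted from the patch:

```c
#include <linux/init.h>

/* 65536 preallocated tracking entries, now the common default. */
#define PREALLOC_DMA_DEBUG_ENTRIES      (1 << 16)

static int __init example_dma_debug_init(void)
{
        /* ... allocate PREALLOC_DMA_DEBUG_ENTRIES tracking entries ... */
        return 0;
}
/* Registered as a core_initcall so dma-debug is available earlier in boot. */
core_initcall(example_dma_debug_init);
```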