- 18 Nov 2020, 1 commit

Committed by Christoph Hellwig
Drop the dma_direct_set_offset export and move the declaration to dma-map-ops.h now that the Allwinner drivers have stopped calling it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>

- 20 Oct 2020, 1 commit

Committed by Christoph Hellwig
Due to a mismerge, a bunch of prototypes that should have moved to dma-map-ops.h are still in dma-mapping.h; fix that up.

Signed-off-by: Christoph Hellwig <hch@lst.de>

- 06 Oct 2020, 2 commits

Committed by Christoph Hellwig
Most of dma-debug.h is not required by anything outside of kernel/dma. Move the four declarations needed by dma-mapping.h or dma-ops providers into dma-mapping.h and dma-map-ops.h, and move the remainder of the file to kernel/dma/debug.h.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture-specific <asm/dma-mapping.h> is no longer pulled in by <linux/dma-mapping.h>, which exposes missing includes in a few not overly portable drivers that had been relying on the x86 or arm versions to pull them in.

Signed-off-by: Christoph Hellwig <hch@lst.de>

- 25 Sep 2020, 7 commits

Committed by Christoph Hellwig
This will allow IOMMU drivers to allocate non-contiguous memory and return a vmapped virtual address.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
This API is the equivalent of alloc_pages, except that the returned memory is guaranteed to be DMA addressable by the passed-in device. The implementation will also be used to provide a more sensible replacement for the DMA_ATTR_NON_CONSISTENT flag. Additionally, dma_alloc_noncoherent is switched over to use dma_alloc_pages as its backend.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
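
A minimal sketch of how a driver might use the new pair of calls (the helper names and the DMA direction are illustrative, not part of the patch):

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Illustrative helper: allocate a buffer the device is guaranteed to be
 * able to address, for streaming DMA towards the device. */
static struct page *example_get_dma_buffer(struct device *dev, size_t size,
					   dma_addr_t *dma)
{
	/* Returns real pages: page_address() works, *dma is what the
	 * device sees. */
	return dma_alloc_pages(dev, size, dma, DMA_TO_DEVICE, GFP_KERNEL);
}

static void example_put_dma_buffer(struct device *dev, size_t size,
				   struct page *page, dma_addr_t dma)
{
	dma_free_pages(dev, size, page, dma, DMA_TO_DEVICE);
}
```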

Committed by Christoph Hellwig
All users are gone now, remove the API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)

Committed by Christoph Hellwig
Add a new API to allocate and free memory that is guaranteed to be addressable by a device, but which is potentially not cache coherent for DMA. To transfer ownership to and from the device, the existing streaming DMA API calls dma_sync_single_for_device and dma_sync_single_for_cpu must be used. For now the new calls are implemented on top of dma_alloc_attrs, just like the old non-coherent API, but once all drivers are switched to the new API it will be replaced with a better-working implementation that is available on all architectures.

Signed-off-by: Christoph Hellwig <hch@lst.de>
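
A sketch of the ownership dance the text describes, for a device that writes into the buffer (names and direction are illustrative):

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static int example_noncoherent_rx(struct device *dev, size_t size)
{
	dma_addr_t dma;
	void *buf;

	buf = dma_alloc_noncoherent(dev, size, &dma, DMA_FROM_DEVICE,
				    GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Give ownership to the device before it writes the buffer ... */
	dma_sync_single_for_device(dev, dma, size, DMA_FROM_DEVICE);

	/* ... let the device DMA into 'dma', then take ownership back
	 * before the CPU looks at the data. */
	dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);

	dma_free_noncoherent(dev, size, buf, dma, DMA_FROM_DEVICE);
	return 0;
}
```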

Committed by Christoph Hellwig
Move the comment documenting dma_addr_t away from the dma_map_ops definition, which isn't very related to it, and toward DMA_MAPPING_ERROR, which is somewhat related. Add a little blurb about DMA_MAPPING_ERROR as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
Move the valid_dma_direction helper to a more suitable header, and clean it up to use the proper enum as well as removing pointless braces.

Signed-off-by: Christoph Hellwig <hch@lst.de>
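
After the cleanup the helper plausibly reads like this (reconstructed from the description, not quoted from the patch):

```c
/* In <linux/dma-direction.h>: a direction is valid iff it is one of the
 * three real transfer directions (DMA_NONE is rejected). */
static inline bool valid_dma_direction(enum dma_data_direction dir)
{
	return dir == DMA_BIDIRECTIONAL || dir == DMA_TO_DEVICE ||
	       dir == DMA_FROM_DEVICE;
}
```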

Committed by Christoph Hellwig
This value is only used by a PCMCIA driver and not very useful.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>

- 18 Sep 2020, 1 commit

Committed by Jim Quinlan
The new field 'dma_range_map' in struct device is used to facilitate the use of single or multiple offsets between mapping regions of CPU addresses and DMA addresses. It subsumes the role of "dev->dma_pfn_offset", which was only capable of holding a single uniform offset and had no region bounds checking. The function of_dma_get_range() has been modified so that it takes a single argument -- the device node -- and returns a map, NULL, or an error code. The map is an array that holds the information regarding the DMA regions. Each range entry contains the address offset, the cpu_start address, the dma_start address, and the size of the region. of_dma_configure() is the typical manner to set range offsets, but there are a number of ad hoc assignments to "dev->dma_pfn_offset" in the kernel driver code. These cases now invoke the function dma_direct_set_offset(dev, cpu_addr, dma_addr, size).

Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com>
[hch: various interface cleanups]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
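
For drivers that previously poked "dev->dma_pfn_offset" directly, the replacement call looks like this sketch (the addresses are made up for illustration):

```c
#include <linux/dma-direct.h>
#include <linux/sizes.h>

static int example_setup_dma_offset(struct device *dev)
{
	/* CPU physical 0x40000000..0x7fffffff is seen by the device at
	 * bus address 0x00000000..0x3fffffff: one range map entry. */
	return dma_direct_set_offset(dev, 0x40000000, 0x00000000, SZ_1G);
}
```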

- 04 Sep 2020, 2 commits

Committed by Nicolin Chen
The default segment_boundary_mask was set to DMA_BIT_MASK(32) a decade ago by referencing the SCSI/block subsystem, as a 32-bit mask was good enough for most of the devices. Now more and more drivers set dma_masks above DMA_BIT_MASK(32) while only a handful of them call dma_set_seg_boundary(). This means that most drivers have a 4GB segmentation boundary because the DMA API returns a 32-bit default value, though they might not really have such a limit. The default segment_boundary_mask should mean "no limit" since the device doesn't explicitly set the mask. But a 32-bit mask certainly limits those devices capable of addressing beyond 32 bits. So this patch sets the default segment_boundary_mask to ULONG_MAX.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Nicolin Chen
We found that callers of dma_get_seg_boundary mostly do an ALIGN with page mask and then do a page shift to get the number of pages:

    ALIGN(boundary + 1, 1 << shift) >> shift

However, the boundary might be as large as ULONG_MAX, which means that a device has no specific boundary limit. So either the "+ 1" or passing it to ALIGN() would potentially overflow. According to the kernel defines:

    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a)         ALIGN_MASK(x, (typeof(x))(a) - 1)

We can simplify the logic here into a helper function doing:

    ALIGN(boundary + 1, 1 << shift) >> shift
      = ALIGN_MASK(b + 1, (1 << s) - 1) >> s
      = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
      = [b + 1 + (1 << s) - 1] >> s
      = [b + (1 << s)] >> s
      = (b >> s) + 1

This patch introduces and applies dma_get_seg_boundary_nr_pages() as an overflow-free helper for the dma_get_seg_boundary() callers to get the number of pages. It also takes care of the NULL dev case for non-DMA API callers.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Christoph Hellwig <hch@lst.de>
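
The resulting helper, reconstructed from the description above, is essentially:

```c
/* Overflow-free equivalent of ALIGN(boundary + 1, 1 << shift) >> shift;
 * a NULL dev falls back to a 32-bit boundary for non-DMA API callers. */
static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
		unsigned int page_shift)
{
	if (!dev)
		return (U32_MAX >> page_shift) + 1;
	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
}
```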

- 14 Aug 2020, 1 commit

Committed by Christoph Hellwig
When allocating coherent pool memory for an IOMMU mapping we don't care about the DMA mask. Move the guess for the initial GFP mask into dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer argument so that it doesn't get applied to the IOMMU case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Amit Pundir <amit.pundir@linaro.org>

- 19 Jul 2020, 1 commit

Committed by Christoph Hellwig
Avoid the overhead of the dma ops support for tiny builds that only use the direct mapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>

- 16 Jul 2020, 1 commit

Committed by Christoph Hellwig
For a long time the DMA API has been implemented inline in dma-mapping.h, but the function bodies can be quite large. Move them all out of line. This also removes all the dma_direct_* exports, as those are just implementation details and should never be used by drivers directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>

- 30 Jun 2020, 1 commit

Committed by Christoph Hellwig
Add a new API to check if calls to dma_sync_single_for_{device,cpu} are required for a given DMA streaming mapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200629130359.2690853-2-hch@lst.de
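
A sketch of the intended use, e.g. on a packet receive fast path (names are illustrative):

```c
#include <linux/dma-mapping.h>

/* Check once whether syncs are needed for this mapping at all, then
 * skip the per-packet dma_sync_single_for_cpu() calls when they would
 * be no-ops (e.g. direct mapping without swiotlb bouncing). */
static void example_rx_complete(struct device *dev, dma_addr_t dma,
				size_t len)
{
	if (dma_need_sync(dev, dma))
		dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	/* ... hand the buffer to the stack ... */
}
```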

- 27 Jun 2020, 1 commit

Committed by Mauro Carvalho Chehab
As we moved those files to core-api, fix references to point to their newer locations.

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Link: https://lore.kernel.org/r/37b2fd159fbc7655dbf33b3eb1215396a25f6344.1592895969.git.mchehab+huawei@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

- 13 May 2020, 1 commit

Committed by Marek Szyprowski
struct sg_table is a common structure used for describing a memory buffer. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry). It turned out that it was a common mistake to misuse the nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg function. To avoid such issues, let's introduce common wrappers operating directly on the struct sg_table objects, which take care of the proper use of the nents and orig_nents entries.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
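
A sketch of the wrappers in use; note that the iterator walks the DMA-mapped entries (nents), not the CPU entries (orig_nents):

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_sgtable(struct device *dev, struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i, ret;

	/* The wrapper stores the mapped-entry count in sgt->nents. */
	ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
	if (ret)
		return ret;

	for_each_sgtable_dma_sg(sgt, sg, i) {
		/* program sg_dma_address(sg) / sg_dma_len(sg) into
		 * the device's scatter-gather engine */
	}

	dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
	return 0;
}
```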

- 20 Apr 2020, 1 commit

Committed by David Rientjes
The single atomic pool is allocated from the lowest zone possible since it is guaranteed to be applicable for any DMA allocation. Devices may allocate through the DMA API but not have a strict reliance on GFP_DMA memory. Since the atomic pool will be used for all non-blockable allocations, returning all memory from ZONE_DMA may unnecessarily deplete the zone. Provision for multiple atomic pools that will map to the optimal gfp mask of the device. When allocating non-blockable memory, determine the optimal gfp mask of the device and use the appropriate atomic pool. The coherent DMA mask will remain the same between allocation and free and, thus, memory will be freed to the same atomic pool it was allocated from. __dma_atomic_pool_init() will be changed to return struct gen_pool * later once dynamic expansion is added.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 22 Nov 2019, 1 commit

Committed by Nicolas Saenz Julienne
Using a mask to represent bus DMA constraints has a set of limitations, the biggest one being that it can only hold a power of two (minus one). The DMA mapping code is already aware of this and treats dev->bus_dma_mask as a limit. This quirk is already used by some architectures, although it is still rare. With the introduction of the Raspberry Pi 4 we've found a new contender for the use of bus DMA limits, as its PCIe bus can only address the lower 3GB of memory (out of a total of 4GB). This is impossible to represent with a mask. To make things worse, the device-tree code rounds non-power-of-two bus DMA limits to the next power of two, which is unacceptable in this case. In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all over the tree and treat it as such. Note that dev->bus_dma_limit should contain the highest accessible DMA address.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
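
To make the mask-versus-limit point concrete, a sketch with made-up numbers mirroring the 3GB example:

```c
#include <linux/device.h>

static void example_set_pcie_bus_limit(struct device *dev)
{
	/* Highest bus-addressable byte: 3GB - 1. A power-of-two mask
	 * would have to round this up to DMA_BIT_MASK(32); a limit
	 * expresses it exactly. */
	dev->bus_dma_limit = 0xbfffffff;
}
```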

- 15 Nov 2019, 1 commit

Committed by Christoph Hellwig
This flag is not implemented by any backend and only set by the ib_umem module in a single instance.

Link: https://lore.kernel.org/r/20191113073214.9514-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

- 31 Oct 2019, 2 commits

Committed by Kees Cook
As we've seen from USB and other areas [1], we need to always do runtime checks for DMA operating on memory regions that might be remapped. This adds vmap checks (similar to those already in USB but missing in other places) into dma_map_single() so all callers benefit from the checking.

[1] https://git.kernel.org/linus/3840c5b78803b2b6cc1ff820100a74a092c40cbb

Suggested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
[hch: fixed the printk message]
Signed-off-by: Christoph Hellwig <hch@lst.de>
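
The effect is roughly the following (a reconstruction of the idea, not the literal hunk): any vmalloc()/vmap() address handed to dma_map_single() is now rejected up front.

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static dma_addr_t example_checked_map(struct device *dev, void *ptr,
				      size_t size,
				      enum dma_data_direction dir)
{
	/* DMA must never operate on memory that might be remapped. */
	if (WARN_ONCE(is_vmalloc_addr(ptr),
		      "rejecting DMA map of vmalloc memory\n"))
		return DMA_MAPPING_ERROR;

	return dma_map_single(dev, ptr, size, dir);
}
```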

Committed by Vladimir Murzin
Daniele reported that the issue previously fixed in c41f9ea9 ("drivers: dma-coherent: Account dma_pfn_offset when used with device tree") reappeared shortly after 43fc509c ("dma-coherent: introduce interface for default DMA pool"), where the fix was accidentally dropped. Let's put the fix back in place and respect dma-ranges for reserved memory.

Fixes: 43fc509c ("dma-coherent: introduce interface for default DMA pool")
Reported-by: Daniele Alessandrelli <daniele.alessandrelli@gmail.com>
Tested-by: Daniele Alessandrelli <daniele.alessandrelli@gmail.com>
Tested-by: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 04 Sep 2019, 5 commits

Committed by Christoph Hellwig
A helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere instead of slightly different or missing ones in a few places.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
Currently the generic dma remap allocator gets a vm_flags passed by the caller that is a little confusing. We just introduced a generic vmalloc-level flag to identify the dma coherent allocations, so use that everywhere and remove the now pointless argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
This function is entirely unused given that declared memory is generally provided by platform setup code.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Committed by Christoph Hellwig
We can already use DMA_ATTR_WRITE_COMBINE or the _wc-prefixed version, so remove the third way of doing things.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>

Committed by Christoph Hellwig
Add a helper to check if DMA allocations for a specific device can be mapped to userspace using dma_mmap_*.

Signed-off-by: Christoph Hellwig <hch@lst.de>
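
A sketch of the intended caller pattern (the fallback path is hypothetical):

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int example_mmap_dma_buffer(struct device *dev,
				   struct vm_area_struct *vma,
				   void *cpu_addr, dma_addr_t dma_addr,
				   size_t size)
{
	/* Bail out early so the caller can fall back to a copy path. */
	if (!dma_can_mmap(dev))
		return -ENXIO;

	return dma_mmap_coherent(dev, vma, cpu_addr, dma_addr, size);
}
```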

- 03 Sep 2019, 1 commit

Committed by Yoshihiro Shimoda
This patch adds a new DMA API, "dma_get_merge_boundary". This function returns the DMA merge boundary if the DMA layer can merge the segments. This patch also adds the implementation for a new dma_map_ops pointer.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
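
A sketch of how a block-device driver might consume it; feeding the value into the block layer mirrors typical usage, but the wiring here is hypothetical:

```c
#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

static void example_set_queue_limits(struct request_queue *q,
				     struct device *dev)
{
	unsigned long boundary = dma_get_merge_boundary(dev);

	/* 0 means the DMA layer cannot merge segments at all. */
	if (boundary)
		blk_queue_virt_boundary(q, boundary);
}
```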

- 29 Aug 2019, 1 commit

Committed by Christoph Hellwig
The memory allocated for the atomic pool needs to have the same mapping attributes that we use for remapping, so use pgprot_dmacoherent instead of open-coding it. Also deduce a suitable zone to allocate the memory from based on the presence of the DMA zones.

Signed-off-by: Christoph Hellwig <hch@lst.de>

- 22 Aug 2019, 1 commit

Committed by Christoph Hellwig
No users left.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20190816062435.881-6-hch@lst.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 23 Jul 2019, 1 commit

Committed by Eric Auger
We currently have cases where dma_addressing_limited() gets called with dma_mask unset, which causes a NULL pointer dereference. Use the dma_get_mask() accessor to prevent the crash.

Fixes: b8664554 ("dma-mapping: add a dma_addressing_limited helper")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 17 Jul 2019, 1 commit

Committed by Christoph Hellwig
This helper returns whether the device has issues addressing all present memory in the system.

Signed-off-by: Christoph Hellwig <hch@lst.de>
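
After the two later fixes listed above (the dma_get_mask() accessor and the bus_dma_limit rename), the helper boils down to roughly:

```c
/* True when the device cannot reach all memory present in the system. */
static inline bool dma_addressing_limited(struct device *dev)
{
	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
		dma_get_required_mask(dev);
}
```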

- 10 Jul 2019, 1 commit

Committed by Christoph Hellwig
These days, the DMA mapping code must bounce buffers for any unsupported address. If the driver needs to optimize for natively supported ranges, then it should use dma_get_required_mask.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

- 08 Apr 2019, 1 commit

Committed by Christoph Hellwig
Most dma_map_ops implementations already had some issues with a NULL device, or simply crashed if one was fed to them. Now that we have cleaned up all the obvious offenders, we can stop pretending we support this mode.

Signed-off-by: Christoph Hellwig <hch@lst.de>

- 07 Mar 2019, 1 commit

Committed by Joerg Roedel
The function returns the maximum size that can be mapped using DMA-API functions. The patch also adds the implementation for direct DMA and a new dma_map_ops pointer so that other implementations can expose their limit.

Cc: stable@vger.kernel.org
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
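
A sketch of the intended use, e.g. a storage driver clamping its maximum transfer size (the helper name is made up):

```c
#include <linux/dma-mapping.h>
#include <linux/kernel.h>

static size_t example_max_transfer(struct device *dev, size_t wanted)
{
	/* E.g. bounded by the swiotlb bounce-buffer size on some systems. */
	return min_t(size_t, wanted, dma_max_mapping_size(dev));
}
```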

- 20 Feb 2019, 2 commits

Committed by Christoph Hellwig
All users of dma_declare_coherent want their allocations to be exclusive, so default to exclusive allocations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Committed by Christoph Hellwig
This API is not used anywhere, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>