- 22 Nov 2019, 1 commit
-
-
Committed by Nicolas Saenz Julienne
Using a mask to represent bus DMA constraints has a set of limitations. The biggest one is that it can only hold a power of two (minus one). The DMA mapping code is already aware of this and treats dev->bus_dma_mask as a limit. This quirk is already used by some architectures, although it is still rare. With the introduction of the Raspberry Pi 4 we've found a new contender for the use of bus DMA limits, as its PCIe bus can only address the lower 3GB of memory (of a total of 4GB). This is impossible to represent with a mask. To make things worse, the device-tree code rounds non-power-of-two bus DMA limits up to the next power of two, which is unacceptable in this case. In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all over the tree and treat it as such. Note that dev->bus_dma_limit should contain the highest accessible DMA address.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
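As a reading aid, here is a minimal sketch assuming the semantics described above (bus_dma_limit holds the highest reachable address, and an unset limit of zero means "no restriction"); the helper name is invented and this is not the kernel's actual implementation:

```c
#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: check that a DMA range stays below both the device
 * mask and the bus limit.  min_not_zero() treats an unset (zero)
 * bus_dma_limit as "no additional restriction", so a non-power-of-two
 * value such as a 3GB PCIe window can be expressed directly.
 */
static bool example_fits_bus_limit(struct device *dev, dma_addr_t addr,
                                   size_t size)
{
        u64 end = (u64)addr + size - 1;

        return end <= min_not_zero(dma_get_mask(dev), dev->bus_dma_limit);
}
```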
-
- 31 Oct 2019, 2 commits
-
-
Committed by Kees Cook
As we've seen from USB and other areas [1], we need to always do runtime checks for DMA operating on memory regions that might be remapped. This adds vmap checks (similar to those already in USB but missing in other places) into dma_map_single() so all callers benefit from the checking.

[1] https://git.kernel.org/linus/3840c5b78803b2b6cc1ff820100a74a092c40cbb

Suggested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
[hch: fixed the printk message]
Signed-off-by: Christoph Hellwig <hch@lst.de>
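A hedged sketch of the class of check described above, wrapped in a driver-side helper of our own naming (the commit itself puts the check inside dma_map_single()):

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

/*
 * Illustrative only: refuse vmalloc()/vmap() addresses before they reach
 * the streaming DMA API, mirroring the runtime check described above.
 */
static dma_addr_t example_map_single_checked(struct device *dev, void *ptr,
                                             size_t size,
                                             enum dma_data_direction dir)
{
        if (WARN_ON_ONCE(is_vmalloc_addr(ptr)))
                return DMA_MAPPING_ERROR;       /* remapped memory is not DMA-able */

        return dma_map_single(dev, ptr, size, dir);
}
```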
-
Committed by Vladimir Murzin
Daniele reported that an issue previously fixed in c41f9ea9 ("drivers: dma-coherent: Account dma_pfn_offset when used with device tree") reappeared shortly after 43fc509c ("dma-coherent: introduce interface for default DMA pool"), where the fix was accidentally dropped. Let's put the fix back in place and respect dma-ranges for reserved memory.

Fixes: 43fc509c ("dma-coherent: introduce interface for default DMA pool")
Reported-by: Daniele Alessandrelli <daniele.alessandrelli@gmail.com>
Tested-by: Daniele Alessandrelli <daniele.alessandrelli@gmail.com>
Tested-by: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 04 Sep 2019, 5 commits
-
-
Committed by Christoph Hellwig
A helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere instead of slightly different or missing ones in a few places.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
Currently the generic dma remap allocator gets a vm_flags passed by the caller that is a little confusing. We just introduced a generic vmalloc-level flag to identify the dma coherent allocations, so use that everywhere and remove the now pointless argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
This function is entirely unused given that declared memory is generally provided by platform setup code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version, so remove the third way of doing things.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
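For context, a short sketch of the two remaining spellings of a write-combined coherent allocation; the wrapper names are illustrative and a real driver would pick one of the two calls:

```c
#include <linux/gfp.h>
#include <linux/dma-mapping.h>

/* Illustrative only: the _wc convenience wrapper... */
static void *example_alloc_wc(struct device *dev, size_t size,
                              dma_addr_t *handle)
{
        return dma_alloc_wc(dev, size, handle, GFP_KERNEL);
}

/* ...or the attrs variant with DMA_ATTR_WRITE_COMBINE spelled out. */
static void *example_alloc_wc_attrs(struct device *dev, size_t size,
                                    dma_addr_t *handle)
{
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                               DMA_ATTR_WRITE_COMBINE);
}
```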
-
Committed by Christoph Hellwig
Add a helper to check if DMA allocations for a specific device can be mapped to userspace using dma_mmap_*.

Signed-off-by: Christoph Hellwig <hch@lst.de>
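A sketch of how a driver's mmap path might use the new helper; the function name is made up and buf/dma_handle are assumed to come from dma_alloc_coherent():

```c
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/* Illustrative only: bail out early if the DMA backend cannot mmap. */
static int example_mmap(struct device *dev, struct vm_area_struct *vma,
                        void *buf, dma_addr_t dma_handle, size_t size)
{
        if (!dma_can_mmap(dev))
                return -ENXIO;  /* no userspace mapping possible */

        return dma_mmap_coherent(dev, vma, buf, dma_handle, size);
}
```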
-
- 03 Sep 2019, 1 commit
-
-
Committed by Yoshihiro Shimoda
This patch adds a new DMA API "dma_get_merge_boundary". This function returns the DMA merge boundary if the DMA layer can merge the segments. This patch also adds the implementation for a new dma_map_ops pointer.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
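A hedged usage sketch, assuming (per the description above) that a return value of 0 means the DMA layer cannot merge segments for this device:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Illustrative only: report whether and how far segments can be merged. */
static void example_report_merge_boundary(struct device *dev)
{
        unsigned long boundary = dma_get_merge_boundary(dev);

        if (boundary)
                dev_info(dev, "segments mergeable, boundary %#lx\n", boundary);
        else
                dev_info(dev, "DMA layer cannot merge segments\n");
}
```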
-
- 29 Aug 2019, 1 commit
-
-
Committed by Christoph Hellwig
The memory allocated for the atomic pool needs to have the same mapping attributes that we use for remapping, so use pgprot_dmacoherent instead of open coding it. Also deduce a suitable zone to allocate the memory from based on the presence of the DMA zones.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 22 Aug 2019, 1 commit
-
-
Committed by Christoph Hellwig
No users left.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20190816062435.881-6-hch@lst.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 23 Jul 2019, 1 commit
-
-
Committed by Eric Auger
We currently have cases where dma_addressing_limited() gets called with dma_mask unset. This causes a NULL pointer dereference. Use the dma_get_mask() accessor to prevent the crash.

Fixes: b8664554 ("dma-mapping: add a dma_addressing_limited helper")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 17 Jul 2019, 1 commit
-
-
Committed by Christoph Hellwig
This helper returns whether the device has issues addressing all present memory in the system.

Signed-off-by: Christoph Hellwig <hch@lst.de>
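A minimal sketch of the kind of decision this helper enables; the fallback policy in the branch is a made-up example, not something from the commit:

```c
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: scale back a hypothetical per-segment size when the
 * device cannot address all of system memory.
 */
static void example_tune_limits(struct device *dev, unsigned int *max_seg)
{
        if (dma_addressing_limited(dev))
                *max_seg = PAGE_SIZE;   /* conservative, hypothetical policy */
}
```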
-
- 10 Jul 2019, 1 commit
-
-
Committed by Christoph Hellwig
These days, the DMA mapping code must bounce buffers for any unsupported address. If the driver needs to optimize for natively supported ranges, then it should use dma_get_required_mask.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
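A sketch of the recommended pattern, treating the required mask purely as an optimisation hint (the helper name and debug message are illustrative):

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: the core bounces unsupported addresses anyway, so
 * dma_get_required_mask() is consulted only to pick a faster path.
 */
static void example_pick_addressing(struct device *dev)
{
        if (dma_get_required_mask(dev) > DMA_BIT_MASK(32))
                dev_dbg(dev, "64-bit DMA addressing would avoid bouncing\n");
}
```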
-
- 08 Apr 2019, 1 commit
-
-
Committed by Christoph Hellwig
Most dma_map_ops implementations already had some issues with a NULL device, or simply crashed if one was fed to them. Now that we have cleaned up all the obvious offenders we can stop pretending we support this mode.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 07 Mar 2019, 1 commit
-
-
Committed by Joerg Roedel
The function returns the maximum size that can be mapped using DMA-API functions. The patch also adds the implementation for direct DMA and a new dma_map_ops pointer so that other implementations can expose their limit.

Cc: stable@vger.kernel.org
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
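A hedged sketch of typical use: clamp a driver-specific request size (max_req_size is a hypothetical driver limit) to what the DMA layer can map in one call:

```c
#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/* Illustrative only: never build requests larger than one mappable chunk. */
static size_t example_clamp_request(struct device *dev, size_t max_req_size)
{
        return min_t(size_t, max_req_size, dma_max_mapping_size(dev));
}
```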
-
- 20 Feb 2019, 3 commits
-
-
Committed by Christoph Hellwig
All users of dma_declare_coherent want their allocations to be exclusive, so default to exclusive allocations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Christoph Hellwig
This API is not used anywhere, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
This API is primarily used through DT entries, but two architectures and two drivers call it directly. So instead of selecting the config symbol for random architectures pull it in implicitly for the actual users. Also rename the Kconfig option to describe the feature better.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com> # MIPS
Acked-by: Lee Jones <lee.jones@linaro.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 14 Feb 2019, 2 commits
-
-
Committed by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
-
Committed by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com> # MIPS
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
-
- 01 Feb 2019, 2 commits
-
-
Committed by Christoph Hellwig
Use WARN_ON_ONCE to print a stack trace and return a proper error code instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Committed by Christoph Hellwig
Instead provide a proper implementation in the direct mapping code, and also wire it up for arm and powerpc, leaving an error return for all the IOMMU or virtual mapping instances for which we'd have to wire up an actual implementation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 08 Jan 2019, 1 commit
-
-
Committed by Luis Chamberlain
dma_zalloc_coherent() is no longer needed as it has no users, because dma_alloc_coherent() already zeroes out memory for us. The Coccinelle grammar rule that used to check for dma_alloc_coherent() + memset() is modified so that it just tells the user that the memset is not needed anymore.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
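A sketch of the resulting pattern, relying only on the behaviour stated above (dma_alloc_coherent() returns zeroed memory); the function name is illustrative:

```c
#include <linux/gfp.h>
#include <linux/dma-mapping.h>

/* Illustrative only: no dma_zalloc_coherent() and no follow-up memset(). */
static void *example_alloc_ring(struct device *dev, size_t size,
                                dma_addr_t *handle)
{
        return dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
}
```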
-
- 04 Jan 2019, 4 commits
-
-
Committed by Christoph Hellwig
This avoids link failures in drivers using the DMA API, when they are compiled for user mode Linux with CONFIG_COMPILE_TEST=y.

Fixes: 356da6d0 ("dma-mapping: bypass indirect calls for dma-direct")
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
These functions have never been used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
dmam_alloc_coherent is just the default no-flags case of dmam_alloc_attrs, so take advantage of this similar to the non-managed version.

Signed-off-by: Christoph Hellwig <hch@lst.de>
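A hedged sketch of the equivalence described above (not the kernel's literal implementation):

```c
#include <linux/dma-mapping.h>

/* Illustrative only: the managed coherent alloc is the no-attrs case. */
static inline void *example_dmam_alloc(struct device *dev, size_t size,
                                       dma_addr_t *handle, gfp_t gfp)
{
        return dmam_alloc_attrs(dev, size, handle, gfp, 0);
}
```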
-
Committed by Christoph Hellwig
And also switch the way we implement the unmap side around to stay consistent. This ensures dma-debug works again, because it records which function we used for mapping to ensure it is also used for unmapping, and also reduces further code duplication. Last but not least, this also officially allows calling dma_sync_single_* for mappings created using dma_map_page, which is perfectly fine given that the sync calls only take a dma_addr_t, but not a virtual address or struct page.

Fixes: 7f0fee24 ("dma-mapping: merge dma_unmap_page_attrs and dma_unmap_single_attrs")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: LABBE Corentin <clabbe.montjoie@gmail.com>
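A sketch of the combination this officially allows; the function name and DMA direction are illustrative:

```c
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: a mapping created with dma_map_page() synced via
 * dma_sync_single_*(), which only needs the dma_addr_t.
 */
static void example_rx_complete(struct device *dev, struct page *page,
                                size_t size)
{
        dma_addr_t addr = dma_map_page(dev, page, 0, size, DMA_FROM_DEVICE);

        if (dma_mapping_error(dev, addr))
                return;

        dma_sync_single_for_cpu(dev, addr, size, DMA_FROM_DEVICE);
        dma_unmap_page(dev, addr, size, DMA_FROM_DEVICE);
}
```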
-
- 23 Dec 2018, 1 commit
-
-
Committed by Christoph Hellwig
We really need the writecombine flag in dma_alloc_wc, fix a stupid oversight.

Fixes: 7ed1d91a ("dma-mapping: translate __GFP_NOFAIL to DMA_ATTR_NO_WARN")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 Dec 2018, 1 commit
-
-
Committed by Christoph Hellwig
We now always return zeroed memory from dma_alloc_coherent. Note that simply passing GFP_ZERO to dma_alloc_coherent wasn't always doing the right thing to start with, given that various allocators are not backed by the page allocator and thus would ignore GFP_ZERO.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 14 Dec 2018, 6 commits
-
-
Committed by Christoph Hellwig
Avoid expensive indirect calls in the fast path DMA mapping operations by directly calling the dma_direct_* ops if we are using the directly mapped DMA operations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
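An illustrative sketch of the dispatch pattern, simplified and not the kernel's exact code; it assumes that a NULL ops pointer means "use dma-direct":

```c
#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: call the direct implementation without an indirect
 * branch, and fall back to the ops table for IOMMU and other backends.
 */
static dma_addr_t example_map_page(struct device *dev, struct page *page,
                                   unsigned long offset, size_t size,
                                   enum dma_data_direction dir,
                                   unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (!ops)       /* dma-direct case in this sketch */
                return dma_direct_map_page(dev, page, offset, size, dir, attrs);

        return ops->map_page(dev, page, offset, size, dir, attrs);
}
```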
-
Committed by Robin Murphy
The dummy DMA ops are currently used by arm64 for any device which has an invalid ACPI description and is thus barred from using DMA due to not knowing whether it is cache-coherent or not. Factor these out into general dma-mapping code so that they can be referenced from other common code paths. In the process, we can prune all the optional callbacks which just do the same thing as the default behaviour, and fill in .map_resource for completeness.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: moved to a separate source file]
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
This isn't exactly a slow path routine, but it is not super critical either, and moving it out of line will help to keep the include chain clean for the following DMA indirection bypass work.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
-
Committed by Christoph Hellwig
There is no need to have all setup and coherent allocation / freeing routines inline. Move them out of line to keep the implementation nicely encapsulated and save some kernel text size.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
-
Committed by Christoph Hellwig
The two functions are exactly the same, so don't bother implementing them twice.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
-
Committed by Christoph Hellwig
We can just call the regular calls after adding the offset to the address instead of reimplementing them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
-
- 06 Dec 2018, 3 commits
-
-
Committed by Christoph Hellwig
Currently dma_mapping_error returns a boolean as int, with 1 meaning error. This is rather unusual and many callers have to convert it to an errno value. The callers are highly inconsistent, with error codes ranging from -ENOMEM through -EIO, -EINVAL and -EFAULT to -EAGAIN. Return -ENOMEM, which seems to be what the largest number of callers convert it to, and which also matches the typical error case where we are out of resources.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
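A sketch of the simplified error handling this enables (the wrapper name is ours):

```c
#include <linux/dma-mapping.h>

/* Illustrative only: propagate dma_mapping_error() directly as an errno. */
static int example_map_buffer(struct device *dev, void *buf, size_t len,
                              dma_addr_t *out)
{
        dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        int ret = dma_mapping_error(dev, addr); /* now 0 or -ENOMEM */

        if (ret)
                return ret;

        *out = addr;
        return 0;
}
```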
-
Committed by Christoph Hellwig
No users left except for vmd, which just forwards it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Christoph Hellwig
Error handling of the dma_map_single and dma_map_page APIs is a little problematic at the moment, in that we use different encodings in the returned dma_addr_t to indicate an error. That means we require an additional indirect call to figure out if a dma mapping call returned an error, and a lot of boilerplate code to implement these semantics. Instead return the maximum addressable value as the error. As long as we don't allow mapping single-byte ranges with single-byte alignment this value can never be a valid return. Additionally, if drivers do not check the return value from the dma_map* routines, this value means they will generally not be pointed at actual memory. Once the default value is added here we can start removing the various mapping_error methods and just rely on this generic check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 Dec 2018, 1 commit
-
-
Committed by Christoph Hellwig
The arm64 codebase to implement coherent dma allocation for architectures with non-coherent DMA is a good start for a generic implementation, given that it uses the generic remap helpers, provides the atomic pool for allocations that can't sleep, and still is relatively simple and well tested. Move it to kernel/dma and allow architectures to opt into it using a config symbol. Architectures just need to provide a new arch_dma_prep_coherent helper to write back and invalidate the caches for any memory that gets remapped for uncached access.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
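A hedged skeleton of the per-architecture hook named above; the actual cache maintenance is architecture specific and is only indicated by a comment here:

```c
#include <linux/mm.h>
#include <linux/dma-noncoherent.h>

/*
 * Illustrative only: make the backing pages clean in RAM before they are
 * remapped for uncached access.
 */
void arch_dma_prep_coherent(struct page *page, size_t size)
{
        void *addr = page_address(page);

        /* write back and invalidate [addr, addr + size) with the
         * architecture's own cache maintenance routine here */
        (void)addr;
}
```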
-