- 28 May 2018, 4 commits
-
-
Submitted by Christoph Hellwig
This is something drivers should decide (modulo chipset quirks like the one for VIA), which as far as I can tell is how things have been handled for the last 15 years. Note that we keep the usedac option for now, as it is used in the wild to override the too generic VIA quirk. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Christoph Hellwig
Limiting the dma mask to avoid PCI (pre-PCIe) DAC cycles while paying the huge overhead of an IOMMU is rather pointless, and this seriously gets in the way of dma mapping work. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Christoph Hellwig
This is just the minimal workaround. The file is mostly stale and/or duplicative of Documentation/admin-guide/kernel-parameters.txt, but fixing that is much more work than I'm willing to do right now. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Christoph Hellwig
Various PCI bridges (VIA PCI, Xilinx PCIe) limit DMA to only 32 bits even if the device itself supports more. Add a single-bit flag to struct device (to be moved into the dma extension once we get to it) to flag such devices and reject larger DMA to them. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
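A minimal sketch of how such a flag could be enforced; the placement in a dma_set_mask()-style helper and the exact check are assumptions for illustration, not the commit's literal diff:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Sketch: reject masks wider than 32 bits when a bridge above the
     * device cannot address more than that (dma_32bit_limit as described). */
    int example_dma_set_mask(struct device *dev, u64 mask)
    {
            if (!dev->dma_mask || !dma_supported(dev, mask))
                    return -EIO;
            if (dev->dma_32bit_limit && mask > DMA_BIT_MASK(32))
                    return -EIO;
            *dev->dma_mask = mask;
            return 0;
    }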
-
- 25 May 2018, 1 commit
-
-
Submitted by Huaisheng Ye
Signed-off-by: Huaisheng Ye <yehs1@lenovo.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 24 May 2018, 1 commit
-
-
Submitted by Robin Murphy
Drivers/subsystems creating scatterlists for DMA should take care to respect the scatter-gather limitations of the appropriate device, as described by dma_parms. A DMA API implementation cannot feasibly split a scatterlist into *more* entries than originally passed, so it is not well defined what it should do when given a segment larger than the limit it is also required to respect. Conversely, devices which are less limited than the rather conservative defaults, or indeed have no limitations at all (e.g. GPUs with their own internal MMU), should be encouraged to set appropriate dma_parms, as they may get more efficient DMA mapping performance out of it. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
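A hedged sketch of the encouraged usage (the probe function and the UINT_MAX/ULONG_MAX values are illustrative; dev->dma_parms must already point at a struct device_dma_parameters, as the PCI core arranges for PCI devices):

    #include <linux/dma-mapping.h>

    /* Sketch: a device with its own internal MMU advertising that it has
     * no real scatter-gather limitations. */
    static int example_probe(struct device *dev)
    {
            /* Lift the conservative 64K default max segment size. */
            if (dma_set_max_seg_size(dev, UINT_MAX))
                    return -EIO;
            /* No segment-boundary crossing restriction either. */
            if (dma_set_seg_boundary(dev, ULONG_MAX))
                    return -EIO;
            return 0;
    }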
-
- 19 May 2018, 10 commits
-
-
Submitted by Christoph Hellwig
Switch to the generic noncoherent direct mapping implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Mark Salter <msalter@redhat.com>
-
Submitted by Christoph Hellwig
Switch to the generic noncoherent direct mapping implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Christoph Hellwig
These functions should perform the same cache synchronization as calling arc_dma_sync_single_for_{cpu,device}, in addition to doing any required address translation or mapping [1]. Ensure they actually do that by calling arc_dma_sync_single_for_{cpu,device} instead of passing the dir argument along to _dma_cache_sync. The now unused _dma_cache_sync function is removed as well. [1] In fact various drivers rely on this by passing DMA_ATTR_SKIP_CPU_SYNC to the map/unmap routines and doing the cache synchronization manually. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
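A minimal sketch of the pattern described, assuming the ARC convention that the DMA address equals the physical address; an illustration, not the commit's literal diff:

    #include <linux/dma-mapping.h>

    /* Arch-internal helper, assumed from the commit message. */
    static void arc_dma_sync_single_for_device(struct device *dev,
                    dma_addr_t handle, size_t size,
                    enum dma_data_direction dir);

    /* Sketch: map_page does its address translation and then reuses the
     * sync_single_for_device path for the cache maintenance, honoring
     * DMA_ATTR_SKIP_CPU_SYNC for drivers that sync manually. */
    static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
                    unsigned long offset, size_t size,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            phys_addr_t paddr = page_to_phys(page) + offset;

            if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                    arc_dma_sync_single_for_device(dev, paddr, size, dir);
            return paddr;   /* dma_addr_t == phys_addr_t on ARC */
    }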
-
Submitted by Christoph Hellwig
These functions should provide the same functionality as calling arc_dma_sync_single_for_{cpu,device} on each S/G list element. Ensure they actually do that by calling arc_dma_sync_single_for_{cpu,device}; otherwise we could be passing a different dir argument. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Christoph Hellwig
Remove the indirection through _dma_cache_sync. Also move the functions up a bit in the source file, as we'll need them in more places soon. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Christoph Hellwig
Add a new dma_map_ops implementation that uses dma-direct for the address mapping of streaming mappings and requires arch-specific implementations of coherent allocate/free. Architectures have to provide flushing helpers for ownership transfers to the device and/or CPU, and can provide optional implementations of the coherent mmap functionality and the cache_flush routines for non-coherent long term allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
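A sketch of the contract this describes; the hook prototypes and the wrapper are assumptions based on the description, not the exact code:

    #include <linux/dma-direct.h>
    #include <linux/dma-mapping.h>

    /* Arch-provided flushing helpers for ownership transfers (assumed
     * prototypes). */
    void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                    size_t size, enum dma_data_direction dir);
    void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                    size_t size, enum dma_data_direction dir);

    /* Sketch: the generic ops let dma-direct do the address mapping and
     * only add the ownership-transfer flush on top. */
    static dma_addr_t noncoherent_map_page(struct device *dev,
                    struct page *page, unsigned long offset, size_t size,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            dma_addr_t addr;

            addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
            if (!dma_mapping_error(dev, addr) &&
                !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                    arch_sync_dma_for_device(dev,
                                    page_to_phys(page) + offset, size, dir);
            return addr;
    }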
-
Submitted by Christoph Hellwig
ARCH_DMA_ADDR_T_64BIT is always true for 64-bit architectures now, so we can skip the clause requiring it. 'n' is the default anyway, so there is no need to state it explicitly. Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
All RISC-V platforms today lack an IOMMU. However, legacy PCI devices sometimes require DMA memory to be in the low 32 bits. To make this work, we enable the software-based bounce buffers from swiotlb. They only impose overhead when the device in question cannot address the full 64-bit address space, so they are a perfect fit. This patch assumes that DMA is coherent with the processor and the PCI bus. It also assumes that the processor and devices share a common address space. This is true for all RISC-V platforms so far. [changelog stolen from an earlier patch by Palmer Dabbelt that did the more complicated swiotlb wireup before the recent consolidation] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
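The arch-side wiring this implies is small; a sketch, assuming the swiotlb_init() interface of the time (its argument requests verbose reporting of the bounce pool placement):

    #include <linux/swiotlb.h>

    /* Sketch: allocate the swiotlb bounce pool during early setup so
     * 32-bit-limited PCI devices can always be reached. */
    void __init setup_arch(char **cmdline_p)
    {
            /* ... existing RISC-V setup ... */
            swiotlb_init(1);
    }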
-
Submitted by Christoph Hellwig
Until we actually support >32-bit physical addresses on 32-bit kernels using highmem, there is no point in enabling ZONE_DMA32. And even if such support is ever added, it should probably be conditional so as not to burden low-end embedded devices. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
-
Submitted by Christoph Hellwig
We can deduce this directly using a select from ARCH_RV32I/ARCH_RV64I. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
-
- 09 May 2018, 14 commits
-
-
Submitted by Yisheng Xie
swiotlb uses the physical address of the bounce buffer when doing map and unmap; update the related comments accordingly. Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
swiotlb now selects the DMA_DIRECT_OPS config symbol, so this will always be true. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
This way we have one central definition of it, and users can select it as needed. The new option is not user visible, which is the behavior it had in most architectures, with a few notable exceptions:
- On x86_64 and mips/loongson3 it used to be user selectable, but defaulted to y. It is now unconditional, which seems like the right thing for 64-bit architectures without guaranteed availability of IOMMUs.
- On powerpc the symbol is user selectable and defaults to n, but many boards select it. This change assumes no working setup required a manual selection, but if that turns out to be wrong we'll have to add another select statement or two for the respective boards.
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
Only mips and unicore32 select CONFIG_NEED_SG_DMA_LENGTH when building swiotlb. swiotlb itself never merges segments and doesn't access the dma_length field directly, so drop the dependency. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
-
Submitted by Christoph Hellwig
swiotlb is only used as a library of helpers for xen-swiotlb if Xen support is enabled on arm, so don't build it by default. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
This symbol is now always identical to CONFIG_ARCH_DMA_ADDR_T_64BIT, so remove it. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Bjorn Helgaas <bhelgaas@google.com>
-
Submitted by Christoph Hellwig
Define this symbol if the architecture either uses 64-bit pointers or PHYS_ADDR_T_64BIT is set. This covers 95% of the old arch magic. We only need an additional select for Xen on ARM (why anyway?), and we now always set ARCH_DMA_ADDR_T_64BIT on mips boards with 64-bit physical addressing instead of only doing it when highmem is set. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
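For context, the symbol's effect is to size dma_addr_t, as in <linux/types.h>:

    /* ARCH_DMA_ADDR_T_64BIT selects a 64-bit dma_addr_t; otherwise it
     * stays 32 bits wide. */
    #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
    typedef u64 dma_addr_t;
    #else
    typedef u32 dma_addr_t;
    #endif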
-
Submitted by Christoph Hellwig
Instead, select PHYS_ADDR_T_64BIT directly for 32-bit architectures that need a 64-bit phys_addr_t type. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
-
Submitted by Christoph Hellwig
This way we have one central definition of it, and users can select it as needed. Note that we now also always select it when CONFIG_DMA_API_DEBUG is selected, which fixes some incorrect checks in a few network drivers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
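For reference, NEED_DMA_MAP_STATE governs whether the dma-unmap state macros actually store anything; a hedged sketch of the driver-side pattern (the struct and function names are illustrative):

    #include <linux/dma-mapping.h>

    /* Illustrative ring entry: the macros expand to real fields only when
     * NEED_DMA_MAP_STATE is set, which now includes DMA_API_DEBUG builds. */
    struct example_ring_entry {
            DEFINE_DMA_UNMAP_ADDR(mapping);
            DEFINE_DMA_UNMAP_LEN(len);
    };

    static void example_save_state(struct example_ring_entry *e,
                    dma_addr_t addr, size_t len)
    {
            dma_unmap_addr_set(e, mapping, addr);
            dma_unmap_len_set(e, len, len);
    }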
-
Submitted by Christoph Hellwig
This way we have one central definition of it, and users can select it as needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
Submitted by Christoph Hellwig
This way we have one central definition of it, and users can select it as needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
Submitted by Christoph Hellwig
This avoids selecting IOMMU_HELPER just for this function. And since we only use it once or twice in normal builds, this is often even a size reduction. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
This function is only used by built-in code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
Submitted by Christoph Hellwig
This code is only used by sparc, and all new iommu drivers should use the drivers/iommu/ framework. Also remove the unused exports. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
- 08 May 2018, 4 commits
-
-
Submitted by Christoph Hellwig
There is no arch-specific code required for dma-debug, so there is no need to opt into the support either. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Submitted by Christoph Hellwig
Only used by the AMD GART and Intel VT-d drivers, which must be built in. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Submitted by Christoph Hellwig
Just keep a single variable with a descriptive name instead of two with confusing names. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Submitted by Christoph Hellwig
Most mainstream architectures are using 65536 entries, so let's stick to that. If someone is really desperate to override it, that can still be done through <asm/dma-mapping.h>, but I'd rather see a really good rationale for that. dma_debug_init is now called as a core_initcall, which for many architectures means much earlier, and provides dma-debug functionality earlier in the boot process. This should be safe, as it only relies on the memory allocator already being available. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
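A sketch of the shape this describes; the macro name and initcall wiring are assumed from the description rather than copied from the patch:

    #include <linux/dma-debug.h>
    #include <linux/init.h>

    /* One common preallocation count instead of per-arch overrides. */
    #define PREALLOC_DMA_DEBUG_ENTRIES 65536

    /* Run from core_initcall so dma-debug is live early in boot; only the
     * memory allocator needs to be up by then. */
    static int __init example_dma_debug_init(void)
    {
            dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
            return 0;
    }
    core_initcall(example_dma_debug_init);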
-
- 07 May 2018, 6 commits
-
-
Submitted by Christoph Hellwig
This was used by the ide, scsi and networking code in the past to determine if they should bounce payloads. Now that the dma mapping code always has to support dma to all physical memory (thanks to swiotlb for non-iommu systems), there is no need for this crude hack any more. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv) Reviewed-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
These days the dma mapping routines must be able to handle any address supported by the device, be that by using an iommu or by swiotlb if no iommu is present. With that, the PCI_DMA_BUS_IS_PHYS check in illegal_highdma is not needed and can be removed. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net>
-
Submitted by Christoph Hellwig
We now have ways to deal with draining in the block layer, and libata has been using them for ages. We also want to get rid of PCI_DMA_BUS_IS_PHYS now, so just reduce the PCI transfer size for ide - anyone who cares about performance on PCI controllers should have switched to libata long ago. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
ide_toggle_bounce selected various strange block bounce limits, including not bouncing at all as soon as an iommu is present in the system. Given that the dma_map routines now handle any required bounce buffering except for ISA DMA, and the ide code already must handle either ISA DMA or highmem at least for iommu-equipped systems, we can get rid of the block layer bounce limit setting entirely. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
We can rely on the dma-mapping code to handle any DMA limit that is bigger than the ISA DMA mask for us (either using an iommu or swiotlb), so remove the block layer bounce limit setting for everything except the unchecked_isa_dma case and the bouncing of highmem pages. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jens Axboe <axboe@kernel.dk>
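What remains afterwards might look like this sketch (the function name is illustrative; BLK_BOUNCE_ISA/BLK_BOUNCE_HIGH are per the block API of the time):

    #include <linux/blkdev.h>
    #include <scsi/scsi_host.h>

    /* Sketch: only ISA-DMA hosts force ISA bouncing; otherwise keep the
     * highmem bounce behavior and leave the rest to dma-mapping. */
    static void example_setup_bounce(struct request_queue *q,
                    struct Scsi_Host *shost)
    {
            if (shost->unchecked_isa_dma)
                    blk_queue_bounce_limit(q, BLK_BOUNCE_ISA);
            else
                    blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
    }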
-
Submitted by Takashi Iwai
As the recent swiotlb bug revealed, we seem to have given up on the direct DMA allocation too early and fallen back to the swiotlb allocation. The reason is that the swiotlb allocator expected dma_direct_alloc() to try harder to get pages below the 64-bit DMA mask with GFP_DMA32, but the function only deals with the GFP_DMA case. This patch adds a similar fallback reallocation with GFP_DMA32, as we already do with GFP_DMA. The condition is that the coherent mask is smaller than 64 bits (i.e. there is some address limitation) and neither GFP_DMA nor GFP_DMA32 was set beforehand. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
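A simplified sketch of the described retry logic; dma_coherent_ok() is assumed from the dma-direct code of the time, and the CMA/page-order handling of the real dma_direct_alloc() is elided:

    #include <linux/dma-direct.h>
    #include <linux/gfp.h>

    /* Sketch: retry the page allocation with GFP_DMA32, then GFP_DMA, when
     * the pages we got do not fit the device's coherent DMA mask. */
    static struct page *alloc_coherent_pages_sketch(struct device *dev,
                    size_t size, gfp_t gfp)
    {
            unsigned int order = get_order(size);
            struct page *page;

    again:
            page = alloc_pages_node(dev_to_node(dev), gfp, order);
            if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
                    __free_pages(page, order);
                    page = NULL;

                    if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
                        dev->coherent_dma_mask < DMA_BIT_MASK(64) &&
                        !(gfp & (GFP_DMA32 | GFP_DMA))) {
                            gfp |= GFP_DMA32;       /* the new fallback */
                            goto again;
                    }
                    if (IS_ENABLED(CONFIG_ZONE_DMA) &&
                        dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
                        !(gfp & GFP_DMA)) {
                            gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
                            goto again;
                    }
            }
            return page;
    }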
-