- 04 August 2016, 1 commit
-
-
Committed by Krzysztof Kozlowski
The dma-mapping core and the implementations do not change the DMA attributes passed by pointer. Thus the pointer can point to const data. However the attributes do not have to be a bitfield. Instead unsigned long will do fine:

1. This is just simpler. Both in terms of reading the code and setting attributes. Instead of initializing local attributes on the stack and passing a pointer to them to dma_set_attr(), just set the bits.
2. It brings safeness and checking for const correctness because the attributes are passed by value.

Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;
    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
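A minimal sketch (not taken from the patch itself) of what the conversion means for a caller, assuming a hypothetical driver that maps a buffer with DMA_ATTR_SKIP_CPU_SYNC; the old struct dma_attrs style is kept in a comment for contrast:

```c
#include <linux/dma-mapping.h>

/* Before: attributes lived in a struct on the stack.
 *
 *	DEFINE_DMA_ATTRS(attrs);
 *	dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
 *	dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
 *
 * After: attributes are plain bits in an unsigned long.
 */
static dma_addr_t example_map(struct device *dev, void *buf, size_t len)
{
	return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE,
				    DMA_ATTR_SKIP_CPU_SYNC);
}
```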
-
- 28 March 2012, 1 commit
-
-
Committed by Andrzej Pietrasiewicz
Adapt core x86 and IA64 architecture code for dma_map_ops changes: replace alloc/free_coherent with generic alloc/free methods.

Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
[removed swiotlb related changes and replaced it with wrappers, merged with IA64 patch to avoid inter-patch dependences in intel-iommu code]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Tony Luck <tony.luck@intel.com>
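For orientation, a sketch of the ops-table change (the wrapper name is hypothetical and the field layout is as it looked around this series; treat the details as illustrative): the architecture code now exports generic alloc/free callbacks that also take DMA attributes, instead of alloc_coherent/free_coherent, and the swiotlb path is bridged with a thin wrapper as the merge note above describes:

```c
#include <linux/dma-mapping.h>
#include <linux/swiotlb.h>

/* Old entry points (dropped by this series):
 *   void *(*alloc_coherent)(struct device *, size_t, dma_addr_t *, gfp_t);
 *   void  (*free_coherent)(struct device *, size_t, void *, dma_addr_t);
 *
 * New generic entry point, wired up through a wrapper for swiotlb: */
static void *example_swiotlb_alloc(struct device *dev, size_t size,
				   dma_addr_t *dma_handle, gfp_t flags,
				   struct dma_attrs *attrs)
{
	return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
}

static struct dma_map_ops example_swiotlb_ops = {
	.alloc = example_swiotlb_alloc,
	/* .free, .map_page, ... : remaining entries elided in this sketch */
};
```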
-
- 28 May 2010, 1 commit
-
-
Committed by FUJITA Tomonori
sync_single_range_for_cpu and sync_single_range_for_device hooks in swiotlb_dma_ops are unnecessary because sync_single_for_cpu and sync_single_for_device are used there.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 30 March 2010, 1 commit
-
-
Committed by Tejun Heo
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
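As a tiny illustration of what the conversion means for an individual source file (a hypothetical file, not one from the patch): anything that calls kmalloc()/kfree() or uses GFP_* flags now pulls in slab.h/gfp.h itself rather than relying on the percpu.h -> slab.h chain.

```c
/* drivers/example/foo.c -- hypothetical user of slab and gfp facilities */
#include <linux/gfp.h>		/* GFP_KERNEL and friends */
#include <linux/slab.h>		/* kmalloc()/kfree(); no longer implicit via percpu.h */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>

static void *foo_buf;

static int __init foo_init(void)
{
	foo_buf = kmalloc(256, GFP_KERNEL);
	return foo_buf ? 0 : -ENOMEM;
}

static void __exit foo_exit(void)
{
	kfree(foo_buf);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");
```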
-
- 10 November 2009, 1 commit
-
-
Committed by FUJITA Tomonori
This enables us to avoid printing swiotlb memory info when we initialize swiotlb. After swiotlb initialization, we could find that we don't need swiotlb. This patch removes the code to print swiotlb memory info in swiotlb_init() and exports the function to do that.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
Cc: tony.luck@intel.com
Cc: benh@kernel.crashing.org
LKML-Reference: <1257849980-22640-9-git-send-email-fujita.tomonori@lab.ntt.co.jp>
[ -v2: merge up conflict ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 14 August 2009, 1 commit
-
-
Committed by David Woodhouse
Since commit 19943b0e ('intel-iommu: Unify hardware and software passthrough support'), hardware passthrough mode will do the same as software passthrough mode was doing -- it'll still use the IOMMU normally for devices which can't address all of memory. This means that we don't need to bother with swiotlb.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 29 April 2009, 1 commit
-
-
Committed by Fenghua Yu
The patch adds the kernel parameter intel_iommu=pt to set up pass through mode in the context mapping entry. This disables DMAR in the Linux kernel; but KVM still runs on VT-d and interrupt remapping still works.

In this mode, the kernel uses swiotlb for DMA API functions but other VT-d functionalities are enabled for KVM. KVM always uses multi level translation page table in VT-d. By default, pass through mode is disabled in the kernel.

This is useful when people don't want to enable VT-d DMAR in the kernel but still want to use KVM and interrupt remapping, for reasons like DMAR performance concerns or debug purposes.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Weidong Han <weidong@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
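Enabling it is purely a boot-time switch; as an illustration (assuming a GRUB-style bootloader and a kernel built with CONFIG_DMAR), the option named by the commit is simply appended to the kernel command line:

```
kernel /boot/vmlinuz ro root=/dev/sda1 intel_iommu=pt
```

Whether it needs to be combined with other intel_iommu= options depends on the configuration; treat the exact line as a sketch.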
-
- 14 April 2009, 1 commit
-
-
Committed by Yang Hongyang
This is the second pass through the old DMA_nBIT_MASK macros, and there are not so many of them left, so I put them into one patch. I hope this is the last round. After this, the definition of the old DMA_nBIT_MASK macro can be removed.

Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Greg KH <greg@kroah.com>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
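For readers unfamiliar with the two macro families: the fixed-width DMA_nBIT_MASK names were replaced by the parameterized DMA_BIT_MASK(n). A hypothetical driver conversion (illustrative, not from this patch) looks like:

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int example_probe(struct device *dev)
{
	/* Old, fixed-width spelling (removed after this series):
	 *     if (dma_set_mask(dev, DMA_32BIT_MASK)) ...
	 * New, parameterized spelling:
	 */
	if (dma_set_mask(dev, DMA_BIT_MASK(32)))
		return -EIO;	/* device cannot do 32-bit DMA on this platform */
	return 0;
}
```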
-
- 29 January 2009, 1 commit
-
-
Committed by FUJITA Tomonori
Before the dma ops unification, IA64 always uses GFP_DMA for dma_alloc_coherent, like:

    #define dma_alloc_coherent(dev, size, handle, gfp) \
            platform_dma_alloc_coherent(dev, size, handle, (gfp) | GFP_DMA)

This GFP_DMA enforcement doesn't make sense for IOMMUs since they can do address translation to give out addresses that devices can access. The IOMMU drivers ignore the zone flag.

However, this is still necessary for swiotlb since it can't do address translation. We don't always need to use GFP_DMA for swiotlb; we need GFP_DMA for devices incapable of 64bit DMA.

This patch is sorta an updated version of:

    http://marc.info/?l=linux-kernel&m=122638215612705&w=2

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
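A minimal sketch of the resulting policy (illustrative; the helper name is hypothetical and the real swiotlb allocation path has more cases): only force a ZONE_DMA allocation when the device's coherent mask cannot cover 64-bit addresses, because swiotlb hands back the buffer's physical address unmodified.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static gfp_t example_swiotlb_coherent_gfp(struct device *dev, gfp_t gfp)
{
	u64 mask = dev ? dev->coherent_dma_mask : DMA_BIT_MASK(32);

	/* No IOMMU translation: the buffer must already sit below the mask. */
	if (mask < DMA_BIT_MASK(64))
		gfp |= GFP_DMA;
	return gfp;
}
```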
-
- 27 January 2009, 1 commit
-
-
Committed by FUJITA Tomonori
This moves iommu_detected to arch/ia64/kernel/dma-mapping.c from arch/ia64/kernel/pci-swiotlb.c to fix the following error on IA64_DIG_VTD:

    arch/ia64/kernel/built-in.o: In function `pci_iommu_init':
    pci-dma.c:(.init.text+0xa021): undefined reference to `iommu_detected'
    pci-dma.c:(.init.text+0xa030): undefined reference to `iommu_detected'
    drivers/built-in.o: In function `detect_intel_iommu':
    (.init.text+0x11c0): undefined reference to `iommu_detected'
    drivers/built-in.o: In function `detect_intel_iommu':
    (.init.text+0x11e1): undefined reference to `iommu_detected'

iommu_detected is used to handle IOMMUs so I guess that arch/ia64/kernel/dma-mapping.c is OK (there might be a better place for it though).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 07 January 2009, 1 commit
-
-
Committed by Luck, Tony
Impact: Section fix

    WARNING: vmlinux.o(.text+0x596d2): Section mismatch in reference from the function swiotlb_dma_init() to the function .init.text:swiotlb_init()
    The function swiotlb_dma_init() references the function __init swiotlb_init().
    This is often because swiotlb_dma_init lacks a __init annotation or the annotation of swiotlb_init is wrong.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
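A sketch of the usual fix for this class of warning (function names reused from the message above; the body is hypothetical, and the swiotlb_init() signature has changed across kernel versions): the caller that only runs at boot gets its own __init annotation so its reference into .init.text is legal.

```c
#include <linux/init.h>
#include <linux/swiotlb.h>

/* Runs only during boot, so it may legitimately call the __init swiotlb_init(). */
static int __init swiotlb_dma_init(void)
{
	swiotlb_init();		/* no-argument form, as in kernels of this era */
	return 0;
}
```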
-
- 06 January 2009, 4 commits
-
-
Committed by FUJITA Tomonori
This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and removes IA64's and X86's swiotlb_map_page and swiotlb_unmap_page. This also removes the now unnecessary swiotlb_map_single, swiotlb_map_single_attrs, swiotlb_unmap_single and swiotlb_unmap_single_attrs.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by FUJITA Tomonori
This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring. The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in DMA mapping operations. The major changes for IA64 are using map_page and unmap_page instead of map_single and unmap_single.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
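To make the 'int dir' to 'enum dma_data_direction dir' part concrete, here is a hypothetical arch hook before and after (names and body are illustrative, not code from the patch):

```c
#include <linux/dma-mapping.h>

/* Before: the direction argument was an untyped int.
 *
 *   static void example_sync_single(struct device *dev, dma_addr_t addr,
 *                                   size_t size, int dir);
 *
 * After: the generic enum from include/linux/dma-mapping.h is used.
 */
static void example_sync_single(struct device *dev, dma_addr_t addr,
				size_t size, enum dma_data_direction dir)
{
	/* A real hook would write back / invalidate CPU caches here, keyed
	 * off DMA_TO_DEVICE / DMA_FROM_DEVICE / DMA_BIDIRECTIONAL. */
}
```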
-
Committed by FUJITA Tomonori
This patch introduces a global pointer, dma_ops, which points to an appropriate dma_mapping_ops that the kernel should use. This is a common way to handle multiple dma_mapping_ops (X86, POWER, and SPARC). dma_ops is set in platform_dma_init. We also set it by hand where machvec_init is called via subsys_initcall.

- IA64_DIG_VTD uses vtd_dma_ops.
- IA64_HP_ZX1 uses sba_dma_ops.
- IA64_HP_ZX1_SWIOTLB uses hwsw_dma_ops.
- IA64_SGI_SN2 uses sn_dma_ops.
- The rest use swiotlb_dma_ops.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
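A minimal sketch of the selection pattern being adopted (ops names are taken from the list above; the struct is left opaque and the function body is illustrative, not the real platform_dma_init):

```c
#include <linux/init.h>

/* Stand-in for the arch's ops table type of this era. */
struct dma_mapping_ops;

extern struct dma_mapping_ops swiotlb_dma_ops;	/* generic configs */
extern struct dma_mapping_ops sba_dma_ops;	/* IA64_HP_ZX1 */
extern struct dma_mapping_ops sn_dma_ops;	/* IA64_SGI_SN2 */

/* One global pointer; the dma-mapping wrappers dispatch through it. */
struct dma_mapping_ops *dma_ops;

void __init platform_dma_init(void)
{
	dma_ops = &swiotlb_dma_ops;	/* each platform picks its own table */
}
```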
-
Committed by FUJITA Tomonori
There is already dma_mapping_ops for SWIOTLB but there are some missing hooks. This is for IA64_DIG_VTD, IA64_HP_ZX1_SWIOTLB, IA64_SGI_UV, IA64_HP_SIM, IA64_XEN_GUEST and IA64_GENERIC.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 October 2008, 1 commit
-
-
Committed by Fenghua Yu
The patch contains Intel IOMMU IA64 specific code. It defines the new machvec dig_vtd, hooks for IOMMU, DMAR table detection, cache line flush function, etc.

For a generic kernel with CONFIG_DMAR=y, if an Intel IOMMU is detected, dig_vtd is used as the machine vector. Otherwise, the kernel falls back to the dig machine vector. The kernel parameter "machvec=dig" or "intel_iommu=off" can be used to force the kernel to boot with the dig machine vector.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 27 July 2008, 1 commit
-
-
Committed by FUJITA Tomonori
Add per-device dma_mapping_ops support for CONFIG_X86_64 as the POWER architecture does:

This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would also be helpful for KVM people to support PCI passthrough but Andi thinks that this makes it difficult to support the PCI passthrough (see the above thread). So I CC'ed this to the KVM camp. Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non NULL, DMA operations in asm/dma-mapping.h use it. If it's NULL, the system-wide dma_ops pointer is used as before.

If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new pci (or dma capable) device is created (it works with hot plugging). It enables IOMMUs to set up an appropriate dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the device unlike other DMA operations. So x86 can't have dma_mapping_ops per device. Note all the POWER IOMMUs use the same dma_mapping_error function so this is not a problem for POWER, but x86 IOMMUs use different dma_mapping_error functions.

The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architectures.

This patch:

dma_mapping_error() doesn't take a pointer to the device unlike other DMA operations. So we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device but all the POWER IOMMUs use the same dma_mapping_error function. x86 IOMMUs use the device argument.

[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
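A compact sketch of the lookup this enables (the type and global names follow the commit text; the helper is hypothetical, and the real per-device pointer lives in dev->archdata on x86_64 as the commit describes):

```c
/* Stand-in for the arch-specific ops table type of this era. */
struct dma_mapping_ops;

extern struct dma_mapping_ops *dma_ops;	/* system-wide default */

static inline struct dma_mapping_ops *
example_get_dma_ops(struct dma_mapping_ops *per_dev_ops)
{
	/* Prefer a per-device table (set up, e.g., by an IOMMU driver when
	 * the device is created); otherwise fall back to the global pointer,
	 * exactly as before this change. */
	return per_dev_ops ? per_dev_ops : dma_ops;
}
```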
-
- 11 July 2008, 1 commit
-
-
Committed by FUJITA Tomonori
gart.h has only GART-specific stuff. Only GART code needs it. Other IOMMU stuff should include iommu.h instead of gart.h.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 July 2008, 1 commit
-
-
Committed by Yinghai Lu
and use max_pfn directly.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 April 2008, 1 commit
-
-
Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 January 2008, 1 commit
-
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 October 2007, 1 commit
-
-
Committed by Joerg Roedel
This patch renames the include file asm-x86/iommu.h to asm-x86/gart.h to make clear to which IOMMU implementation it belongs. The patch also adds "GART" to the Kconfig line.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 October 2007, 2 commits
-
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 22 July 2007, 1 commit
-
-
Committed by Yinghai Lu
[akpm@linux-foundation.org: build fix]
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May 2007, 1 commit
-
-
Committed by Stephen Hemminger
The dma_ops structure can be const since it never changes after boot.

Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
- 06 February 2007, 1 commit
-
-
Committed by Jan Beulich
- add proper __init decoration to swiotlb's init code (and the code calling it, where not already the case)
- replace uses of 'unsigned long' with dma_addr_t where appropriate
- do miscellaneous simplification and cleanup

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 30 September 2006, 1 commit
-
-
Committed by Rolf Eike Beer
As suggested by Muli Ben-Yehuda, this function is moved to generic code as it may be useful for all archs.

[akpm@osdl.org: fix]
Signed-off-by: Rolf Eike Beer <eike-kernel@sf-tec.de>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 30 July 2006, 1 commit
-
-
Committed by Andi Kleen
It was broken before. But having it is important as a possible hardware bug workaround. And previously there was no way to force swiotlb if there is another IOMMU. Side effect is that iommu=force won't force swiotlb anymore even if there isn't another IOMMU.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 27 June 2006, 1 commit
-
-
Committed by Jon Mason
swiotlb relies on the gart specific iommu_aperture variable to know if we discovered a hardware IOMMU before swiotlb initialization. Introduce iommu_detected to do the same thing, but in a HW IOMMU neutral manner, in preparation for adding the Calgary HW IOMMU.

Signed-Off-By: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-Off-By: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
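A rough sketch of the handshake described here (the variable name comes from the commit; the surrounding functions are illustrative): every hardware IOMMU detection routine sets the flag, and the swiotlb setup consults it instead of the GART-specific variable.

```c
#include <linux/swiotlb.h>

/* Shared flag: any hardware IOMMU detection path sets it during early boot. */
int iommu_detected;

static void example_detect_hw_iommu(void)
{
	/* e.g. a GART or Calgary probe succeeded */
	iommu_detected = 1;
}

static void example_pci_swiotlb_init(void)
{
	/* Fall back to the software bounce-buffer "IOMMU" only when no
	 * hardware IOMMU was found earlier in boot. */
	if (!iommu_detected)
		swiotlb_init();	/* no-argument form, as in kernels of this era */
}
```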
-
- 05 February 2006, 1 commit
-
-
Committed by Jon Mason
This patch contains a printk reorder to remove the current problem of displaying "PCI-DMA: Disabling IOMMU." and then "PCI-DMA: using GART IOMMU" 20 lines later in dmesg. It also contains a printk reorder in swiotlb to state swiotlb enablement prior to describing the location of the bounce buffers, and a printk reorder to state gart enablement prior to describing the aperture. Also contains a whitespace cleanup in arch/x86_64/kernel/setup.c.

Tested (along with patch 2/2) on a dual opteron with gart enabled, iommu=soft, and iommu=off.

Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 12 January 2006, 1 commit
-
-
Committed by Muli Ben-Yehuda
AK: I hacked Muli's original patch a lot and there were a lot of changes - all bugs are probably to blame on me now. There were also some changes in the fall back behaviour for swiotlb - in particular it doesn't try to use GFP_DMA now anymore. Also all DMA mapping operations use the same core dma_alloc_coherent code with proper fallbacks now. And various other changes and cleanups.

Known problems: iommu=force swiotlb=force together breaks, needs more testing.

This patch cleans up x86_64's DMA mapping dispatching code. Right now we have three possible IOMMU types: AGP GART, swiotlb and nommu, and in the future we will also have Xen's x86_64 swiotlb and other HW IOMMUs for x86_64. In order to support all of them cleanly, this patch:

- introduces a struct dma_mapping_ops with function pointers for each of the DMA mapping operations of gart (AMD HW IOMMU), swiotlb (software IOMMU) and nommu (no IOMMU).
- gets rid of: if (swiotlb) return swiotlb_xxx();
- PCI_DMA_BUS_IS_PHYS is now checked against the dma_ops being set

This makes swiotlb faster by avoiding double copying in some cases.

Signed-Off-By: Muli Ben-Yehuda <mulix@mulix.org>
Signed-Off-By: Jon D. Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
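To illustrate the dispatch pattern the commit describes (the struct and global names follow the commit text; the fields shown and the wrapper are a simplified sketch, not the exact x86_64 header code): instead of open-coded "if (swiotlb) return swiotlb_xxx();" branches, every DMA API call goes through one table of function pointers.

```c
#include <linux/types.h>

struct device;

/* Simplified ops table; the real one has an entry per DMA operation. */
struct dma_mapping_ops {
	dma_addr_t (*map_single)(struct device *dev, void *ptr,
				 size_t size, int direction);
	void (*unmap_single)(struct device *dev, dma_addr_t addr,
			     size_t size, int direction);
};

extern struct dma_mapping_ops *dma_ops;	/* points at gart, swiotlb or nommu ops */

/* Old style:  if (swiotlb) return swiotlb_map_single(dev, ptr, size, dir);
 * New style:  one indirect call, whichever IOMMU backend was selected. */
static inline dma_addr_t example_dma_map_single(struct device *dev, void *ptr,
						size_t size, int direction)
{
	return dma_ops->map_single(dev, ptr, size, direction);
}
```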
-