- 14 Aug 2009, 1 commit
-
-
Submitted by David Woodhouse
Since commit 19943b0e ('intel-iommu: Unify hardware and software passthrough support'), hardware passthrough mode does the same thing software passthrough mode was already doing -- it still uses the IOMMU normally for devices which can't address all of memory. This means that we don't need to bother with swiotlb.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
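A minimal sketch of the check this relies on (the helper name and the max_pfn comparison are illustrative; the real intel-iommu logic is more involved): in passthrough mode a device is only left identity-mapped if its DMA mask covers all of physical memory, otherwise it falls back to a normal translated mapping, so swiotlb bouncing is never needed.

    #include <linux/device.h>
    #include <linux/bootmem.h>      /* max_pfn */

    /* Sketch only: may this device stay identity-mapped in passthrough mode? */
    static bool identity_mapping_ok(struct device *dev)
    {
        u64 mask = dev->dma_mask ? *dev->dma_mask : 0;

        /* A device that cannot address the top of physical memory gets a
         * normal translated IOMMU mapping instead -- no swiotlb needed. */
        return mask >= (((u64)max_pfn << PAGE_SHIFT) - 1);
    }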
-
- 29 Apr 2009, 1 commit
-
-
Submitted by Fenghua Yu
This patch adds the kernel parameter intel_iommu=pt to set up pass-through mode in the context mapping entry. This disables DMAR in the Linux kernel, but KVM still runs on VT-d and interrupt remapping still works. In this mode, the kernel uses swiotlb for the DMA API functions, while the other VT-d facilities remain enabled for KVM. KVM always uses a multi-level translation page table in VT-d. By default, pass-through mode is disabled in the kernel. This is useful when people don't want to enable VT-d DMAR in the kernel but still want to use KVM and interrupt remapping, for reasons such as DMAR performance concerns or debugging.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Weidong Han <weidong@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
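Roughly how the new option is wired into the command-line parser (a simplified sketch -- the real intel-iommu parser handles several more keywords; the flag names follow the commit text but the details are illustrative):

    #include <linux/init.h>
    #include <linux/string.h>

    static int dmar_disabled;         /* "intel_iommu=off" */
    static int iommu_pass_through;    /* "intel_iommu=pt": hardware pass-through */

    static int __init intel_iommu_setup(char *str)
    {
        while (str) {
            if (!strncmp(str, "off", 3))
                dmar_disabled = 1;
            else if (!strncmp(str, "pt", 2))
                iommu_pass_through = 1;
            str = strchr(str, ',');   /* options are comma separated */
            if (str)
                str++;
        }
        return 0;
    }
    __setup("intel_iommu=", intel_iommu_setup);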
-
- 14 Apr 2009, 1 commit
-
-
Submitted by Yang Hongyang
This is the second pass through the old DMA_nBIT_MASK macros, and there are not many of them left, so I put them into one patch. I hope this is the last round. After this, the definition of the old DMA_nBIT_MASK macro can be removed.
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Greg KH <greg@kroah.com>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
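For reference, the conversion is mechanical: DMA_nBIT_MASK constants become DMA_BIT_MASK(n) from linux/dma-mapping.h, defined as (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1)). An illustrative driver snippet (not taken from the patch itself):

    #include <linux/dma-mapping.h>

    static int example_set_mask(struct device *dev)
    {
        /* old style, being phased out:
         *     return dma_set_mask(dev, DMA_32BIT_MASK);
         */
        return dma_set_mask(dev, DMA_BIT_MASK(32));
    }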
-
- 29 Jan 2009, 1 commit
-
-
Submitted by FUJITA Tomonori
Before the DMA ops unification, IA64 always used GFP_DMA for dma_alloc_coherent, like this:

    #define dma_alloc_coherent(dev, size, handle, gfp) \
            platform_dma_alloc_coherent(dev, size, handle, (gfp) | GFP_DMA)

This GFP_DMA enforcement doesn't make sense for IOMMUs, since they can do address translation to hand out addresses that devices can access; the IOMMU drivers ignore the zone flag. However, it is still necessary for swiotlb, since swiotlb can't do address translation. We don't always need GFP_DMA for swiotlb, though -- only for devices incapable of 64-bit DMA. This patch is an updated version of: http://marc.info/?l=linux-kernel&m=122638215612705&w=2
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
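A minimal sketch of the resulting policy (the helper is hypothetical; the real ia64/swiotlb code differs in detail): request ZONE_DMA memory only when the device's coherent mask cannot reach 64-bit addresses.

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /* Sketch: add GFP_DMA only for devices incapable of 64-bit DMA,
     * since swiotlb cannot translate addresses the way an IOMMU can. */
    static gfp_t swiotlb_alloc_gfp(struct device *dev, gfp_t gfp)
    {
        if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
            gfp |= GFP_DMA;
        return gfp;
    }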
-
- 27 Jan 2009, 1 commit
-
-
Submitted by FUJITA Tomonori
This moves iommu_detected from arch/ia64/kernel/pci-swiotlb.c to arch/ia64/kernel/dma-mapping.c to fix the following link error on IA64_DIG_VTD:

    arch/ia64/kernel/built-in.o: In function `pci_iommu_init':
    pci-dma.c:(.init.text+0xa021): undefined reference to `iommu_detected'
    pci-dma.c:(.init.text+0xa030): undefined reference to `iommu_detected'
    drivers/built-in.o: In function `detect_intel_iommu':
    (.init.text+0x11c0): undefined reference to `iommu_detected'
    drivers/built-in.o: In function `detect_intel_iommu':
    (.init.text+0x11e1): undefined reference to `iommu_detected'

iommu_detected is used to handle IOMMUs, so arch/ia64/kernel/dma-mapping.c seems a reasonable home for it (though there might be a better place).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 07 Jan 2009, 1 commit
-
-
Submitted by Luck, Tony
Impact: Section fix

    WARNING: vmlinux.o(.text+0x596d2): Section mismatch in reference from the
    function swiotlb_dma_init() to the function .init.text:swiotlb_init()
    The function swiotlb_dma_init() references the function __init swiotlb_init().
    This is often because swiotlb_dma_init lacks a __init annotation or the
    annotation of swiotlb_init is wrong.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
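The fix the warning asks for is simply to propagate the __init annotation to the caller; schematically (illustrative body, not the literal patch):

    /* swiotlb_init() lives in .init.text, so its boot-time caller must too. */
    void __init swiotlb_dma_init(void)
    {
        dma_ops = &swiotlb_dma_ops;
        swiotlb_init();
    }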
-
- 06 Jan 2009, 4 commits
-
-
Submitted by FUJITA Tomonori
This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and removes IA64's and X86's swiotlb_map_page and swiotlb_unmap_page. It also removes the now-unnecessary swiotlb_map_single, swiotlb_map_single_attrs, swiotlb_unmap_single and swiotlb_unmap_single_attrs.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
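For orientation, the page-based hooks that replace the *_single variants have roughly this shape (the attrs type and exact signatures varied between kernel versions, so treat this as a sketch):

    /* Generic swiotlb mapping hooks now provided by lib/swiotlb.c. */
    dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
                                unsigned long offset, size_t size,
                                enum dma_data_direction dir,
                                struct dma_attrs *attrs);

    void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
                            size_t size, enum dma_data_direction dir,
                            struct dma_attrs *attrs);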
-
Submitted by FUJITA Tomonori
This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring. The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in the DMA mapping operations. The major change for IA64 is using map_page and unmap_page instead of map_single and unmap_single.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
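The X86 part of the conversion is mostly a type change in the ops table; schematically (illustrative member -- any one of the mapping hooks looks the same):

    #include <linux/dma-mapping.h>    /* enum dma_data_direction */

    struct example_dma_ops {
        /* before: ... size_t size, int dir);
         * after:  the generic enum replaces the bare int */
        void (*sync_single_for_cpu)(struct device *dev, dma_addr_t addr,
                                    size_t size,
                                    enum dma_data_direction dir);
    };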
-
Submitted by FUJITA Tomonori
This patch introduces a global pointer, dma_ops, which points to the dma_mapping_ops that the kernel should use. This is a common way to handle multiple dma_mapping_ops (X86, POWER, and SPARC do the same). dma_ops is set in platform_dma_init; we also set it by hand where machvec_init is called via subsys_initcall.

- IA64_DIG_VTD uses vtd_dma_ops.
- IA64_HP_ZX1 uses sba_dma_ops.
- IA64_HP_ZX1_SWIOTLB uses hwsw_dma_ops.
- IA64_SGI_SN2 uses sn_dma_ops.
- The rest use swiotlb_dma_ops.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
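In outline (a sketch of the pattern; the real IA64 code routes this through the machine vectors rather than an #ifdef chain, and only the ops names listed above come from the commit text):

    /* One global set of DMA primitives, selected at platform init time. */
    struct dma_mapping_ops *dma_ops;
    EXPORT_SYMBOL(dma_ops);

    void __init platform_dma_init(void)
    {
    #if defined(CONFIG_IA64_DIG_VTD)
        dma_ops = &vtd_dma_ops;        /* Intel VT-d */
    #elif defined(CONFIG_IA64_HP_ZX1)
        dma_ops = &sba_dma_ops;        /* HP zx1 SBA IOMMU */
    #elif defined(CONFIG_IA64_SGI_SN2)
        dma_ops = &sn_dma_ops;         /* SGI SN2 */
    #else
        dma_ops = &swiotlb_dma_ops;    /* software bounce buffering */
    #endif
    }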
-
Submitted by FUJITA Tomonori
There is already a dma_mapping_ops for swiotlb, but some hooks are missing. This is for IA64_DIG_VTD, IA64_HP_ZX1_SWIOTLB, IA64_SGI_UV, IA64_HP_SIM, IA64_XEN_GUEST and IA64_GENERIC.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 Oct 2008, 1 commit
-
-
Submitted by Fenghua Yu
The patch contains the Intel IOMMU IA64-specific code. It defines the new machvec dig_vtd, hooks for the IOMMU, DMAR table detection, a cache line flush function, etc. For a generic kernel with CONFIG_DMAR=y, if an Intel IOMMU is detected, dig_vtd is used as the machine vector; otherwise the kernel falls back to the dig machine vector. The kernel parameters "machvec=dig" or "intel_iommu=off" can be used to force the kernel to boot with the dig machine vector.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 27 Jul 2008, 1 commit
-
-
Submitted by FUJITA Tomonori
Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER architecture does. This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would also be helpful for KVM people to support PCI passthrough, but Andi thinks that this makes it difficult to support PCI passthrough (see the above thread). So I CC'ed this to the KVM camp. Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it; if it is NULL, the system-wide dma_ops pointer is used as before.

If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new PCI (or DMA-capable) device is created (it works with hot plugging). It would enable IOMMUs to set up appropriate dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the device, unlike the other DMA operations, so x86 can't have dma_mapping_ops per device. Note that all the POWER IOMMUs use the same dma_mapping_error function, so this is not a problem for POWER, but x86 IOMMUs use different dma_mapping_error functions.

The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architectures.

This patch: dma_mapping_error() doesn't take a pointer to the device, unlike the other DMA operations, so we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device, but all the POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use the device argument.

[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
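The mechanism described above amounts to roughly the following (a sketch based on the commit text; get_dma_ops() matches the helper of that era, but treat the details as illustrative):

    /* x86_64: optional per-device override of the DMA primitives. */
    struct dev_archdata {
        struct dma_mapping_ops *dma_ops;   /* NULL => use the global dma_ops */
    };

    static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
    {
        if (dev && dev->archdata.dma_ops)
            return dev->archdata.dma_ops;
        return dma_ops;                    /* system-wide default */
    }

    /* The point of this series: dma_mapping_error() now takes the device,
     * so the correct per-device ops can be consulted. */
    static inline int dma_mapping_error(struct device *dev, dma_addr_t addr)
    {
        struct dma_mapping_ops *ops = get_dma_ops(dev);

        return ops->mapping_error ? ops->mapping_error(dev, addr) : 0;
    }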
-
- 11 Jul 2008, 1 commit
-
-
Submitted by FUJITA Tomonori
gart.h has only GART-specific stuff, and only the GART code needs it. Other IOMMU code should include iommu.h instead of gart.h.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Jul 2008, 1 commit
-
-
Submitted by Yinghai Lu
and use max_pfn directly.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 Apr 2008, 1 commit
-
-
Submitted by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 Jan 2008, 1 commit
-
-
Submitted by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 Oct 2007, 1 commit
-
-
Submitted by Joerg Roedel
This patch renames the include file asm-x86/iommu.h to asm-x86/gart.h to make clear which IOMMU implementation it belongs to. The patch also adds "GART" to the Kconfig line.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 Oct 2007, 2 commits
-
-
Submitted by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 22 Jul 2007, 1 commit
-
-
Submitted by Yinghai Lu
[akpm@linux-foundation.org: build fix]
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May 2007, 1 commit
-
-
Submitted by Stephen Hemminger
The dma_ops structure can be const, since it never changes after boot.
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
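In practice that means constifying both the per-backend ops tables and the pointer through which they are reached; roughly (initialiser abridged and illustrative):

    /* The backend ops tables become read-only objects ... */
    static const struct dma_mapping_ops swiotlb_dma_ops = {
        .map_single   = swiotlb_map_single,
        .unmap_single = swiotlb_unmap_single,
        /* ... remaining hooks ... */
    };

    /* ... and the global pointer becomes a pointer-to-const. */
    const struct dma_mapping_ops *dma_ops = &swiotlb_dma_ops;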
-
- 06 Feb 2007, 1 commit
-
-
Submitted by Jan Beulich
- add proper __init decoration to swiotlb's init code (and to the code calling it, where not already the case)
- replace uses of 'unsigned long' with dma_addr_t where appropriate
- do miscellaneous simplification and cleanup

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 30 Sep 2006, 1 commit
-
-
Submitted by Rolf Eike Beer
As suggested by Muli Ben-Yehuda, this function is moved to generic code, as it may be useful for all archs.
[akpm@osdl.org: fix]
Signed-off-by: Rolf Eike Beer <eike-kernel@sf-tec.de>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 30 Jul 2006, 1 commit
-
-
Submitted by Andi Kleen
It was broken before, but having it is important as a possible workaround for hardware bugs, and previously there was no way to force swiotlb if there is another IOMMU. A side effect is that iommu=force no longer forces swiotlb, even if there isn't another IOMMU.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
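The option in question is swiotlb=force; its handling is roughly of this shape (a sketch -- the real x86_64 parsing and its interaction with iommu= are more involved):

    #include <linux/init.h>
    #include <linux/string.h>

    int swiotlb_force;    /* use swiotlb even if a hardware IOMMU is present */

    static int __init setup_swiotlb(char *str)
    {
        if (str && !strcmp(str, "force"))
            swiotlb_force = 1;
        return 1;
    }
    __setup("swiotlb=", setup_swiotlb);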
-
- 27 Jun 2006, 1 commit
-
-
Submitted by Jon Mason
swiotlb relies on the GART-specific iommu_aperture variable to know whether we discovered a hardware IOMMU before swiotlb initialization. Introduce iommu_detected to do the same thing, but in a hardware-IOMMU-neutral manner, in preparation for adding the Calgary hardware IOMMU.
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
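Conceptually the change is just one neutral flag that every hardware IOMMU probe sets, so swiotlb setup no longer has to peek at GART internals; a sketch (hw_iommu_present() and example_hw_iommu_detect() are hypothetical, the rest follows the commit text):

    int iommu_detected;   /* set by any hardware IOMMU driver that finds its HW */
    extern int swiotlb;   /* enables bounce buffering in lib/swiotlb */

    /* Stand-in for the GART (and later Calgary) detection path: */
    static void __init example_hw_iommu_detect(void)
    {
        if (hw_iommu_present())    /* hypothetical probe */
            iommu_detected = 1;
    }

    /* swiotlb setup stays IOMMU-neutral: */
    void __init pci_swiotlb_init(void)
    {
        if (!iommu_detected)
            swiotlb = 1;           /* no HW IOMMU: fall back to bounce buffers */
    }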
-
- 05 Feb 2006, 1 commit
-
-
Submitted by Jon Mason
This patch contains a printk reorder to remove the current problem of displaying "PCI-DMA: Disabling IOMMU." and then "PCI-DMA: using GART IOMMU" 20 lines later in dmesg. It also contains a printk reorder in swiotlb to state swiotlb enablement before describing the location of the bounce buffers, and a printk reorder to state GART enablement before describing the aperture. It also contains a whitespace cleanup in arch/x86_64/kernel/setup.c. Tested (along with patch 2/2) on a dual Opteron with the GART enabled, with iommu=soft, and with iommu=off.
Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 12 Jan 2006, 1 commit
-
-
Submitted by Muli Ben-Yehuda
AK: I hacked Muli's original patch a lot and there were a lot of changes -- all bugs are probably to blame on me now. There were also some changes in the fallback behaviour for swiotlb -- in particular, it no longer tries to use GFP_DMA. Also, all DMA mapping operations now use the same core dma_alloc_coherent code with proper fallbacks, plus various other changes and cleanups.

Known problems: iommu=force together with swiotlb=force breaks; needs more testing.

This patch cleans up x86_64's DMA mapping dispatching code. Right now we have three possible IOMMU types: AGP GART, swiotlb and nommu, and in the future we will also have Xen's x86_64 swiotlb and other hardware IOMMUs for x86_64. In order to support all of them cleanly, this patch:

- introduces a struct dma_mapping_ops with function pointers for each of the DMA mapping operations of gart (AMD HW IOMMU), swiotlb (software IOMMU) and nommu (no IOMMU).
- gets rid of the "if (swiotlb) return swiotlb_xxx();" pattern.
- checks PCI_DMA_BUS_IS_PHYS against the dma_ops being set.

This makes swiotlb faster by avoiding double copying in some cases.

Signed-off-by: Muli Ben-Yehuda <mulix@mulix.org>
Signed-off-by: Jon D. Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
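A sketch of the dispatch table this introduces (abridged; member names follow the x86_64 dma_mapping_ops of that era, but the exact list is illustrative):

    /* One table of function pointers per backend: gart, swiotlb, nommu. */
    struct dma_mapping_ops {
        void *(*alloc_coherent)(struct device *dev, size_t size,
                                dma_addr_t *dma_handle, gfp_t gfp);
        void (*free_coherent)(struct device *dev, size_t size,
                              void *vaddr, dma_addr_t dma_handle);
        dma_addr_t (*map_single)(struct device *hwdev, void *ptr,
                                 size_t size, int direction);
        void (*unmap_single)(struct device *hwdev, dma_addr_t addr,
                             size_t size, int direction);
        int (*map_sg)(struct device *hwdev, struct scatterlist *sg,
                      int nents, int direction);
        void (*unmap_sg)(struct device *hwdev, struct scatterlist *sg,
                         int nents, int direction);
        int (*dma_supported)(struct device *hwdev, u64 mask);
        int is_phys;    /* nommu: bus address == physical address */
    };

    /* Callers dispatch through one pointer instead of
     * scattered "if (swiotlb) return swiotlb_xxx();" checks. */
    extern struct dma_mapping_ops *dma_ops;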
-