- 14 June 2012, 1 commit
-
-
Committed by Yinghai Lu
Replace the struct pci_bus secondary/subordinate members with the struct resource busn_res. Later we'll build a resource tree of these bus numbers. [bhelgaas: changelog] Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
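After the conversion, code that used to read bus->secondary and bus->subordinate reads the same inclusive range from busn_res. A minimal sketch of the access pattern (an illustrative helper, not a hunk from the patch):

```c
#include <linux/pci.h>

/* busn_res.start replaces bus->secondary and busn_res.end replaces
 * bus->subordinate; both ends of the bus-number range are inclusive.
 */
static void print_bus_range(struct pci_bus *bus)
{
    pr_info("bus range [%02llx-%02llx]\n",
            (unsigned long long)bus->busn_res.start,
            (unsigned long long)bus->busn_res.end);
}
```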
-
- 26 May 2012, 2 commits
-
-
Committed by David Woodhouse
Now we have four copies of this code, Linus "suggested" it was about time we stopped copying it and turned it into a helper. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Woodhouse
Add device info into the list before doing context mapping, because the device info will be used by iommu_enable_dev_iotlb(). Without it, ATS won't get enabled as it should be. ATS, while a dubious decision from a security point of view, can be very important for performance. Signed-off-by: Xudong Hao <xudong.hao@intel.com> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Acked-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
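A hedged sketch of the ordering the fix establishes; names follow intel-iommu.c conventions, but this is not the literal upstream hunk:

```c
/* The device_domain_info must already be on the lists when
 * domain_context_mapping() runs, because the context-mapping path looks
 * the info up to decide whether to call iommu_enable_dev_iotlb().
 */
static int attach_dev_sketch(struct dmar_domain *domain,
                             struct device_domain_info *info,
                             struct pci_dev *pdev, int translation)
{
    unsigned long flags;
    int ret;

    /* publish the device info first ... */
    spin_lock_irqsave(&device_domain_lock, flags);
    list_add(&info->link, &domain->devices);
    list_add(&info->global, &device_domain_list);
    spin_unlock_irqrestore(&device_domain_lock, flags);

    /* ... so context mapping can find it and enable ATS/dev-IOTLB */
    ret = domain_context_mapping(domain, pdev, translation);
    if (ret) {
        /* on failure, unlink the info again (error path omitted) */
    }
    return ret;
}
```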
-
- 07 May 2012, 3 commits
-
-
Committed by Suresh Siddha
Make the file names consistent with the naming conventions of the irq subsystem. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Joerg Roedel <joerg.roedel@amd.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Suresh Siddha
Make the code consistent with the naming conventions of the irq subsystem. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Joerg Roedel <joerg.roedel@amd.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
This patch introduces irq_remap_ops to hold implementation-specific function pointers for handling interrupt remapping. As the first part, the initialization functions for VT-d are converted to these ops. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Acked-by: Yinghai Lu <yinghai@kernel.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
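The ops-table pattern looks roughly like the sketch below; the callback names here are illustrative, not copied from the upstream struct:

```c
#include <linux/errno.h>

/* Per-implementation interrupt-remapping callbacks; the VT-d driver
 * would point an ops instance at its own init routines.
 */
struct irq_remap_ops_sketch {
    int (*supported)(void);   /* is remapping supported at all? */
    int (*prepare)(void);     /* parse ACPI tables, allocate state */
    int (*enable)(void);      /* switch remapping on */
};

extern struct irq_remap_ops_sketch intel_irq_remap_ops_sketch;
static struct irq_remap_ops_sketch *remap_ops;

static int irq_remapping_prepare_sketch(void)
{
    remap_ops = &intel_irq_remap_ops_sketch;
    if (!remap_ops->supported || !remap_ops->supported())
        return -ENODEV;
    return remap_ops->prepare ? remap_ops->prepare() : -ENODEV;
}
```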
-
- 28 March 2012, 1 commit
-
-
Committed by Andrzej Pietrasiewicz
Adapt core x86 and IA64 architecture code for dma_map_ops changes: replace alloc/free_coherent with generic alloc/free methods. Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> [removed swiotlb related changes and replaced it with wrappers, merged with IA64 patch to avoid inter-patch dependences in intel-iommu code] Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Tony Luck <tony.luck@intel.com>
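For intel-iommu this means the coherent allocator is wired into the generic .alloc/.free slots of dma_map_ops, which also carry allocation attributes. A sketch under that assumption (the attrs argument type has changed across kernel versions, so treat the signatures as indicative):

```c
#include <linux/dma-mapping.h>

/* Thin wrappers routing the generic alloc/free methods to the existing
 * coherent allocator; the real patch folds the attrs argument straight in.
 */
static void *intel_alloc_sketch(struct device *dev, size_t size,
                                dma_addr_t *dma_handle, gfp_t gfp,
                                struct dma_attrs *attrs)
{
    return intel_alloc_coherent(dev, size, dma_handle, gfp);
}

static void intel_free_sketch(struct device *dev, size_t size,
                              void *vaddr, dma_addr_t dma_handle,
                              struct dma_attrs *attrs)
{
    intel_free_coherent(dev, size, vaddr, dma_handle);
}

static struct dma_map_ops intel_dma_ops_sketch = {
    .alloc = intel_alloc_sketch,
    .free  = intel_free_sketch,
    /* .map_page, .unmap_page, .map_sg, ... unchanged */
};
```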
-
- 06 March 2012, 2 commits
-
-
Committed by Mike Travis
The number of IOMMUs supported should be the same as the number of IO APICs. This limit comes into play when the IOMMUs are identity mapped, thus the number of possible IOMMUs in the "static identity" (si) domain should be this same number. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Jack Steiner <steiner@sgi.com> Cc: Jack Steiner <steiner@sgi.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Chris Wright <chrisw@sous-sol.org> Cc: Daniel Rahn <drahn@suse.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Joerg Roedel <joerg.roedel@amd.com> [ Fixed printk format string, cleaned up the code ] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/n/tip-ixcmp0hfp0a3b2lfv3uo0p0x@git.kernel.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
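In code terms the supported-IOMMU count is tied to the x86 I/O APIC limit rather than to a separate hard-coded constant; a sketch of the idea (macro and field names are indicative of that era's intel-iommu.c):

```c
#include <linux/bitops.h>
#include <asm/apicdef.h>        /* MAX_IO_APICS (x86) */

/* One IOMMU per I/O APIC is the upper bound used for the per-domain
 * IOMMU bitmap, including the static identity (si) domain.
 */
#define IOMMU_UNITS_SUPPORTED   MAX_IO_APICS

struct dmar_domain_sketch {
    /* bitmap of IOMMUs this domain is attached to */
    DECLARE_BITMAP(iommu_bmp, IOMMU_UNITS_SUPPORTED);
    /* ... other members ... */
};
```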
-
Committed by Mike Travis
With SandyBridge, Intel has changed these Socket PCI devices to have a class type of "System Peripheral" & "Performance counter", rather than "HostBridge". So instead of using a "special" case to detect which devices will not be doing DMA, use the fact that a device that is not associated with an IOMMU will not need an identity map. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Mike Habeck <habeck@sgi.com> Cc: Jack Steiner <steiner@sgi.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Chris Wright <chrisw@sous-sol.org> Cc: Daniel Rahn <drahn@suse.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/n/tip-018fywmjs3lmzfyzjlktg8dx@git.kernel.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 06 February 2012, 1 commit
-
-
Committed by Masanari Iida
Correct spelling "supportd" to "supported" in drivers/iommu/intel-iommu.c. Signed-off-by: Masanari Iida <standby24x7@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 17 December 2011, 1 commit
-
-
Committed by Eugeni Dodonov
In the i915 driver, we do not enable either rc6 or semaphores on SNB when dmar is enabled. The new 'intel_iommu_enabled' variable signals when the iommu code is in operation. Cc: Ted Phelps <phelps@gnusto.com> Cc: Peter <pab1612@gmail.com> Cc: Lukas Hejtmanek <xhejtman@fi.muni.cz> Cc: Andrew Lutomirski <luto@mit.edu> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-off-by: Keith Packard <keithp@keithp.com>
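A minimal sketch of the flag and a consumer-side check; the i915 call site is paraphrased, and the helper name below is hypothetical:

```c
#include <linux/export.h>

/* drivers/iommu/intel-iommu.c side: set once translation is enabled. */
int intel_iommu_enabled = 0;
EXPORT_SYMBOL_GPL(intel_iommu_enabled);

/* i915 side, paraphrased: skip rc6/semaphores on SNB while the IOMMU
 * is active. i915_allow_rc6_sketch() is a hypothetical helper name.
 */
static bool i915_allow_rc6_sketch(void)
{
    return !intel_iommu_enabled;
}
```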
-
- 09 December 2011, 1 commit
-
-
Committed by Tejun Heo
Now all ARCH_POPULATES_NODE_MAP archs select HAVE_MEMBLOCK_NODE_MAP - there's no user of early_node_map[] left. Kill early_node_map[] and replace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP. Also, relocate for_each_mem_pfn_range() and helper from mm.h to memblock.h as page_alloc.c would no longer host an alternative implementation. This change is ultimately a one-to-one mapping and shouldn't cause any observable difference; however, after the recent changes, there are some functions which now would fit memblock.c better than page_alloc.c and dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK doesn't make much sense on some of them. Further cleanups for functions inside HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice. -v2: Fix compile bug introduced by mis-spelling CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in mmzone.h. Reported by Stephen Rothwell. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Chen Liqin <liqin.chen@sunplusct.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com>
-
- 06 December 2011, 1 commit
-
-
Committed by Sergey Senozhatsky
dmar_parse_rmrr_atsr_dev() calls rmrr_parse_dev() and atsr_parse_dev(), which are both marked as __init. Section mismatch in reference from the function dmar_parse_rmrr_atsr_dev() to the function .init.text:dmar_parse_dev_scope() The function dmar_parse_rmrr_atsr_dev() references the function __init dmar_parse_dev_scope(). Section mismatch in reference from the function dmar_parse_rmrr_atsr_dev() to the function .init.text:dmar_parse_dev_scope() The function dmar_parse_rmrr_atsr_dev() references the function __init dmar_parse_dev_scope(). Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: iommu@lists.linux-foundation.org Cc: Joerg Roedel <joerg.roedel@amd.com> Cc: Ohad Ben-Cohen <ohad@wizery.com> Link: http://lkml.kernel.org/r/20111026154539.GA10103@swordfish Signed-off-by: Ingo Molnar <mingo@elte.hu>
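The usual remedy, and what this fix amounts to, is to put the caller in the same init section as its callees; a sketch, assuming the function is only used during boot:

```c
/* Marking the caller __init keeps it in .init.text together with
 * rmrr_parse_dev()/atsr_parse_dev() and dmar_parse_dev_scope(), which
 * silences the section-mismatch warning.
 */
int __init dmar_parse_rmrr_atsr_dev(void)
{
    /* ... walk the RMRR and ATSR lists and parse their device scopes ... */
    return 0;
}
```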
-
- 15 November 2011, 2 commits
-
-
Committed by Alex Williamson
The option iommu=group_mf indicates that the iommu driver should expose all functions of a multi-function PCI device as the same iommu_device_group. This is useful for disallowing individual functions from being exposed as independent devices to userspace, as there are often hidden dependencies. Virtual functions are not affected by this option. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Alex Williamson
We generally have BDF granularity for devices, so we just need to make sure devices aren't hidden behind PCIe-to-PCI bridges. We can then make up a group number that's simply the concatenated seg|bus|dev|fn so we don't have to track them (not that users should depend on that). Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Acked-By: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
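The group identifier is just that concatenation; a sketch of the computation (the packing below is illustrative and, as the changelog itself says, callers should not depend on the exact layout):

```c
#include <linux/pci.h>

/* Group ID = seg | bus | devfn packed into one value. Real code first
 * walks up over any PCIe-to-PCI bridge so legacy functions that cannot
 * be isolated end up sharing a group.
 */
static unsigned long device_group_id_sketch(struct pci_dev *pdev)
{
    unsigned long seg = pci_domain_nr(pdev->bus);

    return (seg << 16) |
           ((unsigned long)pdev->bus->number << 8) |
           pdev->devfn;
}
```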
-
- 10 November 2011, 2 commits
-
-
Committed by Ohad Ben-Cohen
Let the IOMMU core know we support arbitrary page sizes (as long as they're an order of 4KiB). This way the IOMMU core will retain the existing behavior we're used to; it will let us map regions that have a size which is an order of 4KiB and are naturally aligned. Note: Intel IOMMU hardware doesn't support arbitrary page sizes, but the driver does (it splits arbitrary-sized mappings into the pages supported by the hardware). To make everything simpler for now, though, this patch effectively tells the IOMMU core to keep giving this driver the same memory regions it did before, so nothing is changed as far as it's concerned. At this point, the page sizes announced remain static within the IOMMU core. To correctly utilize the pgsize-splitting of the IOMMU core by this driver, it seems that some core changes should still be done, because Intel's IOMMU page size capabilities seem to have the potential to be different between different DMA remapping devices. Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Cc: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
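In IOMMU-core terms this is the driver's pgsize bitmap: every bit from 4KiB upward is set, so any power-of-two multiple of 4KiB that is naturally aligned is accepted. A sketch:

```c
#include <linux/iommu.h>

/* Advertise every page size that is an order of 4KiB: clear the low
 * 12 bits, set everything above them. The driver itself splits large
 * mappings into the sizes the hardware actually supports.
 */
#define INTEL_IOMMU_PGSIZES     (~0xFFFUL)

static struct iommu_ops intel_iommu_ops_sketch = {
    /* .map, .unmap, .iova_to_phys, ... as before */
    .pgsize_bitmap  = INTEL_IOMMU_PGSIZES,
};
```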
-
Committed by Ohad Ben-Cohen
Express sizes in bytes rather than in page order, to eliminate the size->order->size conversions we have whenever the IOMMU API is calling the low level drivers' map/unmap methods. Adopt all existing drivers. Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Cc: David Brown <davidb@codeaurora.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Joerg Roedel <Joerg.Roedel@amd.com> Cc: Stepan Moskovchenko <stepanm@codeaurora.org> Cc: KyongHo Cho <pullip.cho@samsung.com> Cc: Hiroshi DOYU <hdoyu@nvidia.com> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
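The signature change this implies for the low-level callbacks, sketched from the description rather than quoted from a header:

```c
#include <linux/iommu.h>
#include <linux/types.h>

/* Before: sizes carried as a page order (log2 of the number of pages). */
typedef int (*map_old_t)(struct iommu_domain *domain, unsigned long iova,
                         phys_addr_t paddr, int gfp_order, int prot);

/* After: sizes carried in bytes, so no order<->size conversions remain
 * between the IOMMU API and the drivers.
 */
typedef int    (*map_new_t)(struct iommu_domain *domain, unsigned long iova,
                            phys_addr_t paddr, size_t size, int prot);
typedef size_t (*unmap_new_t)(struct iommu_domain *domain, unsigned long iova,
                              size_t size);
```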
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 21 October 2011, 3 commits
-
-
Committed by Joerg Roedel
Convert the Intel IOMMU driver to use the new interface for publishing the iommu_ops. Cc: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
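A minimal sketch, assuming the registration interface in question is bus_set_iommu(): the driver attaches its ops to the PCI bus type instead of calling a global register function.

```c
#include <linux/iommu.h>
#include <linux/pci.h>

/* Publish intel_iommu_ops for every device that hangs off the PCI bus. */
static int __init intel_iommu_register_ops_sketch(void)
{
    return bus_set_iommu(&pci_bus_type, &intel_iommu_ops);
}
```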
-
Committed by David Woodhouse
We really don't want this to work in the general case; device drivers *shouldn't* care whether they are behind an IOMMU or not. But the integrated graphics is a special case, because the IOMMU and the GTT are all kind of smashed into one and generally horrifically buggy, so it's reasonable for the graphics driver to want to know when the IOMMU is active for the graphics hardware. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Keith Packard <keithp@keithp.com>
-
Committed by David Woodhouse
To work around a hardware issue, we have to submit IOTLB flushes while the graphics engine is idle. The graphics driver will (we hope) go to great lengths to ensure that it gets that right on the affected chipset(s)... so let's not screw it over by deferring the unmap and doing it later. That wouldn't be very helpful. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Keith Packard <keithp@keithp.com>
-
- 19 October 2011, 3 commits
-
-
Committed by Allen Kay
If target_level == 0, the current code breaks out of the while-loop if the SUPERPAGE bit is set. We should also break out if the PTE is not present. If we don't do this, KVM calls to iommu_iova_to_phys() will cause pfn_to_dma_pte() to create mappings for 4KiB pages. Signed-off-by: Allen Kay <allen.m.kay@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
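The exit condition the fix adds to the lookup path, sketched as a helper; the helper names follow intel-iommu.c style, so treat the exact test as illustrative:

```c
/* When pfn_to_dma_pte() is called with target_level == 0 ("just look up,
 * don't allocate"), the walk must stop on a superpage PTE *or* on a
 * non-present PTE, instead of allocating page tables for a pure query.
 */
static bool lookup_must_stop_sketch(struct dma_pte *pte, int target_level)
{
    return target_level == 0 &&
           (dma_pte_superpage(pte) || !dma_pte_present(pte));
}
```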
-
Committed by Allen Kay
Set the dmar->iommu_superpage field to the smallest common denominator of the super page sizes supported by all active VT-d engines. Initialize this field in the intel_iommu_domain_init() API so the intel_iommu_map() API will be able to use the iommu_superpage field to determine the appropriate super page size to use. Signed-off-by: Allen Kay <allen.m.kay@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
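A hedged sketch of the computation: the per-engine superpage capability is a bitmask, so AND-ing it across all active engines and taking the highest remaining bit gives the largest size every engine supports.

```c
#include <linux/bitops.h>       /* fls() */

/* Names (for_each_active_iommu, cap_super_page_val) follow dmar.h and
 * intel-iommu.c; the surrounding locking and option checks are omitted.
 */
static void update_iommu_superpage_sketch(struct dmar_domain *domain)
{
    struct dmar_drhd_unit *drhd;
    struct intel_iommu *iommu;
    int mask = 0xf;             /* start from "all superpage levels" */

    for_each_active_iommu(iommu, drhd)
        mask &= cap_super_page_val(iommu->cap);

    domain->iommu_superpage = fls(mask);
}
```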
-
Committed by Allen Kay
The iommu_unmap() API expects IOMMU drivers to return the actual page order of the address range being unmapped. The previous code was just returning the page order passed in from the caller. This patch fixes this problem. Signed-off-by: Allen Kay <allen.m.kay@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 15 October 2011, 2 commits
-
-
Committed by David Woodhouse
We really don't want this to work in the general case; device drivers *shouldn't* care whether they are behind an IOMMU or not. But the integrated graphics is a special case, because the IOMMU and the GTT are all kind of smashed into one and generally horrifically buggy, so it's reasonable for the graphics driver to want to know when the IOMMU is active for the graphics hardware. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
To work around a hardware issue, we have to submit IOTLB flushes while the graphics engine is idle. The graphics driver will (we hope) go to great lengths to ensure that it gets that right on the affected chipset(s)... so let's not screw it over by deferring the unmap and doing it later. That wouldn't be very helpful. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 11 October 2011, 1 commit
-
-
Committed by Roland Dreier
When unbinding a device so that I could pass it through to a KVM VM, I got the lockdep report below. It looks like a legitimate lock ordering problem:

- domain_context_mapping_one() takes iommu->lock and calls iommu_support_dev_iotlb(), which takes device_domain_lock (inside iommu->lock).
- domain_remove_one_dev_info() starts by taking device_domain_lock then takes iommu->lock inside it (near the end of the function).

So this is the classic AB-BA deadlock. It looks like a safe fix is to simply release device_domain_lock a bit earlier, since as far as I can tell, it doesn't protect any of the stuff accessed at the end of domain_remove_one_dev_info() anyway.

BTW, the use of device_domain_lock looks a bit unsafe to me... it's at least not obvious to me why we aren't vulnerable to the race below:

    iommu_support_dev_iotlb()
                                            domain_remove_dev_info()
    lock device_domain_lock
    find info
    unlock device_domain_lock
                                            lock device_domain_lock
                                            find same info
                                            unlock device_domain_lock
                                            free_devinfo_mem(info)
    do stuff with info after it's free

However I don't understand the locking here well enough to know if this is a real problem, let alone what the best fix is.

Anyway here's the full lockdep output that prompted all of this:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.39.1+ #1
-------------------------------------------------------
bash/13954 is trying to acquire lock:
 (&(&iommu->lock)->rlock){......}, at: [<ffffffff812f6421>] domain_remove_one_dev_info+0x121/0x230
but task is already holding lock:
 (device_domain_lock){-.-...}, at: [<ffffffff812f6508>] domain_remove_one_dev_info+0x208/0x230
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (device_domain_lock){-.-...}:
   [<ffffffff8109ca9d>] lock_acquire+0x9d/0x130
   [<ffffffff81571475>] _raw_spin_lock_irqsave+0x55/0xa0
   [<ffffffff812f8350>] domain_context_mapping_one+0x600/0x750
   [<ffffffff812f84df>] domain_context_mapping+0x3f/0x120
   [<ffffffff812f9175>] iommu_prepare_identity_map+0x1c5/0x1e0
   [<ffffffff81ccf1ca>] intel_iommu_init+0x88e/0xb5e
   [<ffffffff81cab204>] pci_iommu_init+0x16/0x41
   [<ffffffff81002165>] do_one_initcall+0x45/0x190
   [<ffffffff81ca3d3f>] kernel_init+0xe3/0x168
   [<ffffffff8157ac24>] kernel_thread_helper+0x4/0x10

-> #0 (&(&iommu->lock)->rlock){......}:
   [<ffffffff8109bf3e>] __lock_acquire+0x195e/0x1e10
   [<ffffffff8109ca9d>] lock_acquire+0x9d/0x130
   [<ffffffff81571475>] _raw_spin_lock_irqsave+0x55/0xa0
   [<ffffffff812f6421>] domain_remove_one_dev_info+0x121/0x230
   [<ffffffff812f8b42>] device_notifier+0x72/0x90
   [<ffffffff8157555c>] notifier_call_chain+0x8c/0xc0
   [<ffffffff81089768>] __blocking_notifier_call_chain+0x78/0xb0
   [<ffffffff810897b6>] blocking_notifier_call_chain+0x16/0x20
   [<ffffffff81373a5c>] __device_release_driver+0xbc/0xe0
   [<ffffffff81373ccf>] device_release_driver+0x2f/0x50
   [<ffffffff81372ee3>] driver_unbind+0xa3/0xc0
   [<ffffffff813724ac>] drv_attr_store+0x2c/0x30
   [<ffffffff811e4506>] sysfs_write_file+0xe6/0x170
   [<ffffffff8117569e>] vfs_write+0xce/0x190
   [<ffffffff811759e4>] sys_write+0x54/0xa0
   [<ffffffff81579a82>] system_call_fastpath+0x16/0x1b

other info that might help us debug this:

6 locks held by bash/13954:
 #0: (&buffer->mutex){+.+.+.}, at: [<ffffffff811e4464>] sysfs_write_file+0x44/0x170
 #1: (s_active#3){++++.+}, at: [<ffffffff811e44ed>] sysfs_write_file+0xcd/0x170
 #2: (&__lockdep_no_validate__){+.+.+.}, at: [<ffffffff81372edb>] driver_unbind+0x9b/0xc0
 #3: (&__lockdep_no_validate__){+.+.+.}, at: [<ffffffff81373cc7>] device_release_driver+0x27/0x50
 #4: (&(&priv->bus_notifier)->rwsem){.+.+.+}, at: [<ffffffff8108974f>] __blocking_notifier_call_chain+0x5f/0xb0
 #5: (device_domain_lock){-.-...}, at: [<ffffffff812f6508>] domain_remove_one_dev_info+0x208/0x230

stack backtrace:
Pid: 13954, comm: bash Not tainted 2.6.39.1+ #1
Call Trace:
 [<ffffffff810993a7>] print_circular_bug+0xf7/0x100
 [<ffffffff8109bf3e>] __lock_acquire+0x195e/0x1e10
 [<ffffffff810972bd>] ? trace_hardirqs_off+0xd/0x10
 [<ffffffff8109d57d>] ? trace_hardirqs_on_caller+0x13d/0x180
 [<ffffffff8109ca9d>] lock_acquire+0x9d/0x130
 [<ffffffff812f6421>] ? domain_remove_one_dev_info+0x121/0x230
 [<ffffffff81571475>] _raw_spin_lock_irqsave+0x55/0xa0
 [<ffffffff812f6421>] ? domain_remove_one_dev_info+0x121/0x230
 [<ffffffff810972bd>] ? trace_hardirqs_off+0xd/0x10
 [<ffffffff812f6421>] domain_remove_one_dev_info+0x121/0x230
 [<ffffffff812f8b42>] device_notifier+0x72/0x90
 [<ffffffff8157555c>] notifier_call_chain+0x8c/0xc0
 [<ffffffff81089768>] __blocking_notifier_call_chain+0x78/0xb0
 [<ffffffff810897b6>] blocking_notifier_call_chain+0x16/0x20
 [<ffffffff81373a5c>] __device_release_driver+0xbc/0xe0
 [<ffffffff81373ccf>] device_release_driver+0x2f/0x50
 [<ffffffff81372ee3>] driver_unbind+0xa3/0xc0
 [<ffffffff813724ac>] drv_attr_store+0x2c/0x30
 [<ffffffff811e4506>] sysfs_write_file+0xe6/0x170
 [<ffffffff8117569e>] vfs_write+0xce/0x190
 [<ffffffff811759e4>] sys_write+0x54/0xa0
 [<ffffffff81579a82>] system_call_fastpath+0x16/0x1b

Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
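The shape of the fix, sketched; only the lock ordering matters here, and the actual tail of domain_remove_one_dev_info() is not reproduced:

```c
/* Drop device_domain_lock before the tail of the function takes
 * iommu->lock, so device_domain_lock is never held while acquiring
 * iommu->lock. domain_context_mapping_one() nests them the other way
 * around (iommu->lock outer, device_domain_lock inner), and that order
 * now stands alone.
 */
static void remove_dev_info_tail_sketch(struct intel_iommu *iommu,
                                        struct dmar_domain *domain,
                                        bool domain_now_empty)
{
    unsigned long flags;

    /* ...the matching device_domain_info was found, unlinked and freed
     * under device_domain_lock, which has already been released... */

    if (domain_now_empty) {
        spin_lock_irqsave(&iommu->lock, flags);
        /* detach the now device-less domain from this iommu */
        spin_unlock_irqrestore(&iommu->lock, flags);
    }
}
```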
-
- 21 September 2011, 3 commits
-
-
Committed by Suresh Siddha
Change CONFIG_DMAR to CONFIG_INTEL_IOMMU to be consistent with the other IOMMU options. Rename CONFIG_INTR_REMAP to CONFIG_IRQ_REMAP to match the irq subsystem name. And define CONFIG_DMAR_TABLE for the common ACPI DMAR routines shared by both CONFIG_INTEL_IOMMU and CONFIG_IRQ_REMAP. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: yinghai@kernel.org Cc: youquan.song@intel.com Cc: joerg.roedel@amd.com Cc: tony.luck@intel.com Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/20110824001456.558630224@sbsiddha-desk.sc.intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Suresh Siddha
Move the IOMMU-specific routines to intel-iommu.c, leaving dmar.c to the common ACPI dmar code shared between DMA-remapping and Interrupt-remapping. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: yinghai@kernel.org Cc: youquan.song@intel.com Cc: joerg.roedel@amd.com Cc: tony.luck@intel.com Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/20110824001456.282401285@sbsiddha-desk.sc.intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Suresh Siddha
Both DMA-remapping as well as Interrupt-remapping depend on the dmar dev scope being initialized. When both DMA- and IRQ-remapping are enabled, we depend on the DMA-remapping init code to call dmar_dev_scope_init(). This resulted in this init not being done when DMA-remapping was turned off but interrupt-remapping was turned on in the kernel config. This caused interrupt routing to break with CONFIG_INTR_REMAP=y and CONFIG_DMAR=n. This issue was introduced by this commit: | commit 9d5ce73a | Author: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> | Date: Tue Nov 10 19:46:16 2009 +0900 | | x86: intel-iommu: Convert detect_intel_iommu to use iommu_init hook Fix this by calling dmar_dev_scope_init() explicitly from the interrupt remapping code too. Reported-by: Andrew Vasquez <andrew.vasquez@qlogic.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: yinghai@kernel.org Cc: youquan.song@intel.com Cc: joerg.roedel@amd.com Cc: tony.luck@intel.com Cc: dwmw2@infradead.org Link: http://lkml.kernel.org/r/20110824001456.229207526@sbsiddha-desk.sc.intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 September 2011, 1 commit
-
-
Committed by Thomas Gleixner
The iommu->register_lock can be taken in atomic context and therefore must not be preempted on -rt - annotate it. In mainline this change documents the low level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
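The annotation in question is the raw_spinlock_t conversion; a minimal sketch of the pattern (the struct below is an illustrative stand-in, not the full struct intel_iommu):

```c
#include <linux/spinlock.h>

/* The register lock is taken in atomic context, so it stays a real
 * (non-sleeping) spinlock even on PREEMPT_RT: raw_spinlock_t.
 */
struct intel_iommu_sketch {
    void __iomem   *reg;            /* MMIO register base */
    raw_spinlock_t  register_lock;  /* protects register accesses */
    /* ... */
};

static void poke_registers_sketch(struct intel_iommu_sketch *iommu)
{
    unsigned long flag;

    raw_spin_lock_irqsave(&iommu->register_lock, flag);
    /* readl()/writel() on iommu->reg and polling of status bits */
    raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
}
```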
-
- 21 June 2011, 1 commit
-
-
Committed by Ohad Ben-Cohen
This should ease finding similarities with different platforms, with the intention of solving problems once in a generic framework which everyone can use. Note: to move intel-iommu.c, the declaration of pci_find_upstream_pcie_bridge() has to move from drivers/pci/pci.h to include/linux/pci.h. This is handled in this patch, too. As suggested, also drop DMAR's EXPERIMENTAL tag while we're at it. Compile-tested on x86_64. Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
- 08 June 2011, 1 commit
-
-
Committed by Rafael J. Wysocki
If CONFIG_PM is not set, init_iommu_pm_ops() introduced by commit 134fac3f (PCI / Intel IOMMU: Use syscore_ops instead of sysdev class and sysdev) is not defined appropriately. Fix this issue. Reported-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
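A sketch of the usual pattern for this kind of fix, assuming the symbol names mentioned in the referenced commit: keep the real implementation under CONFIG_PM and provide an empty stub otherwise, so unconditional callers still compile.

```c
#ifdef CONFIG_PM
static void __init init_iommu_pm_ops(void)
{
    /* hook suspend/resume via syscore_ops, per commit 134fac3f */
    register_syscore_ops(&iommu_syscore_ops);
}
#else
static inline void init_iommu_pm_ops(void) {}
#endif  /* CONFIG_PM */
```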
-
- 01 June 2011, 7 commits
-
-
Committed by David Woodhouse
We were mapping an extra byte (and hence usually an extra page): iommu_prepare_identity_map() expects to be given an 'end' argument which is the last byte to be mapped; not the first byte *not* to be mapped. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by Mike Habeck
The comment in domain_remove_one_dev_info() states "No need to compare PCI domain; it has to be the same". But for the si_domain that isn't going to be true, as it consists of all the PCI devices that are identity mapped, thus multiple PCI domains can be in the si_domain. The code needs to validate the PCI domain too. Signed-off-by: Mike Habeck <habeck@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
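A minimal sketch of the stricter match, using the device_domain_info field names from intel-iommu.c:

```c
#include <linux/pci.h>

/* Match on segment (PCI domain) as well as bus/devfn, because the
 * si_domain can contain devices from more than one PCI domain.
 */
static bool info_matches_dev_sketch(struct device_domain_info *info,
                                    struct pci_dev *pdev)
{
    return info->segment == pci_domain_nr(pdev->bus) &&
           info->bus     == pdev->bus->number &&
           info->devfn   == pdev->devfn;
}
```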
-
Committed by Mike Travis
When using the 1:1 (identity) PCI DMA remapping, PCI Host Bridge devices that do not use the IOMMU cause a kernel panic. Fix that by not inserting those devices into the si_domain. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by Mike Travis
The __intel_map_single function is not honoring the passed-in DMA mask. This results in not using the coherent DMA mask when called from intel_alloc_coherent(). Signed-off-by: Mike Travis <travis@sgi.com> Acked-by: Chris Wright <chrisw@sous-sol.org> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by Mike Travis
When there is a large count of PCI devices and the pass-through option for the iommu is set, much time is spent in the identity_mapping function hunting through the iommu domains to check if a specific device is "identity mapped". Speed up the function by checking the cached info to see if it's mapped to the static identity domain. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by Chris Wright
The identity mapping code appears to make the assumption that if the device's dma_mask is greater than 32 bits, the device can use identity mapping. But that is not true: take the case where we have a 40-bit device in a 44-bit architecture. The device can potentially receive a physical address that it will truncate and cause incorrect addresses to be used. Instead, check to see if the device's dma_mask is large enough to address the system's dma_mask. Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Mike Habeck <habeck@sgi.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
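A hedged sketch of the check described above; dma_get_required_mask() reports the mask needed to address all of the platform's memory, which stands in here for "the system's dma_mask":

```c
#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* A device only qualifies for the identity (pass-through) map if its
 * DMA mask covers every address the platform may hand it; a 40-bit
 * device on a 44-bit system must fall back to translation.
 */
static int device_can_use_identity_map_sketch(struct pci_dev *pdev)
{
    u64 dma_mask = pdev->dma_mask;

    if (pdev->dev.coherent_dma_mask &&
        pdev->dev.coherent_dma_mask < dma_mask)
        dma_mask = pdev->dev.coherent_dma_mask;

    return dma_mask >= dma_get_required_mask(&pdev->dev);
}
```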
-
Committed by Alex Williamson
Commit a97590e5 added unlinking domains from iommus to reciprocate the iommu-from-domains unlinking that was already done. We actually want to only do this for device domains and never for the static identity map domain or VM domains. The SI domain is special and never freed, while VM domain->id lives in their own special address space, separate from iommu->domain_ids. In the current code, a VM can get domain->id zero, then mark that domain unused when unbound from pci-stub. This leads to DMAR write faults when the device is re-bound to the host driver. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Cc: stable@kernel.org Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
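A sketch of the guarded detach, using the domain flag names from intel-iommu.c of that period; locking beyond iommu->lock is omitted:

```c
#include <linux/spinlock.h>

/* Only plain device domains get detached from the iommu here; the
 * static identity domain is never freed and VM domain ids live in
 * their own space, not in iommu->domain_ids.
 */
static void detach_if_device_domain_sketch(struct intel_iommu *iommu,
                                           struct dmar_domain *domain)
{
    unsigned long flags;

    if (domain->flags & (DOMAIN_FLAG_VIRTUAL_MACHINE |
                         DOMAIN_FLAG_STATIC_IDENTITY))
        return;

    spin_lock_irqsave(&iommu->lock, flags);
    clear_bit(domain->id, iommu->domain_ids);
    iommu->domains[domain->id] = NULL;
    spin_unlock_irqrestore(&iommu->lock, flags);
}
```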
-