- 21 April 2016, 3 commits

Committed by Omer Peleg
Change flush_unmaps() to correctly pass iommu_flush_iotlb_psi() DMA addresses. (x86_64 mm and DMA have the same size for pages at the moment, but this usage improves consistency.)

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by Omer Peleg
The IOMMU's IOTLB invalidation is a costly process. When iommu mode is not set to "strict", it is done asynchronously. Current code amortizes the cost of invalidating IOTLB entries by batching all the invalidations in the system and performing a single global invalidation instead. The code queues pending invalidations in a global queue that is accessed under the global "async_umap_flush_lock" spinlock, which can result in significant spinlock contention.

This patch splits this deferred queue into multiple per-cpu deferred queues, and thus gets rid of the "async_umap_flush_lock" and its contention. To keep existing deferred invalidation behavior, it still invalidates the pending invalidations of all CPUs whenever a CPU reaches its watermark or a timeout occurs.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
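A rough standalone sketch of this batching scheme (hypothetical names and sizes; the driver's real structures differ, and the timeout path is elided): each CPU appends pending invalidations to its own queue under a per-queue lock, and hitting the per-CPU watermark triggers a flush of every CPU's pending entries.

    #include <pthread.h>

    #define NR_CPUS         8
    #define HIGH_WATER_MARK 250   /* pending entries per CPU before forcing a flush */

    struct flush_entry {
        unsigned long iova_pfn;   /* first page frame of the region to invalidate */
        unsigned long nrpages;    /* length of the region in pages */
    };

    struct flush_queue {
        pthread_mutex_t lock;     /* contended only by this CPU and the flush path */
        unsigned int pending;
        struct flush_entry entries[HIGH_WATER_MARK];
    };

    /* One queue per CPU replaces the single global queue that was
     * serialized by async_umap_flush_lock. */
    static struct flush_queue queues[NR_CPUS];

    static void init_queues(void)
    {
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
            pthread_mutex_init(&queues[cpu].lock, NULL);
    }

    static void flush_all_queues(void)
    {
        /* Issue one global IOTLB invalidation, then drain every CPU's
         * queue and release the IOVA ranges recorded in its entries. */
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
            pthread_mutex_lock(&queues[cpu].lock);
            queues[cpu].pending = 0;
            pthread_mutex_unlock(&queues[cpu].lock);
        }
    }

    static void defer_invalidation(int cpu, unsigned long iova_pfn,
                                   unsigned long nrpages)
    {
        struct flush_queue *q = &queues[cpu];
        int full;

        pthread_mutex_lock(&q->lock);
        q->entries[q->pending++] = (struct flush_entry){ iova_pfn, nrpages };
        full = (q->pending == HIGH_WATER_MARK);
        pthread_mutex_unlock(&q->lock);

        if (full)                 /* watermark reached: flush all CPUs, */
            flush_all_queues();   /* preserving the existing behavior   */
    }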

Committed by Omer Peleg
Currently, deferred flushes' info is striped between several lists in the flush tables. Instead, move all information about a specific flush to a single entry in this table. This patch does not introduce any functional change.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
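Conceptually (the field names below are illustrative, not the driver's exact ones), the refactoring turns parallel arrays indexed by the same slot into one array of per-flush structs:

    struct dmar_domain;               /* opaque driver types, declared for the sketch */
    struct page;

    /* Before: one deferred flush's state striped across parallel arrays. */
    struct flush_table_old {
        struct dmar_domain *domain[256];
        unsigned long       iova_pfn[256];
        struct page        *freelist[256];
    };

    /* After: everything about one deferred flush lives in a single entry. */
    struct flush_entry {
        struct dmar_domain *domain;
        unsigned long       iova_pfn;
        struct page        *freelist;
    };

    struct flush_table_new {
        struct flush_entry entries[256];
    };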

- 07 April 2016, 1 commit

Committed by Dan Carpenter
My static checker complains that "dma_alias" is uninitialized unless we are dealing with a PCI device. This is true but harmless. Anyway, we can flip the condition around to silence the warning.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
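The pattern at play, as a minimal standalone sketch (made-up names, not the actual patch hunk): the variable is only written in the PCI branch, so testing dev_is_pci() first lets short-circuit evaluation guarantee the read is never reached uninitialized.

    #include <stdbool.h>

    struct device { bool is_pci; };

    static bool dev_is_pci(const struct device *dev) { return dev->is_pci; }

    static bool request_matches(const struct device *dev, unsigned int req_id)
    {
        unsigned int dma_alias;      /* written only for PCI devices */

        if (dev_is_pci(dev))
            dma_alias = req_id;      /* stand-in for the real alias lookup */

        /* before the flip (sketch): req_id == dma_alias || !dev_is_pci(dev)
         * flipped so dma_alias is only read when it was assigned: */
        return !dev_is_pci(dev) || req_id == dma_alias;
    }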

- 01 March 2016, 1 commit

Committed by Joerg Roedel
In the PCI hotplug path of the Intel IOMMU driver, replace the usage of the BUS_NOTIFY_DEL_DEVICE notifier, which is executed before the driver is unbound from the device, with BUS_NOTIFY_REMOVED_DEVICE, which runs after that. This fixes a kernel BUG being triggered in the VT-d code when the device driver tries to unmap DMA buffers and the VT-d driver already destroyed all mappings.

Reported-by: Stefani Seibold <stefani@seibold.net>
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
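A sketch of the notifier shape (simplified; remove_device_mappings() is a hypothetical stand-in for the driver's teardown): reacting only to BUS_NOTIFY_REMOVED_DEVICE means teardown runs after the driver is unbound, so it can no longer race with in-flight unmaps.

    #include <linux/device.h>
    #include <linux/notifier.h>

    static void remove_device_mappings(struct device *dev);  /* hypothetical */

    static int iommu_bus_notifier(struct notifier_block *nb,
                                  unsigned long action, void *data)
    {
        struct device *dev = data;

        /* was: if (action != BUS_NOTIFY_DEL_DEVICE) return 0; */
        if (action != BUS_NOTIFY_REMOVED_DEVICE)
            return 0;

        remove_device_mappings(dev);  /* the driver is already unbound here */
        return 0;
    }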

- 29 January 2016, 1 commit

Committed by Jeremy McNicoll
Fix a simple typo when disabling IOTLB on PCI(e) devices.

Fixes: b16d0cb9 ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
Cc: stable@vger.kernel.org # v4.4
Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 16 December 2015, 1 commit

Committed by Dan Williams
commit db0fa0cb "scatterlist: use sg_phys()" did replacements of the form:

    phys_addr_t phys = page_to_phys(sg_page(s));
    phys_addr_t phys = sg_phys(s) & PAGE_MASK;

However, this breaks platforms where sizeof(phys_addr_t) > sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a combined helper in 4.5.

Cc: <stable@vger.kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Fixes: db0fa0cb ("scatterlist: use sg_phys()")
Suggested-by: Joerg Roedel <joro@8bytes.org>
Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
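The breakage is plain integer truncation: PAGE_MASK has the type of unsigned long, so on a 32-bit kernel with 64-bit physical addresses (e.g. ARM LPAE) the AND zero-extends a 32-bit mask and wipes out the high address bits. A standalone illustration with synthetic types and values:

    #include <inttypes.h>
    #include <stdio.h>

    /* Model a 32-bit kernel with 64-bit physical addresses: there,
     * "unsigned long" is 32 bits wide, and so is PAGE_MASK. */
    typedef uint64_t phys_addr_t;
    typedef uint32_t kernel_ulong;                 /* 32-bit unsigned long */

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((kernel_ulong)~(kernel_ulong)((1u << PAGE_SHIFT) - 1))

    int main(void)
    {
        phys_addr_t phys = 0x100002000ULL;         /* a page above the 4 GiB line */

        /* sg_phys(s) & PAGE_MASK: the 32-bit mask zero-extends to 64 bits,
         * clearing bits 32..63 of the physical address. */
        phys_addr_t masked = phys & PAGE_MASK;

        printf("before: 0x%" PRIx64 "  after: 0x%" PRIx64 "\n",
               (uint64_t)phys, (uint64_t)masked);  /* 0x100002000 -> 0x2000 */
        return 0;
    }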

- 07 November 2015, 1 commit

Committed by Mel Gorman
mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd

__GFP_WAIT has been used to identify atomic context in callers that hold spinlocks or are in interrupts. They are expected to be high priority and have access to one of two watermarks lower than "min", which can be referred to as the "atomic reserve". __GFP_HIGH users get access to the first lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options were available. Some have abused __GFP_WAIT, leading to a situation where an optimistic allocation with a fallback option can access atomic reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic, cannot sleep and have no alternative. High priority users continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for background reclaim. __GFP_WAIT is redefined as a caller that is willing to enter direct reclaim and wake kswapd for background reclaim.

This patch then converts a number of sites:

o __GFP_ATOMIC is used by callers that are high priority and have memory pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall into this category where kswapd will still be woken but atomic reserves are not used as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the helper gfpflags_allow_blocking() where possible (see the sketch after this entry). This is because checking for __GFP_WAIT as was done historically now can trigger false positives. Some exceptions like dm-crypt.c exist where the code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT and were depending on access to atomic reserves for inconspicuous reasons. In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of GFP flags instead of starting with something like GFP_KERNEL. They may now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless if it's missed in most cases as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
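A hedged sketch of these conversion rules in caller code (the function around them is made up; gfpflags_allow_blocking(), GFP_NOWAIT and __GFP_KSWAPD_RECLAIM are the names this patch itself introduces or relies on):

    #include <linux/gfp.h>
    #include <linux/slab.h>

    static void *alloc_example(gfp_t gfp)
    {
        /* Historically: if (!(gfp & __GFP_WAIT)) ... which is now
         * prone to false positives, so use the helper instead. */
        if (!gfpflags_allow_blocking(gfp))
            return kmalloc(64, GFP_NOWAIT);

        /* Hand-built flag combinations must now say explicitly that
         * kswapd should still be woken for background reclaim. */
        return kmalloc(64, __GFP_KSWAPD_RECLAIM | __GFP_NOWARN);
    }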

- 25 October 2015, 1 commit

Committed by David Woodhouse
When booted with intel_iommu=ecs_off we were still allocating the PASID tables even though we couldn't actually use them. We really want to make the pasid_enabled() macro depend on ecs_enabled(). Which is unfortunate, because currently they're the other way round to cope with the Broadwell/Skylake problems with ECS.

Instead of having ecs_enabled() depend on pasid_enabled(), which was never something that made me happy anyway, make it depend in the normal case on the "broken PASID" bit 28 *not* being set. Then pasid_enabled() can depend on ecs_enabled() as it should. And we also don't need to mess with it if we ever see an implementation that has some features requiring ECS (like PRI) but which *doesn't* have PASID support.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
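Roughly, the dependency flips like this (a sketch with approximate macro bodies; ecap_broken_pasid() and the intel_iommu_* globals are stand-ins, and only the bit-28 detail comes from the text above):

    /* before (sketch):
     *   #define ecs_enabled(iommu)   (intel_iommu_ecs && pasid_enabled(iommu))
     */

    /* after (sketch): ECS is gated on the "broken PASID" capability
     * bit 28 NOT being set, and PASID support then requires ECS. */
    #define ecap_broken_pasid(e)     (((e) >> 28) & 1ULL)   /* assumed layout */
    #define ecs_enabled(iommu)       (intel_iommu_ecs && \
                                      !ecap_broken_pasid((iommu)->ecap))
    #define pasid_enabled(iommu)     (intel_iommu_pasid && ecs_enabled(iommu))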

- 22 October 2015, 1 commit

Committed by Joerg Roedel
Set the device_group call-back to pci_device_group() for the Intel VT-d and the AMD IOMMU driver.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
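In iommu_ops terms the change amounts to wiring in the generic PCI grouping helper (a sketch; the surrounding callbacks are elided):

    static const struct iommu_ops intel_iommu_ops = {
        /* ... domain alloc/free, map/unmap, attach_dev, ... */
        .device_group = pci_device_group,   /* generic PCI IOMMU grouping */
    };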

- 19 October 2015, 1 commit

Committed by Sudeep Dutt
This will give a little bit of assistance to those developing drivers using SVM. It might cause a slight annoyance to end-users whose kernel disables the IOMMU when drivers are trying to use it. But the fix there is to fix the kernel to enable the IOMMU.

Signed-off-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

- 15 October 2015, 7 commits

Committed by David Woodhouse
Largely based on the driver-mode implementation by Jesse Barnes.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by David Woodhouse
This provides basic PASID support for endpoint devices, tested with a version of the i915 driver.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by David Woodhouse
The behaviour if you enable PASID support after ATS is undefined. So we have to enable it first, even if we don't know whether we'll need it. This is safe enough; unless we set up a context that permits it, the device can't actually *do* anything with it. Also shift the feature detection to dmar_insert_one_dev_info() as it only needs to happen once.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
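The resulting enable order looks roughly like this (a sketch; pci_enable_pasid(), pci_enable_pri() and pci_enable_ats() are the real PCI helpers, while the wrapper and its argument values are illustrative):

    #include <linux/pci.h>
    #include <linux/pci-ats.h>

    static void enable_dev_iommu_features(struct pci_dev *pdev)
    {
        /* PASID/PRI first: harmless until a context actually permits
         * their use, and enabling PASID after ATS is undefined. */
        if (pci_enable_pasid(pdev, 0))
            return;
        pci_enable_pri(pdev, 32);          /* 32 outstanding page requests */

        pci_enable_ats(pdev, PAGE_SHIFT);  /* only after PASID and PRI */
    }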

Committed by David Woodhouse
Add CONFIG_INTEL_IOMMU_SVM, and allocate PASID tables on supported hardware.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by David Woodhouse
As long as we use an identity mapping to work around the worst of the hardware bugs which caused us to defeature it and change the definition of the capability bit, we *can* use PASID support on the devices which advertised it in bit 28 of the Extended Capability Register. Allow people to do so with 'intel_iommu=pasid28' on the command line.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

Committed by David Woodhouse
The VT-d specification says that "Software must enable ATS on endpoint devices behind a Root Port only if the Root Port is reported as supporting ATS transactions." We walk up the tree to find a Root Port, but for integrated devices we don't find one; we get to the host bridge. In that case we *should* allow ATS. Currently we don't, which means that we are incorrectly failing to use ATS for the integrated graphics. Fix that.

We should never break out of this loop "naturally" with bus==NULL, since we'll always find bridge==NULL in that case (and now return 1). So remove the check for (!bridge) after the loop, since it can never happen. If it did, it would be worthy of a BUG_ON(!bridge). But since it'll oops anyway in that case, that'll do just as well.

Cc: stable@vger.kernel.org
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
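The shape of the fixed walk, as a sketch (simplified; device_ats_allowed() and root_port_supports_ats() are illustrative names, while pci_is_pcie(), pci_pcie_type() and PCI_EXP_TYPE_ROOT_PORT are the real PCI interfaces):

    #include <linux/pci.h>

    static int root_port_supports_ats(struct pci_dev *bridge);  /* illustrative */

    static int device_ats_allowed(struct pci_dev *dev)
    {
        struct pci_dev *bridge = NULL;
        struct pci_bus *bus;

        for (bus = dev->bus; bus; bus = bus->parent) {
            bridge = bus->self;
            if (!bridge)        /* host bridge reached: integrated device, */
                return 1;       /* so ATS is allowed                       */
            if (pci_is_pcie(bridge) &&
                pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
                break;
        }

        /* Found a real Root Port: honour what it reports for ATS. */
        return root_port_supports_ats(bridge);
    }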

- 14 October 2015, 2 commits

Committed by Dan Williams
In preparation for deprecating ioremap_cache(), convert its usage in intel-iommu to memremap. This also eliminates the mishandling of the __iomem annotation in the implementation.

Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
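The flavor of the conversion (a sketch, not the driver's exact hunk; memremap() with MEMREMAP_WB is the real interface): memremap() hands back a plain void *, so the __iomem annotation and the casts it forced simply disappear.

    #include <linux/io.h>

    static void *map_table(phys_addr_t phys, size_t size)
    {
        /* was: void __iomem *p = ioremap_cache(phys, size);
         *      followed by casts to strip the __iomem annotation */
        return memremap(phys, size, MEMREMAP_WB);  /* cacheable mapping */
    }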

Committed by Christian Zander
In preparation for the installation of a large page, any small page tables that may still exist in the target IOV address range are removed. However, if a scatter/gather list entry is large enough to fit more than one large page, the address space for any subsequent large pages is not cleared of conflicting small page tables.

This can cause legitimate mapping requests to fail with errors of the form below, potentially followed by a series of IOMMU faults:

    ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)

In this example, a 4MiB scatter/gather list entry resulted in the successful installation of a large page @ vPFN 0xfdc00, followed by a failed attempt to install another large page @ vPFN 0xfde00, due to the presence of a pointer to a small page table @ 0x7f83a4000.

To address this problem, compute the number of large pages that fit into a given scatter/gather list entry, and use it to derive the last vPFN covered by the large page(s).

Cc: stable@vger.kernel.org
Signed-off-by: Christian Zander <christian@nervanasys.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
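The arithmetic of the fix, sketched with approximate names (sg_res is the entry's length in small pages, lvl_pages the small pages per large page). For the 4 MiB example above, sg_res = 1024 and lvl_pages = 512 give two large pages, so the clearing now covers both 0xfdc00 and 0xfde00:

    static unsigned long superpage_end_pfn(unsigned long iov_pfn,
                                           unsigned long sg_res,    /* small pages in sg entry */
                                           unsigned long lvl_pages) /* small pages per large page */
    {
        unsigned long nr_superpages = sg_res / lvl_pages;

        /* last vPFN covered by all large pages that fit in this entry;
         * small page tables must be cleared up to and including it */
        return iov_pfn + nr_superpages * lvl_pages - 1;
    }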

- 05 October 2015, 2 commits

Committed by Joerg Roedel
Currently the RMRR entries are created only at boot time. This means they will vanish when the domain allocated at boot time is destroyed. This patch makes sure that also newly allocated domains will get RMRR mappings.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
Split out the part of the function that fetches the domain, and put the rest into domain_prepare_identity_map, so that the code can also be used when the domain is already known.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 04 October 2015, 1 commit

Committed by Sakari Ailus
This is necessary to separate intel-iommu from the iova library.

Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 29 September 2015, 1 commit

Committed by Sudip Mukherjee
We return NULL if we are unable to attach the iommu to the domain, but on that return path we missed freeing 'info'.

Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
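The shape of the fix, as a sketch (the surrounding function and the attach helper are hypothetical; only the kfree() on the failure path is the point):

    static struct device_domain_info *
    insert_dev_info(struct intel_iommu *iommu, struct dmar_domain *domain)
    {
        struct device_domain_info *info = kzalloc(sizeof(*info), GFP_KERNEL);

        if (!info)
            return NULL;

        if (domain_attach_iommu(domain, iommu)) {
            kfree(info);    /* the fix: don't leak info when attach fails */
            return NULL;
        }

        return info;
    }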

- 25 August 2015, 1 commit

Committed by Joerg Roedel
There is a bug in iommu_context_addr() which will always use the lower context table, even when the upper context table needs to be used. Fix this issue.

Fixes: 03ecc32c ("iommu/vt-d: support extended root and context entries")
Reported-by: Xiao, Nan <nan.xiao@hp.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
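For context, a simplified sketch of extended root-entry addressing (not the driver's exact code or field layout): in extended mode each root entry addresses two context tables, and device functions 0x80 and above must be looked up through the upper half.

    #include <stdint.h>

    struct root_entry { uint64_t lo, hi; };   /* simplified layout */

    static uint64_t context_table_ptr(const struct root_entry *root,
                                      unsigned int devfn, int extended)
    {
        if (extended && devfn >= 0x80)
            return root->hi;   /* upper context table: the path the bug missed */
        return root->lo;       /* lower context table */
    }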

- 17 August 2015, 1 commit

Committed by Dan Williams
Coccinelle cleanup to replace open coded sg to physical address translations. This is in preparation for introducing scatterlists that reference __pfn_t.

    // sg_phys.cocci: convert usage page_to_phys(sg_page(sg)) to sg_phys(sg)
    // usage: make coccicheck COCCI=sg_phys.cocci MODE=patch

    virtual patch

    @@
    struct scatterlist *sg;
    @@

    - page_to_phys(sg_page(sg)) + sg->offset
    + sg_phys(sg)

    @@
    struct scatterlist *sg;
    @@

    - page_to_phys(sg_page(sg))
    + sg_phys(sg) & PAGE_MASK

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>

- 14 August 2015, 3 commits

Committed by Joerg Roedel
This fixes wrong accesses to iomem introduced by the kdump fixing code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
These functions are only used in that file and can be static.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
Found by a coccicheck script.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 12 August 2015, 11 commits

Committed by Joerg Roedel
When a 'struct device_domain_info' is created as an alias for another device, this struct will not be re-used when the real device is encountered. Fix that to avoid duplicate device_domain_info structures being added.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
For devices without a PCI alias there will be two device_domain_info structures added. Prevent that by checking if the alias is different from the device.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
This struct contains all necessary information for the function already. Also handle the info->dev == NULL case while at it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
The code in the locked section does not touch anything protected by the dmar_global_lock. Remove it from there.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
When this lock is held the device_domain_lock is also required to make sure the device_domain_info does not vanish while in use. So this lock can be removed as it gives no additional protection.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
There is no need to distinguish between VM and non-VM domains here, so simplify this code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
Move the code to attach/detach domains to iommus and vice versa into a single function to make sure there are no dangling references.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
This makes domain attachment more synchronous with domain detachment. The domain<->iommu link is released in dmar_remove_one_dev_info.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
This allows domain->iommu attachment to be done after domain_init has run.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
Rename this function and the ones further down its call-chain to domain_context_clear_*. In particular this means:

    iommu_detach_dependent_devices -> domain_context_clear
    iommu_detach_dev_cb            -> domain_context_clear_one_cb
    iommu_detach_dev               -> domain_context_clear_one

These names match a lot better with their domain_context_mapping counterparts.

Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Joerg Roedel
Rename the function to dmar_remove_one_dev_info to match its name better with its dmar_insert_one_dev_info counterpart.

Signed-off-by: Joerg Roedel <jroedel@suse.de>