- 15 October 2019, 3 commits
-
-
Committed by Tom Murphy

Handle devices which defer their attach to the IOMMU in the dma-iommu API.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Tom Murphy

Add a gfp_t parameter to the iommu_ops::map function. Remove the needless locking in the AMD IOMMU driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/) and so should probably have had a "might_sleep()" since it was written. However, currently the dma-iommu API can call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems, because any IOMMU driver which uses the dma-iommu API uses GFP_ATOMIC in its iommu_ops::map function. But doing this wastes the memory allocator's atomic pools.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
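The shape of the interface change, as a minimal sketch (a trimmed-down struct, not the exact upstream definition in include/linux/iommu.h):

```c
#include <linux/types.h>
#include <linux/gfp.h>

struct iommu_domain;

/* Sketch only: a trimmed-down iommu_ops with the new gfp_t parameter. */
struct iommu_ops_sketch {
	/* Sleepable callers pass GFP_KERNEL, atomic callers GFP_ATOMIC,
	 * so a driver no longer has to allocate its page-table memory
	 * with GFP_ATOMIC unconditionally. */
	int (*map)(struct iommu_domain *domain, unsigned long iova,
		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
};
```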
-
Committed by Tom Murphy

With or without locking, it doesn't make sense for two writers to be writing to the same IOVA range at the same time. Even with locking we still have a race condition: whoever gets the lock first wins, so we still can't be sure what the result will be. With locking the result will be more sane (it will be correct for the last writer), but still useless, because we can't be sure which writer will get the lock last. It's a fundamentally broken design to have two writers writing to the same IOVA range at the same time. So we can remove the locking and work on the assumption that no two writers will be writing to the same IOVA range at the same time. The only exception is when we have to allocate a middle page in the page tables: the middle page can cover more than just the IOVA range a writer has been allocated. However, this isn't an issue in the AMD driver because it can atomically allocate middle pages using cmpxchg64().

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
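A minimal sketch of the lock-free middle-page installation (the helper and the PT_PRESENT_MASK flag are illustrative stand-ins; the real code lives in the AMD driver's alloc_pte()):

```c
#include <linux/gfp.h>

#define PT_PRESENT_MASK 0x1ULL	/* stand-in for the driver's present/level bits */

static int install_middle_page(u64 *pte)
{
	u64 *page = (u64 *)get_zeroed_page(GFP_ATOMIC);
	u64 new_entry;

	if (!page)
		return -ENOMEM;

	new_entry = virt_to_phys(page) | PT_PRESENT_MASK;

	/* Only one CPU can transition *pte from 0 to new_entry; a loser
	 * sees a non-zero old value and frees its redundant page. */
	if (cmpxchg64(pte, 0ULL, new_entry) != 0ULL)
		free_page((unsigned long)page);

	return 0;
}
```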
-
- 28 September 2019, 6 commits
-
-
Committed by Joerg Roedel

Traversing this list requires protection_domain->lock to be taken, to avoid nasty races with the attach/detach code. Make sure the lock is held on all code paths traversing this list.

Reported-by: Filippo Sironi <sironi@amazon.de>
Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
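The pattern in question, sketched (struct and list-member names follow the AMD IOMMU driver of that era and should be treated as illustrative):

```c
#include <linux/list.h>
#include <linux/spinlock.h>

/* Sketch: walk the domain's device list only while holding domain->lock,
 * so a concurrent attach/detach cannot corrupt the traversal. */
static void for_each_attached_device(struct protection_domain *domain,
				     void (*fn)(struct iommu_dev_data *))
{
	struct iommu_dev_data *dev_data;
	unsigned long flags;

	spin_lock_irqsave(&domain->lock, flags);
	list_for_each_entry(dev_data, &domain->dev_list, list)
		fn(dev_data);
	spin_unlock_irqrestore(&domain->lock, flags);
}
```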
-
Committed by Joerg Roedel

Make sure that attaching and detaching a device can't race against each other, and protect the iommu_dev_data with a spin_lock in these code paths.

Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel

Check early in attach_device() whether the device is already attached to a domain. This also simplifies the code path so that __attach_device() can be removed.

Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel

The code paths from which __attach_device() and __detach_device() are called also access and modify domain state, so take the domain lock there too. This allows us to get rid of the __detach_device() function.

Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel

The lock is not necessary because the device table does not contain shared state that needs protection. Locking is only needed on an individual-entry basis, and that needs to happen at the iommu_dev_data level.

Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel

This struct member was used to track whether a domain change requires updates to the device table and IOMMU cache flushes. The problem is that access to this field is racy, since locking in the common mapping code paths has been eliminated. Move the 'updated' field to the stack to get rid of all potential races, and remove the field from the struct.

Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
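Sketched with hypothetical helper names, the flag becomes a local variable:

```c
/* Sketch: track "device table/TLB need flushing" in a local instead of
 * shared struct state, so concurrent paths cannot race on it. */
static void domain_update_example(struct protection_domain *domain)
{
	bool updated = false;	/* was: domain->updated */

	updated |= maybe_increase_address_space(domain);	/* hypothetical */
	updated |= maybe_rebuild_page_tables(domain);		/* hypothetical */

	if (updated)
		flush_device_table_and_domain_tlb(domain);	/* hypothetical */
}
```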
-
- 24 September 2019, 5 commits
-
-
Committed by Filippo Sironi

To make sure the domain TLB flush completes before the function returns, explicitly wait for its completion.

Signed-off-by: Filippo Sironi <sironi@amazon.de>
Fixes: 42a49f96 ("amd-iommu: flush domain tlb when attaching a new device")
[joro: Added commit message and fixes tag]
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Andrei Dulea

When replacing a large mapping created with page-mode 7 (i.e. a non-default page size), tear down the entire series of replicated PTEs. Besides leaving the old mapping accessible, another thing that can go wrong is the fetch_pte() code path, which can return a PDE entry of the newly re-mapped range. While at it, make sure that we flush the TLB in case alloc_pte() fails and returns NULL at a lower level.

Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
Committed by Andrei Dulea

Given an arbitrary PTE that is part of a large mapping, this function returns the first PTE of the series (and optionally the mapped size and number of PTEs). It will be re-used in a subsequent patch to replace an existing L7 mapping.

Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
Committed by Andrei Dulea

Downgrading an existing large mapping to a mapping using smaller page sizes works only for mappings created with page-mode 7 (i.e. a non-default page size). Treat large mappings created with page-mode 0 (i.e. the default page size) like a non-present mapping and allow overwriting it in alloc_pte(). While at it, make sure that we flush the TLB only if we change an existing mapping, otherwise we might end up acting on garbage PTEs.

Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
Committed by Andrei Dulea

Take into account the gathered freelist in free_sub_pt(), otherwise we end up leaking all those pages.

Fixes: 409afa44 ("iommu/amd: Introduce free_sub_pt() function")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
- 14 September 2019, 1 commit
-
-
Committed by Uwe Kleine-König

Currently of_for_each_phandle ignores the cell_count parameter when a cells_name is given. I intend to change that and let the iterator fall back to a non-negative cell_count if the cells_name property is missing in the referenced node. To not change how existing of_for_each_phandle users iterate, fix them to pass cell_count = -1 when a cells_name is also given, which yields the expected behaviour both with and without my change.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
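For illustration, a call site fixed up in this style might look like the following sketch (the property names are hypothetical):

```c
#include <linux/of.h>

/* Sketch: iterate a phandle list; cell_count is -1 because a cells_name
 * ("#iommu-cells") is given, so the property in the referenced node
 * still decides the argument count. */
static void walk_iommus(struct device_node *np)
{
	struct of_phandle_iterator it;
	int err;

	of_for_each_phandle(&it, err, np, "iommus", "#iommu-cells", -1) {
		/* it.node is the currently referenced node */
		pr_info("found %pOF\n", it.node);
	}
}
```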
-
- 11 September 2019, 6 commits
-
-
Committed by Chris Wilson

Despite the widespread and complete failure of Broadwell integrated graphics when DMAR is enabled, known over the years, we have never been able to root-cause the issue. Instead, we let the failure undermine our confidence in the IOMMU system itself, when we should be pushing for it to be always enabled. Quirk away Broadwell and remove the rotten apple.

References: https://bugs.freedesktop.org/show_bug.cgi?id=89360
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Martin Peres <martin.peres@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Kyung Min Park

Intel VT-d specification revision 3 added support for Scalable Mode Translation for DMA remapping. Add the Scalable Mode fault reasons to show detailed fault reasons when a translation fault happens.

Link: https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf
Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Lu Baolu

The Intel VT-d hardware uses paging for DMA remapping. The minimum mapped window is a page size. Device drivers may map buffers that don't fill whole IOMMU pages. This allows the device to access possibly unrelated memory, and a malicious device could exploit this to perform DMA attacks. To address this, the Intel IOMMU driver will use bounce pages for those buffers which don't fill whole IOMMU pages.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Lu Baolu

This adds trace support for the Intel IOMMU driver. It also declares some events which can be used to trace when an IOVA is mapped or unmapped in a domain.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Lu Baolu

The bounce page implementation depends on swiotlb. Hence, don't switch off swiotlb if the system has untrusted devices, or if untrusted devices could potentially be hot-added.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Lu Baolu

This adds a helper to check whether a device needs to use the bounce buffer. It also provides a boot-time option to disable the bounce buffer. Users can use this to prevent the IOMMU driver from using the bounce buffer, for a performance gain.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
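The gist of such a helper, sketched (the helper name is hypothetical; pdev->untrusted is the flag the PCI core set for devices behind external-facing ports in kernels of that era):

```c
#include <linux/pci.h>

/* Sketch: only untrusted (externally-facing, e.g. Thunderbolt-attached)
 * devices need their small buffers bounced through whole IOMMU pages. */
static bool device_needs_bounce(struct device *dev)
{
	struct pci_dev *pdev;

	if (!dev_is_pci(dev))
		return false;

	pdev = to_pci_dev(dev);
	return pdev->untrusted;
}
```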
-
- 6 September 2019, 3 commits
-
-
Committed by Arnd Bergmann

The runtime_pm functions are unused when CONFIG_PM is disabled:

  drivers/iommu/omap-iommu.c:1022:12: error: unused function 'omap_iommu_runtime_suspend' [-Werror,-Wunused-function]
  static int omap_iommu_runtime_suspend(struct device *dev)
  drivers/iommu/omap-iommu.c:1064:12: error: unused function 'omap_iommu_runtime_resume' [-Werror,-Wunused-function]
  static int omap_iommu_runtime_resume(struct device *dev)

Mark them as __maybe_unused to let gcc silently drop them instead of warning.

Fixes: db8918f6 ("iommu/omap: streamline enable/disable through runtime pm callbacks")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
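The fix pattern, sketched with the function body elided:

```c
#include <linux/compiler.h>	/* __maybe_unused */
#include <linux/device.h>

/* With __maybe_unused the compiler silently discards the function when
 * CONFIG_PM is off and nothing references it, instead of warning. */
static int __maybe_unused omap_iommu_runtime_suspend(struct device *dev)
{
	/* ... suspend work elided ... */
	return 0;
}
```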
-
Committed by Joerg Roedel

After the conversion to lock-less dma-api calls, the increase_address_space() function can be called without any locking. Multiple CPUs could potentially race to increase the address space, leading to invalid domain->mode settings and invalid page tables. This has been happening in the wild under high IO load and memory pressure. Fix the race by locking this operation. The function is called infrequently, so this does not re-introduce a performance regression in the dma-api path.

Reported-by: Qian Cai <cai@lca.pw>
Fixes: 256e4621 ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
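The essence of such a fix, sketched (the re-check under the lock is the important part; struct and constant names follow the AMD driver of that era and are illustrative):

```c
#include <linux/spinlock.h>

/* Sketch: serialize address-space growth; racing CPUs re-check the
 * trigger condition under the lock so only one of them grows the tree. */
static void increase_address_space_sketch(struct protection_domain *domain)
{
	unsigned long flags;

	spin_lock_irqsave(&domain->lock, flags);

	if (domain->mode == PAGE_MODE_6_LEVEL)
		goto out;	/* already at maximum depth, or a racer won */

	/* ... allocate a new top-level table and bump domain->mode ... */
out:
	spin_unlock_irqrestore(&domain->lock, flags);
}
```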
-
Committed by Stuart Hayes

When devices are attached to the amd_iommu in a kdump kernel, the old device table entries (DTEs), which were copied from the crashed kernel, will be overwritten with a new domain number. When the new DTE is written, the IOMMU is told to flush the DTE from its internal cache, but it is not told to flush the translation cache entries for the old domain number. Without this patch, AMD systems using the tg3 network driver fail when kdump tries to save the vmcore to a network system, showing network timeouts and (sometimes) IOMMU errors in the kernel log. This patch flushes IOMMU translation cache entries for the old domain when a DTE gets overwritten with a new domain number.

Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com>
Fixes: 3ac3e5ee ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 5 September 2019, 2 commits
-
-
Committed by Hai Nguyen Pham

According to the Hardware Manual Errata for Rev. 1.50 of April 10, 2019, cache snoop transactions for page table walk requests are not supported on R-Car Gen3. Hence, this patch removes the setting of these fields in the IMTTBCR register, since it has no effect, and adds comments to the register bit definitions to make it clear they apply to R-Car Gen2 only.

Signed-off-by: Hai Nguyen Pham <hai.pham.ud@renesas.com>
[geert: Reword, add comments]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Geert Uytterhoeven

Move the recently added IMTTBCR_SL0_TWOBIT_* definitions up, to make sure all IMTTBCR register bit definitions are sorted by decreasing bit index. Add comments to make it clear that they exist on R-Car Gen3 only.

Fixes: c295f504 ("iommu/ipmmu-vmsa: Allow two bit SL0")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 4 September 2019, 3 commits
-
-
Committed by Christoph Hellwig

Add a helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere, instead of slightly different or missing ones in a few places.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig

Currently the generic DMA remap allocator gets a vm_flags value passed by the caller, which is a little confusing. We just introduced a generic vmalloc-level flag to identify DMA coherent allocations, so use that everywhere and remove the now-pointless argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig

While the default ->mmap and ->get_sgtable implementations work for the majority of our dma_map_ops implementations, they are inherently unsafe for others that don't use the page allocator or CMA and/or use their own way of remapping not covered by the common code. So remove the defaults if these methods are not wired up, but instead wire up the default implementations for all safe instances.

Fixes: e1c7e324 ("dma-mapping: always provide the dma_map_ops based implementation")
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 3 September 2019, 4 commits
-
-
Committed by Joerg Roedel

Switch to the generic function mem_encrypt_active(), because sme_active() is x86-specific and can't be called from generic code on platforms other than x86.

Fixes: 2cc13bb4 ("iommu: Disable passthrough mode when SME is active")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Jacob Pan

Global pages support is removed from the VT-d spec 3.0. Since the global pages (G) flag only affects first-level paging structures, and because DMA requests with PASID are only supported by VT-d spec 3.0 and onward, we can safely remove global pages support. For kernel shared virtual address IOTLB invalidation, PASID granularity and page-selective-within-PASID granularity will be used. There is no global granularity supported. Without this fix, IOTLB invalidation will cause an invalid descriptor error in the queued invalidation (QI) interface.

Fixes: 1c4f88b7 ("iommu/vt-d: Shared virtual address in scalable mode")
Reported-by: Sanjay K Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by YueHaibing

If CONFIG_PCI_ATS is not set, building fails:

  drivers/iommu/arm-smmu-v3.c: In function 'arm_smmu_ats_supported':
  drivers/iommu/arm-smmu-v3.c:2325:35: error: 'struct pci_dev' has no member named 'ats_cap'; did you mean 'msi_cap'?
    return !pdev->untrusted && pdev->ats_cap;
                                     ^~~~~~~

ats_cap should only be used when CONFIG_PCI_ATS is defined, so use an #ifdef block to guard this.

Fixes: bfff88ec ("iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
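An abbreviated sketch of the guard (the real function in arm-smmu-v3.c does more; only the #ifdef shape matters here):

```c
#include <linux/pci.h>

static bool ats_supported_sketch(struct pci_dev *pdev)
{
#ifdef CONFIG_PCI_ATS
	/* pdev->ats_cap only exists when CONFIG_PCI_ATS is defined */
	return !pdev->untrusted && pdev->ats_cap;
#else
	return false;
#endif
}
```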
-
Committed by Yoshihiro Shimoda

This patch adds a new dma_map_ops callback, get_merge_boundary(), to expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
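A sketch of what such a callback can compute, assuming the boundary is derived from the smallest page size the IOMMU domain supports (the exact upstream code may differ):

```c
#include <linux/bitops.h>
#include <linux/iommu.h>

/* Sketch: two segments can be merged when they abut at the granularity
 * of the smallest page size the IOMMU domain supports. */
static unsigned long dma_merge_boundary_sketch(struct iommu_domain *domain)
{
	return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
}
```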
-
- 30 August 2019, 7 commits
-
-
Committed by Gustavo A. R. Silva

One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements of that array. For example:

  struct qcom_iommu_dev {
    ...
    struct qcom_iommu_ctx *ctxs[0]; /* indexed by asid-1 */
  };

Make use of the struct_size() helper instead of an open-coded version, in order to avoid any potential type mistakes. So, replace the following form:

  sizeof(*qcom_iommu) + (max_asid * sizeof(qcom_iommu->ctxs[0]))

with:

  struct_size(qcom_iommu, ctxs, max_asid)

Also, notice that in this case the variable sz is not necessary, hence it is removed. This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
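Put together at a typical allocation site, the transformation looks like this sketch (devm_kzalloc and the surrounding call site are illustrative; the variable names come from the commit text):

```c
#include <linux/overflow.h>	/* struct_size() */

/* before: open-coded size calculation, prone to type mistakes */
qcom_iommu = devm_kzalloc(dev,
		sizeof(*qcom_iommu) + max_asid * sizeof(qcom_iommu->ctxs[0]),
		GFP_KERNEL);

/* after: struct_size() computes the same size and saturates on overflow */
qcom_iommu = devm_kzalloc(dev, struct_size(qcom_iommu, ctxs, max_asid),
			  GFP_KERNEL);
```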
-
Committed by Tom Murphy

These comments are wrong: request_default_domain_for_dev() doesn't just handle direct-mapped domains.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Lu Baolu

This reverts commit 55752949. Commit 55752949 ("iommu/vt-d: Avoid duplicated pci dma alias consideration") aimed to address a NULL pointer dereference that happened when a Thunderbolt device driver returned unexpectedly. Unfortunately, this change breaks a previous PCI quirk added by commit cc346a47 ("PCI: Add function 1 DMA alias quirk for Marvell devices"); as a result, devices like the Marvell 88SE9128 SATA controller don't work anymore. We will continue to try to find the real culprit mentioned in 55752949, but for now we should revert it to fix the current breakage.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204627
Cc: Stijn Tintel <stijn@linux-ipv6.be>
Cc: Petr Vandrovec <petr@vandrovec.name>
Reported-by: Stijn Tintel <stijn@linux-ipv6.be>
Reported-by: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Yunsheng Lin

The cookie is dereferenced before the NULL check in the function iommu_dma_init_domain(). This patch moves the dereference after the NULL check.

Fixes: fdbe574e ("iommu/dma: Allow MSI-only cookies")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
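The general shape of the fix, sketched (the field names are illustrative of the dma-iommu internals of that era):

```c
/* Sketch: dereference the cookie only after the NULL check. */
static int init_domain_sketch(struct iommu_domain *domain)
{
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	struct iova_domain *iovad;

	if (!cookie)
		return -EINVAL;

	iovad = &cookie->iovad;	/* previously done before the check above */
	/* ... initialize iovad ... */
	return 0;
}
```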
-
Committed by Yong Wu

Remove "struct mtk_smi_iommu" to simplify the code, since it has only one item in it right now.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Yong Wu

The "mediatek,larb-id" property has already been parsed in the MTK IOMMU driver. There is no need to parse it again in the SMI driver; this is only a code cleanup. This patch fits all the current mt2701, mt2712, mt7623, mt8173 and mt8183. After this patch, "mediatek,larb-id" is only needed for mt2712, which has two M4Us. On the other SoCs, we can get the larb-id from the M4U, in which the larbs in "mediatek,larbs" are always ordered. Correspondingly, the larb_nr in the "struct mtk_smi_iommu" can also be deleted.

CC: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Yong Wu

The register VLD_PA_RNG (0x118) was not backed up when 4GB-mode support was added for mt2712. This patch adds it.

Fixes: 30e2fccf ("iommu/mediatek: Enlarge the validate PA range for 4GB mode")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-