- 24 July 2020, 7 commits
-
-
Submitted by Lu Baolu

It is refactored in two ways:

 - Make it global so that it can be used in other files.
 - Make bus/devfn optional so that callers can ignore these two returned values when they only want to get the corresponding iommu pointer.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
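A minimal sketch of the optional bus/devfn out-parameters described above; the internal lookup is reduced to a hypothetical helper and the body is illustrative, not the upstream implementation:

  /* Illustrative only: callers that just need the iommu pointer may
   * pass NULL for bus/devfn. */
  struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
  {
          struct intel_iommu *iommu;
          u8 b = 0, d = 0;

          iommu = lookup_iommu_for_dev(dev, &b, &d);  /* hypothetical helper */
          if (!iommu)
                  return NULL;

          if (bus)
                  *bus = b;
          if (devfn)
                  *devfn = d;

          return iommu;
  }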
-
Submitted by Jacob Pan

For the unlikely use case where multiple aux domains from the same pdev are attached to a single guest and then bound to a single process (and thus the same PASID) within that guest, we cannot easily support this case by refcounting the number of users: there is only one second-level (SL) page table per PASID, while multiple aux domains mean multiple SL page tables for the same PASID. Extra unbinding of a guest PASID can happen due to races between normal and exception paths. Termination of one aux domain may affect others unless we actively track and switch aux domains to ensure the validity of SL page tables and TLB states in the shared PASID entry. Support for sharing second-level PGDs across domains would reduce the complexity, but it is not available due to the limitations of the VFIO container architecture; we can revisit this decision once PGD sharing becomes available. Overall, the complexity and potential for glitches do not warrant this unlikely use case, so this patch removes it.

Fixes: 56722a43 ("iommu/vt-d: Add bind guest PASID support")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jacob Pan

For guest-requested IOTLB invalidation, address and mask are provided as part of the invalidation data. VT-d HW silently ignores any address bits below the mask. SW shall also allow such a case, but warn if the address does not align with the mask. This patch relaxes the fault handling from error to warning and proceeds with the invalidation request using the given mask.

Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
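A hedged sketch of the relaxed check; the function name and the exact granule computation are illustrative:

  /* Warn (rather than fail) when the address is not aligned to the
   * granule implied by size_order; HW ignores the low bits anyway. */
  static void warn_unaligned_inv_addr(u64 addr, unsigned int size_order)
  {
          if (!IS_ALIGNED(addr, VTD_PAGE_SIZE << size_order))
                  pr_warn_ratelimited("Invalidation address 0x%llx not aligned to the mask, low bits ignored\n",
                                      (unsigned long long)addr);
          /* ...continue and submit the invalidation with the given mask... */
  }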
-
Submitted by Liu Yi L

For guest SVA usage, in order to reduce VMEXITs, a guest IOTLB flush request also covers the device TLB. On the host side, the IOMMU driver performs both the IOTLB and the implicit devTLB invalidation. When PASID-selective granularity is requested by the guest, we need to derive the equivalent address range for the devTLB instead of using the address information in the UAPI data, because, unlike an IOTLB flush, a devTLB flush does not support PASID-selective granularity. That is, we need to set the following in the PASID-based devTLB invalidation descriptor (see the sketch below):

 - the entire 64-bit range in the address field, ~(0x1 << 63)
 - S bit = 1 (VT-d spec ch. 6.5.2.6)

Without this fix, the device TLB flush range is not set properly for PASID-selective granularity. This patch also merges the devTLB flush code for the implicit and explicit cases.

Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
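The descriptor values quoted above, shown as a hedged sketch; the struct here is a simplified stand-in, not the real queued-invalidation descriptor layout:

  struct devtlb_inv_desc {                /* simplified, illustrative layout */
          u64  addr;
          bool size_bit;                  /* the "S" bit */
  };

  static void devtlb_desc_cover_full_range(struct devtlb_inv_desc *desc)
  {
          desc->addr = ~(0x1ULL << 63);   /* entire 64-bit range */
          desc->size_bit = true;          /* S = 1, VT-d spec 6.5.2.6 */
  }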
-
Submitted by Liu Yi L

Address information for device TLB invalidation comes from userspace when a device is directly assigned to a guest with vIOMMU support. VT-d requires a page-aligned address. This patch checks and enforces that the address is page aligned; otherwise reserved bits can end up set in the invalidation descriptor, and an unrecoverable fault will be reported due to the non-zero value in the reserved bits.

Fixes: 61a06a16 ("iommu/vt-d: Support flushing more translation cache types")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
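A minimal sketch of the added alignment check (the helper name is illustrative):

  static int check_user_devtlb_addr(u64 addr)
  {
          /* Unaligned addresses would set reserved descriptor bits and
           * trigger an unrecoverable fault, so reject them up front. */
          if (!IS_ALIGNED(addr, VTD_PAGE_SIZE))
                  return -EINVAL;
          return 0;
  }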
-
Submitted by Jacob Pan

A devTLB flush can be used for DMA requests both with and without PASIDs: requests without a PASID use PASID#0 (RID2PASID), while SVA usage relies on non-zero PASIDs. This patch adds a check of the PASID value such that the devTLB flush with PASID is only used for the SVA case. This is more efficient because multiple PASIDs can be used by a single device, and when tearing down a PASID entry we should flush only the devTLB entries specific to that PASID.

Fixes: 6f7db75e ("iommu/vt-d: Add second level page table")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
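An illustrative sketch of the PASID check; the two flush helpers are hypothetical names for the non-PASID and PASID-based devTLB flush paths:

  static void flush_dev_tlb(struct device_domain_info *info, u32 pasid)
  {
          if (pasid == PASID_RID2PASID)
                  /* DMA requests without PASID */
                  devtlb_flush_no_pasid(info);          /* hypothetical */
          else
                  /* SVA: flush only entries tagged with this PASID */
                  devtlb_flush_pasid(info, pasid);      /* hypothetical */
  }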
-
Submitted by Jacob Pan

Global-page support was removed from the VT-d 3.0 spec for devTLB invalidation. This patch removes the corresponding bits for vSVA; a similar change was already made for native SVA (see the first link below).

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/linux-iommu/20190830142919.GE11578@8bytes.org/T/
Link: https://lore.kernel.org/r/20200724014925.15523-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 23 June 2020, 6 commits
-
-
Submitted by Lu Baolu

The iommu_domain_identity_map() helper takes a start/end PFN as arguments. Fix a misuse where the start and end addresses were passed instead.

Fixes: e70b081c ("iommu/vt-d: Remove IOVA handling code from the non-dma_ops path")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Tom Murphy <murphyt7@tcd.ie>
Link: https://lore.kernel.org/r/20200622231345.29722-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
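A hedged call-site fragment illustrating the address-vs-PFN mix-up; the variable names are placeholders and the plain shift stands in for whatever PFN conversion the driver actually uses:

  /* before (wrong): byte addresses passed straight through */
  ret = iommu_domain_identity_map(si_domain, start_addr, end_addr);

  /* after (fixed): convert to page frame numbers first */
  ret = iommu_domain_identity_map(si_domain,
                                  start_addr >> PAGE_SHIFT,
                                  end_addr >> PAGE_SHIFT);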
-
Submitted by Lu Baolu

The Scalable-mode Page-walk Coherency (SMPWC) field in the VT-d extended capability register indicates the hardware coherency behavior for paging structures accessed through a PASID table entry. The current code ignores it and uses ECAP.C instead, which is only valid in legacy mode. Fix this so that paging-structure updates are manually flushed from the cache lines when the hardware page walk is not snooped.

Fixes: 765b6a98 ("iommu/vt-d: Enumerate the scalable mode capability")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
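A small sketch of the intended capability selection; how "scalable mode in use" is determined is simplified to a boolean parameter here:

  static bool paging_structures_coherent(struct intel_iommu *iommu, bool scalable_mode)
  {
          /* ECAP.SMPWC covers walks through PASID table entries;
           * ECAP.C is only meaningful in legacy mode. */
          return scalable_mode ? ecap_smpwc(iommu->ecap)
                               : ecap_coherent(iommu->ecap);
  }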
-
Submitted by Lu Baolu

PCI ACS is disabled if the Intel IOMMU is off by default or intel_iommu=off is used on the command line. Unfortunately, the Intel IOMMU will be forced on if there are devices sitting on an external-facing PCI port that is marked as untrusted (for example, Thunderbolt peripherals). That means PCI ACS stays disabled while the Intel IOMMU is forced on to isolate those devices. As a result, the functions of a multi-function device (MFD) end up in a single IOMMU group even though ACS is supported on the device:

  [ 0.691263] pci 0000:00:07.1: Adding to iommu group 3
  [ 0.691277] pci 0000:00:07.2: Adding to iommu group 3
  [ 0.691292] pci 0000:00:07.3: Adding to iommu group 3

Fix it by requesting PCI ACS when the Intel IOMMU is detected with the platform opt-in hint.

Fixes: 89a6079d ("iommu/vt-d: Force IOMMU on for platform opt in hint")
Co-developed-by: Lalithambika Krishnakumar <lalithambika.krishnakumar@intel.com>
Signed-off-by: Lalithambika Krishnakumar <lalithambika.krishnakumar@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Rajat Jain

Currently, an external malicious PCI device can masquerade as the VID:PID of a faulty gfx device and thereby have IOMMU quirks applied to it, effectively disabling the IOMMU restrictions for itself. We therefore need to ensure that the device we are applying quirks to is indeed an internal, trusted device.

Signed-off-by: Rajat Jain <rajatja@google.com>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
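A hedged sketch of the guard added to such quirks; the quirk name is made up and the quirk body itself is elided:

  static void quirk_example_gfx(struct pci_dev *pdev)
  {
          /* Only internal, trusted devices get the relaxed handling. */
          if (pdev->untrusted) {
                  pci_info(pdev, "skipping quirk for external/untrusted device\n");
                  return;
          }
          /* ...apply the VID:PID-matched graphics quirk here... */
  }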
-
Submitted by Lu Baolu

When using first-level translation for IOVA, the U/S bit in the page table is currently cleared, which means DMA requests with user privilege are blocked. As a result, error messages like the following might be observed when passing a device through to user level:

  DMAR: DRHD: handling fault status reg 3
  DMAR: [DMA Read] Request device [41:00.0] PASID 1 fault addr 7ecdcd000 [fault reason 129] SM: U/S set 0 for first-level translation with user privilege

Fix this by setting the U/S bit in the first-level page table, which makes IOVA over first level compatible with the previous second-level translation.

Fixes: b802d070 ("iommu/vt-d: Use iova over first level")
Reported-by: Xin Zeng <xin.zeng@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
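A sketch of the relevant PTE bits, assuming the DMA_FL_PTE_* flag names; the full PTE composition is simplified:

  static u64 first_level_pte_base_flags(void)
  {
          /* U/S = 1 so requests with user privilege are no longer
           * blocked, matching second-level translation behaviour. */
          return DMA_FL_PTE_PRESENT | DMA_FL_PTE_US;
  }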
-
Submitted by Lu Baolu

Current Intel SVM works by setting the pgd_t of the processor page table into the FLPTR field of the PASID entry. First-level translation only supports 4- and 5-level paging structures, so it is infeasible for the IOMMU to share a processor's page table when the processor is running in 32-bit mode. Disable 32-bit support for now and claim support only once all the missing pieces are ready in the future.

Fixes: 1c4f88b7 ("iommu/vt-d: Shared virtual address in scalable mode")
Suggested-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 14 June 2020, 1 commit
-
-
Submitted by Masahiro Yamada

Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over '---help---'"), the number of '---help---' instances has been gradually decreasing, but there are still more than 2400 of them. This commit finishes the conversion. While touching the lines, I also fixed the indentation. A variety of indentation styles were found:

 a) 4 spaces + '---help---'
 b) 7 spaces + '---help---'
 c) 8 spaces + '---help---'
 d) 1 space + 1 tab + '---help---'
 e) 1 tab + '---help---'    (correct indentation)
 f) 1 tab + 1 space + '---help---'
 g) 1 tab + 2 spaces + '---help---'

In order to convert all of them to 1 tab + 'help', I ran the following command:

  $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 10 June 2020, 3 commits
-
-
Submitted by Joerg Roedel
Move all files related to the Intel IOMMU driver into its own subdirectory.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200609130303.26974-3-joro@8bytes.org
-
Submitted by Joerg Roedel
Move all files related to the AMD IOMMU driver into its own subdirectory.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20200609130303.26974-2-joro@8bytes.org
-
Submitted by Michel Lespinasse
This change converts the existing mmap_sem rwsem calls to use the new mmap locking API instead. The change is generated using coccinelle with the following rule:

  // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .

  @@
  expression mm;
  @@
  (
  -init_rwsem
  +mmap_init_lock
  |
  -down_write
  +mmap_write_lock
  |
  -down_write_killable
  +mmap_write_lock_killable
  |
  -down_write_trylock
  +mmap_write_trylock
  |
  -up_write
  +mmap_write_unlock
  |
  -downgrade_write
  +mmap_write_downgrade
  |
  -down_read
  +mmap_read_lock
  |
  -down_read_killable
  +mmap_read_lock_killable
  |
  -down_read_trylock
  +mmap_read_trylock
  |
  -up_read
  +mmap_read_unlock
  )
  -(&mm->mmap_sem)
  +(mm)

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
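A tiny before/after illustration of what the rule does at a typical call site (the surrounding function is made up):

  static void walk_user_mappings(struct mm_struct *mm)
  {
          /* was: down_read(&mm->mmap_sem); */
          mmap_read_lock(mm);

          /* ...inspect the VMA list... */

          /* was: up_read(&mm->mmap_sem); */
          mmap_read_unlock(mm);
  }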
-
- 04 June 2020, 1 commit
-
-
Submitted by Joerg Roedel
The iommu_group_do_dma_attach() must not attach devices which have deferred_attach set. Otherwise devices could cause IOMMU faults when re-initialized in a kdump kernel.

Fixes: deac0b3b ("iommu: Split off default domain allocation from group assignment")
Reported-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20200604091944.26402-1-joro@8bytes.org
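A hedged sketch of the added guard; the deferred-attach predicate is shown as a hypothetical helper and the function body is simplified:

  static int iommu_group_do_dma_attach_sketch(struct device *dev, void *data)
  {
          struct iommu_domain *domain = data;

          /* Deferred devices are attached later (e.g. in a kdump kernel)
           * to avoid premature IOMMU faults. */
          if (iommu_attach_is_deferred(domain, dev))    /* hypothetical */
                  return 0;

          return __iommu_attach_device(domain, dev);
  }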
-
- 29 May 2020, 15 commits
-
-
Submitted by Joerg Roedel
Checking the return value of get_device_id() in a code-path which has already done check_device() is not needed, as check_device() does the same check and bails out if it fails.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-11-joro@8bytes.org
-
Submitted by Joerg Roedel
Do not use dev->archdata.iommu anymore and switch to using the private per-device pointer provided by the IOMMU core code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-10-joro@8bytes.org
-
Submitted by Joerg Roedel
Merge amd_iommu_proto.h into amd_iommu.h.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-9-joro@8bytes.org
-
Submitted by Joerg Roedel
This is covered by IOMMU_DOMAIN_DMA from the IOMMU core code already, so remove it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-8-joro@8bytes.org
-
Submitted by Joerg Roedel
Merge the allocation code paths of DMA and UNMANAGED domains and remove code duplication.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-7-joro@8bytes.org
-
Submitted by Joerg Roedel
Align release of the page-table with the place where it is allocated.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-6-joro@8bytes.org
-
Submitted by Joerg Roedel
Consolidate the allocation of the domain page-table in one place.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-5-joro@8bytes.org
-
Submitted by Joerg Roedel

Pass a 'struct domain_pgtable' to free_pagetable() instead. This solves the problem that amd_iommu_domain_direct_map() needs to restore domain->pt_root after the device table has been updated just to make free_pagetable() release the domain page-table.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-4-joro@8bytes.org
-
Submitted by Joerg Roedel

This function is internal to the AMD IOMMU driver and is only exported because the amd_iommu_v2 module calls it. But the reason it is called from there can be handled better by amd_iommu_is_attach_deferred(), so unexport get_dev_data() and use amd_iommu_is_attach_deferred() instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200527115313.7426-3-joro@8bytes.org
-
Submitted by Qiushi Wu
kobject_init_and_add() takes a reference even when it fails. Thus, when kobject_init_and_add() returns an error, kobject_put() must be called to properly clean up the kobject.

Fixes: d72e31c9 ("iommu: IOMMU Groups")
Signed-off-by: Qiushi Wu <wu000273@umn.edu>
Link: https://lore.kernel.org/r/20200527210020.6522-1-wu000273@umn.edu
Signed-off-by: Joerg Roedel <jroedel@suse.de>
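A generic illustration of the pattern (not the exact iommu code); 'struct foo' and 'foo_ktype' are made up:

  struct foo {                          /* made-up example object */
          struct kobject kobj;
  };
  static struct kobj_type foo_ktype;    /* release() omitted for brevity */

  static struct foo *foo_create(struct kobject *parent)
  {
          struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
          int ret;

          if (!f)
                  return ERR_PTR(-ENOMEM);

          ret = kobject_init_and_add(&f->kobj, &foo_ktype, parent, "foo");
          if (ret) {
                  /* kobject_init_and_add() took a reference even on
                   * failure; kobject_put() releases f via the ktype's
                   * release() callback -- do not plain kfree() here. */
                  kobject_put(&f->kobj);
                  return ERR_PTR(ret);
          }
          return f;
  }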
-
Submitted by Jacob Pan
Make intel_svm_unbind_mm() a static function.

Fixes: 064a57d7 ("iommu/vt-d: Replace intel SVM APIs with generic SVA APIs")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1590689031-79318-1-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jon Derrick
By removing the real DMA indirection in find_domain(), we can allow sub-devices of a real DMA device to have their own valid device_domain_info. The dmar lookup and context entry removal paths have been fixed to account for sub-devices.

Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-4-jonathan.derrick@intel.com
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207575
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jon Derrick

Sub-devices of a real DMA device might exist on a different segment from the real DMA device and its IOMMU. These devices should still have a valid device_domain_info, but the current DMA alias model won't allocate info for a sub-device. This patch adds a segment member to struct device_domain_info and uses the sub-device's BDF so that these sub-devices won't alias to other devices.

Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-3-jonathan.derrick@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jon Derrick

Domain context mapping can encounter issues with sub-devices of a real DMA device. A sub-device cannot have a valid context entry of its own because it potentially aliases another device's 16-bit ID; sub-devices of the real DMA device are expected to use the real DMA device's requester ID for context mapping. This becomes a problem when a sub-device is removed and the context entry is cleared for all aliases: other sub-devices are still valid, but end up stranded without valid context entries. The correct approach is to use the real DMA device when programming the context entries. The insertion path is already correct because device_to_iommu() returns the bus and devfn of the real DMA device. The removal path needs to operate only on the real DMA device; otherwise the entire context entry would be cleared for all sub-devices of the real DMA device. This patch also adds a helper to determine whether a struct device is a sub-device of a real DMA device.

Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-2-jonathan.derrick@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jean-Philippe Brucker
After binding a device to an mm, device drivers currently need to register a mm_exit handler. This function is called when the mm exits, to gracefully stop DMA targeting the address space and flush page faults to the IOMMU. This is deemed too complex for the MMU release() notifier, which may be triggered by any mmput() invocation, from about 120 callsites [1].

The upcoming SVA module has an example of such complexity: the I/O Page Fault handler would need to call mmput_async() instead of mmput() after handling an IOPF, to avoid triggering the release() notifier which would in turn drain the IOPF queue and lock up.

Another concern is the DMA stop function taking too long, up to several minutes [2]. For some mmput() callers this may disturb other users. For example, if the OOM killer picks the mm bound to a device as the victim and that mm's memory is locked, if the release() takes too long, it might choose additional innocent victims to kill.

To simplify the MMU release notifier, don't forward the notification to device drivers. Since they don't stop DMA on mm exit anymore, the PASID lifetime is extended:

(1) The device driver calls bind(). A PASID is allocated. Here any DMA fault is handled by mm, and on error we don't print anything to dmesg. Userspace can easily trigger errors by issuing DMA on unmapped buffers.

(2) exit_mmap(), for example the process took a SIGKILL. This step doesn't happen during normal operations. Remove the pgd from the PASID table, since the page tables are about to be freed. Invalidate the IOTLBs. Here the device may still perform DMA on the address space. Incoming transactions are aborted but faults aren't printed out. ATS Translation Requests return Successful Translation Completions with R=W=0. PRI Page Requests return with Invalid Request.

(3) The device driver stops DMA, possibly following release of a fd, and calls unbind(). PASID table is cleared, IOTLB invalidated if necessary. The page fault queues are drained, and the PASID is freed. If DMA for that PASID is still running here, something went seriously wrong and errors should be reported.

For now remove iommu_sva_ops entirely. We might need to re-introduce them at some point, for example to notify device drivers of unhandled IOPF.

[1] https://lore.kernel.org/linux-iommu/20200306174239.GM31668@ziepe.ca/
[2] https://lore.kernel.org/linux-iommu/4d68da96-0ad5-b412-5987-2f7a6aa796c3@amd.com/

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200423125329.782066-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 May 2020, 5 commits
-
-
Submitted by Rikard Falkeborn
The struct sun50i_iommu_ops is not modified and can be made const to allow the compiler to put it in read-only memory.

Before:
   text    data     bss     dec     hex filename
  14358    2501      64   16923    421b drivers/iommu/sun50i-iommu.o

After:
   text    data     bss     dec     hex filename
  14726    2117      64   16907    420b drivers/iommu/sun50i-iommu.o

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://lore.kernel.org/r/20200525214958.30015-3-rikard.falkeborn@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Rikard Falkeborn
The struct hyperv_ir_domain_ops is not modified and can be made const to allow the compiler to put it in read-only memory.

Before:
   text    data     bss     dec     hex filename
   2916    1180    1120    5216    1460 drivers/iommu/hyperv-iommu.o

After:
   text    data     bss     dec     hex filename
   3044    1052    1120    5216    1460 drivers/iommu/hyperv-iommu.o

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/20200525214958.30015-2-rikard.falkeborn@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jean-Philippe Brucker
The pci_ats_supported() helper checks if a device supports ATS and is allowed to use it. By checking the ATS capability it also integrates the pci_ats_disabled() check from pci_ats_init(). Simplify the vt-d checks.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200520152201.3309416-5-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jean-Philippe Brucker
The new pci_ats_supported() function checks if a device supports ATS and is allowed to use it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200520152201.3309416-4-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jean-Philippe Brucker
The pci_ats_supported() function checks if a device supports ATS and is allowed to use it. In addition to checking that the device has an ATS capability and that the global pci=noats is not set (pci_ats_disabled()), it also checks if a device is untrusted. A device is untrusted if it is plugged into an external-facing port such as Thunderbolt and could be spoofing an existing device to exploit weaknesses in the IOMMU configuration. By calling pci_ats_supported() we keep DTE[I]=0 for untrusted devices and abort transactions with Pretranslated Addresses.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Link: https://lore.kernel.org/r/20200520152201.3309416-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
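A hedged sketch of the checks the helper is described as combining (the real implementation lives in the PCI core, not here):

  static bool pci_ats_supported_sketch(struct pci_dev *pdev)
  {
          if (!pdev->ats_cap)       /* no ATS capability, or pci=noats */
                  return false;

          /* Devices on external-facing ports may spoof other devices,
           * so ATS is not allowed for them. */
          return !pdev->untrusted;
  }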
-
- 25 May 2020, 2 commits
-
-
Submitted by Joerg Roedel
The iommu_alloc_default_domain() function takes a reference to an IOMMU group without releasing it. This causes the group to never be released, with undefined side effects. The function has only one call-site, which takes a group reference on its own, so to fix this leak, do not take another reference in iommu_alloc_default_domain() and pass the group as a function parameter instead.

Fixes: 6e1aa204 ("iommu: Move default domain allocation to iommu_probe_device()")
Reported-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Cc: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/20200525130122.380-1-joro@8bytes.org
Reference: https://lore.kernel.org/lkml/20200522130145.30067-1-saiprakash.ranjan@codeaurora.org/
-
Submitted by Qian Cai
The commit 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function") introduced a GCC warning:

  drivers/iommu/intel-iommu.c:5330:1: warning: 'static' is not at beginning of declaration [-Wold-style-declaration]
   const static int
   ^~~~~

Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200521215030.16938-1-cai@lca.pw
Signed-off-by: Joerg Roedel <jroedel@suse.de>
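The warning is purely about declaration-specifier order; a generic illustration (the actual array from the driver is not shown in this log):

  const static int bad_order_example[4];   /* triggers -Wold-style-declaration */
  static const int good_order_example[4];  /* preferred specifier order */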
-