- 09 Jul 2020, 1 commit
-
-
Committed by Will Deacon
The IOMMU_SYS_CACHE_ONLY flag was never exposed via the DMA API and has no in-tree users. Remove it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Rob Clark <robdclark@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
-
- 29 May 2020, 1 commit
-
-
Committed by Jean-Philippe Brucker
After binding a device to an mm, device drivers currently need to register an mm_exit handler. This function is called when the mm exits, to gracefully stop DMA targeting the address space and flush page faults to the IOMMU. This is deemed too complex for the MMU release() notifier, which may be triggered by any mmput() invocation, from about 120 callsites [1]. The upcoming SVA module has an example of such complexity: the I/O Page Fault handler would need to call mmput_async() instead of mmput() after handling an IOPF, to avoid triggering the release() notifier, which would in turn drain the IOPF queue and lock up. Another concern is the DMA stop function taking too long, up to several minutes [2]. For some mmput() callers this may disturb other users. For example, if the OOM killer picks the mm bound to a device as the victim and that mm's memory is locked, and the release() takes too long, it might choose additional innocent victims to kill. To simplify the MMU release notifier, don't forward the notification to device drivers. Since they don't stop DMA on mm exit anymore, the PASID lifetime is extended:

(1) The device driver calls bind(). A PASID is allocated. Here any DMA fault is handled by the mm, and on error we don't print anything to dmesg. Userspace can easily trigger errors by issuing DMA on unmapped buffers.

(2) exit_mmap(), for example because the process took a SIGKILL. This step doesn't happen during normal operation. Remove the pgd from the PASID table, since the page tables are about to be freed. Invalidate the IOTLBs. Here the device may still perform DMA on the address space. Incoming transactions are aborted but faults aren't printed out. ATS Translation Requests return Successful Translation Completions with R=W=0. PRI Page Requests return with Invalid Request.

(3) The device driver stops DMA, possibly following release of a fd, and calls unbind(). The PASID table is cleared and the IOTLB invalidated if necessary. The page fault queues are drained, and the PASID is freed. If DMA for that PASID is still running here, something went seriously wrong and errors should be reported.

For now, remove iommu_sva_ops entirely. We might need to re-introduce them at some point, for example to notify device drivers of unhandled IOPF. [1] https://lore.kernel.org/linux-iommu/20200306174239.GM31668@ziepe.ca/ [2] https://lore.kernel.org/linux-iommu/4d68da96-0ad5-b412-5987-2f7a6aa796c3@amd.com/ Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200423125329.782066-3-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
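For illustration, a minimal sketch of the driver-side flow after this change (the my_dev_* helpers are hypothetical; only the iommu_sva_* calls are the API described above):

    #include <linux/iommu.h>

    /* Sketch: no mm_exit handler any more; the driver itself stops
     * DMA before releasing the PASID with unbind(), step (3) above. */
    static int my_bind_process(struct device *dev, struct mm_struct *mm)
    {
        struct iommu_sva *handle = iommu_sva_bind_device(dev, mm, NULL);

        if (IS_ERR(handle))
            return PTR_ERR(handle);

        /* Program the PASID into the device and start DMA (hypothetical). */
        my_dev_start_dma(dev, iommu_sva_get_pasid(handle));
        return 0;
    }

    static void my_release(struct device *dev, struct iommu_sva *handle)
    {
        my_dev_stop_dma(dev);            /* stop DMA first...          */
        iommu_sva_unbind_device(handle); /* ...then the PASID is freed */
    }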
-
- 15 May 2020, 1 commit
-
-
Committed by Sai Praneeth Prakhya
After moving iommu_group setup to IOMMU core code [1][2] and removing private domain support in vt-d [3], there are no users for functions such as iommu_request_dm_for_dev(), iommu_request_dma_domain_for_dev() and request_default_domain_for_dev(). So, remove these functions. [1] commit dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs") [2] commit e5d1841f ("iommu/vt-d: Convert to probe/release_device() call-backs") [3] commit 327d5b2f ("iommu/vt-d: Allow 32bit devices to uses DMA domain") Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200513224721.20504-1-sai.praneeth.prakhya@intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 13 May 2020, 1 commit
-
-
Committed by Marek Szyprowski
struct sg_table is a common structure used for describing a memory buffer. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry). It turned out to be a common mistake to misuse the nents and orig_nents entries, calling mapping functions with the wrong number of entries. To avoid such issues, let's introduce common wrappers operating directly on struct sg_table objects, which take care of the proper use of the nents and orig_nents entries. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
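A short sketch of the wrapper usage, assuming a caller that already owns an allocated sg_table (direction and attrs picked arbitrarily for the example):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int my_map_buffer(struct device *dev, struct sg_table *sgt)
    {
        int ret;

        /* The wrapper maps sgt->orig_nents entries and records the
         * DMA-mapped count in sgt->nents, so the caller can no longer
         * pass the wrong one. */
        ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
        if (ret)
            return ret;

        /* ... issue DMA over the sgt->nents mapped entries ... */

        dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
        return 0;
    }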
-
- 05 May 2020, 5 commits
-
-
Committed by Joerg Roedel
The function is now only used in IOMMU core code and shouldn't be used outside of it anyway, so remove the export for it. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-35-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
All drivers are converted to use the probe/release_device() call-backs, so the add_device/remove_device() pointers are unused and the code using them can be removed. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-33-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Add a check to the bus_iommu_probe() call-path to make sure it ignores devices which have already been successfully probed. Then export the bus_iommu_probe() function so it can be used by IOMMU drivers. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-14-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Add call-backs to 'struct iommu_ops' as an alternative to the add_device() and remove_device() call-backs, which will be removed once all drivers are converted. The new call-backs will not set up IOMMU groups and domains anymore, so also add a probe_finalize() call-back where the IOMMU driver can do per-device setup work which requires the device to be set up with a group and a domain. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-8-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
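A sketch of how a driver fills in the new call-backs (the my_* names are placeholders; the member names are those introduced by this series):

    #include <linux/iommu.h>

    static struct iommu_device my_iommu;    /* hypothetical instance */

    /* Per-device init only; group and domain setup now happen in the core. */
    static struct iommu_device *my_probe_device(struct device *dev)
    {
        /* ... look up / allocate per-device state ... */
        return &my_iommu;
    }

    static void my_release_device(struct device *dev)
    {
        /* ... undo my_probe_device() ... */
    }

    static void my_probe_finalize(struct device *dev)
    {
        /* Runs after the core has set up the group and default domain. */
    }

    static const struct iommu_ops my_iommu_ops = {
        .probe_device   = my_probe_device,
        .release_device = my_release_device,
        .probe_finalize = my_probe_finalize,
        /* ... map/unmap and the other ops ... */
    };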
-
Committed by Sai Praneeth Prakhya
Some devices are required to use a specific type (identity or dma) of default domain when they are used with a vendor IOMMU. When the system-level default domain type differs from it, the vendor IOMMU driver has to request a new default domain with iommu_request_dma_domain_for_dev() or iommu_request_dm_for_dev() in the add_dev() callback. Unfortunately, these two helpers only work when the group hasn't been assigned to any other devices; hence, some vendor IOMMU drivers have to use a private domain if they fail to request a new default one. This adds a def_domain_type() callback to iommu_ops, so that any special default-domain requirement of a device can be made known to the generic IOMMU layer. Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> [ jroedel@suse.de: Added iommu_get_def_domain_type() function and use it to allocate the default domain ] Co-developed-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-3-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
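A minimal sketch of a driver implementing the new call-back (the vendor-ID check is made up for illustration):

    #include <linux/iommu.h>
    #include <linux/pci.h>

    /* Return the default domain type this device requires, or 0 if it
     * has no special requirement. */
    static int my_def_domain_type(struct device *dev)
    {
        struct pci_dev *pdev = to_pci_dev(dev);

        /* Hypothetical: a device that must bypass DMA translation. */
        if (pdev->vendor == 0x1234)
            return IOMMU_DOMAIN_IDENTITY;

        return 0;    /* no preference, keep the system default */
    }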
-
- 27 Mar 2020, 5 commits
-
-
Committed by Joerg Roedel
Move the pointer for IOMMU private data from struct iommu_fwspec to struct dev_iommu. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-17-joro@8bytes.org
-
Committed by Joerg Roedel
Add dev_iommu_priv_get/set() functions to access per-device IOMMU private data. This makes it easier to move the pointer to a different location. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-9-joro@8bytes.org
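Driver-side usage then comes down to two calls (struct my_master is a hypothetical per-device structure):

    #include <linux/iommu.h>
    #include <linux/slab.h>

    struct my_master {
        u32 sid;    /* hypothetical per-device driver state */
    };

    static int my_init_master(struct device *dev)
    {
        struct my_master *master = kzalloc(sizeof(*master), GFP_KERNEL);

        if (!master)
            return -ENOMEM;

        dev_iommu_priv_set(dev, master);    /* stash private data */
        return 0;
    }

    static void my_free_master(struct device *dev)
    {
        struct my_master *master = dev_iommu_priv_get(dev);

        kfree(master);
    }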
-
Committed by Joerg Roedel
Move the iommu_fwspec pointer in struct device into struct dev_iommu. This is a step in the effort to reduce the IOMMU-related pointers in struct device to one. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20200326150841.10083-7-joro@8bytes.org
-
Committed by Joerg Roedel
The term dev_iommu aligns better with other existing structures and their accessor functions. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20200326150841.10083-6-joro@8bytes.org
-
Committed by Joerg Roedel
There are users outside of the IOMMU code that need to call that function. Define it for !CONFIG_IOMMU_API too so that compilation does not break. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-2-joro@8bytes.org
-
- 28 Feb 2020, 1 commit
-
-
Committed by Robin Murphy
Although the 1-element array was a typical pre-C99 way to implement variable-length structures, and indeed is a fundamental construct in the APIs of certain other popular platforms, there's no good reason for it here (and in particular the sizeof() trick is far too "clever" for its own good). We can just as easily implement iommu_fwspec's preallocation behaviour using a standard flexible array member, so let's make it look the way most readers would expect. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
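An illustration of the two patterns (field names simplified, not the exact iommu_fwspec layout):

    #include <linux/overflow.h>
    #include <linux/slab.h>

    /* Before: 1-element array plus a sizeof() trick at allocation time:
     *     size = sizeof(struct fwspec_old) + (n - 1) * sizeof(u32)   */
    struct fwspec_old {
        unsigned int num_ids;
        u32 ids[1];
    };

    /* After: a C99 flexible array member sized with struct_size(). */
    struct fwspec_new {
        unsigned int num_ids;
        u32 ids[];
    };

    static struct fwspec_new *fwspec_alloc(unsigned int n)
    {
        struct fwspec_new *fw;

        fw = kzalloc(struct_size(fw, ids, n), GFP_KERNEL);
        return fw;
    }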
-
- 16 Jan 2020, 1 commit
-
-
Committed by Jean-Philippe Brucker
For platform devices that support SubstreamID (SSID), firmware provides the number of supported SSID bits. Restrict it to what the SMMU supports and cache it in master->ssid_bits, which will also be used for PCI PASID. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
-
- 10 Jan 2020, 1 commit
-
-
Committed by Will Deacon
Requiring each IOMMU driver to initialise the 'owner' field of its 'struct iommu_ops' is error-prone and easily forgotten. Follow the example set by PCI and USB by assigning THIS_MODULE automatically when registering the ops structure with the IOMMU core. Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Will Deacon <will@kernel.org>
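The PCI precedent works by expanding THIS_MODULE at the call site through a macro wrapper; a sketch of the same idiom (the my_* names are illustrative, not the exact IOMMU core code):

    #include <linux/module.h>

    struct my_ops {
        struct module *owner;
        /* ... function pointers ... */
    };

    int __my_register_ops(struct my_ops *ops, struct module *owner);

    /* THIS_MODULE expands in the *driver's* module, not in the core,
     * exactly like PCI's pci_register_driver() wrapper. */
    #define my_register_ops(ops) __my_register_ops(ops, THIS_MODULE)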
-
- 23 Dec 2019, 2 commits
-
-
Committed by Thierry Reding
Implement a generic function for removing reserved regions. This can be used by drivers that don't do anything fancy with these regions other than allocating memory for them. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Will Deacon
To avoid accidental removal of an active IOMMU driver module, take a reference to the driver module in 'iommu_probe_device()' immediately prior to invoking the '->add_device()' callback, and hold it until after the device has been removed by '->remove_device()'. Suggested-by: Joerg Roedel <joro@8bytes.org> Signed-off-by: Will Deacon <will@kernel.org> Tested-by: John Garry <john.garry@huawei.com> # smmu v3 Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
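In outline, the probe path pins the ops owner like this (a simplified sketch, not the verbatim core code):

    #include <linux/iommu.h>
    #include <linux/module.h>

    static int probe_device_sketch(struct device *dev,
                                   const struct iommu_ops *ops)
    {
        int ret;

        /* Pin the driver module before calling into it. */
        if (!try_module_get(ops->owner))
            return -ENODEV;

        ret = ops->add_device(dev);
        if (ret)
            module_put(ops->owner);    /* drop on failure */

        /* On success the reference is held until after
         * ->remove_device(), where module_put() is called. */
        return ret;
    }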
-
- 07 Nov 2019, 1 commit
-
-
Committed by Will Deacon
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all users of the IOMMU API. Despite its name, the idea behind it isn't especially tied to Qualcomm implementations and could conceivably be used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to better describe the idea behind it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
-
- 15 Oct 2019, 3 commits
-
-
Committed by Jacob Pan
Guest shared virtual address (SVA) may require the host to shadow guest PASID tables. Guest PASIDs can also be allocated from the host via enlightened interfaces. In this case, the guest needs to bind the guest mm, i.e. the cr3 in guest physical address, to the actual PASID table in the host IOMMU. Nesting will be turned on such that guest virtual addresses can go through a two-level translation:
- 1st level translates GVA to GPA
- 2nd level translates GPA to HPA
This patch introduces APIs to bind guest PASID data to the assigned device entry in the physical IOMMU. See the diagram below for usage explanation.

     .-------------.  .---------------------------.
     |   vIOMMU    |  | Guest process mm, FL only |
     |             |  '---------------------------'
     .----------------/
     | PASID Entry |--- PASID cache flush -
     '-------------'                       |
     |             |                       V
     |             |                      GP
     '-------------'
 Guest
 ------| Shadow |----------------------- GP->HP* ---------
       v        v                          |
 Host                                      v
     .-------------.  .----------------------.
     |   pIOMMU    |  | Bind FL for GVA-GPA  |
     |             |  '----------------------'
     .----------------/  |
     | PASID Entry |     V (Nested xlate)
     '----------------\.---------------------.
     |             |   | Set SL to GPA-HPA   |
     |             |   '---------------------'
     '-------------'

Where:
- FL = First level/stage one page tables
- SL = Second level/stage two page tables
- GP = Guest PASID
- HP = Host PASID
* Conversion needed if a non-identity GP-HP mapping option is chosen.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Yi L Liu
In any virtualization use case, when the first translation stage is "owned" by the guest OS, the host IOMMU driver has no knowledge of caching structure updates unless the guest invalidation activities are trapped by the virtualizer and passed down to the host. Since the invalidation data can be obtained from user space and will be written into the physical IOMMU, we must allow security checks at various layers. Therefore, a generic invalidation data format is proposed here; model-specific IOMMU drivers need to convert it into their own format. Signed-off-by: Yi L Liu <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Tom Murphy
Add a gfp_t parameter to the iommu_ops::map function and remove the needless locking in the AMD IOMMU driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/ ), and so should probably have had a "might_sleep()" since it was written. However, the dma-iommu API can currently call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems because any IOMMU driver which uses the dma-iommu API uses GFP_ATOMIC in its iommu_ops::map function, but doing this wastes the memory allocator's atomic pools. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
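The resulting shape of the call-back and its wrappers, as a sketch (signatures per this series as far as I can tell):

    #include <linux/iommu.h>

    /* The op now receives the caller's allocation context: */
    struct iommu_ops_excerpt {
        int (*map)(struct iommu_domain *domain, unsigned long iova,
                   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
    };

    /* Callers pick the context through the public wrappers: */
    static void example(struct iommu_domain *d, unsigned long iova,
                        phys_addr_t pa, size_t sz, int prot)
    {
        iommu_map(d, iova, pa, sz, prot);        /* may sleep, GFP_KERNEL */
        iommu_map_atomic(d, iova, pa, sz, prot); /* atomic, GFP_ATOMIC    */
    }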
-
- 23 Aug 2019, 1 commit
-
-
Committed by Joerg Roedel
Add a couple of functions to allow changing the default domain type from architecture code, and a function for IOMMU drivers to query whether the default domain is passthrough. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 30 Jul 2019, 1 commit
-
-
Committed by Will Deacon
To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now. Signed-off-by: Will Deacon <will@kernel.org>
-
- 24 Jul 2019, 3 commits
-
-
Committed by Will Deacon
Introduce a helper function for drivers to use when updating an iommu_iotlb_gather structure in response to an ->unmap() call, rather than having to open-code the logic in every page-table implementation. Signed-off-by: Will Deacon <will@kernel.org>
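The helper's logic, paraphrased (simplified from the inline function of that era; the flush-eagerly conditions follow the commit text):

    #include <linux/iommu.h>

    /* Merge a newly unmapped page into the gather range; sync eagerly
     * only if the page is disjoint or uses a different page size. */
    static inline void gather_add_page_sketch(struct iommu_domain *domain,
                                              struct iommu_iotlb_gather *gather,
                                              unsigned long iova, size_t size)
    {
        unsigned long start = iova, end = start + size;

        if (gather->pgsize != size ||
            end < gather->start || start > gather->end) {
            if (gather->pgsize)
                iommu_tlb_sync(domain, gather); /* flush what we have */
            gather->pgsize = size;
        }

        /* Grow the pending range to cover the new page. */
        if (gather->end < end)
            gather->end = end;
        if (gather->start > start)
            gather->start = start;
    }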
-
Committed by Will Deacon
To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored. Signed-off-by: Will Deacon <will@kernel.org>
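The tracking structure itself is small; per the commit description it carries the pending range and its granule (a sketch of the original definition, comments mine):

    /* State gathered across ->unmap() calls, flushed at ->iotlb_sync(). */
    struct iommu_iotlb_gather {
        unsigned long start;  /* lowest address pending invalidation  */
        unsigned long end;    /* highest address pending invalidation */
        size_t        pgsize; /* page size (granule) of the range     */
    };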
-
Committed by Will Deacon
Commit add02cfd ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU driver (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues:
1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner.
2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes.
Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack. Signed-off-by: Will Deacon <will@kernel.org>
-
- 19 Jun 2019, 1 commit
-
-
Committed by Vivek Gautam
A few Qualcomm platforms, such as sdm845, have an additional outer cache called the system cache, aka last-level cache (LLC), that allows non-coherent devices to upgrade to using caching. This cache sits right before the DDR and is tightly coupled with the memory controller. The clients using this cache request their slices from the system cache, make it active, and can then start using it. There is a fundamental assumption that non-coherent devices can't access caches. This change adds an exception where they *can* use some level of cache despite still being non-coherent overall. Coherent devices that use cacheable memory, and the CPU, make use of this system cache by default. Looking at memory types, we have the following:
a) Normal uncached: MAIR 0x44, inner non-cacheable, outer non-cacheable;
b) Normal cached: MAIR 0xff, inner read write-back non-transient, outer read write-back non-transient; the attribute setting for coherent I/O devices.
And for non-coherent I/O devices that can allocate in the system cache, another type gets added:
c) Normal sys-cached: MAIR 0xf4, inner non-cacheable, outer read write-back non-transient
Coherent I/O devices use the system cache by marking the memory as normal cached. Non-coherent I/O devices should mark the memory as normal sys-cached in page tables to use the system cache. Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 Jun 2019, 4 commits
-
-
Committed by Eric Auger
Introduce a new type of reserved region. This corresponds to directly mapped regions which are known to be relaxable in some specific conditions, such as the device assignment use case. Well-known examples are those used by USB controllers providing PS/2 keyboard emulation for pre-boot BIOS and early boot, or RMRRs associated with an IGD working in legacy mode. Since commit c875d2c1 ("iommu/vt-d: Exclude devices using RMRRs from IOMMU API domains") and commit 18436afd ("iommu/vt-d: Allow RMRR on graphics devices too"), those regions are currently considered "safe" with respect to the device assignment use case, which requires a non-direct mapping at the IOMMU physical level (RAM GPA -> HPA mapping). Those RMRRs currently exist, and sometimes the device attempts to access them, but this has not been considered an issue until now. However, at the moment iommu_get_group_resv_regions() cannot distinguish between directly mapped regions: those which must be absolutely enforced and those like the above which are known to be relaxable. This is a blocker for reporting severe conflicts between non-relaxable RMRRs (like MSI doorbells) and guest GPA space. With this new reserved region type we will be able to use iommu_get_group_resv_regions() to enumerate the IOVA space that is usable through the IOMMU API without introducing regressions with respect to existing device assignment use cases (USB and IGD). Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
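For reference, the reserved-region types then look roughly like this (names per include/linux/iommu.h of that era; comments mine):

    /* Reserved-region types reported via iommu_get_resv_regions(). */
    enum iommu_resv_type {
        IOMMU_RESV_DIRECT,           /* direct mapping, always enforced */
        IOMMU_RESV_DIRECT_RELAXABLE, /* direct mapping, relaxable for
                                        device assignment (USB, IGD)    */
        IOMMU_RESV_RESERVED,         /* never mapped                    */
        IOMMU_RESV_MSI,              /* hardware MSI region             */
        IOMMU_RESV_SW_MSI,           /* software-managed MSI region     */
    };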
-
Committed by Jean-Philippe Brucker
Some IOMMU hardware features, for example PCI PRI and Arm SMMU Stall, enable recoverable I/O page faults. Allow IOMMU drivers to report PRI Page Requests and Stall events through the new fault reporting API. The consumer of the fault can be either an I/O page fault handler in the host or a guest OS. Once handled, the fault must be completed by sending a page response back to the IOMMU. Add an iommu_page_response() function to complete a page fault. There are two ways to extend the userspace API:
* Add a field to iommu_page_response and a flag to iommu_page_response::flags describing the validity of this field.
* Introduce a new iommu_page_response_X structure with a different version number; the kernel must then support both versions.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
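A hedged sketch of a handler completing a recoverable fault (field and constant names per the uapi of this series as I recall them; error handling trimmed):

    #include <linux/iommu.h>

    static int my_iopf_handler(struct iommu_fault *fault, void *data)
    {
        struct device *dev = data;
        struct iommu_page_response resp = {
            .version = IOMMU_PAGE_RESP_VERSION_1,
            .flags   = IOMMU_PAGE_RESP_PASID_VALID,
            .pasid   = fault->prm.pasid,
            .grpid   = fault->prm.grpid,
            .code    = IOMMU_PAGE_RESP_SUCCESS,
        };

        /* ... service the page request, e.g. fault the page in ... */

        return iommu_page_response(dev, &resp); /* complete the fault */
    }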
-
Committed by Jacob Pan
Traditionally, device-specific faults are detected and handled within their own device drivers. When an IOMMU is enabled, faults such as DMA-related transactions are detected by the IOMMU. There is no generic reporting mechanism to report faults back to the in-kernel device driver or the guest OS in the case of assigned devices. This patch introduces a registration API for device-specific fault handlers. This differs from the existing iommu_set_fault_handler/report_iommu_fault infrastructure in several ways:
- it allows reporting more sophisticated fault events (both unrecoverable faults and page request faults) due to the nature of the iommu_fault struct
- it is device-specific and not domain-specific.
The current iommu_report_device_fault() implementation only handles the "shoot and forget" unrecoverable fault case. Handling of page request faults or stalled faults will come later. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
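Registration is then one call per device (a sketch, reusing the my_iopf_handler shown under the previous entry):

    #include <linux/iommu.h>

    static int my_enable_faults(struct device *dev)
    {
        /* handler, plus an opaque data pointer handed back to it */
        return iommu_register_device_fault_handler(dev, my_iopf_handler,
                                                   dev);
    }

    static void my_disable_faults(struct device *dev)
    {
        iommu_unregister_device_fault_handler(dev);
    }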
-
Committed by Jacob Pan
Device faults detected by the IOMMU can be reported outside the IOMMU subsystem for further processing. This patch introduces a generic device fault data structure. The fault can be either an unrecoverable fault or a page request, also referred to as a recoverable fault. We only care about non-internal faults that are likely to be reported to an external subsystem. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 05 Jun 2019, 1 commit
-
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 136 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190530000436.384967451@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 May 2019, 1 commit
-
-
Committed by Lu Baolu
Normally, while probing a device, the IOMMU allocates a default domain and attaches it to the device. The domain type of the default domain is statically defined, which results in situations where the allocated default domain isn't suitable for the device due to some limitation. We already have the API iommu_request_dm_for_dev() to replace a DMA domain with an identity one. This adds iommu_request_dma_domain_for_dev() to request a DMA domain when an allocated identity domain isn't suitable for the device in question. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
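A sketch of the intended call site in a vendor driver (the capability check is hypothetical):

    #include <linux/iommu.h>

    static int my_setup_domain(struct device *dev)
    {
        /* Hypothetical: the identity default domain can't cover this
         * device, e.g. it can only address a 32-bit DMA window. */
        if (!my_device_can_use_identity(dev))
            return iommu_request_dma_domain_for_dev(dev);

        return 0;
    }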
-
- 23 Apr 2019, 1 commit
-
-
Committed by Jean-Philippe Brucker
The root complex node in IORT has a bit telling whether it supports ATS or not. Store this bit in the IOMMU fwspec when setting up a device, so it can be accessed later by an IOMMU driver. In the future we'll probably want to store this bit at the host bridge or SMMU rather than in each endpoint. Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
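On the consuming side, an SMMU driver can later test the stored bit roughly like this (IOMMU_FWSPEC_PCI_RC_ATS is the flag name from this series, to the best of my recollection; the master structure is hypothetical):

    #include <linux/iommu.h>

    struct my_master {
        bool ats_supported;    /* hypothetical driver field */
    };

    static void my_check_ats(struct device *dev, struct my_master *master)
    {
        struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

        /* Firmware said the root complex supports ATS. */
        if (fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS)
            master->ats_supported = true;
    }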
-
- 11 Apr 2019, 2 commits
-
-
Committed by Jean-Philippe Brucker
Add bind() and unbind() operations to the IOMMU API. iommu_sva_bind_device() binds a device to an mm and returns a handle to the bond, which is released by calling iommu_sva_unbind_device(). Each mm bound to devices gets a PASID (by convention, a 20-bit system-wide ID representing the address space), which can be retrieved with iommu_sva_get_pasid(). When programming DMA addresses, device drivers include this PASID in a device-specific manner to let the device access the given address space. Since the process memory may be paged out, device and IOMMU must support I/O page faults (e.g. PCI PRI). Using iommu_sva_set_ops(), device drivers provide an mm_exit() callback that is called by the IOMMU driver if the process exits before the device driver called unbind(). In mm_exit(), the device driver should disable DMA from the given context, so that the core IOMMU can reallocate the PASID. Whether the process exited or not, the device driver should always release the handle with unbind(). To use these functions, the device driver must first enable the IOMMU_DEV_FEAT_SVA device feature with iommu_dev_enable_feature(). Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
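Putting the calls named above together (a sketch; the mm_exit handler signature is given as I recall it from this series, error handling is trimmed, and note the 29 May 2020 entry near the top of this log later removes iommu_sva_set_ops() again):

    #include <linux/iommu.h>
    #include <linux/sched.h>

    static int my_mm_exit(struct device *dev, struct iommu_sva *handle,
                          void *drvdata)
    {
        /* Must stop DMA for this PASID before returning. */
        return 0;
    }

    static const struct iommu_sva_ops my_sva_ops = {
        .mm_exit = my_mm_exit,
    };

    static int my_bind(struct device *dev)
    {
        struct iommu_sva *handle;
        int ret;

        ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
        if (ret)
            return ret;

        handle = iommu_sva_bind_device(dev, current->mm, NULL);
        if (IS_ERR(handle))
            return PTR_ERR(handle);

        iommu_sva_set_ops(handle, &my_sva_ops);
        /* ... program iommu_sva_get_pasid(handle) into the device ... */
        return 0;
    }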
-
Committed by Lu Baolu
Sharing a physical PCI device in a finer-granularity way is becoming a consensus in the industry, and IOMMU vendors are working to support such sharing as well as possible. Among these efforts, the capability to support finer-granularity DMA isolation is a common requirement for security reasons. With finer-granularity DMA isolation, subsets of a PCI function can be isolated from each other by the IOMMU. As a result, there is a request in software to attach multiple domains to a physical PCI device; one example of such a use model is Intel Scalable IOV [1] [2]. The Intel VT-d 3.0 spec [3] introduces the scalable mode, which enables PASID-granularity DMA isolation. This adds the APIs to support multiple domains per device. To ease the discussions, we call it 'a domain in auxiliary mode', or simply 'auxiliary domain', when multiple domains are attached to a physical device. The APIs include:
* iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX) - Detect both IOMMU and PCI endpoint devices supporting the feature (aux-domain here) without the host driver dependency.
* iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX) - Check the enabling status of the feature (aux-domain here). The aux-domain interfaces are available only if this returns true.
* iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX) - Enable/disable the device-specific aux-domain feature.
* iommu_aux_attach_device(domain, dev) - Attach @domain to @dev in the auxiliary mode. Multiple domains can be attached to a single device in the auxiliary mode, with each domain representing an isolated address space for an assignable subset of the device.
* iommu_aux_detach_device(domain, dev) - Detach @domain, which has been attached to @dev in the auxiliary mode.
* iommu_aux_get_pasid(domain, dev) - Return the ID used for finer-granularity DMA translation. For the Intel Scalable IOV usage model, this will be a PASID. A device which supports Scalable IOV needs to write this ID to the device register so that DMA requests can be tagged with the right PASID prefix.
This has been updated with the latest proposal from Joerg posted here [5]. Many people were involved in discussions of this design: Kevin Tian <kevin.tian@intel.com>, Liu Yi L <yi.l.liu@intel.com>, Ashok Raj <ashok.raj@intel.com>, Sanjay Kumar <sanjay.k.kumar@intel.com>, Jacob Pan <jacob.jun.pan@linux.intel.com>, Alex Williamson <alex.williamson@redhat.com>, Jean-Philippe Brucker <jean-philippe.brucker@arm.com>, Joerg Roedel <joro@8bytes.org>; some of the discussions can be found here [4] [5]. [1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification [2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf [3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification [4] https://lkml.org/lkml/2018/7/26/4 [5] https://www.spinics.net/lists/iommu/msg31874.html Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Liu Yi L <yi.l.liu@intel.com> Suggested-by: Kevin Tian <kevin.tian@intel.com> Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Suggested-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
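A condensed usage flow built only from the calls listed above (error handling trimmed; the PASID programming step is device-specific):

    #include <linux/iommu.h>

    static int my_attach_aux(struct iommu_domain *domain, struct device *dev)
    {
        int ret, pasid;

        if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX))
            return -ENODEV;

        ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_AUX);
        if (ret)
            return ret;

        ret = iommu_aux_attach_device(domain, dev);
        if (ret)
            return ret;

        pasid = iommu_aux_get_pasid(domain, dev);
        /* ... write @pasid to the device so its DMA is tagged with it ... */
        return 0;
    }

    static void my_detach_aux(struct iommu_domain *domain, struct device *dev)
    {
        iommu_aux_detach_device(domain, dev);
        iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_AUX);
    }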
-
- 26 Feb 2019, 2 commits
-
-
Committed by Geert Uytterhoeven
Add missing kerneldoc for iommu_ops.is_attach_deferred(). Fixes: e01d1913 ("iommu: Add is_attach_deferred call-back to iommu-ops") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Geert Uytterhoeven
Add missing kerneldoc for iommu_ops.iotlb_sync_map(). Fixes: 1d7ae53b ("iommu: Introduce iotlb_sync_map callback") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-