- 28 April 2022, 1 commit
-
-
Submitted by Lu Baolu

Multiple devices may be placed in the same IOMMU group because they cannot be isolated from each other. These devices must either be entirely under kernel control or userspace control, never a mixture. This adds DMA ownership management in the iommu core and exposes several interfaces for device drivers and the device userspace assignment framework (i.e. VFIO), so that any conflict between user- and kernel-controlled DMA can be detected from the beginning.

The device-driver-oriented interfaces are:

  int iommu_device_use_default_domain(struct device *dev);
  void iommu_device_unuse_default_domain(struct device *dev);

By calling iommu_device_use_default_domain(), the device driver tells the iommu layer that the device's DMA is handled through the kernel DMA APIs. The iommu layer will manage the IOVA and use the default domain for DMA address translation.

The userspace-assignment-framework-oriented interfaces are:

  int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner);
  void iommu_group_release_dma_owner(struct iommu_group *group);
  bool iommu_group_dma_owner_claimed(struct iommu_group *group);

Device userspace assignment must be disallowed if the DMA owner claiming interface returns failure.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20220418005000.897664-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
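As a rough illustration of the intended call pattern (a hedged sketch; the surrounding functions my_probe()/my_vfio_attach() and the owner cookie are hypothetical, only the iommu_* interfaces come from this patch):

  #include <linux/iommu.h>

  /* Kernel-driver side: device DMA goes through the kernel DMA API. */
  int my_probe(struct device *dev)              /* hypothetical driver */
  {
          int ret = iommu_device_use_default_domain(dev);

          if (ret)
                  return ret;   /* group already claimed for userspace */
          /* ... dma_map_*()/dma_unmap_*() via the default domain ...;
           * call iommu_device_unuse_default_domain(dev) on remove */
          return 0;
  }

  /* Userspace assignment (VFIO) side: claim DMA ownership of the group. */
  int my_vfio_attach(struct iommu_group *group, void *owner)
  {
          int ret = iommu_group_claim_dma_owner(group, owner);

          if (ret)
                  return ret;   /* conflict: kernel drivers still own DMA */
          WARN_ON(!iommu_group_dma_owner_claimed(group));
          /* ... run the assignment; on teardown: */
          iommu_group_release_dma_owner(group);
          return 0;
  }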
-
- 28 February 2022, 6 commits
-
-
Submitted by Lu Baolu

Move the domain-specific operations out of struct iommu_ops into a new structure that only has domain-specific operations. This solves the problem of needing to know whether the method vector for a given operation needs to be retrieved from the device or the domain. Logically, the domain ops are the ones that make sense for external subsystems and endpoint drivers to use, while the device ops, with the sole exception of domain_alloc, are IOMMU API internals.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-10-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
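A trimmed sketch of the resulting split (the member list is abridged for illustration, not the full structure from the series):

  /* Domain-scoped methods move out of struct iommu_ops into their own
   * structure, reachable from the domain rather than the device. */
  struct iommu_domain_ops {
          int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
          int (*map)(struct iommu_domain *domain, unsigned long iova,
                     phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
          size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                          size_t size, struct iommu_iotlb_gather *gather);
          void (*iotlb_sync)(struct iommu_domain *domain,
                             struct iommu_iotlb_gather *gather);
          void (*free)(struct iommu_domain *domain);
  };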
-
Submitted by Lu Baolu

The is_attach_deferred iommu_ops callback is a device op. The domain argument is unnecessary and never used. Remove it to make the code clean.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu

The common iommu_ops is hooked to both the device and the domain. When a helper has both a device and a domain pointer, the way to get the iommu_ops looks messy in the iommu core. This sorts out how the iommu_ops are obtained: device-related helpers go through the device pointer, while domain-related ones go through the domain pointer.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
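The device-side path ends up as a small helper, roughly (sketch):

  /* Sketch: device-related helpers fetch the ops via the device; valid
   * ops can be assumed once iommu_probe_device() has succeeded. */
  static inline const struct iommu_ops *dev_iommu_ops(struct device *dev)
  {
          return dev->iommu->iommu_dev->ops;
  }

Domain-related helpers simply use domain->ops instead.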
-
Submitted by Lu Baolu

The apply_resv_region callback in iommu_ops was introduced to reserve an IOVA range in the given DMA domain when the IOMMU driver manages the IOVA by itself. As all drivers have been converted to use dma-iommu in the core, no driver is using this anymore. Remove it to avoid dead code.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu

The aux-domain related interfaces and iommu_ops are not referenced anywhere in the tree. We've also reached a consensus to redesign them based on the new iommufd framework. Remove them to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu

The guest-PASID related uapi interfaces and definitions are not referenced anywhere in the tree. We've also reached a consensus to replace them with a new iommufd design. Remove them to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 20 December 2021, 1 commit
-
-
Submitted by Matthew Wilcox (Oracle)

page->freelist is for the use of slab. We already have the ability to free a list of pages in the core mm, but it requires the use of a list_head and for the pages to be chained together through page->lru. Switch the Intel IOMMU and IOVA code over to using free_pages_list().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[rm: split from original patch, cosmetic tweaks, fix fq entries]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/2115b560d9a0ce7cd4b948bd51a2b7bde8fdfd59.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
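In outline, callers now chain pages through page->lru and free the whole list in one call. A sketch of the pattern (the message names the helper free_pages_list(); the mm helper with this behaviour is spelled put_pages_list() in mainline, so treat the exact name as an assumption, as is the surrounding function):

  #include <linux/list.h>
  #include <linux/mm.h>

  static void free_pgtable_pages(struct page *pages[], int n) /* illustrative */
  {
          LIST_HEAD(freelist);
          int i;

          /* chain the pages via page->lru instead of page->freelist */
          for (i = 0; i < n; i++)
                  list_add_tail(&pages[i]->lru, &freelist);

          put_pages_list(&freelist);      /* frees every page on the list */
  }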
-
- 20 August 2021, 1 commit
-
-
Submitted by Robin Murphy

Previously io-pgtable merely passed the iommu_iotlb_gather pointer through to helpers, but now it has grown its own direct dereference. This turns out to break the build for !IOMMU_API configs where the structure only has a dummy definition. It will probably also crash in drivers that don't use the gather mechanism and simply pass in NULL. Wrap this dereference in a suitable helper which can both be stubbed out for !IOMMU_API and encapsulate a NULL check otherwise.

Fixes: 7a7c5bad ("iommu: Indicate queued flushes via gather data")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/83672ee76f6405c82845a55c148fa836f56fbbc1.1629465282.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
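The wrapper pattern looks roughly like this (sketch; the !IOMMU_API stub simply reports "nothing queued"):

  #ifdef CONFIG_IOMMU_API
  static inline bool iommu_iotlb_gather_queued(struct iommu_iotlb_gather *gather)
  {
          return gather && gather->queued;        /* NULL-safe */
  }
  #else
  static inline bool iommu_iotlb_gather_queued(struct iommu_iotlb_gather *gather)
  {
          return false;   /* dummy struct, nothing can ever be queued */
  }
  #endif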
-
- 18 August 2021, 4 commits
-
-
Submitted by Robin Murphy

Eliminate the iommu_get_dma_strict() indirection and pipe the information through the domain type from the beginning. Besides the flow simplification, this also has several nice side-effects:

- Automatically implies strict mode for untrusted devices by virtue of their IOMMU_DOMAIN_DMA override.
- Ensures that we only end up using flush queues for drivers which are aware of them and can actually benefit.
- Allows us to handle flush queue init failure by falling back to strict mode instead of leaving it to possibly blow up later.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/47083d69155577f1367877b1594921948c366eb3.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Promote the difference between strict and non-strict DMA domains from an internal detail to a distinct domain feature and type, to pave the road for exposing it through the sysfs default domain interface.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/08cd2afaf6b63c58ad49acec3517c9b32c2bb946.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
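Concretely, the strict and flush-queue flavours become distinct domain types composed from the existing capability flags (a sketch mirroring the IOMMU_DOMAIN_DMA definition style):

  #define IOMMU_DOMAIN_DMA        (__IOMMU_DOMAIN_PAGING |        \
                                   __IOMMU_DOMAIN_DMA_API)
  #define IOMMU_DOMAIN_DMA_FQ     (__IOMMU_DOMAIN_PAGING |        \
                                   __IOMMU_DOMAIN_DMA_API |       \
                                   __IOMMU_DOMAIN_DMA_FQ)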
-
Submitted by Robin Murphy

Since iommu_iotlb_gather exists to help drivers optimise flushing for a given unmap request, it is also the logical place to indicate whether the unmap is strict or not, and thus help them further optimise for whether to expect a sync or a flush_all subsequently. As part of that, it also seems fair to make the flush queue code take responsibility for enforcing the really subtle ordering requirement it brings, so that we don't need to worry about forgetting that if new drivers want to add flush queue support, and can consolidate the existing versions.

While we're adding to the kerneldoc, also fill in some info for @freelist which was overlooked previously.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA support, we can abandon the notion of drivers being responsible for the cookie type and consolidate all the management into the core code.

CC: Yong Wu <yong.wu@mediatek.com>
CC: Chunyan Zhang <chunyan.zhang@unisoc.com>
CC: Maxime Ripard <mripard@kernel.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/46a2c0e7419c7d1d931762dc7b6a69fa082d199a.1628682048.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 August 2021, 1 commit
-
-
Submitted by Logan Gunthorpe

Convert to a ssize_t return code so the return code from __iommu_map() can be returned all the way down through dma_iommu_map_sg().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
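After the conversion, the mapping path can propagate negative error codes instead of collapsing them to zero; the resulting signature is roughly:

  /* returns the number of bytes mapped, or a negative errno */
  ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
                       struct scatterlist *sg, unsigned int nents, int prot);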
-
- 02 August 2021, 2 commits
-
-
Submitted by Nadav Amit

Refactor iommu_iotlb_gather_add_page() and factor out the logic that detects whether an IOTLB gather range and a new range are disjoint, to be used by the next patch that implements different gathering logic for AMD.

Note that updating gather->pgsize unconditionally does not affect correctness, as the function had (and has) an invariant in which gather->pgsize always represents the flushing granularity of its range. Arguably, "size" should never be zero, but let's assume for the matter of discussion that it might.

If "size" equals "gather->pgsize", then the assignment in question has no impact.

Otherwise, if "size" is non-zero, then iommu_iotlb_sync() would initialize the size and range (see iommu_iotlb_gather_init()), and the invariant is kept.

Otherwise, "size" is zero, and "gather" already holds a range, so gather->pgsize is non-zero and (gather->pgsize && gather->pgsize != size) is true. Therefore, again, iommu_iotlb_sync() would be called and initialize the size.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-5-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
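The factored-out check looks roughly like this (sketch; with gather->start/end kept as an inclusive range, a new range is disjoint only if it neither overlaps nor directly abuts the gathered one):

  static inline bool
  iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
                                 unsigned long iova, size_t size)
  {
          unsigned long start = iova, end = start + size - 1;

          /* an empty gather (end == 0) is never disjoint from anything */
          return gather->end != 0 &&
                 (end + 1 < gather->start || start > gather->end + 1);
  }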
-
Submitted by Robin Murphy

The MediaTek driver is not the only one which might want a basic address-based gathering behaviour, so although it's arguably simple enough to open-code, let's factor it out for the sake of cleanliness. Let's also take this opportunity to document the intent of these helpers for clarity.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-4-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
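The factored-out helper amounts to growing the gathered range to cover the new one (sketch; page-size tracking stays with the caller):

  static inline void
  iommu_iotlb_gather_add_range(struct iommu_iotlb_gather *gather,
                               unsigned long iova, size_t size)
  {
          unsigned long end = iova + size - 1;

          if (gather->start > iova)
                  gather->start = iova;
          if (gather->end < end)
                  gather->end = end;
  }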
-
- 26 July 2021, 3 commits
-
-
Submitted by John Garry

We now only ever set strict mode enabled in iommu_set_dma_strict(), so just remove the argument.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1626088340-5838-7-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Isaac J. Manjarres

Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to map a physically contiguous range of pages of the same size. For IOMMU drivers that do not specify a map_pages() callback, the existing logic of mapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-5-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
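The callback shape is roughly (sketch of the op as added to iommu_ops):

  /* map 'pgcount' physically contiguous blocks of 'pgsize' bytes in one
   * driver call; '*mapped' reports progress on partial failure */
  int (*map_pages)(struct iommu_domain *domain, unsigned long iova,
                   phys_addr_t paddr, size_t pgsize, size_t pgcount,
                   int prot, gfp_t gfp, size_t *mapped);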
-
Submitted by Isaac J. Manjarres

Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to unmap a virtually contiguous range of pages of the same size. For IOMMU drivers that do not specify an unmap_pages() callback, the existing logic of unmapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-3-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
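Correspondingly (sketch of the op):

  /* unmap 'pgcount' blocks of 'pgsize' bytes starting at 'iova'; returns
   * the number of bytes unmapped, TLB invalidations batched via gather */
  size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
                        size_t pgsize, size_t pgcount,
                        struct iommu_iotlb_gather *iotlb_gather);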
-
- 16 April 2021, 2 commits
-
-
Submitted by Robin Murphy

Rather than have separate opaque setter functions that are easy to overlook and lead to repetitive boilerplate in drivers, let's pass the relevant initialisation parameters directly to iommu_device_register().

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ab001b87c533b6f4db71eb90db6f888953986c36.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
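Driver-side, registration then takes the ops and the backing hardware device directly (sketch; my_iommu, my_ops and parent are placeholder names):

  /* replaces the old iommu_device_set_ops()/iommu_device_set_fwnode() calls */
  ret = iommu_device_register(&my_iommu->iommu, &my_ops, parent);
  if (ret)
          return ret;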
-
Submitted by Robin Murphy

It happens that the three drivers which first supported being modular are also ones which play games with their pgsize_bitmap, so they have non-const iommu_ops where dynamically setting the owner manages to work out OK. However, it's less than ideal to force that upon all drivers which want to be modular - like the new sprd-iommu driver, which now has a potential bug in that regard - so let's just statically set the module owner and let the ops remain const wherever possible.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/31423b99ff609c3d4b291c701a7a7a810d9ce8dc.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
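The resulting pattern in a modular driver (sketch; my_iommu_ops is a placeholder):

  /* with a statically set owner, the ops can stay const */
  static const struct iommu_ops my_iommu_ops = {
          /* ... callbacks ... */
          .owner = THIS_MODULE,
  };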
-
- 07 April 2021, 17 commits
-
-
Submitted by Christoph Hellwig

Remove the now unused iommu attr infrastructure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210401155256.298656-21-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

Use an explicit set_pgtable_quirks method that just passes the actual quirk bitmask instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-20-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
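The replacement interface passes the quirk bitmask straight through, roughly (sketch):

  int iommu_set_pgtable_quirks(struct iommu_domain *domain,
                               unsigned long quirks);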
-
Submitted by Robin Murphy

Instead, make the global iommu_dma_strict parameter in iommu.c canonical by exporting helpers to get and set it, and use those directly in the drivers. This makes sure that the iommu.strict parameter also works for the AMD and Intel IOMMU drivers on x86. As those default to lazy flushing, a new IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate representing the default if not overridden by an explicit parameter.

[ported on top of the other iommu_attr changes and added a few small missing bits]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210401155256.298656-19-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

Use an explicit enable_nesting method instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-17-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
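The replacement interface is simply (sketch):

  int iommu_enable_nesting(struct iommu_domain *domain);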
-
Submitted by Christoph Hellwig

The geometry information can be trivially queried from the iommu_domain structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-16-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
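So instead of a DOMAIN_ATTR_GEOMETRY attribute query, callers read the fields directly (sketch):

  /* the geometry is a plain member of struct iommu_domain */
  dma_addr_t start = domain->geometry.aperture_start;
  dma_addr_t end   = domain->geometry.aperture_end;
  bool forced      = domain->geometry.force_aperture;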
-
Submitted by Christoph Hellwig

DOMAIN_ATTR_PAGING is never used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-15-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

Instead of a separate call to enable all devices from the list, just enable the liodn once the device is attached to the iommu domain. This also removes the DOMAIN_ATTR_FSL_PAMU_ENABLE iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-11-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

Add a fsl_pamu_configure_l1_stash API that qman_portal can call directly, instead of indirecting through the iommu attr API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-8-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

The only thing that fsl_pamu_window_enable does for the current caller is to fill in the prot value in the only dma_window structure and to propagate a few values from the iommu_domain_geometry structure into the dma_window. Remove the dma_window entirely, hardcode the prot value, and otherwise use the iommu_domain_geometry structure instead. Remove the now unused ->domain_window_enable iommu method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-7-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

The only domains allocated force the use of a single window. Remove all the code related to multiple window support, as well as the need for qman_portal to force a single window. Remove the now unused DOMAIN_ATTR_WINDOWS iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-6-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

None of the values returned by this function are ever queried. Also remove the DOMAIN_ATTR_FSL_PAMUV1 enum value that is not otherwise used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-3-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

domain_window_disable is wired up by fsl_pamu, but never actually called.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-2-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jean-Philippe Brucker

Some systems allow devices to handle I/O Page Faults in the core mm, for example systems implementing the PCIe PRI extension or the Arm SMMU stall model. Infrastructure for reporting these recoverable page faults was added to the IOMMU core by commit 0c830e6b ("iommu: Introduce device fault report API"). Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them to IOPF-capable devices. Drivers can choose between a single global workqueue, one per IOMMU device, one per low-level fault queue, one per domain, etc.

When it receives a fault event, most commonly in an IRQ handler, the IOMMU driver reports the fault using iommu_report_device_fault(), which calls the registered handler. The page fault handler then calls the mm fault handler, and reports either success or failure with iommu_page_response(). After the handler succeeds, the hardware retries the access.

The iopf_param pointer could be embedded into iommu_fault_param. But putting iopf_param into the iommu_param structure allows us not to care about ordering between calls to iopf_queue_add_device() and iommu_register_device_fault_handler().

Tested-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-7-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
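In outline, an IOMMU driver wires this up roughly as follows (a hedged sketch; the queue name and error handling are illustrative):

  struct iopf_queue *q = iopf_queue_alloc("my-evtq");   /* name illustrative */

  if (!q)
          return -ENOMEM;
  ret = iopf_queue_add_device(q, dev);    /* link an IOPF-capable device */

  /* later, typically from the IRQ handler, report a recoverable fault: */
  ret = iommu_report_device_fault(dev, &evt);
  /* the IOPF handler resolves it in the mm and replies through
   * iommu_page_response(); on success the hardware retries the access */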
-
Submitted by Jean-Philippe Brucker

Some devices manage I/O Page Faults (IOPF) themselves instead of relying on PCIe PRI or Arm SMMU stall. Allow their drivers to enable SVA without mandating IOMMU-managed IOPF. The other device drivers now need to first enable IOMMU_DEV_FEAT_IOPF before enabling IOMMU_DEV_FEAT_SVA. Enabling IOMMU_DEV_FEAT_IOPF on its own doesn't have any effect visible to the device driver; it is used in combination with other features.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-4-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
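The resulting ordering for drivers that do rely on IOMMU-managed IOPF (sketch):

  ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
  if (ret)
          return ret;

  ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
  if (ret) {
          iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
          return ret;
  }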
-
Submitted by Jean-Philippe Brucker

The pasid-num-bits property shouldn't need a dedicated fwspec field; it's a job for device properties. Add properties for IORT, and access the number of PASID bits using device_property_read_u32().

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
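Consumers then read the PASID width as an ordinary device property (sketch):

  u32 pasid_bits;

  if (device_property_read_u32(dev, "pasid-num-bits", &pasid_bits))
          pasid_bits = 0;         /* property absent: no PASID support */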
-
Submitted by Jean-Philippe Brucker

Commit 986d5ecc ("iommu: Move fwspec->iommu_priv to struct dev_iommu") removed iommu_priv from the fwspec, and commit 5702ee24 ("ACPI/IORT: Check ATS capability in root complex nodes") added @flags. Update the struct doc.

Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-2-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Xiang Chen

After the change in patch ("iommu: Switch gather->end to the inclusive end"), performance drops from 1600+K IOPS to 1200K on our Kunpeng ARM64 platform. We find that the range [start1, end1) is actually contiguous with the range [end1, end2), but it is considered disjoint after the change, so more TLB syncs are needed and more time is spent on them. So fix the boundary issue to avoid the performance drop.

Fixes: 862c3715 ("iommu: Switch gather->end to the inclusive end")
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1616643504-120688-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 02 February 2021, 1 commit
-
-
Submitted by Joerg Roedel

dev_iommu_priv_get() needs a check similar to the one in dev_iommu_fwspec_get() to make sure no NULL pointer is dereferenced.

Fixes: 05a0542b ("iommu/amd: Store dev_data as device iommu private data")
Cc: stable@vger.kernel.org # v5.8+
Link: https://lore.kernel.org/r/20210202145419.29143-1-joro@8bytes.org
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=211241
Signed-off-by: Joerg Roedel <jroedel@suse.de>
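The fix amounts to (sketch mirroring dev_iommu_fwspec_get()):

  static inline void *dev_iommu_priv_get(struct device *dev)
  {
          if (dev->iommu)
                  return dev->iommu->priv;
          return NULL;    /* no dev->iommu attached: don't dereference it */
  }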
-
- 28 January 2021, 1 commit
-
-
Submitted by Lianbo Jiang

Currently, domain attach can be deferred from the IOMMU driver to the device driver, and when the IOMMU initializes, the devices on the bus are scanned and the default groups are allocated. Due to these changes, several devices can be added to the same group, as below:

[ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
[ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
[ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
[ 3.874892] pci 0000:02:00.1: Adding to iommu group 17

But when attaching these devices, a group is not allowed to have more than one device; otherwise an error is returned. This conflicts with the deferred attaching. Unfortunately, on my system there are two devices in the same group, for example:

[ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
[ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
...
[ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
[ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1

This finally caused the tg3 driver to fail when it calls dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():

[ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
[ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
[ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
[ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
[ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
[ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
[ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
[ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12

Similar situations also occur in other drivers, such as bnxt_en. This can be reproduced easily in a kdump kernel when SME is active.

Let's move the handling currently in iommu_dma_deferred_attach() into the iommu core code so that it can call __iommu_attach_device() directly instead of iommu_attach_device(). The external interface iommu_attach_device() is not suitable for handling this situation.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
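The core-side helper then looks roughly like this (sketch; by calling __iommu_attach_device() directly, it bypasses the multi-device-group restriction of the external interface):

  int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
  {
          const struct iommu_ops *ops = domain->ops;

          if (unlikely(ops->is_attach_deferred &&
                       ops->is_attach_deferred(domain, dev)))
                  return __iommu_attach_device(domain, dev);
          return 0;
  }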
-