- 18 August 2021, 2 commits
-
By Robin Murphy

Since iommu_iotlb_gather exists to help drivers optimise flushing for a given unmap request, it is also the logical place to indicate whether the unmap is strict or not, and thus help them further optimise for whether to expect a sync or a flush_all subsequently. As part of that, it also seems fair to make the flush queue code take responsibility for enforcing the really subtle ordering requirement it brings, so that we don't need to worry about forgetting that if new drivers want to add flush queue support, and can consolidate the existing versions. While we're adding to the kerneldoc, also fill in some info for @freelist, which was overlooked previously.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
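A minimal sketch of how the gather structure can carry this hint, assuming the flag is a plain boolean named `queued` (the field name and comments are inferences from the description, not a verbatim copy of the upstream header):

```c
#include <linux/list.h>
#include <linux/types.h>

/* Sketch only: fields as the log describes them. */
struct iommu_iotlb_gather {
	unsigned long		start;
	unsigned long		end;		/* inclusive end address */
	size_t			pgsize;
	struct list_head	freelist;	/* page-table pages freed by this unmap */
	bool			queued;		/* flushes are queued; expect no sync/flush_all */
};
```

A driver's iotlb_sync() can then bail out early when `queued` is set, leaving the actual invalidation to the flush queue.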
-
By Robin Murphy

Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA support, we can abandon the notion of drivers being responsible for the cookie type, and consolidate all the management into the core code.

CC: Yong Wu <yong.wu@mediatek.com>
CC: Chunyan Zhang <chunyan.zhang@unisoc.com>
CC: Maxime Ripard <mripard@kernel.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/46a2c0e7419c7d1d931762dc7b6a69fa082d199a.1628682048.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 26 July 2021, 3 commits
-
By John Garry

We now only ever set strict mode enabled in iommu_set_dma_strict(), so just remove the argument.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1626088340-5838-7-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
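For reference, a sketch of the prototypes before and after this change, as implied by the description (not copied from the tree):

```c
/* Before: the only value callers ever passed was true. */
void iommu_set_dma_strict(bool strict);

/* After: no argument; calling it always enables strict mode. */
void iommu_set_dma_strict(void);
```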
-
By Isaac J. Manjarres

Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to map a physically contiguous range of pages of the same size. For IOMMU drivers that do not specify a map_pages() callback, the existing logic of mapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-5-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
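A sketch of the new callback as it would sit in struct iommu_ops; the exact parameter list is an inference from the description:

```c
/* Excerpt sketch of struct iommu_ops with the new callback. */
struct iommu_ops {
	/* ... existing callbacks ... */

	/*
	 * Map @pgcount pages of size @pgsize, physically contiguous from
	 * @paddr, at IOVA @iova. The bytes actually mapped are reported
	 * back via @mapped so the core can unwind a partial failure.
	 */
	int (*map_pages)(struct iommu_domain *domain, unsigned long iova,
			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
			 int prot, gfp_t gfp, size_t *mapped);
};
```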
-
By Isaac J. Manjarres

Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to unmap a virtually contiguous range of pages of the same size. For IOMMU drivers that do not specify an unmap_pages() callback, the existing logic of unmapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-3-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
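The companion unmap callback, again sketched from the description rather than quoted from the tree:

```c
/* Excerpt sketch of struct iommu_ops with the new callback. */
struct iommu_ops {
	/* ... existing callbacks ... */

	/*
	 * Unmap @pgcount virtually contiguous pages of size @pgsize
	 * starting at @iova; returns the number of bytes unmapped.
	 * TLB maintenance is batched through @iotlb_gather.
	 */
	size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
			      size_t pgsize, size_t pgcount,
			      struct iommu_iotlb_gather *iotlb_gather);
};
```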
-
- 16 April 2021, 2 commits
-
By Robin Murphy

Rather than have separate opaque setter functions that are easy to overlook and lead to repetitive boilerplate in drivers, let's pass the relevant initialisation parameters directly to iommu_device_register().

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ab001b87c533b6f4db71eb90db6f888953986c36.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
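The consolidated call this describes might look like the following sketch; the parameter order is an assumption:

```c
/*
 * Everything needed for registration is handed over in one call,
 * replacing the separate iommu_device_set_ops() /
 * iommu_device_set_fwnode() setters drivers previously had to
 * remember to invoke before registering.
 */
int iommu_device_register(struct iommu_device *iommu,
			  const struct iommu_ops *ops,
			  struct device *hwdev);
```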
-
By Robin Murphy

It happens that the 3 drivers which first supported being modular are also ones which play games with their pgsize_bitmap, so have non-const iommu_ops where dynamically setting the owner manages to work out OK. However, it's less than ideal to force that upon all drivers which want to be modular - like the new sprd-iommu driver, which now has a potential bug in that regard - so let's just statically set the module owner and let ops remain const wherever possible.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/31423b99ff609c3d4b291c701a7a7a810d9ce8dc.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
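In practice this amounts to each modular driver tagging its (now const) ops with its owner; a sketch, where "my_iommu" is a placeholder name rather than an in-tree driver:

```c
#include <linux/iommu.h>
#include <linux/module.h>

static const struct iommu_ops my_iommu_ops = {
	/* ... the usual callbacks ... */
	.owner = THIS_MODULE,	/* set statically, so ops can stay const */
};
```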
-
- 07 April 2021, 17 commits
-
By Christoph Hellwig

Remove the now unused iommu attr infrastructure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210401155256.298656-21-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

Use an explicit set_pgtable_quirks method that just passes the actual quirk bitmask instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-20-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
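A sketch of the explicit method and its core-side wrapper, with signatures inferred from the description:

```c
/* Excerpt sketch of struct iommu_ops. */
struct iommu_ops {
	/* ... existing callbacks ... */

	/* Apply the quirk bitmask to the domain's page tables. */
	int (*set_pgtable_quirks)(struct iommu_domain *domain,
				  unsigned long quirks);
};

/* Callers use this instead of the old iommu_domain_set_attr() indirection. */
int iommu_set_pgtable_quirks(struct iommu_domain *domain, unsigned long quirk);
```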
-
By Robin Murphy

Instead, make the global iommu_dma_strict parameter in iommu.c canonical by exporting helpers to get and set it, and use those directly in the drivers. This makes sure that the iommu.strict parameter also works for the AMD and Intel IOMMU drivers on x86. As those default to lazy flushing, a new IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to represent the default if not overridden by an explicit parameter.

[ported on top of the other iommu_attr changes and added a few small missing bits]

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210401155256.298656-19-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
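A sketch of the get/set helper pair this introduces (signatures inferred from the description):

```c
/* Set the global default, e.g. from arch code or the iommu.strict option. */
void iommu_set_dma_strict(bool strict);

/* Query the effective policy for a domain; drivers call this directly. */
bool iommu_get_dma_strict(struct iommu_domain *domain);
```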
-
By Christoph Hellwig

Use an explicit enable_nesting method instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-17-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
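A one-line sketch of the method that replaces the DOMAIN_ATTR_NESTING attribute (signature inferred from the description):

```c
/* Excerpt sketch of struct iommu_ops. */
struct iommu_ops {
	/* ... existing callbacks ... */

	/* Replaces iommu_domain_set_attr(domain, DOMAIN_ATTR_NESTING, ...). */
	int (*enable_nesting)(struct iommu_domain *domain);
};
```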
-
By Christoph Hellwig

The geometry information can be trivially queried from the iommu_domain structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-16-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
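What "trivially queried" means in practice; a sketch, assuming the caller already holds a domain:

```c
#include <linux/iommu.h>

/* Read the aperture straight from the domain instead of a get_attr() call. */
static void show_geometry(struct iommu_domain *domain)
{
	struct iommu_domain_geometry *geo = &domain->geometry;

	pr_info("aperture: [%pad, %pad], forced: %d\n",
		&geo->aperture_start, &geo->aperture_end,
		geo->force_aperture);
}
```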
-
By Christoph Hellwig

DOMAIN_ATTR_PAGING is never used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-15-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

Instead of a separate call to enable all devices from the list, just enable the liodn once the device is attached to the iommu domain. This also removes the DOMAIN_ATTR_FSL_PAMU_ENABLE iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-11-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

Add a fsl_pamu_configure_l1_stash API that qman_portal can call directly instead of indirecting through the iommu attr API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-8-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

The only thing that fsl_pamu_window_enable does for the current caller is to fill in the prot value in the only dma_window structure, and to propagate a few values from the iommu_domain_geometry structure into the dma_window. Remove the dma_window entirely, hardcode the prot value, and otherwise use the iommu_domain_geometry structure instead. Remove the now unused ->domain_window_enable iommu method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-7-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

The only domains allocated force the use of a single window. Remove all the code related to multiple window support, as well as the need for qman_portal to force a single window. Remove the now unused DOMAIN_ATTR_WINDOWS iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-6-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

None of the values returned by this function are ever queried. Also remove the DOMAIN_ATTR_FSL_PAMUV1 enum value that is not otherwise used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-3-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

domain_window_disable is wired up by fsl_pamu, but never actually called.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-2-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Jean-Philippe Brucker

Some systems allow devices to handle I/O Page Faults in the core mm, for example systems implementing the PCIe PRI extension or the Arm SMMU stall model. Infrastructure for reporting these recoverable page faults was added to the IOMMU core by commit 0c830e6b ("iommu: Introduce device fault report API"). Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them to IOPF-capable devices. Drivers can choose between a single global workqueue, one per IOMMU device, one per low-level fault queue, one per domain, etc.

When it receives a fault event, most commonly in an IRQ handler, the IOMMU driver reports the fault using iommu_report_device_fault(), which calls the registered handler. The page fault handler then calls the mm fault handler, and reports either success or failure with iommu_page_response(). After the handler succeeds, the hardware retries the access.

The iopf_param pointer could be embedded into iommu_fault_param. But putting iopf_param into the iommu_param structure allows us not to care about ordering between calls to iopf_queue_add_device() and iommu_register_device_fault_handler().

Tested-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-7-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
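A sketch of the driver-side setup the description lays out, using the iopf API names it mentions; the helper name and the one-queue-per-device choice are illustrative assumptions, and error handling is elided:

```c
#include <linux/iommu.h>

static struct iopf_queue *iopf_q;

/* Probe path: create a fault queue and attach an IOPF-capable device. */
static int my_driver_setup_iopf(struct device *dev)
{
	iopf_q = iopf_queue_alloc(dev_name(dev));
	if (!iopf_q)
		return -ENOMEM;
	return iopf_queue_add_device(iopf_q, dev);
}

/*
 * IRQ path (conceptually): the IOMMU driver calls
 * iommu_report_device_fault(); the IOPF code queues the fault, runs the
 * mm fault handler from a work item, and completes the fault with
 * iommu_page_response() so the hardware can retry the access.
 */
```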
-
By Jean-Philippe Brucker

Some devices manage I/O Page Faults (IOPF) themselves instead of relying on PCIe PRI or Arm SMMU stall. Allow their drivers to enable SVA without mandating IOMMU-managed IOPF. The other device drivers now need to first enable IOMMU_DEV_FEAT_IOPF before enabling IOMMU_DEV_FEAT_SVA. Enabling IOMMU_DEV_FEAT_IOPF on its own doesn't have any effect visible to the device driver; it is used in combination with other features.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-4-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
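A sketch of the enable order this mandates for drivers that want IOMMU-managed IOPF; the wrapper name is hypothetical:

```c
#include <linux/iommu.h>

/* Enable SVA with IOMMU-managed page faults: IOPF first, then SVA. */
static int my_enable_sva(struct device *dev)
{
	int ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	if (ret)
		return ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	return ret;
}
```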
-
By Jean-Philippe Brucker

The pasid-num-bits property shouldn't need a dedicated fwspec field; it's a job for device properties. Add properties for IORT, and access the number of PASID bits using device_property_read_u32().

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
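A sketch of the consumer side; the helper name is hypothetical, but the property name and accessor come from the description:

```c
#include <linux/property.h>

/* Read the PASID width as a plain device property. */
static u32 my_get_pasid_bits(struct device *dev)
{
	u32 num_bits = 0;

	/* An absent property simply leaves num_bits at 0. */
	device_property_read_u32(dev, "pasid-num-bits", &num_bits);
	return num_bits;
}
```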
-
By Jean-Philippe Brucker

Commit 986d5ecc ("iommu: Move fwspec->iommu_priv to struct dev_iommu") removed iommu_priv from fwspec, and commit 5702ee24 ("ACPI/IORT: Check ATS capability in root complex nodes") added @flags. Update the struct doc.

Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-2-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Xiang Chen

After the change of patch ("iommu: Switch gather->end to the inclusive end"), performance drops from 1600+K IOPS to 1200K on our Kunpeng ARM64 platform. We find that the range [start1, end1) is actually adjacent to the range [end1, end2), but after the change it is considered disjoint, so more TLB syncs are needed, and more time is spent on them. So fix the boundary issue to avoid the performance drop.

Fixes: 862c3715 ("iommu: Switch gather->end to the inclusive end")
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1616643504-120688-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
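The fix hinges on the adjacency test once the end is inclusive; a sketch of the corrected gather helper, with the merge condition reconstructed from the description (function name and structure are illustrative):

```c
#include <linux/iommu.h>

static void gather_add_page(struct iommu_domain *domain,
			    struct iommu_iotlb_gather *gather,
			    unsigned long iova, size_t size)
{
	unsigned long start = iova, end = start + size - 1;

	/*
	 * Ranges that merely touch ([s1, e1] and [e1 + 1, e2]) must still
	 * merge; only truly disjoint ranges force an intermediate sync,
	 * hence the "+ 1" on both sides of the comparison.
	 */
	if (gather->pgsize != size ||
	    end + 1 < gather->start || start > gather->end + 1) {
		if (gather->pgsize)
			iommu_iotlb_sync(domain, gather);
		gather->pgsize = size;
	}

	if (gather->end < end)
		gather->end = end;
	if (gather->start > start)
		gather->start = start;
}
```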
-
- 02 February 2021, 1 commit
-
By Joerg Roedel

dev_iommu_priv_get() needs a check similar to dev_iommu_fwspec_get() to make sure no NULL pointer is dereferenced.

Fixes: 05a0542b ("iommu/amd: Store dev_data as device iommu private data")
Cc: stable@vger.kernel.org # v5.8+
Link: https://lore.kernel.org/r/20210202145419.29143-1-joro@8bytes.org
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=211241
Signed-off-by: Joerg Roedel <jroedel@suse.de>
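The fix amounts to guarding the dereference on dev->iommu; a sketch mirroring the dev_iommu_fwspec_get() pattern the log refers to:

```c
#include <linux/device.h>

/* Sketch of the guarded accessor. */
static inline void *dev_iommu_priv_get(struct device *dev)
{
	if (dev->iommu)
		return dev->iommu->priv;
	else
		return NULL;
}
```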
-
- 28 January 2021, 1 commit
-
By Lianbo Jiang

Currently, domain attach is allowed to be deferred from the iommu driver to the device driver, and when the iommu initializes, the devices on the bus are scanned and the default groups are allocated. Due to these changes, several devices can be added to the same group, as below:

[ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
[ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
[ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
[ 3.874892] pci 0000:02:00.1: Adding to iommu group 17

But when attaching these devices, a group is not allowed to have more than one device; otherwise an error is returned. This conflicts with deferred attaching. Unfortunately, on my machine there are two devices in the same group, for example:

[ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
[ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
...
[ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
[ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1

This finally causes the tg3 driver to fail when it calls dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():

[ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
[ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
[ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
[ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
[ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
[ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
[ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
[ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12

Similar situations also occur in other drivers, such as bnxt_en. This can be reproduced easily in a kdump kernel when SME is active.

Let's move the handling currently in iommu_dma_deferred_attach() into the iommu core code, so that it can call __iommu_attach_device() directly instead of iommu_attach_device(). The external interface iommu_attach_device() is not suitable for handling this situation.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 January 2021, 4 commits
-
By Yong Wu

Currently gather->end is an "unsigned long", which may overflow on 32-bit architectures in the corner case 0xfff00000 + 0x100000 (iova + size). Although this doesn't affect the size (end - start), it affects the check "gather->end < end". This patch changes this "end" to the real, inclusive end address (end = start + size - 1). Correspondingly, update the length to "end - start + 1".

Fixes: a7d20dc1 ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
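With an inclusive end, the flush length becomes "end - start + 1"; a sketch of a driver-side sync under that convention (the function is hypothetical, with the hardware flush reduced to a debug print):

```c
#include <linux/iommu.h>

static void my_iotlb_sync(struct iommu_domain *domain,
			  struct iommu_iotlb_gather *gather)
{
	size_t length = gather->end - gather->start + 1;

	/* Hardware-specific invalidation of [start, start + length) goes here. */
	pr_debug("flush iova %lx length %zx\n", gather->start, length);
}
```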
-
By Yong Wu

iotlb_sync_map allows IOMMU drivers to sync the TLB after the whole mapping completes. This patch adds iova and size as parameters to it; the IOMMU driver can then flush the TLB for the whole range at once after the iova mapping, to improve performance.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-3-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
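A sketch of the extended callback, with the parameter list inferred from the description:

```c
/* Excerpt sketch of struct iommu_ops. */
struct iommu_ops {
	/* ... existing callbacks ... */

	/* One ranged flush after the whole map, instead of per-page syncs. */
	void (*iotlb_sync_map)(struct iommu_domain *domain,
			       unsigned long iova, size_t size);
};
```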
-
By John Garry

Function iommu_dev_has_feature() has never been referenced in the tree, and there does not appear to be anything coming soon to use it, so delete it.

Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1609940111-28563-7-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By John Garry

Function iommu_domain_window_disable() is not referenced in the tree, so delete it.

Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1609940111-28563-6-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 25 November 2020, 2 commits
-
By Sai Prakash Ranjan

Add a new iommu domain attribute, DOMAIN_ATTR_IO_PGTABLE_CFG, for pagetable configuration. Initially it will be used to set quirks, such as using the system cache (aka last-level cache), so that client drivers like GPUs can set the right attributes for caching their hardware pagetables in the system cache; later it can be extended to carry other page table configuration data.

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/9190aa16f378fc0a7f8e57b2b9f60b033e7eeb4f.1606287059.git.saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
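A sketch of how a client might have used the attribute at the time (the payload struct, quirk flag, and helper name are assumptions based on the description; later commits in this log remove the attr interface entirely):

```c
#include <linux/iommu.h>
#include <linux/io-pgtable.h>

/* Hypothetical client setup: request outer-WBWA walks for GPU pagetables. */
static int my_gpu_set_llc(struct iommu_domain *domain)
{
	struct io_pgtable_domain_attr pgtbl_cfg = {
		.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA,
	};

	return iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
				     &pgtbl_cfg);
}
```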
-
By Tom Murphy

Allow iommu_unmap_fast() to return newly freed page table pages, and pass the freelist to queue_iova() in the dma-iommu ops path. This is useful for iommu drivers (in this case the Intel iommu driver) which need to wait for the IOTLB to be flushed before newly freed/unmapped page table pages can be freed. This way we can still batch IOTLB free operations and handle the freelists.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20201124082057.2614359-2-baolu.lu@linux.intel.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 01 October 2020, 2 commits
-
By Jacob Pan

IOMMU user APIs are responsible for processing user data. This patch changes the interface such that user pointers can be passed into IOMMU code directly. Separate kernel APIs without user pointers are introduced for in-kernel users of the UAPI functionality.

IOMMU UAPI data has a user-filled argsz field which indicates the data length of the structure. User data is not trusted; argsz must be validated based on the current kernel data size, mandatory data size, and feature flags.

User data may also be extended, resulting in a possible argsz increase. Backward compatibility is ensured based on size and flags (or the functional equivalent fields) checking.

This patch adds sanity checks in the IOMMU layer. In addition to argsz, reserved/unused fields in padding, flags, and version are also checked. Details are documented in Documentation/userspace-api/iommu.rst.

Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/1601051567-54787-6-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Jacob Pan

User APIs such as iommu_sva_unbind_gpasid() may also be used by the kernel. Since we introduced user pointers to the UAPI functions, in-kernel callers cannot share the same APIs. In-kernel callers are also trusted; there is no need to validate the data.

We plan to have two flavors of the same API functions, one called through ioctls, carrying a user pointer, and one called directly with valid IOMMU UAPI structs. To differentiate the two, let's rename the existing functions with an iommu_uapi_ prefix.

Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/1601051567-54787-5-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 18 September 2020, 1 commit
-
By Fenghua Yu

PASID is defined as a few different types in the iommu code, including "int", "u32", and "unsigned int". To be consistent and to match the uapi definitions, define PASID and its variations (e.g. max PASID) as "u32". "u32" is also shorter and a little more explicit than "unsigned int". No PASID type change in uapi, although it defines PASID as __u64 in some places.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Link: https://lkml.kernel.org/r/1600187413-163670-2-git-send-email-fenghua.yu@intel.com
-
- 04 September 2020, 1 commit
-
By Tom Murphy

To keep naming consistent, we should stick with *iotlb*. This patch renames a few remaining functions.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Link: https://lore.kernel.org/r/20200817210051.13546-1-murphyt7@tcd.ie
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 July 2020, 1 commit
-
By Will Deacon

The IOMMU_SYS_CACHE_ONLY flag was never exposed via the DMA API and has no in-tree users. Remove it.

Cc: Robin Murphy <robin.murphy@arm.com>
Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 30 June 2020, 1 commit
-
By Marek Szyprowski

Move the recently added sg_table wrapper out of CONFIG_IOMMU_SUPPORT to let the client code compile also when IOMMU support is disabled.

Fixes: 48530d9f ("iommu: add generic helper for mapping sgtable objects")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200630081756.18526-1-m.szyprowski@samsung.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 29 May 2020, 1 commit
-
By Jean-Philippe Brucker

After binding a device to an mm, device drivers currently need to register a mm_exit handler. This function is called when the mm exits, to gracefully stop DMA targeting the address space and flush page faults to the IOMMU. This is deemed too complex for the MMU release() notifier, which may be triggered by any mmput() invocation, from about 120 callsites [1].

The upcoming SVA module has an example of such complexity: the I/O Page Fault handler would need to call mmput_async() instead of mmput() after handling an IOPF, to avoid triggering the release() notifier, which would in turn drain the IOPF queue and lock up.

Another concern is the DMA stop function taking too long, up to several minutes [2]. For some mmput() callers this may disturb other users. For example, if the OOM killer picks the mm bound to a device as the victim and that mm's memory is locked, and the release() takes too long, it might choose additional innocent victims to kill.

To simplify the MMU release notifier, don't forward the notification to device drivers. Since they don't stop DMA on mm exit anymore, the PASID lifetime is extended:

(1) The device driver calls bind(). A PASID is allocated. Here any DMA fault is handled by mm, and on error we don't print anything to dmesg. Userspace can easily trigger errors by issuing DMA on unmapped buffers.

(2) exit_mmap(), for example when the process took a SIGKILL. This step doesn't happen during normal operations. Remove the pgd from the PASID table, since the page tables are about to be freed. Invalidate the IOTLBs. Here the device may still perform DMA on the address space. Incoming transactions are aborted but faults aren't printed out. ATS Translation Requests return Successful Translation Completions with R=W=0. PRI Page Requests return with Invalid Request.

(3) The device driver stops DMA, possibly following release of a fd, and calls unbind(). The PASID table is cleared, and the IOTLB invalidated if necessary. The page fault queues are drained, and the PASID is freed. If DMA for that PASID is still running here, something went seriously wrong and errors should be reported.

For now, remove iommu_sva_ops entirely. We might need to re-introduce them at some point, for example to notify device drivers of unhandled IOPF.

[1] https://lore.kernel.org/linux-iommu/20200306174239.GM31668@ziepe.ca/
[2] https://lore.kernel.org/linux-iommu/4d68da96-0ad5-b412-5987-2f7a6aa796c3@amd.com/

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200423125329.782066-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 May 2020, 1 commit
-
By Sai Praneeth Prakhya

After moving iommu_group setup to the iommu core code [1][2] and removing private domain support in vt-d [3], there are no users for functions such as iommu_request_dm_for_dev(), iommu_request_dma_domain_for_dev(), and request_default_domain_for_dev(). So, remove these functions.

[1] commit dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs")
[2] commit e5d1841f ("iommu/vt-d: Convert to probe/release_device() call-backs")
[3] commit 327d5b2f ("iommu/vt-d: Allow 32bit devices to uses DMA domain")

Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200513224721.20504-1-sai.praneeth.prakhya@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-