- 13 October 2022, 1 commit

Submitted by Kunkun Jiang

virt inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5UN39
CVE: NA

------------------------------

Two redundant arguments were added to 'iommu_device_register' in 'include/linux/iommu.h' by commit bbf3b39e ("iommu: Introduce dirty log tracking framework"). As a result, compiling the kernel fails when CONFIG_IOMMU_API is disabled. Delete the two redundant arguments to solve this problem.

Fixes: bbf3b39e ("iommu: Introduce dirty log tracking framework")
Reported-by: Yejian Zheng <zhengyejian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 13 July 2022, 1 commit

Submitted by Guo Mengqi

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5EOOG
CVE: NA

--------------------------------

This reverts commit da76349c. However, the iommu_fault_param and iommu_fault_event changes are kept to avoid a KABI change.

Signed-off-by: Guo Mengqi <guomengqi3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 08 March 2022, 1 commit

Submitted by Keqian Zhu

virt inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4WK5B
CVE: NA

------------------------------

The iommu_domain may contain more than one DMA range that can be dirty-log tracked separately, so it is hard to track the dirty log status of the iommu_domain. The upper layer (e.g. vfio) should make sure it is doing the right thing.

Fixes: bbf3b39e ("iommu: Introduce dirty log tracking framework")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Tested-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 31 December 2021, 1 commit

Submitted by Wang ShaoBo

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4KFY7?from=project-issue
CVE: NA

-------------------------------

Reserve space in struct iommu_domain and struct iommu_ops.

Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 23 December 2021, 1 commit

Submitted by Xingang Wang

ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4L735
CVE: NA

-------------------------------------------------

This introduces support for setting and getting device configuration in the IOMMU private driver. For example, when the SMMU MPAM configuration needs to be set in the SMMU driver, these interfaces help.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 02 September 2021, 1 commit

Submitted by Liang Li (Euler)

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

When CONFIG_IOMMU_API is off, the kernel fails to compile with the error below:

  ...
  LD      vmlinux.o
  drivers/of/platform.o: In function `iommu_bind_guest_msi':
  platform.c:(.text+0x9e): multiple definition of `iommu_bind_guest_msi'
  drivers/of/device.o:device.c:(.text+0x122): first defined here
  .../repo/srcs/kernel/Makefile:1178: recipe for target 'vmlinux' failed
  ...

This should be a typo introduced by commit 9db83ab7. Simply correcting the stub function to be 'static inline' in the header file is good enough.

Signed-off-by: Liang Li (Euler) <liliang889@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
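
For context, the stub pattern this fix restores is sketched below with hypothetical names (CONFIG_FOO_API, foo_do_something are placeholders, not symbols from this tree): a header stub that is not 'static inline' emits a global symbol in every object that includes the header, which is exactly what produced the "multiple definition" link error.

#include <linux/errno.h>

struct device;

#ifdef CONFIG_FOO_API
int foo_do_something(struct device *dev);       /* real implementation lives in a .c file */
#else
/* 'static inline' gives each translation unit its own local copy: no link-time clash. */
static inline int foo_do_something(struct device *dev)
{
        return -ENODEV;                         /* subsystem disabled */
}
#endif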

- 23 July 2021, 1 commit

Submitted by LeoLiu-oc

zhaoxin inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA

----------------------------------------------------------------

Some ACPI devices need to issue DMA requests to access the reserved memory area. BIOS uses the device scope type ACPI_NAMESPACE_DEVICE in RMRR to report these ACPI devices. This patch adds support for detecting ACPI devices in RMRR and, in order to distinguish them from PCI devices, modifies some interface functions.

Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 19 July 2021, 2 commits

Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

On ARM, MSIs are translated by the SMMU. An IOVA is allocated for each MSI doorbell. If both the host and the guest are exposed with SMMUs, we end up with two different IOVAs, one allocated by each. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB).

So we end up with two untied mappings:

           S1             S2
  gIOVA -> gDB
                 hIOVA -> hDB

Currently the PCI device is programmed by the host with hIOVA as the MSI doorbell, so this does not work. This patch introduces an API to pass gIOVA/gDB to the host so that gIOVA can be reused by the host instead of re-allocating a new IOVA. The goal is to create the following nested mapping:

           S1        S2
  gIOVA -> gDB   ->  hDB

and program the PCI device with the gIOVA MSI doorbell.

In case several devices are attached to this nested domain (devices belonging to the same group), they cannot be isolated on the guest side either, so they should also end up in the same domain on the guest side. We enforce that all the devices attached to the host iommu domain use the same physical doorbell, and similarly a single virtual doorbell mapping gets registered (a single virtual doorbell is used on the guest as well).

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In the virtualization use case, when a guest is assigned a PCI host device protected by a virtual IOMMU on the guest, the physical IOMMU must be programmed to be consistent with the guest mappings. If the physical IOMMU supports two translation stages, it makes sense to program guest mappings onto the first stage/level (ARM/Intel terminology) while the host owns stage/level 2.

In that case, it is mandated to trap on guest configuration settings and pass those to the physical IOMMU driver. This patch adds a new API to the iommu subsystem that allows setting/unsetting the PASID table information. A generic iommu_pasid_table_config struct is introduced in a new iommu.h uapi header. This is going to be used by the VFIO user API.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 16 July 2021, 2 commits

Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements switch_dirty_log. In order to get a finer dirty granule, it invokes arm_smmu_split_block() when dirty log tracking starts, and invokes arm_smmu_merge_page() to recover block mappings when dirty log tracking stops.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Keqian Zhu

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

Some types of IOMMU are capable of tracking the DMA dirty log, such as ARM SMMU with HTTU or Intel IOMMU with SLADE. This introduces the dirty log tracking framework in the IOMMU base layer. Four new essential interfaces are added, and we maintain the status of dirty log tracking in iommu_domain.

1. iommu_support_dirty_log: Check whether the domain supports dirty log tracking
2. iommu_switch_dirty_log: Perform actions to start|stop dirty log tracking
3. iommu_sync_dirty_log: Sync dirty log from the IOMMU into a dirty bitmap
4. iommu_clear_dirty_log: Clear the dirty log of the IOMMU by a mask bitmap

Note: Don't call these interfaces concurrently with other ops that access the underlying page table.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
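
A rough sketch of how an upper layer such as VFIO might drive these four interfaces follows. The parameter lists below are assumptions for illustration only; they are not the exact prototypes introduced by this framework.

#include <linux/iommu.h>

static int track_dirty_pages(struct iommu_domain *domain,
                             unsigned long iova, size_t size,
                             unsigned long *bitmap, unsigned long pgshift)
{
        int ret;

        if (!iommu_support_dirty_log(domain))           /* 1. capability check */
                return -EOPNOTSUPP;

        /* 2. start tracking: block mappings may be split for finer granularity */
        ret = iommu_switch_dirty_log(domain, true, iova, size,
                                     IOMMU_READ | IOMMU_WRITE);
        if (ret)
                return ret;

        /* ... the guest or device performs DMA writes here ... */

        /* 3. pull the hardware dirty state into the caller's bitmap */
        ret = iommu_sync_dirty_log(domain, iova, size, bitmap, iova, pgshift);
        if (ret)
                goto out_stop;

        /* 4. clear the hardware dirty state for the pages just harvested */
        ret = iommu_clear_dirty_log(domain, iova, size, bitmap, iova, pgshift);

out_stop:
        /* stop tracking: block mappings can be merged back */
        iommu_switch_dirty_log(domain, false, iova, size, 0);
        return ret;
}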

- 03 July 2021, 1 commit

Submitted by Lijun Fang

hulk inclusion
category: bugfix
bugzilla: 51855
CVE: NA

---------------------------------------------

iommu_sva_bind_group() should return a pointer rather than an integer, so the existing stub causes a compile error when CONFIG_IOMMU_API is not defined.

Fixes: 45e52e3fc545 ("iommu: Add group variant for SVA bind/unbind")
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
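
A minimal sketch of the kind of stub such a fix installs when CONFIG_IOMMU_API is off; the exact prototype of iommu_sva_bind_group() in this tree is assumed here, and ERR_PTR() is the idiomatic way to encode "not supported" in a pointer return value.

#include <linux/err.h>
#include <linux/errno.h>

struct iommu_sva;
struct iommu_group;
struct mm_struct;

#ifndef CONFIG_IOMMU_API
static inline struct iommu_sva *
iommu_sva_bind_group(struct iommu_group *group, struct mm_struct *mm,
                     void *drvdata)
{
        /* Pointer-typed stub: callers check it with IS_ERR(), not "< 0". */
        return ERR_PTR(-ENODEV);
}
#endif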

- 03 June 2021, 9 commits

Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=7984f27d63b2f56dcdc106be9484061b3a13df0b

---------------------------------------------

VFIO works directly with IOMMU groups rather than devices. Even if groups are still required to contain only one device in order to use SVA, add a set of helpers for VFIO.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Jacob Pan

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=9a1e957fd072d0e993827e428367947d915da382

---------------------------------------------

When I/O page faults are reported outside the IOMMU subsystem, the page request handler may fail for various reasons, e.g. a guest received page requests but did not get a chance to run for a long time. The unresponsive behavior could hold off limited resources on the pending device. There can be hardware or credit-based software solutions, as suggested in PCI ATS chapter 4.

To provide a basic safety net, this patch introduces a per-device deferrable timer which monitors the longest pending page fault that requires a response. Proper action, such as sending a failure response code, could be taken when the timer expires, but that is not included in this patch. We need to consider the life cycle of the page request group ID to prevent confusion with group IDs reused by a device. For now, a warning message provides a clue about such failures.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
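
For illustration, this is how a per-device deferrable timer is typically set up with the core timer API; the structure and field names below are assumptions, not the ones used by this patch.

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/printk.h>

/* Illustrative per-device fault-tracking state. */
struct iopf_timeout_state {
        struct timer_list timer;
        unsigned long oldest_pending;   /* jiffies when the oldest unanswered fault arrived */
};

static void iopf_timeout_fn(struct timer_list *t)
{
        struct iopf_timeout_state *state = from_timer(state, t, timer);

        /* The patch only warns; a failure response code could be sent here instead. */
        pr_warn("page request pending for %lu jiffies, no response sent\n",
                jiffies - state->oldest_pending);
}

static void iopf_timeout_start(struct iopf_timeout_state *state,
                               unsigned long timeout_ms)
{
        /* TIMER_DEFERRABLE: firing may be delayed until the CPU wakes up anyway. */
        timer_setup(&state->timer, iopf_timeout_fn, TIMER_DEFERRABLE);
        state->oldest_pending = jiffies;
        mod_timer(&state->timer, jiffies + msecs_to_jiffies(timeout_ms));
}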

Submitted by Jacob Pan

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=ff9d3ac0df919b780a81ccfa54b4dc2eb69175a8

---------------------------------------------

In the virtualization use case, when a guest is assigned a PCI host device protected by a virtual IOMMU on the guest, the physical IOMMU must be programmed to be consistent with the guest mappings. If the physical IOMMU supports two translation stages, it makes sense to program guest mappings onto the first stage/level (ARM/Intel terminology) while the host owns stage/level 2.

In that case, it is mandated to trap on guest configuration settings and pass those to the physical IOMMU driver. This patch adds a new API to the iommu subsystem that allows setting/unsetting the PASID table information. A generic iommu_pasid_table_config struct is introduced in a new iommu.h uapi header. This is going to be used by the VFIO user API.

Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Xiang Chen

stable inclusion
from stable-5.10.37
commit 620aa5821aaa2636d1675c72ab89ee1f26ee4fa1
bugzilla: 51868
CVE: NA

--------------------------------

[ Upstream commit 3431c3f6 ]

After the change in patch ("iommu: Switch gather->end to the inclusive end"), performance drops from 1600+K IOPS to 1200K on our Kunpeng ARM64 platform. We find that the range [start1, end1) is actually adjacent to the range [end1, end2), but it is considered disjoint after the change, so more TLB syncs are needed and more time is spent on them. So fix the boundary issue to avoid the performance drop.

Fixes: 862c3715 ("iommu: Switch gather->end to the inclusive end")
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1616643504-120688-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit fc36479d
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Some systems allow devices to handle I/O Page Faults in the core mm, for example systems implementing the PCIe PRI extension or the Arm SMMU stall model. Infrastructure for reporting these recoverable page faults was added to the IOMMU core by commit 0c830e6b ("iommu: Introduce device fault report API"). Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them to IOPF-capable devices. Drivers can choose between a single global workqueue, one per IOMMU device, one per low-level fault queue, one per domain, etc.

When it receives a fault event, most commonly in an IRQ handler, the IOMMU driver reports the fault using iommu_report_device_fault(), which calls the registered handler. The page fault handler then calls the mm fault handler and reports either success or failure with iommu_page_response(). After the handler succeeds, the hardware retries the access.

The iopf_param pointer could be embedded into iommu_fault_param. But putting iopf_param into the iommu_param structure allows us not to care about ordering between calls to iopf_queue_add_device() and iommu_register_device_fault_handler().

Tested-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-7-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
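
A sketch of the driver-side flow described above. The iopf_queue_* and fault-report helpers follow the mainline IOPF framework of this era; the SVA fault handler registered below is a driver-local placeholder, since the exact handler symbol differs between trees.

#include <linux/errno.h>
#include <linux/iommu.h>

/* Placeholder for the IOPF handler provided by the SVA fault code. */
extern int my_sva_iopf_handler(struct iommu_fault *fault, void *cookie);

static struct iopf_queue *my_iopf_queue;

static int my_iommu_enable_iopf(struct device *dev)
{
        int ret;

        /* One fault workqueue shared by all devices behind this IOMMU instance. */
        if (!my_iopf_queue) {
                my_iopf_queue = iopf_queue_alloc("my-iommu-iopf");
                if (!my_iopf_queue)
                        return -ENOMEM;
        }

        ret = iopf_queue_add_device(my_iopf_queue, dev);
        if (ret)
                return ret;

        /* Recoverable faults reported for this device are routed to the handler. */
        return iommu_register_device_fault_handler(dev, my_sva_iopf_handler, dev);
}

/* From the IRQ path: turn a decoded hardware event into a fault report. */
static int my_iommu_report(struct device *dev, struct iommu_fault_event *evt)
{
        return iommu_report_device_fault(dev, evt);
}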

Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit 34b48c70
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Some devices manage I/O Page Faults (IOPF) themselves instead of relying on PCIe PRI or Arm SMMU stall. Allow their drivers to enable SVA without mandating IOMMU-managed IOPF. The other device drivers now need to first enable IOMMU_DEV_FEAT_IOPF before enabling IOMMU_DEV_FEAT_SVA. Enabling IOMMU_DEV_FEAT_IOPF on its own doesn't have any effect visible to the device driver; it is used in combination with other features.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-4-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
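
A short sketch of the ordering a device driver relying on IOMMU-managed IOPF follows after this change: enable IOPF first, then SVA, and tear down in reverse order on failure.

#include <linux/iommu.h>

static int my_driver_enable_sva(struct device *dev)
{
        int ret;

        ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
        if (ret)
                return ret;

        ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
        if (ret)
                iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);

        return ret;
}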

Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit 434b73e6
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The pasid-num-bits property shouldn't need a dedicated fwspec field; it's a job for device properties. Add properties for IORT, and access the number of PASID bits using device_property_read_u32().

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
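
A sketch of the consumer side: instead of a dedicated fwspec field, the driver reads the "pasid-num-bits" device property. The surrounding helper is illustrative, not code from this patch.

#include <linux/device.h>
#include <linux/property.h>

static void my_smmu_read_pasid_bits(struct device *dev, unsigned int *ssid_bits)
{
        u32 bits = 0;

        /* Returns an error if the property is absent; 0 is a safe default here. */
        device_property_read_u32(dev, "pasid-num-bits", &bits);
        *ssid_bits = bits;
}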

Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit 0d35309a
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Commit 986d5ecc ("iommu: Move fwspec->iommu_priv to struct dev_iommu") removed iommu_priv from the fwspec, and commit 5702ee24 ("ACPI/IORT: Check ATS capability in root complex nodes") added @flags. Update the struct doc.

Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210401154718.307519-2-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

Submitted by Yong Wu

mainline inclusion
from mainline-5.12-rc1
commit 2ebbd258
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

iotlb_sync_map allows IOMMU drivers to do a TLB sync after completing the whole mapping. This patch adds iova and size as parameters to it. The IOMMU driver can then flush the TLB for the whole range once after the IOVA mapping, to improve performance.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-3-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
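
Driver-side sketch of the callback shape after this change: with iova and size passed in, the driver can flush the just-mapped range in one go. The hardware flush call is a hypothetical placeholder.

#include <linux/iommu.h>

static void my_iommu_iotlb_sync_map(struct iommu_domain *domain,
                                    unsigned long iova, size_t size)
{
        /* e.g. my_hw_flush_range(domain, iova, size);  -- hypothetical hardware hook */
}

/* Wired into the driver's iommu_ops as .iotlb_sync_map = my_iommu_iotlb_sync_map */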

- 09 April 2021, 1 commit

Submitted by Yong Wu

stable inclusion
from stable-5.10.20
commit 48e6713713575184244e84c8d66d42d78d1832a1
bugzilla: 50608

--------------------------------

[ Upstream commit 862c3715 ]

Currently gather->end is "unsigned long", which may overflow on 32-bit architectures in the corner case 0xfff00000 + 0x100000 (iova + size). Although this doesn't affect the size (end - start), it affects the check "gather->end < end". This patch changes "end" to the real end address (end = start + size - 1). Correspondingly, the length is updated to "end - start + 1".

Fixes: a7d20dc1 ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
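
A simplified sketch of the bookkeeping described here; it mirrors the shape of the gather helper but is not the exact kernel code.

#include <linux/types.h>

struct gather_range {
        unsigned long start;
        unsigned long end;      /* inclusive after this change */
};

/*
 * Accumulate [iova, iova + size) using an inclusive end, so that
 * 0xfff00000 + 0x100000 no longer wraps a 32-bit unsigned long to 0.
 */
static void gather_add(struct gather_range *g, unsigned long iova, size_t size)
{
        unsigned long start = iova, end = start + size - 1;

        if (g->start > start)
                g->start = start;
        if (g->end < end)
                g->end = end;
        /* when flushing, the length is end - start + 1 */
}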

- 09 March 2021, 1 commit

Submitted by Joerg Roedel

stable inclusion
from stable-5.10.15
commit e2bb221a16ac90a98c4fd2a6a4e0e096cba3bd45
bugzilla: 48167

--------------------------------

commit 4c9fb5d9 upstream.

dev_iommu_priv_get() needs a check similar to dev_iommu_fwspec_get() to make sure no NULL pointer is dereferenced.

Fixes: 05a0542b ("iommu/amd: Store dev_data as device iommu private data")
Cc: stable@vger.kernel.org # v5.8+
Link: https://lore.kernel.org/r/20210202145419.29143-1-joro@8bytes.org
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=211241
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
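
A sketch of the guarded-accessor pattern this fix describes: bail out with NULL when the per-device IOMMU data was never allocated, instead of dereferencing dev->iommu unconditionally. Field names follow the mainline layout; the function is renamed here to keep the sketch standalone.

#include <linux/device.h>
#include <linux/iommu.h>

static inline void *example_dev_iommu_priv_get(struct device *dev)
{
        if (!dev->iommu)
                return NULL;

        return dev->iommu->priv;
}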

- 01 October 2020, 2 commits

Submitted by Jacob Pan

IOMMU user APIs are responsible for processing user data. This patch changes the interface such that user pointers can be passed into IOMMU code directly. Separate kernel APIs without user pointers are introduced for in-kernel users of the UAPI functionality.

IOMMU UAPI data has a user-filled argsz field which indicates the data length of the structure. User data is not trusted; argsz must be validated based on the current kernel data size, the mandatory data size, and feature flags. User data may also be extended, resulting in a possible argsz increase. Backward compatibility is ensured based on size and flags (or their functional equivalent fields) checking.

This patch adds sanity checks in the IOMMU layer. In addition to argsz, reserved/unused fields in padding, flags, and version are also checked. Details are documented in Documentation/userspace-api/iommu.rst

Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/1601051567-54787-6-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
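
A generic sketch of the argsz pattern described here, not the exact code from this patch: read the user-filled size first, reject anything smaller than the mandatory fields, then copy at most what the current kernel knows about. The struct and size names are illustrative.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct example_uapi_data {
        __u32 argsz;
        __u32 flags;
        __u64 payload;
};

#define EXAMPLE_UAPI_MINSZ offsetofend(struct example_uapi_data, flags)

static int example_copy_uapi(struct example_uapi_data *kdata, void __user *udata)
{
        u32 argsz;

        if (copy_from_user(&argsz, udata, sizeof(argsz)))
                return -EFAULT;
        if (argsz < EXAMPLE_UAPI_MINSZ)         /* must cover the mandatory fields */
                return -EINVAL;

        memset(kdata, 0, sizeof(*kdata));
        /* Newer user structs may be larger; only copy what this kernel understands. */
        if (copy_from_user(kdata, udata, min_t(u32, argsz, sizeof(*kdata))))
                return -EFAULT;

        return 0;
}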

Submitted by Jacob Pan

User APIs such as iommu_sva_unbind_gpasid() may also be used by the kernel. Since we introduced a user pointer to the UAPI functions, in-kernel callers cannot share the same APIs. In-kernel callers are also trusted; there is no need to validate the data.

We plan to have two flavors of the same API functions, one called through ioctls, carrying a user pointer, and one called directly with valid IOMMU UAPI structs. To differentiate both, let's rename the existing functions with an iommu_uapi_ prefix.

Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/1601051567-54787-5-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 18 September 2020, 1 commit

Submitted by Fenghua Yu

PASID is defined as a few different types in the iommu code, including "int", "u32", and "unsigned int". To be consistent and to match the uapi definitions, define PASID and its variations (e.g. max PASID) as "u32". "u32" is also shorter and a little more explicit than "unsigned int".

No PASID type change in uapi, although it defines PASID as __u64 in some places.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Link: https://lkml.kernel.org/r/1600187413-163670-2-git-send-email-fenghua.yu@intel.com

- 04 September 2020, 1 commit

Submitted by Tom Murphy

To keep naming consistent, we should stick with *iotlb*. This patch renames a few remaining functions.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Link: https://lore.kernel.org/r/20200817210051.13546-1-murphyt7@tcd.ie
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 09 July 2020, 1 commit

Submitted by Will Deacon

The IOMMU_SYS_CACHE_ONLY flag was never exposed via the DMA API and has no in-tree users. Remove it.

Cc: Robin Murphy <robin.murphy@arm.com>
Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>

- 30 June 2020, 1 commit

Submitted by Marek Szyprowski

Move the recently added sg_table wrapper out of CONFIG_IOMMU_SUPPORT so that client code also compiles when IOMMU support is disabled.

Fixes: 48530d9f ("iommu: add generic helper for mapping sgtable objects")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200630081756.18526-1-m.szyprowski@samsung.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 29 May 2020, 1 commit

Submitted by Jean-Philippe Brucker

After binding a device to an mm, device drivers currently need to register a mm_exit handler. This function is called when the mm exits, to gracefully stop DMA targeting the address space and flush page faults to the IOMMU. This is deemed too complex for the MMU release() notifier, which may be triggered by any mmput() invocation, from about 120 callsites [1].

The upcoming SVA module has an example of such complexity: the I/O Page Fault handler would need to call mmput_async() instead of mmput() after handling an IOPF, to avoid triggering the release() notifier, which would in turn drain the IOPF queue and lock up.

Another concern is the DMA stop function taking too long, up to several minutes [2]. For some mmput() callers this may disturb other users. For example, if the OOM killer picks the mm bound to a device as the victim and that mm's memory is locked, and the release() takes too long, it might choose additional innocent victims to kill.

To simplify the MMU release notifier, don't forward the notification to device drivers. Since they don't stop DMA on mm exit anymore, the PASID lifetime is extended:

(1) The device driver calls bind(). A PASID is allocated. Here any DMA fault is handled by the mm, and on error we don't print anything to dmesg. Userspace can easily trigger errors by issuing DMA on unmapped buffers.

(2) exit_mmap(), for example because the process took a SIGKILL. This step doesn't happen during normal operation. Remove the pgd from the PASID table, since the page tables are about to be freed. Invalidate the IOTLBs. Here the device may still perform DMA on the address space. Incoming transactions are aborted but faults aren't printed out. ATS Translation Requests return Successful Translation Completions with R=W=0. PRI Page Requests return with Invalid Request.

(3) The device driver stops DMA, possibly following release of a fd, and calls unbind(). The PASID table is cleared and the IOTLB invalidated if necessary. The page fault queues are drained, and the PASID is freed. If DMA for that PASID is still running here, something went seriously wrong and errors should be reported.

For now remove iommu_sva_ops entirely. We might need to re-introduce them at some point, for example to notify device drivers of unhandled IOPF.

[1] https://lore.kernel.org/linux-iommu/20200306174239.GM31668@ziepe.ca/
[2] https://lore.kernel.org/linux-iommu/4d68da96-0ad5-b412-5987-2f7a6aa796c3@amd.com/

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200423125329.782066-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
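
A sketch of the driver-side PASID lifetime after this change: bind once, program the PASID into the device, and unbind when the driver itself decides DMA is done (there is no mm_exit callback anymore). The drvdata argument matches the API of this era; later kernels dropped it.

#include <linux/err.h>
#include <linux/iommu.h>

static int start_sva_job(struct device *dev, struct mm_struct *mm)
{
        struct iommu_sva *handle;
        u32 pasid;

        handle = iommu_sva_bind_device(dev, mm, NULL);  /* step (1): bind */
        if (IS_ERR(handle))
                return PTR_ERR(handle);

        pasid = iommu_sva_get_pasid(handle);
        /* ... program 'pasid' into the device and submit work ... */

        /* step (3): the driver stops DMA and unbinds; (2) exit_mmap may happen in between */
        iommu_sva_unbind_device(handle);
        return 0;
}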

- 15 May 2020, 1 commit

Submitted by Sai Praneeth Prakhya

After moving iommu_group setup to the iommu core code [1][2] and removing private domain support in vt-d [3], there are no users for functions such as iommu_request_dm_for_dev(), iommu_request_dma_domain_for_dev() and request_default_domain_for_dev(). So, remove these functions.

[1] commit dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs")
[2] commit e5d1841f ("iommu/vt-d: Convert to probe/release_device() call-backs")
[3] commit 327d5b2f ("iommu/vt-d: Allow 32bit devices to uses DMA domain")

Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200513224721.20504-1-sai.praneeth.prakhya@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 13 May 2020, 1 commit

Submitted by Marek Szyprowski

struct sg_table is a common structure used for describing a memory buffer. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).

It turned out to be a common mistake to misuse the nents and orig_nents entries, calling mapping functions with a wrong number of entries. To avoid such issues, let's introduce a common wrapper operating directly on struct sg_table objects, which takes care of the proper use of the nents and orig_nents entries.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
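
Usage sketch of the sg_table wrapper on the IOMMU side: the caller no longer has to pick between nents and orig_nents, since the helper passes the right (CPU-side) count down to the underlying scatterlist mapping. The surrounding function and its error convention are illustrative.

#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/scatterlist.h>

static int map_buffer(struct iommu_domain *domain, unsigned long iova,
                      struct sg_table *sgt)
{
        size_t mapped;

        mapped = iommu_map_sgtable(domain, iova, sgt,
                                   IOMMU_READ | IOMMU_WRITE);
        return mapped ? 0 : -ENOMEM;    /* 0 bytes mapped is treated as failure here */
}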

- 05 May 2020, 5 commits

Submitted by Joerg Roedel

The function is now only used in IOMMU core code and shouldn't be used outside of it anyway, so remove the export for it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200429133712.31431-35-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Submitted by Joerg Roedel

All drivers are converted to use the probe/release_device() call-backs, so the add_device/remove_device() pointers are unused and the code using them can be removed.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200429133712.31431-33-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Submitted by Joerg Roedel

Add a check to the bus_iommu_probe() call-path to make sure it ignores devices which have already been successfully probed, then export the bus_iommu_probe() function so it can be used by IOMMU drivers.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200429133712.31431-14-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Submitted by Joerg Roedel

Add call-backs to 'struct iommu_ops' as an alternative to the add_device() and remove_device() call-backs, which will be removed when all drivers are converted. The new call-backs will not set up IOMMU groups and domains anymore, so also add a probe_finalize() call-back where the IOMMU driver can do per-device setup work which requires the device to be set up with a group and a domain.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200429133712.31431-8-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
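
A sketch of a driver adopting the new call-backs; my_iommu and the my_* functions are illustrative placeholders for driver state, not code from this patch.

#include <linux/iommu.h>

static struct iommu_device my_iommu;    /* registered elsewhere with iommu_device_register() */

static struct iommu_device *my_probe_device(struct device *dev)
{
        /* Per-device lookup/allocation only; no group or domain work here anymore. */
        return &my_iommu;
}

static void my_probe_finalize(struct device *dev)
{
        /* Runs after the core has attached the device to a group and a default domain. */
}

static void my_release_device(struct device *dev)
{
        /* Undo whatever probe_device() set up. */
}

static const struct iommu_ops my_iommu_ops = {
        .probe_device   = my_probe_device,
        .probe_finalize = my_probe_finalize,
        .release_device = my_release_device,
        /* ... other callbacks ... */
};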

Submitted by Sai Praneeth Prakhya

Some devices are required to use a specific type (identity or dma) of default domain when they are used with a vendor IOMMU. When the system-level default domain type is different from it, the vendor IOMMU driver has to request a new default domain with iommu_request_dma_domain_for_dev() or iommu_request_dm_for_dev() in the add_dev() callback. Unfortunately, these two helpers only work when the group hasn't been assigned to any other device; hence, some vendor IOMMU drivers have to use a private domain if they fail to request a new default one.

This adds a def_domain_type() callback in the iommu_ops, so that any special default-domain requirement of a device can be made known to the generic IOMMU layer.

Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
[ jroedel@suse.de: Added iommu_get_def_domain_type() function and use it to allocate the default domain ]
Co-developed-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20200429133712.31431-3-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
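
A sketch of a driver's def_domain_type() hook: return a preferred IOMMU_DOMAIN_* type for devices with special requirements, or 0 to let the core use the system-wide default. The PCI IDs in the quirk check are hypothetical.

#include <linux/iommu.h>
#include <linux/pci.h>

static int my_def_domain_type(struct device *dev)
{
        struct pci_dev *pdev = dev_is_pci(dev) ? to_pci_dev(dev) : NULL;

        /* Hypothetical quirk: this device must be identity-mapped. */
        if (pdev && pdev->vendor == 0x1234 && pdev->device == 0x5678)
                return IOMMU_DOMAIN_IDENTITY;

        return 0;       /* no preference; use the system default */
}

/* Hooked up in the driver's iommu_ops as .def_domain_type = my_def_domain_type */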

- 27 March 2020, 3 commits

Submitted by Joerg Roedel

Move the pointer for iommu private data from struct iommu_fwspec to struct dev_iommu.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Will Deacon <will@kernel.org> # arm-smmu
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20200326150841.10083-17-joro@8bytes.org

Submitted by Joerg Roedel

Add dev_iommu_priv_get/set() functions to access per-device iommu private data. This makes it easier to move the pointer to a different location.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Will Deacon <will@kernel.org> # arm-smmu
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20200326150841.10083-9-joro@8bytes.org
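
Typical driver usage of the new accessors, as a sketch; the per-device structure is illustrative.

#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/slab.h>

struct my_dev_data {
        int stream_id;
};

static int my_attach_dev_data(struct device *dev, int stream_id)
{
        struct my_dev_data *data = kzalloc(sizeof(*data), GFP_KERNEL);

        if (!data)
                return -ENOMEM;
        data->stream_id = stream_id;
        dev_iommu_priv_set(dev, data);          /* stash driver state on the device */
        return 0;
}

static int my_lookup_stream_id(struct device *dev)
{
        struct my_dev_data *data = dev_iommu_priv_get(dev);

        return data ? data->stream_id : -ENODEV;
}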

Submitted by Joerg Roedel

Move the iommu_fwspec pointer in struct device into struct dev_iommu. This is a step in the effort to reduce the iommu-related pointers in struct device to one.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Will Deacon <will@kernel.org> # arm-smmu
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/20200326150841.10083-7-joro@8bytes.org