- 29 September 2021, 1 commit
-
-
Submitted by Hanjun Guo

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA

-------------------------------------

Support SMMU default bypass for some CPU SoCs on which the SMMU does not work well in address translation mode.

We already have the .def_domain_type hook for iommu_ops in the iommu driver, so add the CPU SoC SMMU bypass code to the .def_domain_type hook in the smmuv2 driver and return IOMMU_DOMAIN_IDENTITY for such SoCs. With this hook in place, all devices on such SoCs are put into pass-through mode, whether or not iommu.passthrough=off/on is given on the boot cmdline.

While at it, update the config SMMU_BYPASS_DEV to specify the usage.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Cc: Guo Hui <guohui@uniontech.com>
Cc: Cheng Jian <cj.chengjian@huawei.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Xiuqi Xie <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
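A minimal sketch of such a .def_domain_type hook, assuming a hypothetical platform-detection helper smmu_soc_needs_bypass() (not part of the actual patch):

  /* Sketch only: smmu_soc_needs_bypass() is a hypothetical platform check. */
  static int arm_smmu_def_domain_type(struct device *dev)
  {
      if (smmu_soc_needs_bypass())
          return IOMMU_DOMAIN_IDENTITY;   /* force pass-through */

      return 0;   /* 0 lets the IOMMU core pick the default domain type */
  }

  static struct iommu_ops arm_smmu_ops = {
      /* ... other callbacks ... */
      .def_domain_type = arm_smmu_def_domain_type,
  };

Returning IOMMU_DOMAIN_IDENTITY from this hook overrides the cmdline-selected default, which is why iommu.passthrough has no effect on such SoCs.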
-
- 31 August 2021, 1 commit
-
-
Submitted by luochunsheng

euleros inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I42DAP
CVE: NA

----------------------------------

arm64 cannot support the virtualization pass-through feature when using the 3408iMR/3416iMR RAID card. To solve the problem, we add two functions:

1. Prepare and initialize bypass level-2 entries for specific devices when the SMMU uses a 2-level stream ID table.
2. Add the smmu.bypassdev cmdline option to allow the SMMU to bypass streams for some specific devices.

Usage:
  3408iMR RAID: smmu.bypassdev=0x1000:0x17
  3416iMR RAID: smmu.bypassdev=0x1000:0x15

Signed-off-by: luochunsheng <luochunsheng@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
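A rough sketch of how such a cmdline option might be parsed, assuming (only from the usage lines above) a "<hex>:<hex>" pair per option; the variable names and the meaning attached to the two fields are illustrative, not taken from the patch:

  /* Sketch only: parse "smmu.bypassdev=<hex>:<hex>" from the boot cmdline.
   * The semantics of the two fields are driver specific; here they are just
   * stored as an (id, val) pair for later lookup.
   */
  static u32 bypassdev_id, bypassdev_val;

  static int __init smmu_bypassdev_setup(char *str)
  {
      if (sscanf(str, "%x:%x", &bypassdev_id, &bypassdev_val) != 2)
          pr_warn("smmu.bypassdev: invalid format '%s'\n", str);
      return 1;
  }
  __setup("smmu.bypassdev=", smmu_bypassdev_setup);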
-
- 28 July 2021, 8 commits
-
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

Due to limited hardware resources, the number of ECMDQs may be less than the number of cores. If the number of ECMDQs is greater than the number of NUMA nodes, ensure that each node has at least one ECMDQ. Because ECMDQ queue memory is requested from the NUMA node where the queue resides, this may result in better command-filling and insertion performance.

The current ECMDQ implementation reuses the command insertion function arm_smmu_cmdq_issue_cmdlist() of the normal CMDQ. This function already supports concurrent command insertion from multiple cores.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
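A standalone illustration of the sizing rule described above (not driver code): when there are fewer ECMDQs than cores but at least as many as NUMA nodes, spread them so that every node gets at least one queue, with the remainder going to the first nodes.

  #include <stdio.h>

  /* How many ECMDQs a given node receives under the "at least one per node"
   * rule; extras are handed out to the lowest-numbered nodes. */
  static int ecmdqs_for_node(int nr_ecmdqs, int nr_nodes, int node)
  {
      int base = nr_ecmdqs / nr_nodes;
      int extra = nr_ecmdqs % nr_nodes;

      return base + (node < extra ? 1 : 0);
  }

  int main(void)
  {
      int node, nr_ecmdqs = 10, nr_nodes = 4;

      for (node = 0; node < nr_nodes; node++)
          printf("node %d: %d ECMDQ(s)\n", node,
                 ecmdqs_for_node(nr_ecmdqs, nr_nodes, node));
      return 0;
  }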
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

When a core exclusively owns an ECMDQ, contention with other cores does not need to be considered during command insertion. Therefore, we can drop the part of arm_smmu_cmdq_issue_cmdlist() that deals with multi-core contention and generate a more efficient ECMDQ-specific function, arm_smmu_ecmdq_issue_cmdlist().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

A SYNC command only guarantees that the commands preceding it in the same ECMDQ have been executed; it cannot synchronize commands in other ECMDQs. If an unmap involves multiple commands, some commands may be executed on one core and the others on another core. In that case, completion of the SYNC does not guarantee that all preceding commands have been executed.

Preventing the process that inserts a set of associated commands from being migrated to another core ensures that all of those commands are inserted into the same ECMDQ.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
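One common way to keep a task on its current CPU across a short critical section is get_cpu()/put_cpu(), which disables preemption (and therefore migration). The function below is a hedged sketch of that pattern, not the patch itself, and the surrounding helper names are illustrative.

  /* Sketch: keep a group of related commands plus their trailing CMD_SYNC
   * on one CPU so they all land in the same ECMDQ. */
  static void issue_unmap_cmds(struct arm_smmu_device *smmu)
  {
      get_cpu();    /* disables preemption: no migration from here on */

      /* ... build and insert the TLBI command(s) and the trailing CMD_SYNC;
       *     every insertion picks the ECMDQ of the current CPU ... */

      put_cpu();    /* re-enable preemption */
  }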
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

Ensure that each core exclusively occupies an ECMDQ and that all ECMDQs are enabled during initialization. Any error during this initialization results in a fallback to the normal CMDQ.

When GERROR is triggered by an ECMDQ, all ECMDQs need to be traversed: the ECMDQs with errors are processed and the ECMDQs without errors are skipped.

Compared with register SMMU_CMDQ_PROD, register SMMU_ECMDQ_PROD has an extra 'EN' bit and an extra 'ERRACK' bit. Therefore, an extra member 'ecmdq_prod' is added to record the values of these two bits, and its value is ORed in each time register SMMU_ECMDQ_PROD is updated. After the error indicated by SMMU_GERROR.CMDQP_ERR is fixed, the 'ERRACK' bit needs to be toggled to resume the corresponding ECMDQ. Therefore, an rwlock is used to protect the write to the 'ERRACK' bit during error handling and the read of the 'ERRACK' bit during command insertion.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
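A minimal sketch of the bookkeeping described above; the structure layout and names are illustrative only, and no architected bit positions for EN/ERRACK are assumed here.

  /* Sketch only: shadow the extra EN/ERRACK state of SMMU_ECMDQ_PROD and
   * OR it into every software PROD update. */
  struct ecmdq_prod_shadow {
      void __iomem  *prod_reg;     /* SMMU_ECMDQ_PROD register           */
      u32            prod;         /* shadow of the EN/ERRACK bits       */
      rwlock_t       pgerr_lock;   /* write side: GERROR error handling,
                                    * read side:  command insertion      */
  };

  static void ecmdq_update_prod(struct ecmdq_prod_shadow *q, u32 sw_prod)
  {
      read_lock(&q->pgerr_lock);
      /* every PROD write ORs in the shadowed EN/ERRACK state */
      writel_relaxed(sw_prod | READ_ONCE(q->prod), q->prod_reg);
      read_unlock(&q->pgerr_lock);
  }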
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

When SMMU_GERROR.CMDQP_ERR differs from SMMU_GERRORN.CMDQP_ERR, one or more errors have been encountered on a command queue control page interface. We need to traverse all ECMDQs in that control page to find all of the errors. Error handling for each ECMDQ is much the same as for the CMDQ, so the common part is extracted into a new function, __arm_smmu_cmdq_skip_err().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

An SMMU has only one normal CMDQ, so that CMDQ is used regardless of the core on which a command is inserted, and it can be referenced directly through "smmu->cmdq". However, an SMMU has multiple ECMDQs, and the ECMDQ to use depends on the core executing the command insertion. So the helper function arm_smmu_get_cmdq() is added, which returns the CMDQ/ECMDQ that the current core should use. Since the code that supports ECMDQ is not added yet, it simply returns "&smmu->cmdq" for now.

Many subfunctions of arm_smmu_cmdq_issue_cmdlist() use "&smmu->cmdq" or "&smmu->cmdq.q" directly. To support ECMDQ, they need to call the newly added arm_smmu_get_cmdq() instead. Note that the normal CMDQ is still required until ECMDQ is available.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
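As described above, at this point in the series the helper is only a trivial indirection; a later patch makes it pick the per-core ECMDQ. A sketch consistent with that description:

  /* Returns the command queue the calling core should use. Until ECMDQ
   * support lands, that is always the one normal CMDQ. */
  static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu)
  {
      return &smmu->cmdq;
  }

Callers that previously dereferenced "&smmu->cmdq" or "&smmu->cmdq.q" go through this helper instead, so the ECMDQ selection logic later only has to change one place.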
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Inserting as many commands as possible in one call reduces the number of times the mutex is contended, thereby improving overall performance; at a minimum, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist().

Therefore, add arm_smmu_cmdq_issue_cmd_with_sync() to insert the 'cmd+sync' pair of commands in a single call.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
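A hedged sketch of what such a helper could look like, built on the existing arm_smmu_cmdq_build_cmd() and arm_smmu_cmdq_issue_cmdlist() helpers in the upstream driver; the exact body of the patch may differ.

  /* Sketch: issue one command plus its CMD_SYNC in a single call, so the
   * contended command-queue insertion path is entered only once. */
  static int arm_smmu_cmdq_issue_cmd_with_sync(struct arm_smmu_device *smmu,
                                               struct arm_smmu_cmdq_ent *ent)
  {
      u64 cmd[CMDQ_ENT_DWORDS];

      if (arm_smmu_cmdq_build_cmd(cmd, ent))
          return -EINVAL;

      /* the final 'true' asks for a trailing CMD_SYNC to be appended */
      return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, true);
  }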
-
Submitted by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Inserting as many commands as possible in one call reduces the number of times the mutex is contended, thereby improving overall performance; at a minimum, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist().

Therefore, use the command queue batching helpers to insert multiple commands at a time.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
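For illustration, the upstream driver's batching helpers are typically used like the sketch below; the wrapper function here is hypothetical, only the arm_smmu_cmdq_batch_add()/arm_smmu_cmdq_batch_submit() usage pattern is the point.

  /* Sketch: queue several related commands in a batch and submit them with
   * one call (one contention window) instead of one call per command. */
  static void invalidate_two_asids(struct arm_smmu_device *smmu,
                                   u16 asid1, u16 asid2)
  {
      struct arm_smmu_cmdq_batch cmds = {};
      struct arm_smmu_cmdq_ent cmd = {
          .opcode = CMDQ_OP_TLBI_NH_ASID,
      };

      cmd.tlbi.asid = asid1;
      arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
      cmd.tlbi.asid = asid2;
      arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);

      /* submit issues the whole batch and waits on a trailing CMD_SYNC */
      arm_smmu_cmdq_batch_submit(smmu, &cmds);
  }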
-
- 23 July 2021, 1 commit
-
-
Submitted by LeoLiu-oc

zhaoxin inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA

----------------------------------------------------------------

Some ACPI devices need to issue DMA requests to access the reserved memory area. The BIOS uses the device scope type ACPI_NAMESPACE_DEVICE in the RMRR to report these ACPI devices. This patch adds support for detecting ACPI devices in the RMRR and, to distinguish them from PCI devices, modifies some interface functions.

Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 19 July 2021, 19 commits
-
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

There is no reason to apply the shift operation to 'size' in arm_smmu_cache_invalidate().

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In __arm_smmu_tlb_inv_range(), the 'ttl' field of the TLB invalidation command is calculated based on the granule size when the SMMU supports RIL. There are some scenarios we need to avoid, which are pointed out in the SMMUv3 spec (pages 143-144, version D.a). Add a check to ensure that the granule size is supported by the SMMU before setting the 'ttl' value.

Reported-by: Nianyao Tang <tangnianyao@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
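A standalone illustration of the guard described above (not the driver code): only derive a leaf-level TTL hint when the invalidation granule is one of the page/block sizes the table actually uses, otherwise fall back to TTL = 0 ("level unknown"). The level formula assumes a 4K/16K/64K granule table where each level resolves (log2(pgsize) - 3) bits; the real TTL encoding in the spec has more cases.

  #include <stdint.h>

  static int ilog2_u64(uint64_t v)
  {
      int n = -1;

      while (v) {
          v >>= 1;
          n++;
      }
      return n;
  }

  static int tlbi_ttl(uint64_t granule, uint64_t pgsize, uint64_t pgsize_bitmap)
  {
      int bits_per_level = ilog2_u64(pgsize) - 3;

      if (!(granule & pgsize_bitmap))
          return 0;    /* granule not used by this table: give no hint */

      /* e.g. 4K pages: 4K -> level 3, 2M -> level 2, 1G -> level 1 */
      return 3 - (ilog2_u64(granule) - ilog2_u64(pgsize)) / bits_per_level;
  }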
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In __arm_smmu_tlb_inv_range(), the fields of the TLBI command are calculated based on the invalidation range and the leaf page size when the SMMU supports RIL. This causes errors when the invalidation range is not aligned to the leaf page size:

1. num_pages will be zero if the invalidation range is smaller than the leaf page size, and __arm_smmu_tlb_inv_range() then enters an endless loop.
2. If the invalidation range is not an integral multiple of the leaf page size, only part of the intended range is actually invalidated.

Aligning the invalidation range up to the leaf page size solves both issues.

Reported-by: Nianyao Tang <tangnianyao@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
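A standalone sketch of the alignment idea: rounding the invalidation window outward to leaf-page granularity guarantees num_pages is never zero and the whole requested range is covered. (The actual patch may align only the size; this is one way to express the same invariant.)

  #include <inttypes.h>
  #include <stdio.h>

  static void align_inv_range(uint64_t iova, uint64_t size, uint64_t pgsize,
                              uint64_t *out_iova, uint64_t *out_num_pages)
  {
      uint64_t start = iova & ~(pgsize - 1);                        /* align down */
      uint64_t end = (iova + size + pgsize - 1) & ~(pgsize - 1);    /* align up   */

      *out_iova = start;
      *out_num_pages = (end - start) / pgsize;
  }

  int main(void)
  {
      uint64_t iova, num_pages;

      align_inv_range(0x1001, 0x10, 0x1000, &iova, &num_pages);
      /* prints iova=0x1000 num_pages=1 instead of a zero-page request */
      printf("iova=%#" PRIx64 " num_pages=%" PRIu64 "\n", iova, num_pages);
      return 0;
  }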
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

According to the SMMUv3 spec, CMD_TLBI_NSNH_ALL invalidates all Non-secure, non-NS-EL2, non-NS-EL2-E2H TLB entries at all implemented stages. Obviously, it is inappropriate to use it here, so replace it with CMD_TLBI_NH_ALL.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

sync/clear_dirty_log() can support Arm page table formats other than the 32/64-bit stage 1 format, so it is better to remove the limitation.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

The data access permission bits of stage 2 (S2AP[1:0]) are different from those of stage 1 (AP[2:1]). For stage 2, S2AP[1] controls write access and S2AP[0] controls read access. For stage 1, AP[1] selects between Application-level (EL0) control and the higher Exception level control, and AP[2] selects between read-only and read/write access. These differences need to be handled.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
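A standalone sketch of the encoding difference just described, computing only the 2-bit field values (their position inside the descriptor is deliberately not assumed here):

  #include <stdbool.h>
  #include <stdint.h>

  /* Stage 2: S2AP[0] = read allowed, S2AP[1] = write allowed. */
  static uint8_t s2ap_bits(bool read, bool write)
  {
      return (read ? 0x1 : 0) | (write ? 0x2 : 0);
  }

  /* Stage 1: AP[2] set = read-only, clear = read/write;
   *          AP[1] set = EL0 (unprivileged) access allowed.
   * Returned as a 2-bit field: bit 0 = AP[1], bit 1 = AP[2]. */
  static uint8_t s1_ap_bits(bool write, bool unprivileged)
  {
      uint8_t ap = 0;

      if (!write)
          ap |= 0x2;    /* AP[2]: read-only */
      if (unprivileged)
          ap |= 0x1;    /* AP[1]: EL0 access */
      return ap;
  }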
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

An SMMU that supports HTTU (Hardware Translation Table Update) can update the Access flag and the Dirty state bits in hardware, which is essential for tracking pages dirtied by DMA.

If stage 2 translation is enabled and the SMMU supports HTTU, we set S2HA (stage 2 Access flag update) and S2HD (stage 2 Dirty flag update), and set the DBM bit of the stage 2 TTD.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

Up to now we have only reported translation faults. Now that the guest can induce some configuration faults, let's report them too. Add propagation for BAD_SUBSTREAMID, CD_FETCH, BAD_CD and WALK_EABT. We also fix the transcoding of some existing translation faults.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

The bind/unbind_guest_msi() callbacks check that the domain is NESTED and redirect to the dma-iommu implementation.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

Nested mode currently is not compatible with HW MSI reserved regions. Indeed, MSI transactions targeting these MSI doorbells bypass the SMMU. Let's check that nested mode is not attempted in such a configuration.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In nested mode we enforce the rule that all devices belonging to the same iommu_domain share the same msi_domain. Indeed, if several physical MSI doorbells were used within a single iommu_domain, it would become really difficult to resolve the nested stage mapping into the correct physical doorbell. So let's forbid this situation.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

Up to now, when the type was UNMANAGED, we used to allocate IOVA pages within a reserved IOVA MSI range.

If both the host and the guest are exposed with SMMUs, each allocates its own IOVA. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB). So we end up with two unrelated mappings, at S1 and S2:

         S1             S2
 gIOVA -> gDB
                hIOVA -> hDB

The PCI device would be programmed with hIOVA. No stage 1 mapping would exist, causing the MSIs to fault.

iommu_dma_bind_guest_msi() allows gIOVA/gDB to be passed to the host so that gIOVA can be used by the host instead of re-allocating a new hIOVA:

         S1           S2
 gIOVA -> gDB  ->  hDB

This time, the PCI device can be programmed with the gIOVA MSI doorbell, which is correctly mapped through both stages.

Nested mode is not compatible with HW MSI regions, as in that case gDB and hDB should have a 1-1 mapping. This check will be done when attaching each device to the IOMMU domain.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

Implement domain-selective, PASID-selective and page-selective IOTLB invalidations.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

With nested stage support, we will soon need to invalidate S1 contexts and ranges tagged with an unmanaged ASID, the latter being managed by the guest. So let's introduce two helpers that allow invalidation with externally managed ASIDs.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

On attach_pasid_table() we program the stage 1 related info set by the guest into the actual physical STEs. At a minimum we need to program the context descriptor GPA and compute whether stage 1 is translated/bypassed or aborted. On detach, the stage 1 config is unset and the abort flag is unset.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

When nested stage translation is set up, both s1_cfg and s2_cfg are set.

We introduce a new smmu_domain 'abort' field that is set when the guest passes its stage 1 configuration. If no guest stage 1 config has been attached, it is ignored when writing the STE.

arm_smmu_write_strtab_ent() is modified to write both stage fields in the STE and to deal with the abort field. In nested mode, only stage 2 is "finalized", as the host does not own/configure the stage 1 context descriptor; the guest does.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In true nested mode, both s1_cfg and s2_cfg coexist. Let's remove the union and add a "set" field in each config structure telling whether the config is set and needs to be applied when writing the STE. In legacy nested mode, only the second stage is used. In true nested mode, both stages are used and the S1 config is "set" when the guest passes its PASID table. No functional change intended.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
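A minimal sketch of the data-structure change described above; only the s1_cfg/s2_cfg/set shape is taken from the description, the remaining fields of the real driver structures are elided.

  /* Before: s1_cfg and s2_cfg shared a union, so only one could exist.
   * After:  both coexist, and each carries a 'set' flag that the STE
   *         writer consults to know whether that stage must be programmed. */
  struct arm_smmu_s1_cfg {
      bool set;
      /* ... context descriptor table info, owned by the guest in nested mode ... */
  };

  struct arm_smmu_s2_cfg {
      bool set;
      /* ... VTTBR/VTCR/VMID info, owned by the host ... */
  };

  struct arm_smmu_domain_cfgs {          /* illustrative container */
      struct arm_smmu_s1_cfg s1_cfg;     /* no longer in a union */
      struct arm_smmu_s2_cfg s2_cfg;
  };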
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

On ARM, MSIs are translated by the SMMU. An IOVA is allocated for each MSI doorbell. If both the host and the guest are exposed with SMMUs, we end up with two different IOVAs, one allocated by each. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB).

So we end up with two untied mappings:

         S1             S2
 gIOVA -> gDB
                hIOVA -> hDB

Currently the PCI device is programmed by the host with hIOVA as the MSI doorbell, so this does not work.

This patch introduces an API to pass gIOVA/gDB to the host so that gIOVA can be reused by the host instead of re-allocating a new IOVA. The goal is to create the following nested mapping:

         S1           S2
 gIOVA -> gDB  ->  hDB

and to program the PCI device with the gIOVA MSI doorbell.

In case several devices are attached to this nested domain (devices belonging to the same group), they cannot be isolated on the guest side either, so they should also end up in the same domain on the guest side. We enforce that all the devices attached to the host iommu domain use the same physical doorbell, and similarly a single virtual doorbell mapping gets registered (a single virtual doorbell is used on the guest as well).

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In the virtualization use case, when a guest is assigned a PCI host device protected by a virtual IOMMU on the guest, the physical IOMMU must be programmed to be consistent with the guest mappings. If the physical IOMMU supports two translation stages, it makes sense to program guest mappings onto the first stage/level (ARM/Intel terminology) while the host owns stage/level 2.

In that case, it is mandatory to trap guest configuration settings and pass them to the physical iommu driver. This patch adds a new API to the iommu subsystem that allows setting/unsetting the PASID table information.

A generic iommu_pasid_table_config struct is introduced in a new iommu.h uapi header. This is going to be used by the VFIO user API.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 16 July 2021, 10 commits
-
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

We have implemented the interfaces required to support IOMMU dirty log tracking. The last step is to report this feature to the upper-layer user, so the user can apply a higher-level policy based on it. For ARM SMMUv3, the feature corresponds to ARM_SMMU_FEAT_HD.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements the clear_dirty_log iommu op based on the clear_dirty_log io-pgtable op.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements the sync_dirty_log iommu op based on the sync_dirty_log io-pgtable op.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements switch_dirty_log. In order to get a finer dirty granule, it invokes arm_smmu_split_block() when dirty log tracking starts, and invokes arm_smmu_merge_page() to recover block mappings when dirty log tracking stops.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This detects the BBML feature and, if the SMMU supports it, transfers the BBMLx quirk to io-pgtable.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

As nested mode is not upstreamed yet, we only aim to support dirty log tracking for stage 1 with io-pgtable mappings (i.e. SVA mappings are not supported). If HTTU is supported, we enable the HA/HD bits in the SMMU CD and transfer the ARM_HD quirk to io-pgtable.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

After the dirty log is retrieved, the user should clear it to re-enable dirty log tracking for these dirtied pages. This clears the dirty state (as we only set the DBM bit for stage 1 mappings, this means setting the AP[2] bit) of the leaf TTDs specified by the user-provided bitmap.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
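A standalone sketch of the DBM-based dirty scheme used in this and the previous patch, assuming the usual VMSAv8-64 stage 1 leaf-descriptor layout (DBM at bit 51, AP[2] at bit 7); verify the bit positions against the Arm ARM before reuse.

  #include <stdbool.h>
  #include <stdint.h>

  #define TTD_DBM     (1ULL << 51)    /* Dirty Bit Modifier */
  #define TTD_AP2_RO  (1ULL << 7)     /* AP[2]: 1 = read-only */

  /* A DBM leaf TTD is "dirty" while it is writable (AP[2] clear). */
  static bool ttd_is_dirty(uint64_t ttd)
  {
      return (ttd & TTD_DBM) && !(ttd & TTD_AP2_RO);
  }

  /* Clearing the dirty state means making the page read-only again; the
   * hardware will clear AP[2] once more on the next DMA write. */
  static uint64_t ttd_clear_dirty(uint64_t ttd)
  {
      return ttd | TTD_AP2_RO;
  }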
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

During dirty log tracking, the user will try to retrieve the dirty log from the IOMMU if it supports hardware dirty logging. Scan the leaf TTDs and treat a TTD as dirty if it is writable. As we only set the DBM bit for stage 1 mappings, this means checking that AP[2] is not set.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

If block (large page) mappings are split when dirty log tracking starts, we need to recover them when dirty log tracking stops for better DMA performance. This recovers the block mappings and unmaps the span of page mappings. The BBML1 or BBML2 feature is required.

Page merging is designed to be used only by dirty log tracking, which does not run concurrently with other pgtable ops that access the underlying page table, so no race condition exists.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

Block (large page) mappings are not a proper granule for dirty log tracking. Take an extreme example: if DMA writes one byte, under a 1G mapping the dirty amount reported is 1G, but under a 4K mapping it is just 4K.

This splits a block descriptor into a span of page descriptors. The BBML1 or BBML2 feature is required.

Block splitting is designed to be used only by dirty log tracking, which does not run concurrently with other pgtable ops that access the underlying page table, so no race condition exists.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
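A standalone sketch of the address/attribute arithmetic involved when a 2M block is replaced by a table of 4K page descriptors. It assumes a 4K granule (512 entries per table) and simplifies attribute handling; installing the new table safely (break-before-make or BBML1/BBML2) is deliberately omitted.

  #include <stdint.h>
  #include <stdlib.h>

  #define PAGE_SHIFT      12
  #define PTES_PER_TABLE  512                      /* 4K granule */
  #define BLOCK_OA_MASK   0x0000ffffffe00000ULL    /* bits [47:21] */

  /* Build 512 page descriptors covering the same 2M output range with the
   * same attributes as the original block descriptor. */
  static uint64_t *split_2m_block(uint64_t block_desc)
  {
      uint64_t oa = block_desc & BLOCK_OA_MASK;     /* 2M-aligned output address */
      uint64_t attrs = block_desc & ~BLOCK_OA_MASK; /* keep the attribute bits    */
      uint64_t *table = calloc(PTES_PER_TABLE, sizeof(*table));
      int i;

      if (!table)
          return NULL;

      for (i = 0; i < PTES_PER_TABLE; i++) {
          /* page descriptors use type 0b11 instead of the block's 0b01 */
          table[i] = (attrs | (oa + ((uint64_t)i << PAGE_SHIFT))) | 0x3;
      }
      return table;
  }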
-