- 16 Jul 2021, 5 commits
-
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements the clear_dirty_log iommu op based on the clear_dirty_log io-pgtable op.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements the sync_dirty_log iommu op based on the sync_dirty_log io-pgtable op.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This implements switch_dirty_log. To get a finer dirty granule, it invokes arm_smmu_split_block() when dirty logging starts, and invokes arm_smmu_merge_page() to recover the block mapping when dirty logging stops.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
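A minimal sketch of the control flow described above. arm_smmu_split_block() and arm_smmu_merge_page() are named in the commit, but the op signature and the body here are illustrative assumptions, not the openEuler implementation:

```c
/* Illustrative sketch only: signatures and logic are assumptions. */
static int arm_smmu_switch_dirty_log(struct iommu_domain *domain, bool enable,
				     unsigned long iova, size_t size, int prot)
{
	if (enable) {
		/*
		 * Start tracking: split block (2M/1G) mappings into 4K pages
		 * so hardware dirty bits are recorded at page granularity.
		 */
		return arm_smmu_split_block(domain, iova, size);
	}

	/* Stop tracking: merge pages back into blocks to restore TLB reach. */
	return arm_smmu_merge_page(domain, iova, size, prot);
}
```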
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This detects the BBML feature and, if the SMMU supports it, transfers the BBMLx quirk to io-pgtable.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

As nested mode is not upstreamed yet, we only aim to support dirty log tracking for stage 1 with io-pgtable mapping (i.e., SVA mapping is not supported). If HTTU is supported, we enable the HA/HD bits in the SMMU CD and transfer the ARM_HD quirk to io-pgtable.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 03 Jun 2021, 17 commits
-
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=4411467daeff90c7c371cef6369e4bf8561fb00e

---------------------------------------------

In commit a3a19592 ("iommu: Add APIs for multiple domains per device"), the IOMMU API gained the concept of auxiliary domains (AUXD), which allows control of the PASID-tagged address spaces of a device. With AUXD, the PASID address spaces are not shared with the CPU, but are instead modified with iommu_map() and iommu_unmap() calls on auxiliary domains.

Add auxiliary domain support to the SMMUv3 driver. Device drivers allocate an unmanaged IOMMU domain with iommu_domain_alloc() and attach it to the device with iommu_aux_attach_domain().

The AUXD API is fairly permissive, and allows attaching an IOMMU domain in both normal and auxiliary mode at the same time - one device can be attached to the domain normally, and another device can be attached through one of its PASIDs. To avoid excessive complexity in the SMMU implementation, we impose some restrictions on supported AUXD usage:

* A domain is either in auxiliary mode or normal mode, and that state is sticky. Once detached, the domain has to be re-attached in the same mode.

* An auxiliary domain can have a single parent domain. Two devices can be attached to the same auxiliary domain only if they are attached to the same parent domain.

In practice these shouldn't be problematic, since we have the same kind of restriction on normal domains and users have been able to cope so far: at the moment a domain cannot be attached to two devices behind different SMMUs. When VFIO puts two such devices in the same container, it simply falls back to allocating two separate IOMMU domains.

Be careful with mixing ATS and PASID. PCIe does not provide a way to only invalidate non-PASID ATC entries without also invalidating all PASID-tagged ATC entries in the same address range! Try to avoid using PASID and non-PASID contexts at the same time.

FIXME: if a device is removed from the domain while we drop an auxiliary domain, there may be a problem. We remove the ctx desc, invalidate the CD cache for all devices, then invalidate the ATC for all devices. But we drop the devices lock between the two invalidations, so if a device is removed we might miss the ATC invalidation?

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=b61d8bac171c221b3d14c03eba78338bfbbb8825

---------------------------------------------

When a device or driver misbehaves, it is possible to receive events much faster than we can print them out. Ratelimit the printing of events.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
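A short sketch of the pattern, using the kernel's standard ratelimit helpers; the surrounding function and message are illustrative, not the exact hunk from this commit:

```c
#include <linux/ratelimit.h>

/* Hypothetical event-dump path: skip prints when events arrive faster
 * than the console can drain them. */
static void example_dump_event(struct device *dev, u64 *evt)
{
	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	if (!__ratelimit(&rs))
		return;	/* over budget: drop this print, keep handling the event */

	dev_info(dev, "event 0x%02llx received:\n", evt[0] & 0xff);
}
```

Note that only the printing is ratelimited; the event itself is still consumed from the queue.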
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=9c63ea52c28effb4b797e2f8528a09958185514c

---------------------------------------------

If the SMMU supports it and the kernel was built with HTTU support, enable hardware update of access and dirty flags. This is essential for shared page tables, to reduce the number of access faults on the fault queue. Normal DMA with io-pgtables doesn't currently use the access or dirty flags.

We can enable HTTU even if CPUs don't support it, because the kernel always checks for the HW dirty bit and updates the PTE flags atomically.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=027d704339ca0e1a89987eac24e60678f6844a56

---------------------------------------------

The SMMUv3 can handle invalidation targeted at TLB entries with shared ASIDs. If the implementation supports broadcast TLB maintenance, enable it and keep track of it in a feature bit. The SMMU will then be affected by inner-shareable TLB invalidations from other agents.

A major side-effect of this change is that stage-2 translation contexts are now affected by all invalidations by VMID. VMIDs are all shared, and the only ways to prevent over-invalidation, since the stage-2 page tables are not shared between CPU and SMMU, are to either disable BTM or allocate different VMIDs. This patch does not address the problem.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=b6805ab77b4ca7a844ce3b1c4584133e81e3f071

---------------------------------------------

For PCI devices that support it, enable the PRI capability and handle PRI Page Requests with the generic fault handler. It is enabled when the device driver enables IOMMU_DEV_FEAT_SVA.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA
Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=86593f8f8d03edca9cf1b29f87f524ac18fce3ae

---------------------------------------------

The SMMU provides a Stall model for handling page faults in platform devices. It is similar to PCIe PRI, but doesn't require devices to have their own translation cache. Instead, faulting transactions are parked and the OS is given a chance to fix the page tables and retry the transaction.

Enable stall for devices that support it (opt-in by firmware). When an event corresponds to a translation error, call the IOMMU fault handler. If the fault is recoverable, it will call us back to terminate or continue the stall.

To use stall, device drivers need to enable IOMMU_DEV_FEAT_IOPF, which initializes the fault queue for the device.

Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
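The driver-facing sequence this implies, using the IOMMU feature API of that kernel generation; the wrapper function and its error handling are illustrative:

```c
#include <linux/iommu.h>

/* In a platform driver's probe path: IOPF must be enabled before SVA,
 * since SVA relies on the fault queue to resolve stalled transactions. */
static int example_enable_sva(struct device *dev)
{
	int ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	if (ret)
		return ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	return ret;
}
```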
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit cdf315f9
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

When handling faults from the event or PRI queue, we need to find the struct device associated with a SID. Add an rb_tree to keep track of SIDs.

Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210401154718.307519-8-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
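A sketch of the SID lookup such a tree enables. The struct mirrors the shape of the upstream arm_smmu_stream (id, master, node); the lookup helper is written out here for illustration:

```c
#include <linux/rbtree.h>

struct arm_smmu_master;			/* driver-private, opaque here */

struct arm_smmu_stream {
	u32 id;				/* stream ID (SID) */
	struct arm_smmu_master *master;
	struct rb_node node;		/* keyed by id in the SMMU's rb_root */
};

/* Binary search over the rb_tree: O(log n) per fault, no linear scan. */
static struct arm_smmu_master *find_master_by_sid(struct rb_root *root, u32 sid)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct arm_smmu_stream *s =
			rb_entry(n, struct arm_smmu_stream, node);

		if (sid < s->id)
			n = n->rb_left;
		else if (sid > s->id)
			n = n->rb_right;
		else
			return s->master;
	}
	return NULL;
}
```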
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.13-rc1
commit 434b73e6
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The pasid-num-bits property shouldn't need a dedicated fwspec field; it's a job for device properties. Add properties for IORT, and access the number of PASID bits using device_property_read_u32().

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
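How a consumer reads the property after this change; "pasid-num-bits" is the property name from the commit, the wrapper function is illustrative:

```c
#include <linux/property.h>

static u32 example_get_ssid_bits(struct device *dev)
{
	u32 bits = 0;

	/* Works for both DT and ACPI (IORT) since it goes through device
	 * properties; returns 0 on success and leaves 'bits' untouched on
	 * failure, so 0 doubles as "no PASID support". */
	device_property_read_u32(dev, "pasid-num-bits", &bits);
	return bits;
}
```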
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.11-rc1
commit 9111aebf
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The ARMv8.1 extensions added Virtualization Host Extensions (VHE), which allow running a host kernel at EL2. When using normal DMA, Device and CPU address spaces are dissociated and do not need to implement the same capabilities, so VHE hasn't been used in the SMMU until now.

With shared address spaces, however, ASIDs are shared between MMU and SMMU, and broadcast TLB invalidations issued by a CPU are taken into account by the SMMU. TLB entries on both sides need to have an identical exception level in order to be cleared with a single invalidation.

When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal DMA mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but shouldn't be otherwise affected by this change.

Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210122151054.2833521-4-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.12-rc1
commit 51d113c3
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

When BTM isn't supported by the SMMU, send invalidations on the command queue.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210122151054.2833521-3-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.12-rc1
commit eba8d2f8
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Extract some of the cmd initialization and the ATC invalidation from arm_smmu_tlb_inv_range(), to allow an MMU notifier to invalidate a VA range by ASID.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210122151054.2833521-2-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Robin Murphy

mainline inclusion
from mainline-5.11-rc1
commit fefe8527
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The only user of tlb_flush_leaf is a particularly hairy corner of the Arm short-descriptor code, which wants a synchronous invalidation to minimise the races inherent in trying to split a large page mapping. This is already far enough into "here be dragons" territory that no sensible caller should ever hit it, and thus it really doesn't need optimising. Although using tlb_flush_walk there may technically be more heavyweight than needed, it does the job and saves everyone else having to carry around useless baggage.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Robin Murphy

mainline inclusion
from mainline-5.12-rc1
commit 86d2d921
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Since we now keep track of page 1 via a separate pointer that already encapsulates aliasing to page 0 as necessary, we can remove the clunky fixup routine and simply use the relevant bases directly. The current architecture spec (IHI0070D.a) defines SMMU_{EVENTQ,PRIQ}_{PROD,CONS} as offsets relative to page 1, so the cleanup represents a little bit of convergence as well as just lines of code saved.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/08d9bda570bb5681f11a2f250a31be9ef763b8c5.1611238182.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei

mainline inclusion
from mainline-5.12-rc1
commit 932bc8c7
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

No functional change.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Link: https://lore.kernel.org/r/20210122131448.1167-1-thunder.leizhen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kaixu Xia

mainline inclusion
from mainline-5.11-rc1
commit 2f7e8c55
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

Fix the following coccinelle warning:

./drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:36:12-26: WARNING: Assignment of 0/1 to bool variable

Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
Link: https://lore.kernel.org/r/1604744439-6846-1-git-send-email-kaixuxia@tencent.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
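The shape of the fix, for reference; the variable name matches the one flagged at arm-smmu-v3.c:36:

```c
/* Before (triggers the coccinelle warning):
 *	static bool disable_bypass = 1;
 * After: assign the boolean literal to the bool variable.
 */
static bool disable_bypass = true;
```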
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.11-rc1
commit 2f7e8c55
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The invalidate_range() notifier is called for any change to the address space. Perform the required ATC invalidations.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20201106155048.997886-5-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jean-Philippe Brucker

mainline inclusion
from mainline-5.11-rc1
commit 32784a95
category: feature
bugzilla: 51855
CVE: NA

---------------------------------------------

The sva_bind() function allows devices to access process address spaces using a PASID (aka SSID).

(1) bind() allocates or gets an existing MMU notifier tied to the (domain, mm) pair. Each mm gets one PASID.

(2) Any change to the address space calls invalidate_range(), which sends ATC invalidations (in a subsequent patch).

(3) When the process address space dies, the release() notifier disables the CD to allow reclaiming the page tables. Since release() has to be light, we do not instruct device drivers to stop DMA here; we just ignore incoming page faults from this point onwards. To avoid any event 0x0a print (C_BAD_CD) we disable translation without clearing CD.V. PCIe Translation Requests and Page Requests are silently denied. Don't clear the R bit, because the S bit can't be cleared when STALL_MODEL==0b10 (forced), and clearing R without clearing S is useless. Faulting transactions will stall and will be aborted by the IOPF handler.

(4) After stopping DMA, the device driver releases the bond by calling unbind(). We release the MMU notifier, and free the PASID and the bond.

Three structures keep track of bonds:

* arm_smmu_bond: one per {device, mm} pair, the handle returned to the device driver for a bind() request.
* arm_smmu_mmu_notifier: one per {domain, mm} pair, deals with ATS/TLB invalidations and clearing the context descriptor on mm exit.
* arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20201106155048.997886-4-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 09 Apr 2021, 1 commit
-
-
Submitted by Yong Wu

stable inclusion
from stable-5.10.20
commit 48e6713713575184244e84c8d66d42d78d1832a1
bugzilla: 50608

--------------------------------

[ Upstream commit 862c3715 ]

Currently gather->end is "unsigned long", which may overflow on 32-bit architectures in the corner case: 0xfff00000 + 0x100000 (iova + size). Although it doesn't affect the size (end - start), it affects the check "gather->end < end".

This patch changes this "end" to the real end address (end = start + size - 1). Correspondingly, update the length to "end - start + 1".

Fixes: a7d20dc1 ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
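A worked version of the corner case: on 32-bit, 0xfff00000 + 0x100000 wraps to 0, whereas the inclusive end 0xffffffff does not. The helper below is illustrative, in the style of an iommu_iotlb_gather range update:

```c
#include <linux/iommu.h>

/* Illustrative helper: accumulate [iova, iova + size) into the gather
 * range using an inclusive end so the sum cannot overflow. */
static void example_gather_add(struct iommu_iotlb_gather *gather,
			       unsigned long iova, size_t size)
{
	unsigned long end = iova + size - 1;	/* inclusive end: no wrap */

	if (gather->start > iova)
		gather->start = iova;
	if (gather->end < end)		/* safe even for iova = 0xfff00000 */
		gather->end = end;
	/* The length to flush later is end - start + 1. */
}
```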
-
- 29 Sep 2020, 6 commits
-
-
Submitted by Jean-Philippe Brucker

Implement the IOMMU device feature callbacks to support the SVA feature. At the moment dev_has_feat() returns false, since I/O Page Faults and BTM aren't yet implemented.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-12-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Jean-Philippe Brucker

Aggregate all sanity-checks for sharing CPU page tables with the SMMU under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to check FEAT_ATS and FEAT_PRI. For platform SVA, they will have to check FEAT_STALLS.

Introduce ARM_SMMU_FEAT_BTM (Broadcast TLB Maintenance), but don't enable it at the moment. Since the entire VMID space is shared with the CPU, enabling DVM (by clearing SMMU_CR2.PTM) could result in over-invalidation and affect performance of stage-2 mappings.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200918101852.582559-11-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Jean-Philippe Brucker

The SMMU has a single ASID space, the union of shared and private ASID sets. This means that the SMMU driver competes with the arch allocator for ASIDs. Shared ASIDs are those of Linux processes, allocated by the arch, and participate in broadcast TLB maintenance. Private ASIDs are allocated by the SMMU driver and used for "classic" map/unmap DMA. They require command-queue TLB invalidations.

When we pin down an mm_context and get an ASID that is already in use by the SMMU, it belongs to a private context. We used to simply abort the bind, but this is unfair to users that would be unable to bind a few seemingly random processes. Try to allocate a new private ASID for the context, and make the old ASID shared.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-10-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Jean-Philippe Brucker

With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR, MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split into two sets, shared and private. Shared ASIDs correspond to those obtained from the arch ASID allocator, and private ASIDs are used for "classic" map/unmap DMA. A possible conflict happens when trying to use a shared ASID that has already been allocated for private use by the SMMU driver. This will be addressed in a later patch by replacing the private ASID. At the moment we return -EBUSY.

Each mm_struct shared with the SMMU will have a single context descriptor. Add a refcount to keep track of this. It will be protected by the global SVA lock.

Introduce a new arm-smmu-v3-sva.c file and the CONFIG_ARM_SMMU_V3_SVA option to let users opt in to SVA support.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-9-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Jean-Philippe Brucker

Allow sharing structure definitions with the upcoming SVA support for Arm SMMUv3 by moving them to a separate header. We could surgically extract only what is needed, but keeping all definitions in one place looks nicer.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-8-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Zhou Wang

Reading the 'prod' MMIO register in order to determine whether or not there is valid data beyond 'cons' for a given queue does not provide sufficient dependency ordering, as the resulting access is address dependent only on 'cons' and can therefore be speculated ahead of time, potentially allowing stale data to be read by the CPU.

Use readl() instead of readl_relaxed() when updating the shadow copy of the 'prod' pointer, so that all speculated memory reads from the corresponding queue can occur only from valid slots.

Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Link: https://lore.kernel.org/r/1601281922-117296-1-git-send-email-wangzhou1@hisilicon.com
[will: Use readl() instead of explicit barrier. Update 'cons' side to match.]
Signed-off-by: Will Deacon <will@kernel.org>
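The difference in one spot, as a sketch assuming the driver's queue layout (a shadow llq plus an MMIO prod_reg); only the barrier semantics change:

```c
/* readl() orders the MMIO read before subsequent loads, so entries up to
 * the observed 'prod' are safe to dereference. readl_relaxed() would let
 * the CPU speculate queue-entry reads against a stale 'prod'. */
static bool example_queue_has_data(struct arm_smmu_queue *q)
{
	q->llq.prod = readl(q->prod_reg);	/* was: readl_relaxed() */

	return q->llq.prod != q->llq.cons;	/* simplified emptiness check */
}
```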
-
- 22 Sep 2020, 1 commit
-
-
Submitted by Jean-Philippe Brucker

When building with C=1, sparse reports some issues regarding endianness annotations:

arm-smmu-v3.c:221:26: warning: cast to restricted __le64
arm-smmu-v3.c:221:24: warning: incorrect type in assignment (different base types)
arm-smmu-v3.c:221:24:    expected restricted __le64 [usertype]
arm-smmu-v3.c:221:24:    got unsigned long long [usertype]
arm-smmu-v3.c:229:20: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:229:20:    expected restricted __le64 [usertype] *[assigned] dst
arm-smmu-v3.c:229:20:    got unsigned long long [usertype] *ent
arm-smmu-v3.c:229:25: warning: incorrect type in argument 2 (different base types)
arm-smmu-v3.c:229:25:    expected unsigned long long [usertype] *[assigned] src
arm-smmu-v3.c:229:25:    got restricted __le64 [usertype] *
arm-smmu-v3.c:396:20: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:396:20:    expected restricted __le64 [usertype] *[assigned] dst
arm-smmu-v3.c:396:20:    got unsigned long long *
arm-smmu-v3.c:396:25: warning: incorrect type in argument 2 (different base types)
arm-smmu-v3.c:396:25:    expected unsigned long long [usertype] *[assigned] src
arm-smmu-v3.c:396:25:    got restricted __le64 [usertype] *
arm-smmu-v3.c:1349:32: warning: invalid assignment: |=
arm-smmu-v3.c:1349:32:    left side has type restricted __le64
arm-smmu-v3.c:1349:32:    right side has type unsigned long
arm-smmu-v3.c:1396:53: warning: incorrect type in argument 3 (different base types)
arm-smmu-v3.c:1396:53:    expected restricted __le64 [usertype] *dst
arm-smmu-v3.c:1396:53:    got unsigned long long [usertype] *strtab
arm-smmu-v3.c:1424:39: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:1424:39:    expected unsigned long long [usertype] *[assigned] strtab
arm-smmu-v3.c:1424:39:    got restricted __le64 [usertype] *l2ptr

While harmless, they are incorrect and could hide actual errors during development. Fix them.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200918141856.629722-1-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
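The discipline these annotations enforce, shown in two minimal helpers (illustrative names): any value that lives in a hardware-shared table is typed __le64 and converted at the boundary.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

static void example_write_desc(__le64 *dst, u64 val)
{
	*dst = cpu_to_le64(val);	/* CPU-endian -> little-endian table */
}

static u64 example_read_desc(const __le64 *src)
{
	return le64_to_cpu(*src);	/* convert before CPU arithmetic */
}
```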
-
- 07 Sep 2020, 4 commits
-
-
Submitted by Barry Song

Polling by MSI isn't necessarily faster than polling by SEV. Tests on hi1620 show that hns3 100G NIC network throughput can improve from 25G to 27G if we disable MSI polling while running 16 netperf threads sending UDP packets of size 32KB. TX throughput can improve from 7G to 7.7G for a single thread.

The reason for the throughput improvement is that the latency to poll the completion of CMD_SYNC becomes smaller. After sending a CMD_SYNC in an empty cmd queue, typically we need to wait for 280ns using MSI polling, but only around 190ns after disabling MSI polling.

This patch provides a command line option so that users can decide whether to use MSI polling based on their tests.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200827092957.22500-4-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Barry Song

Just use module_param() - going out of the way to specify a "different" name that's identical to the variable name is silly.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200827092957.22500-3-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Barry Song

This fixes the following checkpatch warning:

WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
417: FILE: drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:417:
module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200827092957.22500-2-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Zenghui Yu

The actual size of the level-1 stream table is l1size. This looks like an oversight in commit d2e88e7c ("iommu/arm-smmu: Fix LOG2SIZE setting for 2-level stream tables"), which forgot to update @size in the error message as well. As a memory allocation failure is already bad enough, nothing worse would happen. But let's be careful.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200826141758.341-1-yuzenghui@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 24 Aug 2020, 1 commit
-
-
Submitted by Gustavo A. R. Silva

Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
-
- 27 Jul 2020, 1 commit
-
-
Submitted by Will Deacon

The Arm SMMU drivers are getting fat on vendor value-add, so move them to their own subdirectory out of the way of the other IOMMU drivers.

Suggested-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 24 Jul 2020, 1 commit
-
-
Submitted by Baolin Wang

Currently the ARM page tables are always allocated with GFP_ATOMIC, but iommu_ops->map() gained a gfp_t parameter in commit 781ca2de ("iommu: Add gfp parameter to iommu_ops::map"). io_pgtable_ops->map() should therefore use the gfp parameter passed down from iommu_ops->map() to allocate pages, which avoids needlessly draining the memory allocator's atomic pools in non-atomic contexts.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/3093df4cb95497aaf713fca623ce4ecebb197c2e.1591930156.git.baolin.wang@linux.alibaba.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
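A sketch of the plumbing: the gfp passed into iommu_map() reaches the io-pgtable allocation instead of a hard-coded GFP_ATOMIC. The signature follows the io_pgtable_ops->map() shape after this change; the body is illustrative:

```c
static int example_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
			    phys_addr_t paddr, size_t size, int prot,
			    gfp_t gfp)
{
	/* ... walk to the leaf level, allocating missing tables ... */
	void *table = alloc_pages_exact(PAGE_SIZE, gfp | __GFP_ZERO);

	if (!table)
		return -ENOMEM;

	/* GFP_KERNEL callers can now sleep here instead of draining the
	 * atomic reserves; atomic callers still pass GFP_ATOMIC. */
	return 0;
}
```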
-
- 16 Jul 2020, 1 commit
-
-
Submitted by John Garry

Fix a typo: "cmq" -> "cmdq".

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 27 May 2020, 1 commit
-
-
Submitted by Jean-Philippe Brucker

The new pci_ats_supported() function checks whether a device supports ATS and is allowed to use it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200520152201.3309416-4-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
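How a caller uses it; pci_ats_supported() and pci_enable_ats() are real PCI APIs, while the wrapper is illustrative:

```c
#include <linux/pci-ats.h>

static void example_try_enable_ats(struct pci_dev *pdev, int stu)
{
	/* Consolidates the capability check and the firmware policy
	 * (whether ATS is permitted) in one call. */
	if (!pci_ats_supported(pdev))
		return;

	/* stu: smallest translation unit, log2 of the minimum page size. */
	if (!pci_enable_ats(pdev, stu))
		pci_info(&pdev->dev, "ATS enabled\n");
}
```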
-
- 21 May 2020, 1 commit
-
-
Submitted by Jean-Philippe Brucker

In preparation for sharing some ASIDs with the CPU, use a global xarray to store ASIDs and their contexts. ASID#0 is now reserved, and the ASID space is global.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20200519175502.2504091-9-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
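A sketch of such an allocator with ASID#0 reserved; the xarray calls are the real API, while the context-descriptor struct is trimmed to what the example needs:

```c
#include <linux/xarray.h>

struct example_ctx_desc {
	u32 asid;
	/* ... TTBR, TCR, MAIR, refcount ... */
};

/* ALLOC1 xarrays never hand out index 0, matching the reserved ASID#0. */
static DEFINE_XARRAY_ALLOC1(asid_xa);

static int example_alloc_asid(struct example_ctx_desc *cd, u32 asid_bits)
{
	u32 asid;
	int ret;

	/* Allocate a free slot in [1, 2^asid_bits - 1] and store the
	 * context there, so any ASID maps back to its owner. */
	ret = xa_alloc(&asid_xa, &asid, cd,
		       XA_LIMIT(1, (1u << asid_bits) - 1), GFP_KERNEL);
	if (!ret)
		cd->asid = asid;
	return ret;
}
```

Because the xarray is global, a later lookup with xa_load() can tell whether an ASID obtained from the arch allocator already belongs to a private SMMU context.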
-