- 08 Dec 2020, 6 commits
-
Submitted by Robin Murphy
The only user of tlb_flush_leaf is a particularly hairy corner of the Arm short-descriptor code, which wants a synchronous invalidation to minimise the races inherent in trying to split a large page mapping. This is already far enough into "here be dragons" territory that no sensible caller should ever hit it, and thus it really doesn't need optimising. Although using tlb_flush_walk there may technically be more heavyweight than needed, it does the job and saves everyone else having to carry around useless baggage. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by John Garry
It has no user outside iova.c. Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1607020492-189471-4-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by John Garry
It is not used outside iova.c. Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1607020492-189471-3-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by John Garry
Function split_and_remove_iova() has not been referenced since commit e70b081c ("iommu/vt-d: Remove IOVA handling code from the non-dma_ops path"), so delete it. Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1607020492-189471-2-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Kunkun Jiang
The 'level' parameter to the iopte_type() macro is unused, so remove it. Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Link: https://lore.kernel.org/r/20201207120150.1891-1-jiangkunkun@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Keqian Zhu
Although handling a mapping request with no permissions is a trivial no-op, defer the early return until after the size/range checks so that we are consistent with other mapping requests. Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com> Link: https://lore.kernel.org/r/20201207115758.9400-1-zhukeqian1@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
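As an illustration, the reordered map path looks roughly like this (a sketch modelled on io-pgtable-arm.c; the real function's checks differ slightly):

    static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
                            phys_addr_t paddr, size_t size, int iommu_prot)
    {
            struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);

            /* Size/range checks come first, as for any other request */
            if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) ||
                        paddr >= (1ULL << data->iop.cfg.oas)))
                    return -ERANGE;

            /* Only then short-circuit a permission-less request as a no-op */
            if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
                    return 0;

            /* ... perform the actual mapping ... */
            return 0;
    }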
-
- 07 Dec 2020, 2 commits
-
Submitted by Suravee Suthikulpanit
Commit 73db2fc5 ("iommu/amd: Increase interrupt remapping table limit to 512 entries") grew the interrupt remapping table, so according to the AMD IOMMU spec the interrupt table length (IntTabLen) field in the device table entry (DTE) must also be set to 9, since the field encodes the entry count as a power of two (2^9 = 512). Fixes: 73db2fc5 ("iommu/amd: Increase interrupt remapping table limit to 512 entries") Reported-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20201207091920.3052-1-suravee.suthikulpanit@amd.com Signed-off-by: Will Deacon <will@kernel.org>
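For reference, the fix boils down to encoding the length as a power of two (a sketch; the macro names follow the AMD IOMMU driver, the shift places the field within the DTE):

    #define MAX_IRQS_PER_TABLE      512
    #define DTE_INTTABLEN_VALUE     9ULL    /* i.e. ilog2(MAX_IRQS_PER_TABLE) */
    #define DTE_INTTABLEN           (DTE_INTTABLEN_VALUE << 1)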
-
Submitted by Yong Wu
Currently direct_mapping always uses the smallest pgsize, normally SZ_4K, for mapping. This is unnecessary: we can gather the size and then call iommu_map(), which can decide how best to map with the just-right pgsize. As the original comment notes, we must take care of overlap, otherwise iommu_map() may return -EEXIST. In the overlap case, we map the region before the overlap first, then map the remaining part. Every iommu device calls direct_mapping when its iommu initializes, so this patch is effective at improving boot/initialization time, especially when only level-1 mappings are needed. Signed-off-by: Anan Sun <anan.sun@mediatek.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Link: https://lore.kernel.org/r/20201207093553.8635-1-yong.wu@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
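A sketch of the gathering idea (simplified; variable names are illustrative, the real code lives in iommu_create_device_direct_mappings()):

    unsigned long addr, map_size = 0;
    int ret;

    for (addr = start; addr <= end; addr += pg_size) {
            /* Past the end, or hit an already-mapped (overlapping) page:
             * map the contiguous range gathered so far in one call and
             * let iommu_map() pick the best pgsize for it. */
            if (addr == end || iommu_iova_to_phys(domain, addr)) {
                    if (map_size) {
                            ret = iommu_map(domain, addr - map_size,
                                            addr - map_size, map_size,
                                            entry->prot);
                            if (ret)
                                    break;
                            map_size = 0;
                    }
                    continue;
            }
            map_size += pg_size;    /* keep gathering */
    }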
-
- 02 Dec 2020, 1 commit
-
Submitted by Cong Wang
Both find_iova() and __free_iova() take iova_rbtree_lock; there is no reason to take and release it twice inside free_iova(). Fold them into one critical section by calling the unlocked versions instead. Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1605608734-84416-5-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
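The folded result looks roughly like this (a sketch; private_find_iova() and private_free_iova() stand for the iova.c helpers that expect the lock to already be held):

    void free_iova(struct iova_domain *iovad, unsigned long pfn)
    {
            unsigned long flags;
            struct iova *iova;

            spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
            iova = private_find_iova(iovad, pfn);   /* lookup, lock held */
            if (iova)
                    private_free_iova(iovad, iova); /* erase + free */
            spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
    }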
-
- 01 Dec 2020, 1 commit
-
Submitted by Christophe JAILLET
There is no reason to use GFP_ATOMIC in a 'suspend' function. Use GFP_KERNEL instead to give more opportunities to allocate the requested memory. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/20201030182630.5154-1-christophe.jaillet@wanadoo.fr Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20201201013149.2466272-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
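The change amounts to something like the following (a sketch; the allocation lives in the Intel iommu_suspend() path and the exact expression is abridged):

    /* System-suspend callbacks run in process context and may sleep,
     * so a GFP_KERNEL allocation is allowed here. */
    iommu->iommu_state = kcalloc(MAX_SR_DMAR_REGS, sizeof(u32),
                                 GFP_KERNEL);   /* was GFP_ATOMIC */
    if (!iommu->iommu_state)
            goto nomem;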
-
- 27 Nov 2020, 1 commit
-
Submitted by Lu Baolu
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/iommu/intel/iommu.c:5643:27: warning: variable 'last_pfn' set but not used [-Wunused-but-set-variable]
 5643 |  unsigned long start_pfn, last_pfn;
      |                           ^~~~~~~~
This variable is never used, so remove it. Fixes: 2a2b8eaa ("iommu: Handle freelists when using deferred flushing in iommu drivers") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20201127013308.1833610-1-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
-
- 26 Nov 2020, 2 commits
-
Submitted by Yang Yingliang
Although iommu_group_get() in iommu_probe_device() will always succeed thanks to __iommu_probe_device() creating the group if it's not present, it's still worth initialising 'ret' to -ENODEV in case this path is reachable in the future. For now, this patch results in no functional change. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20201126133825.3643852-1-yangyingliang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by David Woodhouse
My virtual IOMMU implementation is whining that the guest is reading a register that doesn't exist. Only read the VCCAP_REG if the corresponding capability is set in ECAP_REG to indicate that it actually exists. Fixes: 3375303e ("iommu/vt-d: Add custom allocator for IOASID") Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Liu Yi L <yi.l.liu@intel.com> Cc: stable@vger.kernel.org # v5.7+ Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/de32b150ffaa752e0cff8571b17dfb1213fbe71c.camel@infradead.org Signed-off-by: Will Deacon <will@kernel.org>
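The guarded read looks roughly like this (a sketch; ecap_vcs() and DMAR_VCCAP_REG are existing VT-d driver names, surrounding code abridged):

    /* Only read VCCAP_REG when ECAP_REG advertises the virtual
     * command interface; otherwise the register may not exist. */
    if (ecap_vcs(iommu->ecap))
            iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);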
-
- 25 Nov 2020, 20 commits
-
Submitted by Sai Prakash Ranjan
Fix the checkpatch warning for space required before the open parenthesis. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/0b4c3718a87992f11340a1cdd99fd746c647e485.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Sai Prakash Ranjan
Use a table and of_match_node() to match the qcom implementation, instead of multiple of_device_is_compatible() calls for each QCOM SMMU implementation. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/4e11899bc02102a6e6155db215911e8b5aaba950.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
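The shape of the change (a sketch; the compatible strings are examples and qcom_smmu_create() stands in for the file's constructor helper):

    static const struct of_device_id qcom_smmu_impl_of_match[] = {
            { .compatible = "qcom,sc7180-smmu-500" },
            { .compatible = "qcom,sdm845-smmu-500" },
            { }
    };

    struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
    {
            /* One table lookup instead of a chain of
             * of_device_is_compatible() calls. */
            if (of_match_node(qcom_smmu_impl_of_match, smmu->dev->of_node))
                    return qcom_smmu_create(smmu, &qcom_smmu_impl);

            return smmu;
    }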
-
Submitted by Sai Prakash Ranjan
Now that we have a struct io_pgtable_domain_attr with quirks, use that for non_strict mode as well, thereby removing the need for more members of arm_smmu_domain in the future. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Link: https://lore.kernel.org/r/c191265f3db1f6b3e136d4057ca917666680a066.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Sai Prakash Ranjan
Add support for the domain attribute DOMAIN_ATTR_IO_PGTABLE_CFG to get/set pagetable configuration data, which will initially be used to set quirks and can later be extended to include other pagetable configuration data. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Link: https://lore.kernel.org/r/2ab52ced2f853115c32461259a075a2877feffa6.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
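Callers would use the attribute roughly like this (a sketch of the API shape; the quirk value shown is the one added by the next entry below):

    struct io_pgtable_domain_attr pgtbl_cfg = {
            .quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA,
    };
    struct iommu_domain *domain = iommu_domain_alloc(&platform_bus_type);

    /* Must be configured before the first device is attached */
    if (iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
                              &pgtbl_cfg))
            return -EINVAL;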
-
Submitted by Sai Prakash Ranjan
Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override the outer-cacheability attributes set in the TCR for a non-coherent page table walker when using system cache. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Link: https://lore.kernel.org/r/f818676b4a2a9ad1edb92721947d47db41ed6a7c.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Sai Praneeth Prakhya
"/sys/kernel/iommu_groups/<grp_id>/type" file could be read to find out the default domain type of an iommu group. The default domain of an iommu group doesn't change after booting and hence could be read directly. But, after addding support to dynamically change iommu group default domain, the above assumption no longer stays valid. iommu group default domain type could be changed at any time by writing to "/sys/kernel/iommu_groups/<grp_id>/type". So, take group mutex before reading iommu group default domain type so that the user wouldn't see stale values or iommu_group_show_type() doesn't try to derefernce stale pointers. Signed-off-by: NSai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Signed-off-by: NLu Baolu <baolu.lu@linux.intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Joerg Roedel <joro@8bytes.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Sohil Mehta <sohil.mehta@intel.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Link: https://lore.kernel.org/r/20201124130604.2912899-4-baolu.lu@linux.intel.comSigned-off-by: NWill Deacon <will@kernel.org>
-
Submitted by Sai Praneeth Prakhya
Presently, the default domain of an iommu group is allocated during boot time and cannot be changed later. So, a device would typically be either in identity (also known as pass-through) mode or in DMA mode for as long as the machine is up and running, and there is no way to change the default domain type dynamically, i.e. after booting a device cannot switch between identity mode and DMA mode. But consider a use case wherein the user trusts a device, believes the OS is secure enough, and hence wants *only* this device to bypass the IOMMU (so that it can be high performing), whereas all other devices go through the IOMMU (so that the system is protected). Presently, this use case is not supported, and it would be helpful to have some way to change the default domain of an iommu group dynamically. Hence, add such support. A privileged user can request the kernel to change the default domain type of an iommu group by writing to the "/sys/kernel/iommu_groups/<grp_id>/type" file. Presently, only three values are supported:
1. identity: all the DMA transactions from the device in this group are *not* translated by the iommu
2. DMA: all the DMA transactions from the device in this group are translated by the iommu
3. auto: change to the type the device was booted with
Note:
1. The default domain of an iommu group with two or more devices cannot be changed.
2. The device in the iommu group shouldn't be bound to any driver.
3. The device shouldn't be assigned to a user for direct access.
4. The change request will fail if any device in the group has a mandatory default domain type and the requested one conflicts with that.
Please see "Documentation/ABI/testing/sysfs-kernel-iommu_groups" for more information. Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Joerg Roedel <joro@8bytes.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Sohil Mehta <sohil.mehta@intel.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Link: https://lore.kernel.org/r/20201124130604.2912899-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
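For illustration, the write side works roughly as follows (a sketch; iommu_change_dev_def_domain() is the helper the series introduces, its exact signature abridged here):

    /* e.g.  echo identity > /sys/kernel/iommu_groups/3/type */
    static ssize_t iommu_group_store_type(struct iommu_group *group,
                                          const char *buf, size_t count)
    {
            int req_type, ret;

            if (!capable(CAP_SYS_ADMIN))
                    return -EACCES;

            if (sysfs_streq(buf, "identity"))
                    req_type = IOMMU_DOMAIN_IDENTITY;
            else if (sysfs_streq(buf, "DMA"))
                    req_type = IOMMU_DOMAIN_DMA;
            else if (sysfs_streq(buf, "auto"))
                    req_type = 0;
            else
                    return -EINVAL;

            /* Rejects multi-device groups, bound drivers, etc. (see notes) */
            ret = iommu_change_dev_def_domain(group, req_type);
            return ret ?: count;
    }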
-
Submitted by Lu Baolu
Move the check for untrusted devices into the IOMMU core, so that vendor iommu drivers are no longer required to provide the def_domain_type callback just to keep untrusted devices isolated. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Cc: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/linux-iommu/243ce89c33fe4b9da4c56ba35acebf81@huawei.com/ Link: https://lore.kernel.org/r/20201124130604.2912899-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
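The core-side policy ends up looking roughly like this (a sketch; pdev->untrusted is the existing PCI flag):

    static int iommu_get_def_domain_type(struct device *dev)
    {
            const struct iommu_ops *ops = dev->bus->iommu_ops;
            unsigned int type = 0;

            /* Core policy: untrusted devices always get a translated
             * (DMA) domain, no driver callback needed. */
            if (dev_is_pci(dev) && to_pci_dev(dev)->untrusted)
                    return IOMMU_DOMAIN_DMA;

            if (ops->def_domain_type)
                    type = ops->def_domain_type(dev);

            return type;
    }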
-
Submitted by Lu Baolu
Some cleanups after converting the driver to use the dma-iommu ops:
- Remove the nobounce option;
- Clean up and simplify the path in domain mapping.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-8-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Tom Murphy
Convert the Intel iommu driver to the dma-iommu api, and remove the iova handling and reserved-region code from the Intel iommu driver. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-7-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Lu Baolu
The iommu-dma layer constrains IOVA allocation based on the domain geometry that the driver reports. Update the domain geometry every time a domain is attached to or detached from a device. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-6-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
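On the Intel side this amounts to something like (a sketch; __DOMAIN_MAX_ADDR() and gaw come from the VT-d driver):

    /* On attach: advertise the usable IOVA range for this domain */
    domain->geometry.aperture_start = 0;
    domain->geometry.aperture_end   = __DOMAIN_MAX_ADDR(dmar_domain->gaw);
    domain->geometry.force_aperture = true;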
-
Submitted by Lu Baolu
Combining the sg segments exposes a bug in the Intel i915 driver which causes visual artifacts and the screen to freeze. This is most likely because of how the i915 handles the returned list: it probably doesn't respect the returned value specifying the number of elements in the list and instead depends on the previous behaviour of the Intel iommu driver, which would return the same number of elements in the output list as in the input list. [ This has been fixed in the i915 tree, but we agreed to carry this fix temporarily in the iommu tree and revert it before 5.11 is released: https://lore.kernel.org/linux-iommu/20201103105442.GD22888@8bytes.org/ -- Will ] Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-5-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Tom Murphy
Allow the dma-iommu api to use bounce buffers for untrusted devices. This is a copy of the Intel bounce buffer code. Co-developed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-4-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Tom Murphy
Add an iommu_dma_free_cpu_cached_iovas() function to allow drivers which use the dma-iommu ops to free cached cpu iovas. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
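The new function is plausibly little more than a wrapper (a sketch):

    void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
                                         struct iommu_domain *domain)
    {
            struct iommu_dma_cookie *cookie = domain->iova_cookie;
            struct iova_domain *iovad = &cookie->iovad;

            /* Drop this CPU's per-cpu IOVA rcache back to the depot */
            free_cpu_cached_iovas(cpu, iovad);
    }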
-
Submitted by Tom Murphy
Allow iommu_unmap_fast() to return newly freed page table pages, and pass the freelist to queue_iova() in the dma-iommu ops path. This is useful for iommu drivers (in this case the Intel iommu driver) which need to wait for the IOTLB to be flushed before newly freed/unmapped page table pages can be released. This way we can still batch IOTLB flush operations and handle the freelists. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
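A sketch of the resulting dma-iommu unmap path (simplified; the freelist field on iommu_iotlb_gather is what this patch adds, the other names are existing helpers):

    struct iommu_iotlb_gather iotlb_gather;
    size_t unmapped;

    iommu_iotlb_gather_init(&iotlb_gather);
    unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);

    /* Defer freeing the old page-table pages until after the IOTLB
     * flush: hand the freelist to the flush queue along with the IOVA. */
    queue_iova(iovad, iova_pfn(iovad, dma_addr),
               size >> iova_shift(iovad),
               (unsigned long)iotlb_gather.freelist);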
-
Submitted by Nicolin Chen
This patch simply adds support for PCI devices. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Tested-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20201125101013.14953-6-nicoleotsuka@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
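The addition is essentially registering the SMMU ops on the PCI bus as well (a sketch):

    #ifdef CONFIG_PCI
            if (!iommu_present(&pci_bus_type)) {
                    pci_request_acs();      /* keep PCI devices isolatable */
                    err = bus_set_iommu(&pci_bus_type, &tegra_smmu_ops);
                    if (err < 0)
                            goto unregister;
            }
    #endif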
-
Submitted by Nicolin Chen
The bus_set_iommu() in tegra_smmu_probe() enumerates all clients to call in tegra_smmu_probe_device(), where each client searches its DT node for the smmu pointer and swgroup ID so as to configure an fwspec. But this requires a valid smmu pointer even before the mc and smmu drivers are probed. So in tegra_smmu_probe() we added a line of code to fill mc->smmu, marking it "a bit of a hack". This works for most of the clients in the DTB; however, it doesn't work for a client that doesn't exist in the DTB, a PCI device for example.
Actually, if we return ERR_PTR(-ENODEV) in ->probe_device() when it's called from bus_set_iommu(), the iommu core will let everything carry on. Then when a client gets probed, of_iommu_configure() in the iommu core will search the DTB for the swgroup ID and call ->of_xlate() to prepare an fwspec, similar to tegra_smmu_probe_device() and tegra_smmu_configure(). Then it'll call tegra_smmu_probe_device() again, and this time we shall return the smmu->iommu pointer properly.
So we can get rid of tegra_smmu_find() and tegra_smmu_configure() along with the DT-polling code by letting the iommu core handle everything, except for one problem: we search the iommus property in the DTB not only for the swgroup ID but also for the mc node, to get the mc->smmu pointer for dev_iommu_priv_set() and to return the smmu->iommu pointer. So we need another way to get the smmu pointer. Referencing the implementation of the sun50i-iommu driver, of_xlate() has the client's dev pointer, the mc node and the swgroup ID. This means we can call dev_iommu_priv_set() in of_xlate() instead, so we can simply get the smmu pointer in ->probe_device().
This patch reworks tegra_smmu_probe_device() by:
1) Removing the mc->smmu hack in tegra_smmu_probe() so as to return ERR_PTR(-ENODEV) in tegra_smmu_probe_device() during the tegra_smmu_probe()/tegra_mc_probe() stage.
2) Moving dev_iommu_priv_set() to of_xlate(), so we can get the smmu pointer in tegra_smmu_probe_device() to replace the DTB polling.
3) Removing tegra_smmu_configure() accordingly, since the iommu core takes care of it.
This also fixes a problem where previously we could add clients to iommu groups before the iommu core initialized its default domain:
ubuntu@jetson:~$ dmesg | grep iommu
platform 50000000.host1x: Adding to iommu group 1
platform 57000000.gpu: Adding to iommu group 2
iommu: Default domain type: Translated
platform 54200000.dc: Adding to iommu group 3
platform 54240000.dc: Adding to iommu group 3
platform 54340000.vic: Adding to iommu group 4
This works fine with IOMMU_DOMAIN_UNMANAGED, but produces warnings when switching to IOMMU_DOMAIN_DMA:
iommu: Failed to allocate default IOMMU domain of type 0 for group (null) - Falling back to IOMMU_DOMAIN_DMA
iommu: Failed to allocate default IOMMU domain of type 0 for group (null) - Falling back to IOMMU_DOMAIN_DMA
Now, bypassing the first probe_device() call from bus_set_iommu() fixes the sequence:
ubuntu@jetson:~$ dmesg | grep iommu
iommu: Default domain type: Translated
tegra-host1x 50000000.host1x: Adding to iommu group 0
tegra-dc 54200000.dc: Adding to iommu group 1
tegra-dc 54240000.dc: Adding to iommu group 1
tegra-vic 54340000.vic: Adding to iommu group 2
nouveau 57000000.gpu: Adding to iommu group 3
Note that the dmesg log above was taken with IOMMU_DOMAIN_UNMANAGED.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Tested-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20201125101013.14953-5-nicoleotsuka@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Nicolin Chen
In the tegra_smmu_(de)attach_dev() functions, we poll the DTB for each client's iommus property to get the swgroup ID in order to prepare the "as" and enable the smmu. But tegra_smmu_configure() already prepared an fwspec for each client, adding to it all the swgroup IDs from the client's DT node. So this patch uses the fwspec in tegra_smmu_(de)attach_dev(), replacing the redundant DT-polling code. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Tested-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20201125101013.14953-4-nicoleotsuka@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
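The attach path then iterates over the fwspec IDs instead of re-parsing DT (a sketch; tegra_smmu_as_prepare() and tegra_smmu_enable() are the driver's existing helpers):

    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
    unsigned int index;
    int err;

    for (index = 0; index < fwspec->num_ids; index++) {
            err = tegra_smmu_as_prepare(smmu, as);
            if (err)
                    goto disable;

            /* fwspec->ids[] already holds the swgroup IDs from DT */
            tegra_smmu_enable(smmu, fwspec->ids[index], as->id);
    }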
-
Submitted by Nicolin Chen
This is used to protect against a potential race condition on use_count, since probes of client drivers calling attach_dev() may run concurrently. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Tested-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20201125101013.14953-3-nicoleotsuka@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
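A sketch of the serialization (the lock placement is illustrative):

    /* Serialize concurrent attach_dev() callers */
    mutex_lock(&smmu->lock);
    if (as->use_count > 0) {
            as->use_count++;        /* already set up by a racing probe */
            goto unlock;
    }
    /* ... one-time hardware setup for this address space ... */
    as->use_count++;
    unlock:
    mutex_unlock(&smmu->lock);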
-
Submitted by Nicolin Chen
tegra_smmu_group_get() was added to group devices in different SWGROUPs; it would return a NULL group pointer upon a mismatch in tegra_smmu_find_group(), so for most clients/devices it would very likely mismatch and need the fallback generic_device_group(). But now tegra_smmu_group_get() handles devices in the same SWGROUP too, which means it will allocate a group for every new SWGROUP or directly return an existing one upon matching a SWGROUP, i.e. every device goes through this function. So the only way to end up with a NULL group pointer in device_group() is a failure of either devm_kzalloc() or iommu_group_alloc(). In either case, calling generic_device_group() no longer makes sense. Especially in the devm_kzalloc() failure case, it would cause a problem if devm_kzalloc() fails yet the fallback generic_device_group() succeeds, because no group->list is created for other devices to match. This patch simply unwraps the function to clean it up. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Tested-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20201125101013.14953-2-nicoleotsuka@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
-
- 23 Nov 2020, 7 commits
-
Submitted by Shameer Kolothum
Currently iommu_create_device_direct_mappings() is called without checking the return value of __iommu_attach_device(). This may result in failures in the iommu driver if the device attach returns an error. Fixes: ce574c27 ("iommu: Move iommu_group_create_direct_mappings() out of iommu_group_add_device()") Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20201119165846.34180-1-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
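After the fix, the probe path only creates the direct mappings once the attach has succeeded (a sketch of the flow in iommu.c):

    mutex_lock(&group->mutex);
    iommu_alloc_default_domain(group, dev);

    if (group->default_domain) {
            ret = __iommu_attach_device(group->default_domain, dev);
            if (ret) {
                    mutex_unlock(&group->mutex);
                    goto err_put_group;     /* propagate the failure */
            }
    }

    iommu_create_device_direct_mappings(group, dev);
    mutex_unlock(&group->mutex);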
-
Submitted by John Stultz
Robin Murphy pointed out that if the arm-smmu driver probes before the qcom_scm driver, we may call qcom_scm_qsmmu500_wait_safe_toggle() before the __scm is initialized. Getting this to happen is a bit contrived, as in my efforts it required enabling asynchronous probing for both drivers, moving the firmware dts node to the end of the dtsi file, as well as forcing a long delay in the qcom_scm_probe function. With those tweaks we ran into the following crash:
[ 2.631040] arm-smmu 15000000.iommu: Stage-1: 48-bit VA -> 48-bit IPA
[ 2.633372] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
...
[ 2.633402] [0000000000000000] user address but active_mm is swapper
[ 2.633409] Internal error: Oops: 96000005 [#1] PREEMPT SMP
[ 2.633415] Modules linked in:
[ 2.633427] CPU: 5 PID: 117 Comm: kworker/u16:2 Tainted: G W 5.10.0-rc1-mainline-00025-g272a618fc36-dirty #3971
[ 2.633430] Hardware name: Thundercomm Dragonboard 845c (DT)
[ 2.633448] Workqueue: events_unbound async_run_entry_fn
[ 2.633456] pstate: 80c00005 (Nzcv daif +PAN +UAO -TCO BTYPE=--)
[ 2.633465] pc : qcom_scm_qsmmu500_wait_safe_toggle+0x78/0xb0
[ 2.633473] lr : qcom_smmu500_reset+0x58/0x78
[ 2.633476] sp : ffffffc0105a3b60
...
[ 2.633567] Call trace:
[ 2.633572] qcom_scm_qsmmu500_wait_safe_toggle+0x78/0xb0
[ 2.633576] qcom_smmu500_reset+0x58/0x78
[ 2.633581] arm_smmu_device_reset+0x194/0x270
[ 2.633585] arm_smmu_device_probe+0xc94/0xeb8
[ 2.633592] platform_drv_probe+0x58/0xa8
[ 2.633597] really_probe+0xec/0x398
[ 2.633601] driver_probe_device+0x5c/0xb8
[ 2.633606] __driver_attach_async_helper+0x64/0x88
[ 2.633610] async_run_entry_fn+0x4c/0x118
[ 2.633617] process_one_work+0x20c/0x4b0
[ 2.633621] worker_thread+0x48/0x460
[ 2.633628] kthread+0x14c/0x158
[ 2.633634] ret_from_fork+0x10/0x18
[ 2.633642] Code: a9034fa0 d0007f73 29107fa0 91342273 (f9400020)
To avoid this, this patch adds a check on qcom_scm_is_available() in the qcom_smmu_impl_init() function, returning -EPROBE_DEFER if it's not ready. This allows the driver to try to probe again later, after qcom_scm has finished probing. Reported-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Andy Gross <agross@kernel.org> Cc: Maulik Shah <mkshah@codeaurora.org> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Saravana Kannan <saravanak@google.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Lina Iyer <ilina@codeaurora.org> Cc: iommu@lists.linux-foundation.org Cc: linux-arm-msm <linux-arm-msm@vger.kernel.org> Link: https://lore.kernel.org/r/20201112220520.48159-1-john.stultz@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
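The fix itself is small (a sketch; qcom_scm_is_available() is the existing SCM API, the rest of the function is abridged):

    struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
    {
            /* Make sure qcom_scm has finished probing */
            if (!qcom_scm_is_available())
                    return ERR_PTR(-EPROBE_DEFER);

            /* ... continue with the qcom implementation setup ... */
            return smmu;
    }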
-
Submitted by Jean-Philippe Brucker
The invalidate_range() notifier is called for any change to the address space. Perform the required ATC invalidations. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20201106155048.997886-5-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
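The notifier callback is roughly (a sketch; helper names follow the SMMUv3 SVA code but are abridged):

    static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
                                             struct mm_struct *mm,
                                             unsigned long start,
                                             unsigned long end)
    {
            struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);

            /* Invalidate the device ATCs for the changed VA range */
            arm_smmu_atc_inv_domain(smmu_mn->domain, mm->pasid, start,
                                    end - start);
    }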
-
Submitted by Jean-Philippe Brucker
The sva_bind() function allows devices to access process address spaces using a PASID (aka SSID).
(1) bind() allocates or gets an existing MMU notifier tied to the (domain, mm) pair. Each mm gets one PASID.
(2) Any change to the address space calls invalidate_range(), which sends ATC invalidations (in a subsequent patch).
(3) When the process address space dies, the release() notifier disables the CD to allow reclaiming the page tables. Since release() has to be light, we do not instruct device drivers to stop DMA here; we just ignore incoming page faults from this point onwards. To avoid any event 0x0a print (C_BAD_CD) we disable translation without clearing CD.V: PCIe Translation Requests and Page Requests are silently denied. Don't clear the R bit, because the S bit can't be cleared when STALL_MODEL==0b10 (forced), and clearing R without clearing S is useless. Faulting transactions will stall and will be aborted by the IOPF handler.
(4) After stopping DMA, the device driver releases the bond by calling unbind(). We release the MMU notifier, and free the PASID and the bond.
Three structures keep track of bonds:
* arm_smmu_bond: one per {device, mm} pair, the handle returned to the device driver for a bind() request.
* arm_smmu_mmu_notifier: one per {domain, mm} pair, deals with ATS/TLB invalidations and clearing the context descriptor on mm exit.
* arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20201106155048.997886-4-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
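From a device driver's point of view, the lifecycle above maps onto the generic SVA API roughly as follows (a sketch):

    struct iommu_sva *handle;
    u32 pasid;

    /* (1) bind the device to the current process address space */
    handle = iommu_sva_bind_device(dev, current->mm, NULL);
    if (IS_ERR(handle))
            return PTR_ERR(handle);

    pasid = iommu_sva_get_pasid(handle);
    /* ... program the device to tag its DMA with this PASID ... */

    /* (4) after stopping DMA, release the bond */
    iommu_sva_unbind_device(handle);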
-
Submitted by Jean-Philippe Brucker
Let IOMMU drivers allocate a single PASID per mm. Store the mm in the IOASID set to allow refcounting and searching the mm by PASID when handling an I/O page fault. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20201106155048.997886-3-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
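The helpers take roughly this shape (a sketch of the declarations):

    /* Allocate a PASID for the mm, or get its existing one */
    int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min,
                              ioasid_t max);
    void iommu_sva_free_pasid(struct mm_struct *mm);
    /* Search an mm by PASID, e.g. while handling an I/O page fault */
    struct mm_struct *iommu_sva_find(ioasid_t pasid);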
-
Submitted by Jean-Philippe Brucker
Let IOASID users take references to existing ioasids with ioasid_get(). ioasid_put() drops a reference and only frees the ioasid when its reference count is zero; it returns true if the ioasid was freed. For drivers that don't call ioasid_get(), ioasid_put() is the same as ioasid_free(). Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20201106155048.997886-2-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
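Usage is then (a sketch):

    /* Keep the PASID alive while we handle a fault for it */
    ioasid_get(pasid);
    /* ... look up and operate on the associated mm ... */
    if (ioasid_put(pasid))
            pr_debug("PASID %u: last reference dropped, now freed\n", pasid);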
-
Submitted by Suravee Suthikulpanit
The AMD IOMMU requires 4k-aligned pages for the event log, the PPR log, and the completion-wait write-back regions. However, when allocating the pages, they could be part of a large mapping (e.g. 2M) page. This causes a #PF, because the SNP RMP hardware enforces the check based on the page level for these data structures. So, fix this by calling set_memory_4k() on the allocated pages. Fixes: c69d89af ("iommu/amd: Use 4K page for completion wait write-back semaphore") Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Link: https://lore.kernel.org/r/20201105145832.3065-1-suravee.suthikulpanit@amd.com Signed-off-by: Will Deacon <will@kernel.org>
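The fix pattern is roughly (a sketch; set_memory_4k() is the x86 helper, the allocation site is abridged):

    void *buf = (void *)__get_free_pages(gfp, get_order(size));

    /* Break any 2M mapping covering the buffer so the SNP RMP
     * check sees 4K pages. */
    if (buf && set_memory_4k((unsigned long)buf,
                             PAGE_ALIGN(size) >> PAGE_SHIFT)) {
            free_pages((unsigned long)buf, get_order(size));
            buf = NULL;
    }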
-