- 15 July 2022: 1 commit

Committed by Christoph Hellwig
This method is never actually called.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20220708080616.238833-2-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 06 July 2022: 2 commits

Committed by Shameer Kolothum
Parse through the IORT RMR nodes and populate the reserved region list corresponding to a given IOMMU and, optionally, device. Also, go through the ID mappings of the RMR node and retrieve all the SIDs associated with it.

Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20220615101044.1972-5-shameerali.kolothum.thodi@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Shameer Kolothum
A callback is introduced to struct iommu_resv_region to free memory allocations associated with the reserved region. This will be useful when we introduce support for IORT RMR based reserved regions.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20220615101044.1972-2-shameerali.kolothum.thodi@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
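A hedged sketch of how such a callback might be wired up; the signature below follows the description above but should be read as illustrative, not authoritative:

    /* Assumed callback shape: invoked with the owning device and the
     * region when the reserved-region list is released. */
    static void my_resv_region_free(struct device *dev,
                                    struct iommu_resv_region *region)
    {
            kfree(region);  /* plus any per-region data hung off it */
    }

    /* ...set when the region is built: */
    region->free = my_resv_region_free;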

- 28 April 2022: 6 commits

Committed by Jason Gunthorpe
While the comment was correct that this flag was intended to convey the block no-snoop support in the IOMMU, it has become widely implemented and used to mean the IOMMU supports IOMMU_CACHE as a map flag. Only the Intel driver was different.

Now that the Intel driver is using enforce_cache_coherency(), update the comment to make it clear that IOMMU_CAP_CACHE_COHERENCY is only about IOMMU_CACHE. Fix the Intel driver to return true since IOMMU_CACHE always works.

The two places that test this flag, usnic and vdpa, are both assigning userspace pages to a driver controlled iommu_domain and require IOMMU_CACHE behavior as they offer no way for userspace to synchronize caches.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/3-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Jason Gunthorpe
This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY and IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU. Currently only Intel and AMD IOMMUs are known to support this feature. They both implement it as an IOPTE bit that, when set, causes PCIe TLPs to that IOVA with the no-snoop bit set to be treated as though the no-snoop bit was clear.

The new API is triggered by calling enforce_cache_coherency() before mapping any IOVA to the domain, which globally switches on no-snoop blocking. This allows other implementations that might block no-snoop globally and outside the IOPTE; AMD also documents such a HW capability. Leave AMD out of sync with Intel and have it block no-snoop even for in-kernel users. This can be trivially resolved in a follow-up patch.

Only VFIO needs to call this API because it does not have detailed control over the device to avoid requesting no-snoop behavior at the device level. Other places using domains with real kernel drivers should simply avoid asking their devices to set the no-snoop bit.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/1-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
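A minimal sketch of the calling convention described above, assuming the core exposes the op through an iommu_enforce_cache_coherency() wrapper that returns false when the domain cannot block no-snoop:

    #include <linux/iommu.h>

    /* VFIO-like caller: switch on no-snoop blocking before any IOVA
     * is mapped into the domain. */
    static bool setup_user_domain(struct iommu_domain *domain)
    {
            /* Must happen before iommu_map(); a false return means
             * userspace cannot be given IOMMU_CACHE guarantees. */
            return iommu_enforce_cache_coherency(domain);
    }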

Committed by Lu Baolu
The iommu group changes notifier is not referenced in the tree. Remove it to avoid dead code.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220418005000.897664-12-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Lu Baolu
Multiple devices may be placed in the same IOMMU group because they cannot be isolated from each other. These devices must either be entirely under kernel control or userspace control, never a mixture. This adds DMA ownership management in the iommu core and exposes several interfaces for device drivers and for the userspace device-assignment framework (i.e. VFIO), so that any conflict between user- and kernel-controlled DMA can be detected at the beginning.

The device-driver-oriented interfaces are:

    int iommu_device_use_default_domain(struct device *dev);
    void iommu_device_unuse_default_domain(struct device *dev);

By calling iommu_device_use_default_domain(), the device driver tells the iommu layer that the device's DMA is handled through the kernel DMA APIs. The iommu layer will manage the IOVA and use the default domain for DMA address translation.

The userspace-assignment-framework-oriented interfaces are:

    int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner);
    void iommu_group_release_dma_owner(struct iommu_group *group);
    bool iommu_group_dma_owner_claimed(struct iommu_group *group);

Userspace device assignment must be disallowed if the DMA owner claiming interface returns failure.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20220418005000.897664-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
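A short sketch of both sides of the interface, using the signatures quoted above (the driver names are hypothetical):

    #include <linux/iommu.h>

    /* Kernel-driver side: declare that this device does DMA through
     * the kernel DMA API and the default domain. */
    static int mydrv_probe(struct device *dev)
    {
            int ret = iommu_device_use_default_domain(dev);

            if (ret)
                    return ret;  /* group is claimed for userspace DMA */
            /* ... normal DMA API usage ... */
            return 0;
    }

    /* VFIO-like side: claim exclusive userspace ownership of the
     * group; assignment must be refused if this fails. */
    static int claim_for_user(struct iommu_group *group, void *owner)
    {
            return iommu_group_claim_dma_owner(group, owner);
    }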

Committed by Robin Murphy
VT-d's dmar_platform_optin() actually represents a combination of properties fairly well standardised by Microsoft as "Pre-boot DMA Protection" and "Kernel DMA Protection" [1]. As such, we can provide interested consumers with an abstracted capability rather than driver-specific interfaces that won't scale. We name it for the former aspect since that's what external callers are most likely to be interested in; the latter is for the IOMMU layer to handle itself.

[1] https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-kernel-dma-protection

Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/d6218dff2702472da80db6aec2c9589010684551.1650878781.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
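A sketch of how a consumer might query the abstracted capability; the IOMMU_CAP_PRE_BOOT_PROTECTION name is an assumption based on the description above:

    /* Assumed capability name, queried per device via the helper
     * added in the next entry of this series. */
    if (device_iommu_capable(dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
            pr_info("firmware provided pre-boot DMA protection\n");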

Committed by Robin Murphy
iommu_capable() only really works for systems where all IOMMU instances are completely homogeneous, and all devices are IOMMU-mapped. Implement the new variant which will be able to give a more accurate answer for whichever device the caller is actually interested in, and even more so once all the external users have been converted and we can reliably pass the device pointer through the internal driver interface too.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/8407eb9586677995b7a9fd70d0fd82d85929a9bb.1650878781.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
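The new per-device variant as a sketch; the prototype is the one introduced by this change:

    /* bool device_iommu_capable(struct device *dev, enum iommu_cap cap);
     * asks about the IOMMU instance actually serving @dev rather than
     * a whole bus. */
    if (device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
            /* safe to rely on IOMMU_CACHE for this device's mappings */
            use_cache_coherent_mappings = true;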

- 28 February 2022: 6 commits

Committed by Lu Baolu
Move the domain specific operations out of struct iommu_ops into a new structure that only has domain specific operations. This solves the problem of needing to know if the method vector for a given operation needs to be retrieved from the device or the domain. Logically the domain ops are the ones that make sense for external subsystems and endpoint drivers to use, while device ops, with the sole exception of domain_alloc, are IOMMU API internals.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-10-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
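An abridged sketch of the resulting split; the field list is illustrative, not exhaustive:

    /* Domain-scoped methods move into their own vector, obtained from
     * the domain; device-scoped ops stay in struct iommu_ops. */
    struct iommu_domain_ops {
            int  (*attach_dev)(struct iommu_domain *domain, struct device *dev);
            int  (*map)(struct iommu_domain *domain, unsigned long iova,
                        phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
            size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                            size_t size, struct iommu_iotlb_gather *gather);
            void (*flush_iotlb_all)(struct iommu_domain *domain);
            phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
                                        dma_addr_t iova);
            void (*free)(struct iommu_domain *domain);
    };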

Committed by Lu Baolu
The is_attach_deferred iommu_ops callback is a device op. The domain argument is unnecessary and never used. Remove it to keep the code clean.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Lu Baolu
The common iommu_ops is hooked to both device and domain. When a helper has both device and domain pointers, the way to get the iommu_ops looks messy in the iommu core. This sorts out how the iommu_ops are retrieved: device-related helpers go through the device pointer, while domain-related ones go through the domain pointer.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Lu Baolu
The apply_resv_region callback in iommu_ops was introduced to reserve an IOVA range in the given DMA domain when the IOMMU driver manages the IOVA by itself. Now that all drivers have been converted to use dma-iommu in the core, no driver uses this anymore. Remove it to avoid dead code.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Lu Baolu
The aux-domain related interfaces and iommu_ops are not referenced anywhere in the tree. We've also reached a consensus to redesign them based on the new iommufd framework. Remove them to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Lu Baolu
The guest pasid related uapi interfaces and definitions are not referenced anywhere in the tree. We've also reached a consensus to replace them with a new iommufd design. Remove them to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 20 December 2021: 1 commit

Committed by Matthew Wilcox (Oracle)
page->freelist is for the use of slab. We already have the ability to free a list of pages in the core mm, but it requires the use of a list_head and for the pages to be chained together through page->lru. Switch the Intel IOMMU and IOVA code over to using free_pages_list().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[rm: split from original patch, cosmetic tweaks, fix fq entries]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/2115b560d9a0ce7cd4b948bd51a2b7bde8fdfd59.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 20 August 2021: 1 commit

Committed by Robin Murphy
Previously io-pgtable merely passed the iommu_iotlb_gather pointer through to helpers, but now it has grown its own direct dereference. This turns out to break the build for !IOMMU_API configs where the structure only has a dummy definition. It will probably also crash drivers who don't use the gather mechanism and simply pass in NULL.

Wrap this dereference in a suitable helper which can both be stubbed out for !IOMMU_API and encapsulate a NULL check otherwise.

Fixes: 7a7c5bad ("iommu: Indicate queued flushes via gather data")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/83672ee76f6405c82845a55c148fa836f56fbbc1.1629465282.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 18 August 2021: 4 commits

Committed by Robin Murphy
Eliminate the iommu_get_dma_strict() indirection and pipe the information through the domain type from the beginning. Besides the flow simplification this also has several nice side-effects:

 - Automatically implies strict mode for untrusted devices by virtue of their IOMMU_DOMAIN_DMA override.
 - Ensures that we only end up using flush queues for drivers which are aware of them and can actually benefit.
 - Allows us to handle flush queue init failure by falling back to strict mode instead of leaving it to possibly blow up later.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/47083d69155577f1367877b1594921948c366eb3.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Robin Murphy
Promote the difference between strict and non-strict DMA domains from an internal detail to a distinct domain feature and type, to pave the road for exposing it through the sysfs default domain interface.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/08cd2afaf6b63c58ad49acec3517c9b32c2bb946.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
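As a sketch, the promoted type sits alongside IOMMU_DOMAIN_DMA as its own composition of the internal flags (the exact flag names here are assumed from the upstream result):

    #define IOMMU_DOMAIN_DMA    (__IOMMU_DOMAIN_PAGING |    \
                                 __IOMMU_DOMAIN_DMA_API)
    #define IOMMU_DOMAIN_DMA_FQ (__IOMMU_DOMAIN_PAGING |    \
                                 __IOMMU_DOMAIN_DMA_API |   \
                                 __IOMMU_DOMAIN_DMA_FQ)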

Committed by Robin Murphy
Since iommu_iotlb_gather exists to help drivers optimise flushing for a given unmap request, it is also the logical place to indicate whether the unmap is strict or not, and thus help them further optimise for whether to expect a sync or a flush_all subsequently. As part of that, it also seems fair to make the flush queue code take responsibility for enforcing the really subtle ordering requirement it brings, so that we don't need to worry about forgetting that if new drivers want to add flush queue support, and can consolidate the existing versions.

While we're adding to the kerneldoc, also fill in some info for @freelist which was overlooked previously.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
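The resulting gather structure, sketched from the description above (field layout assumed for this point in history):

    struct iommu_iotlb_gather {
            unsigned long start;
            unsigned long end;      /* inclusive */
            size_t        pgsize;   /* flushing granularity */
            struct page   *freelist; /* pages freed after IOTLB inval */
            bool          queued;   /* non-strict: flush queue will sync */
    };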

Committed by Robin Murphy
Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA support, we can abandon the notion of drivers being responsible for the cookie type, and consolidate all the management into the core code.

Cc: Yong Wu <yong.wu@mediatek.com>
Cc: Chunyan Zhang <chunyan.zhang@unisoc.com>
Cc: Maxime Ripard <mripard@kernel.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/46a2c0e7419c7d1d931762dc7b6a69fa082d199a.1628682048.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 09 August 2021: 1 commit

Committed by Logan Gunthorpe
Convert to ssize_t return code so the return code from __iommu_map() can be returned all the way down through dma_iommu_map_sg().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
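A sketch of the resulting calling pattern; with an ssize_t return, callers can distinguish an error code from the mapped length:

    ssize_t mapped = iommu_map_sg(domain, iova, sg, nents, prot);

    if (mapped < 0)
            return mapped;  /* error propagated from __iommu_map() */
    /* otherwise @mapped bytes were mapped at @iova */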

- 02 August 2021: 2 commits

Committed by Nadav Amit
Refactor iommu_iotlb_gather_add_page() and factor out the logic that detects whether an IOTLB gather range and a new range are disjoint, to be used by the next patch that implements different gathering logic for AMD.

Note that updating gather->pgsize unconditionally does not affect correctness, as the function had (and has) an invariant: gather->pgsize always represents the flushing granularity of its range. Arguably, "size" should never be zero, but let's assume for the sake of discussion that it might. If "size" equals "gather->pgsize", then the assignment in question has no impact. Otherwise, if "size" is non-zero, then iommu_iotlb_sync() would initialize the size and range (see iommu_iotlb_gather_init()), and the invariant is kept. Otherwise, "size" is zero and "gather" already holds a range, so gather->pgsize is non-zero and (gather->pgsize && gather->pgsize != size) is true. Therefore, again, iommu_iotlb_sync() would be called and initialize the size.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-5-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
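The factored-out check, sketched per the description above (helper name assumed from this series): two ranges are disjoint when the new one neither overlaps nor abuts what has been gathered so far.

    static inline bool
    iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
                                   unsigned long iova, size_t size)
    {
            unsigned long start = iova, end = start + size - 1;

            /* an empty gather (end == 0) is never disjoint */
            return gather->end != 0 &&
                   (end + 1 < gather->start || start > gather->end + 1);
    }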

Committed by Robin Murphy
The Mediatek driver is not the only one which might want a basic address-based gathering behaviour, so although it's arguably simple enough to open-code, let's factor it out for the sake of cleanliness. Let's also take this opportunity to document the intent of these helpers for clarity.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-4-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
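The factored-out helper is essentially range widening; a sketch:

    static inline void
    iommu_iotlb_gather_add_range(struct iommu_iotlb_gather *gather,
                                 unsigned long iova, size_t size)
    {
            unsigned long end = iova + size - 1;

            /* grow the gathered range to cover [iova, end] */
            if (gather->start > iova)
                    gather->start = iova;
            if (gather->end < end)
                    gather->end = end;
    }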

- 26 July 2021: 3 commits

Committed by John Garry
We now only ever set strict mode enabled in iommu_set_dma_strict(), so just remove the argument.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1626088340-5838-7-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Isaac J. Manjarres
Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to map a physically contiguous range of pages of the same size. For IOMMU drivers that do not specify a map_pages() callback, the existing logic of mapping memory one page block at a time will be used. (Prototypes for this and the companion unmap_pages() callback are sketched after the next entry.)

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-5-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Isaac J. Manjarres
Add a callback for IOMMU drivers to provide a path for the IOMMU framework to call into an IOMMU driver, which can call into the io-pgtable code, to unmap a virtually contiguous range of pages of the same size. For IOMMU drivers that do not specify an unmap_pages() callback, the existing logic of unmapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/1623850736-389584-3-git-send-email-quic_c_gdjako@quicinc.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
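Prototypes for both this and the previous entry's callbacks, sketched from the merged result (pgsize * pgcount describes the total range; treat the details as illustrative):

    int    (*map_pages)(struct iommu_domain *domain, unsigned long iova,
                        phys_addr_t paddr, size_t pgsize, size_t pgcount,
                        int prot, gfp_t gfp, size_t *mapped);
    size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
                          size_t pgsize, size_t pgcount,
                          struct iommu_iotlb_gather *iotlb_gather);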

- 16 April 2021: 2 commits

Committed by Robin Murphy
Rather than have separate opaque setter functions that are easy to overlook and lead to repetitive boilerplate in drivers, let's pass the relevant initialisation parameters directly to iommu_device_register().

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ab001b87c533b6f4db71eb90db6f888953986c36.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
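A sketch of the consolidated call as a driver would now use it (the driver-side names are hypothetical):

    /* replaces the old per-field setter boilerplate with direct
     * parameters: the ops vector and the hardware device. */
    ret = iommu_device_register(&data->iommu, &mydrv_iommu_ops, dev);
    if (ret)
            return ret;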

Committed by Robin Murphy
It happens that the 3 drivers which first supported being modular are also ones which play games with their pgsize_bitmap, so have non-const iommu_ops where dynamically setting the owner manages to work out OK. However, it's less than ideal to force that upon all drivers which want to be modular - like the new sprd-iommu driver which now has a potential bug in that regard - so let's just statically set the module owner and let ops remain const wherever possible.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/31423b99ff609c3d4b291c701a7a7a810d9ce8dc.1617285386.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

- 07 April 2021: 11 commits

Committed by Christoph Hellwig
Remove the now unused iommu attr infrastructure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210401155256.298656-21-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
Use an explicit set_pgtable_quirks method that just passes the actual quirk bitmask instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-20-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Robin Murphy
Instead, make the global iommu_dma_strict parameter in iommu.c canonical by exporting helpers to get and set it, and use those directly in the drivers. This makes sure that the iommu.strict parameter also works for the AMD and Intel IOMMU drivers on x86. As those default to lazy flushing, a new IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate representing the default if not overridden by an explicit parameter.

[ported on top of the other iommu_attr changes and added a few small missing bits]

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210401155256.298656-19-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
Use an explicit enable_nesting method instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-17-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
The geometry information can be trivially queried from the iommu_domain structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-16-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
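A sketch of the direct query that replaces the attr (field names per struct iommu_domain_geometry):

    struct iommu_domain_geometry *geo = &domain->geometry;

    if (geo->force_aperture) {
            /* only [aperture_start, aperture_end] is valid IOVA space */
            base  = geo->aperture_start;
            limit = geo->aperture_end;
    }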

Committed by Christoph Hellwig
DOMAIN_ATTR_PAGING is never used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-15-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
Instead of a separate call to enable all devices from the list, just enable the liodn once the device is attached to the iommu domain. This also removes the DOMAIN_ATTR_FSL_PAMU_ENABLE iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-11-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
Add a fsl_pamu_configure_l1_stash API that qman_portal can call directly instead of indirecting through the iommu attr API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-8-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
The only thing that fsl_pamu_window_enable does for the current caller is to fill in the prot value in the only dma_window structure, and to propagate a few values from the iommu_domain_geometry structure into the dma_window. Remove the dma_window entirely, hardcode the prot value and otherwise use the iommu_domain_geometry structure instead. Remove the now unused ->domain_window_enable iommu method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-7-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
The only domains allocated force the use of a single window. Remove all the code related to multiple window support, as well as the need for qman_portal to force a single window. Remove the now unused DOMAIN_ATTR_WINDOWS iommu_attr.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-6-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>

Committed by Christoph Hellwig
None of the values returned by this function are ever queried. Also remove the DOMAIN_ATTR_FSL_PAMUV1 enum value that is not otherwise used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Link: https://lore.kernel.org/r/20210401155256.298656-3-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>