- 03 September 2019 (1 commit)
-
-
Committed by YueHaibing
If CONFIG_PCI_ATS is not set, building fails:

    drivers/iommu/arm-smmu-v3.c: In function arm_smmu_ats_supported:
    drivers/iommu/arm-smmu-v3.c:2325:35: error: struct pci_dev has no member named ats_cap; did you mean msi_cap?
      return !pdev->untrusted && pdev->ats_cap;
                                       ^~~~~~~

ats_cap should only be used when CONFIG_PCI_ATS is defined, so use an #ifdef block to guard this.

Fixes: bfff88ec ("iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
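The fix follows the usual Kconfig-guard pattern: a real helper when CONFIG_PCI_ATS is enabled, a stub returning false otherwise. A minimal sketch, with the feature checks of the enabled case abridged:

    #ifdef CONFIG_PCI_ATS
    static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
    {
        struct pci_dev *pdev = to_pci_dev(master->dev);

        /* ... SMMU feature and fwspec checks elided ... */
        return !pdev->untrusted && pdev->ats_cap;
    }
    #else
    static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
    {
        return false;   /* pdev->ats_cap doesn't exist without CONFIG_PCI_ATS */
    }
    #endif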
-
- 23 August 2019 (2 commits)
-
-
Committed by Will Deacon
This reverts commit b5e86196.

Now that ATC invalidation is performed in the correct places and without incurring a locking overhead for non-ATS systems, we can re-enable the corresponding SMMU feature detection.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
When ATS is not in use, we can avoid taking the 'devices_lock' for the domain on the invalidation path by simply caching the number of ATS masters currently attached. The fiddly part is handling a concurrent ->attach() of an ATS-enabled master to a domain that is being invalidated, but we can handle this using an 'smp_mb()' to ensure that our check of the count is ordered after completion of our prior TLB invalidation.

This also makes our ->attach() and ->detach() flows symmetric with respect to ATS interactions.

Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
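A sketch of the scheme, with field and helper names modelled on the driver rather than copied verbatim from the patch:

    struct arm_smmu_domain {
        /* ... */
        atomic_t    nr_ats_masters;     /* ATS-enabled masters attached */
    };

    static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain)
    {
        /*
         * Order our read of the counter after our prior TLB invalidation,
         * pairing with the barrier in a concurrent ->attach().
         */
        smp_mb();
        if (!atomic_read(&smmu_domain->nr_ats_masters))
            return 0;   /* no ATS masters: skip devices_lock entirely */

        /* ... walk the devices list under devices_lock and invalidate ... */
        return 0;
    }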
-
- 22 August 2019 (5 commits)
-
-
Committed by Will Deacon
When invalidating the ATC for a PCIe endpoint using ATS, we must take care to complete invalidation of the main SMMU TLBs beforehand, otherwise the device could immediately repopulate its ATC with stale translations.

Hooking the ATC invalidation into ->unmap() as we currently do does the exact opposite: it ensures that the ATC is invalidated *before* the main TLBs, which is bogus.

Move ATC invalidation into the actual (leaf) invalidation routines so that it is always called after completing main TLB invalidation.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
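The resulting ordering, sketched with simplified helpers (command construction and SSID handling are elided):

    static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
                                       size_t granule, bool leaf,
                                       struct arm_smmu_domain *smmu_domain)
    {
        /* 1. Invalidate the main SMMU TLBs for the range and issue a
         *    CMD_SYNC so the invalidation has completed (elided). */

        /*
         * 2. Only now invalidate the ATCs. In the other order, the endpoint
         *    could refill its ATC from stale SMMU TLB entries in between.
         */
        arm_smmu_atc_inv_domain(smmu_domain, 0 /* ssid */, iova, size);
    }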
-
Committed by Will Deacon
To prevent any potential issues arising from speculative Address Translation Requests from an ATS-enabled PCIe endpoint, rework our ATS enabling/disabling logic so that we enable ATS at the SMMU before we enable it at the endpoint, and disable things in the opposite order.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
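In outline, with the STE manipulation elided and names simplified (a sketch of the ordering, not the literal patch):

    /* Enable: SMMU side first (STE.EATS), endpoint second. */
    static void arm_smmu_enable_ats(struct arm_smmu_master *master)
    {
        int stu = __ffs(master->smmu->pgsize_bitmap);   /* smallest translation unit */

        /* Install the STE with EATS set before the device may translate (elided). */
        pci_enable_ats(to_pci_dev(master->dev), stu);
    }

    /* Disable: endpoint first, then the SMMU side, mirroring the above. */
    static void arm_smmu_disable_ats(struct arm_smmu_master *master)
    {
        pci_disable_ats(to_pci_dev(master->dev));
        /* Clear EATS in the STE and invalidate the ATC (elided). */
    }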
-
Committed by Will Deacon
Calling arm_smmu_tlb_inv_range() with a size of zero, perhaps due to an empty 'iommu_iotlb_gather' structure, should be a NOP. Elide the CMD_SYNC when there is no invalidation to be performed.

Signed-off-by: Will Deacon <will@kernel.org>
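The guard amounts to an early return before any command is built; a sketch:

    static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
                                       size_t granule, bool leaf,
                                       struct arm_smmu_domain *smmu_domain)
    {
        /* Nothing to invalidate: don't issue commands or a CMD_SYNC. */
        if (!size)
            return;

        /* ... build and issue the TLBI commands, then CMD_SYNC ... */
    }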
-
Committed by Will Deacon
There's really no need for this to be a bitfield, particularly as we don't have bitwise addressing on arm64.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
Detecting the ATS capability of the SMMU at probe time introduces a spinlock into the ->unmap() fast path, even when ATS is not actually in use. Furthermore, the ATC invalidation that exists is broken, as it occurs before invalidation of the main SMMU TLB, which leaves a window where the ATC can be repopulated with stale entries.

Given that ATS is both a new feature and a specialist sport, disable it for now whilst we fix it properly in subsequent patches. Since PRI requires ATS, disable that too.

Cc: <stable@vger.kernel.org>
Fixes: 9ce27afc ("iommu/arm-smmu-v3: Add support for PCI ATS")
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 21 August 2019 (1 commit)
-
-
Committed by Will Deacon
It turns out that we've always relied on some subtle ordering guarantees when inserting commands into the SMMUv3 command queue. With the recent changes to elide locking when possible, these guarantees become more subtle and even more important.

Add a comment documenting the barrier semantics of command insertion so that we don't have to derive the behaviour from scratch each time it comes up on the list.

Signed-off-by: Will Deacon <will@kernel.org>
-
- 08 August 2019 (2 commits)
-
-
Committed by Will Deacon
Update the iommu_iotlb_gather structure passed to ->tlb_add_page() and use this information to defer all TLB invalidation until ->iotlb_sync(). This drastically reduces contention on the command queue, since we can insert our commands in batches rather than one-by-one.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
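The deferral pattern looks roughly like this (a sketch; the range-merging details inside the gather helpers are glossed over):

    /* ->tlb_add_page(): no command issued, just accumulate the range. */
    static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
                                             unsigned long iova, size_t granule,
                                             void *cookie)
    {
        struct arm_smmu_domain *smmu_domain = cookie;

        iommu_iotlb_gather_add_page(&smmu_domain->domain, gather, iova, granule);
    }

    /* ->iotlb_sync(): one ranged invalidation (plus one sync) for the batch. */
    static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
                                    struct iommu_iotlb_gather *gather)
    {
        arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
                               gather->pgsize, true, to_smmu_domain(domain));
    }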
-
Committed by Will Deacon
The SMMU command queue is a bottleneck in large systems, thanks to the spin_lock which serialises accesses from all CPUs to the single queue supported by the hardware.

Attempt to improve this situation by moving to a new algorithm for inserting commands into the queue, which is lock-free on the fast-path.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
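The core idea of the fast path: CPUs claim queue space by advancing the shared producer index with cmpxchg() instead of under a spinlock. A heavily simplified sketch; full-queue detection, wrap/owner bits and CMD_SYNC completion are all elided, and the names are illustrative:

    static u32 queue_reserve(atomic_t *prod, u32 n)
    {
        u32 old = atomic_read(prod);

        for (;;) {
            u32 cur = atomic_cmpxchg(prod, old, old + n);
            if (cur == old)
                return old;     /* slots [old, old + n) are now ours */
            old = cur;          /* prod moved under us: retry */
        }
    }

Once a CPU owns its slots it can write the commands without further synchronisation, which is what removes the serialising lock from the common case.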
-
- 06 August 2019 (1 commit)
-
-
Committed by Anders Roxell
Now that -Wimplicit-fallthrough is passed to GCC by default, the following warning shows up:

    ../drivers/iommu/arm-smmu-v3.c: In function ‘arm_smmu_write_strtab_ent’:
    ../drivers/iommu/arm-smmu-v3.c:1189:7: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (disable_bypass)
          ^
    ../drivers/iommu/arm-smmu-v3.c:1191:3: note: here
       default:
       ^~~~~~~

Rework so that the compiler doesn't warn about fall-through. Make it clearer by calling 'BUG_ON()' when disable_bypass is set, and always 'break;'.

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
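The reworked switch takes roughly this shape (a sketch, abridged from the surrounding STE-handling code):

    switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
    case STRTAB_STE_0_CFG_BYPASS:
        break;
    case STRTAB_STE_0_CFG_S1_TRANS:
    case STRTAB_STE_0_CFG_S2_TRANS:
        ste_live = true;
        break;
    case STRTAB_STE_0_CFG_ABORT:
        BUG_ON(!disable_bypass);    /* abort only makes sense with bypass disabled */
        break;                      /* every case now breaks: no fall-through */
    default:
        BUG();                      /* STE corruption */
    }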
-
- 30 July 2019 (11 commits)
-
-
Committed by Suzuki K Poulose
Add a helper to match the firmware node handle of a device and provide wrappers for the {bus/class/driver}_find_device() APIs to avoid proliferation of duplicate custom match functions.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-usb@vger.kernel.org
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Joe Perches <joe@perches.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Link: https://lore.kernel.org/r/20190723221838.12024-4-suzuki.poulose@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
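The pattern being added is small; a sketch consistent with the description (exact upstream naming may differ slightly):

    /* Generic match function: does this device carry the given fwnode? */
    static int device_match_fwnode(struct device *dev, const void *fwnode)
    {
        return dev_fwnode(dev) == fwnode;
    }

    /* Wrapper so callers don't each write their own match function. */
    static inline struct device *
    bus_find_device_by_fwnode(struct bus_type *bus,
                              const struct fwnode_handle *fwnode)
    {
        return bus_find_device(bus, NULL, fwnode, device_match_fwnode);
    }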
-
Committed by Will Deacon
In preparation for rewriting the command queue insertion code to use a new algorithm, rework many of our queue macro accessors and manipulation functions so that they operate on the arm_smmu_ll_queue structure where possible. This will allow us to call these helpers on local variables without having to construct a full-blown arm_smmu_queue on the stack.

No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
In preparation for rewriting the command queue insertion code to use a new algorithm, introduce a new arm_smmu_ll_queue structure which contains only the information necessary to perform queue arithmetic for a queue and will later be extended so that we can perform complex atomic manipulation on some of the fields.

No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
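At this stage the structure only carries what queue arithmetic needs (a sketch; later patches bolt atomics onto it):

    struct arm_smmu_ll_queue {
        u32 prod;           /* producer index, including wrap/overflow bits */
        u32 cons;           /* consumer index, including wrap bit */
        u32 max_n_shift;    /* log2 of the number of queue entries */
    };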
-
Committed by Will Deacon
The Q_OVF macro doesn't need to access the arm_smmu_queue structure, so drop the unused macro argument.

No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
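This works because the SMMUv3 queue overflow flag sits at a fixed bit position, so the macro only needs the pointer value; a sketch:

    #define Q_OVERFLOW_FLAG (1U << 31)
    #define Q_OVF(p)        ((p) & Q_OVERFLOW_FLAG)  /* the 'q' argument was never used */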
-
Committed by Will Deacon
In preparation for rewriting the command queue insertion code to use a new algorithm, separate the software and hardware views of the prod and cons indexes so that manipulating the software state doesn't automatically update the hardware state at the same time.

No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
With all the pieces in place, we can finally propagate the iommu_iotlb_gather structure from the call to unmap() down to the IOMMU drivers' implementation of ->tlb_add_page(). Currently everybody ignores it, but the machinery is now there to defer invalidation.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather pointer as an argument, and update the callers as appropriate.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
The ->tlb_sync() callback is no longer used, so it can be removed.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
The ->tlb_add_flush() callback in the io-pgtable API now looks a bit silly:

  - It takes a size and a granule, which are always the same
  - It takes a 'bool leaf', which is always true
  - It only ever flushes a single page

With that in mind, replace it with an optional ->tlb_add_page() callback that drops the useless parameters.

Signed-off-by: Will Deacon <will@kernel.org>
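After the change, the flush callbacks look roughly like this (a sketch of the struct as described; include/linux/io-pgtable.h is authoritative):

    struct iommu_flush_ops {
        void (*tlb_flush_all)(void *cookie);
        void (*tlb_flush_walk)(unsigned long iova, size_t size, size_t granule,
                               void *cookie);
        void (*tlb_flush_leaf)(unsigned long iova, size_t size, size_t granule,
                               void *cookie);
        /* Optional, replaces ->tlb_add_flush(): one page, no redundant args */
        void (*tlb_add_page)(struct iommu_iotlb_gather *gather,
                             unsigned long iova, size_t granule, void *cookie);
    };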
-
Committed by Will Deacon
Hook up ->tlb_flush_walk() and ->tlb_flush_leaf() in drivers using the io-pgtable API so that we can start making use of them in the page-table code. For now, they can just wrap the implementations of ->tlb_add_flush and ->tlb_sync pending future optimisation in each driver.

Signed-off-by: Will Deacon <will@kernel.org>
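For now the new hooks can simply wrap the existing primitives, for example (a sketch using illustrative driver-internal names):

    static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
                                      size_t granule, void *cookie)
    {
        /* Same behaviour as before: ranged invalidation plus a sync. */
        arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
        arm_smmu_tlb_sync(cookie);
    }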
-
Committed by Will Deacon
To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure.

All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now.

Signed-off-by: Will Deacon <will@kernel.org>
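The prototype change in struct iommu_ops, as described (a sketch):

    struct iommu_ops {
        /* ... */
        size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                        size_t size, struct iommu_iotlb_gather *iotlb_gather);
        void (*iotlb_sync)(struct iommu_domain *domain,
                           struct iommu_iotlb_gather *iotlb_gather);
        /* ... */
    };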
-
- 24 July 2019 (1 commit)
-
-
Committed by Will Deacon
In preparation for TLB flush gathering in the IOMMU API, rename the iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which better describes its purpose and avoids the potential for confusion between different levels of the API.

    $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g'

Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 July 2019 (1 commit)
-
-
Committed by Jean-Philippe Brucker
We make the invalid assumption in arm_smmu_detach_dev() that the ATC is clear after calling pci_disable_ats(). For one thing, only enabling the PCIe ATS capability constitutes an implicit invalidation event, so the comment was wrong. More importantly, the ATS capability isn't necessarily disabled by pci_disable_ats() in a PF, if the associated VFs have ATS enabled.

Explicitly invalidate all ATC entries in arm_smmu_detach_dev(). The endpoint cannot form new ATC entries because STE.EATS is clear.

Fixes: 9ce27afc ("iommu/arm-smmu-v3: Add support for PCI ATS")
Reported-by: Manoj Kumar <Manoj.Kumar3@arm.com>
Reported-by: Robin Murphy <Robin.Murphy@arm.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
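In outline (a sketch; command construction and SSID handling are elided, and the helper name and arguments are modelled on the driver rather than copied from the patch):

    static void arm_smmu_detach_dev(struct arm_smmu_master *master)
    {
        /* ... remove the master from the domain's list and clear the STE,
         * so EATS is 0 and the endpoint cannot form new ATC entries ... */

        /*
         * Don't rely on pci_disable_ats() having wiped the ATC: a PF keeps
         * ATS enabled while any of its VFs still use it. Wipe the whole
         * ATC for this master explicitly (iova/size of 0 meaning "all").
         */
        arm_smmu_atc_inv_master(master, 0, 0);
    }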
-
- 02 July 2019 (1 commit)
-
-
Committed by Will Deacon
When compiling a kernel without support for CMA, CONFIG_CMA_ALIGNMENT is not defined, which results in the following build failure:

    In file included from ./include/linux/list.h:9:0,
                     from ./include/linux/kobject.h:19,
                     from ./include/linux/of.h:17,
                     from ./include/linux/irqdomain.h:35,
                     from ./include/linux/acpi.h:13,
                     from drivers/iommu/arm-smmu-v3.c:12:
    drivers/iommu/arm-smmu-v3.c: In function ‘arm_smmu_device_hw_probe’:
    drivers/iommu/arm-smmu-v3.c:194:40: error: ‘CONFIG_CMA_ALIGNMENT’ undeclared (first use in this function)
     #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)

Fix the breakage by capping the maximum queue size based on MAX_ORDER when CMA is not enabled.

Reported-by: Zhangshaokun <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
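The fix is a small pair of definitions; a sketch matching the description:

    /* Cap queue sizes at what we can sensibly allocate contiguously. */
    #ifdef CONFIG_CMA_ALIGNMENT
    #define Q_MAX_SZ_SHIFT  (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
    #else
    #define Q_MAX_SZ_SHIFT  (PAGE_SHIFT + MAX_ORDER - 1)
    #endif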
-
- 25 June 2019 (1 commit)
-
-
Committed by Will Deacon
IO_PGTABLE_QUIRK_NO_DMA is a bit of a misnomer, since it's really just an indication of whether or not the page-table walker for the IOMMU is coherent with the CPU caches. Since cache coherency is more than just a quirk, replace the flag with its own field in the io_pgtable_cfg structure.

Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
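In sketch form, the quirk bit becomes a first-class field:

    struct io_pgtable_cfg {
        unsigned long quirks;        /* IO_PGTABLE_QUIRK_NO_DMA is gone */
        /* ... */
        bool          coherent_walk; /* is the table walker cache-coherent? */
    };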
-
- 24 June 2019 (1 commit)
-
-
Committed by Suzuki K Poulose
The driver_find_device() helper accepts a match function pointer to filter the devices for lookup, similar to bus/class_find_device(). However, there is a minor difference in the prototype of the match parameter for driver_find_device() compared with the now unified version accepted by {bus/class}_find_device(): it doesn't accept a "const" qualifier for the data argument. This prevents us from reusing the generic match functions for driver_find_device().

For this reason, change the prototype of driver_find_device() to bring the "match" parameter in line with {bus/class}_find_device(), and adjust its callers to use the const qualifier. Also, we can now promote the "data" parameter to const as we pass it down as a const parameter to the match functions.

Cc: Corey Minyard <minyard@acm.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Oberparleiter <oberpar@linux.ibm.com>
Cc: Sebastian Ott <sebott@linux.ibm.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Nehal Shah <nehal-bakulchandra.shah@amd.com>
Cc: Shyam Sundar S K <shyam-sundar.s-k@amd.com>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
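After the change, the prototype lines up with its siblings (a sketch of the signature described above):

    struct device *driver_find_device(struct device_driver *drv,
                                      struct device *start, const void *data,
                                      int (*match)(struct device *dev,
                                                   const void *data));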
-
- 19 June 2019 (1 commit)
-
-
Committed by Will Deacon
We've been artificially limiting the size of our queues to 4k so that we don't end up allocating huge amounts of physically-contiguous memory at probe time. However, 4k is only enough for 256 commands in the command queue, so instead let's try to allocate the largest queue that the SMMU supports, retrying with a smaller size if the allocation fails.

The caveat here is that we have to limit our upper bound based on CONFIG_CMA_ALIGNMENT to ensure that our queue allocations remain naturally aligned, which is required by the SMMU architecture.

Signed-off-by: Will Deacon <will.deacon@arm.com>
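The retry loop is roughly as follows (a sketch; error handling and logging are elided):

    /* Start from the SMMU's advertised maximum and back off on failure. */
    do {
        qsz = ((1 << q->max_n_shift) * dwords) << 3;
        q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
                                      GFP_KERNEL);
        if (q->base || qsz < PAGE_SIZE)
            break;

        q->max_n_shift--;   /* halve the queue and retry */
    } while (1);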
-
- 23 April 2019 (7 commits)
-
-
Committed by Will Deacon
Disabling the SMMU when probing from within a kdump kernel so that all incoming transactions are terminated can prevent the core of the crashed kernel from being transferred off the machine if all I/O devices are behind the SMMU.

Instead, continue to probe the SMMU after it is disabled so that we can reinitialise it entirely and re-attach the DMA masters as they are reset. Since the kdump kernel may not have drivers for all of the active DMA masters, we suppress fault reporting to avoid spamming the console and swamping the IRQ threads.

Reported-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jean-Philippe Brucker
The ARM architecture has a "Top Byte Ignore" (TBI) option that makes the MMU mask out bits [63:56] of an address, allowing a userspace application to store data in its pointers. This option is incompatible with PCI ATS.

If TBI is enabled in the SMMU and userspace triggers DMA transactions on tagged pointers, the endpoint might create ATC entries for addresses that include a tag. Software would then have to send ATC invalidation packets for each of the 255 possible aliases of an address, or just wipe the whole address space. This is not a viable option, so disable TBI.

The impact of this change is unclear, since there are very few users of tagged pointers, much less SVA. But the requirement introduced by this patch doesn't seem excessive: a userspace application using both tagged pointers and SVA should now sanitize addresses (clear the tag) before using them for device DMA.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jean-Philippe Brucker
PCIe devices can implement their own TLB, named Address Translation Cache (ATC). Enable Address Translation Service (ATS) for devices that support it and send them invalidation requests whenever we invalidate the IOTLBs.

ATC invalidation is allowed to take up to 90 seconds, according to the PCIe spec, so it is possible to get an SMMU command queue timeout during normal operations. However we expect implementations to complete invalidation in reasonable time.

We only enable ATS for "trusted" devices, and currently rely on the pci_dev->untrusted bit. For ATS we have to trust that:

  (a) The device doesn't issue "translated" memory requests for addresses that weren't returned by the SMMU in a Translation Completion. In particular, if we give control of a device or device partition to a VM or userspace, software cannot program the device to access arbitrary "translated" addresses.

  (b) The device follows permissions granted by the SMMU in a Translation Completion. If the device requested read+write permission and only got read, then it doesn't write.

  (c) The device doesn't send Translated transactions for an address that was invalidated by an ATC invalidation.

Note that the PCIe specification explicitly requires all of these, so we can assume that implementations will cleanly shield ATCs from software.

All ATS translated requests still go through the SMMU, to walk the stream table and check that the device is actually allowed to send translated requests.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jean-Philippe Brucker
When removing a mapping from a domain, we need to send an invalidation to all devices that might have stored it in their Address Translation Cache (ATC). In addition, when updating the context descriptor of a live domain, we'll need to send invalidations for all devices attached to it.

Maintain a list of devices in each domain, protected by a spinlock. It is updated every time we attach or detach devices to and from domains. It needs to be a spinlock because we'll invalidate ATC entries from within hardirq-safe contexts, but it may be possible to relax the read side with RCU later.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
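A sketch of the bookkeeping, with field names modelled on the driver:

    struct arm_smmu_domain {
        /* ... */
        struct list_head devices;      /* masters attached to this domain */
        spinlock_t       devices_lock; /* hardirq-safe: ATC inval runs in IRQ context */
    };

    /* On ->attach_dev(): */
    spin_lock_irqsave(&smmu_domain->devices_lock, flags);
    list_add(&master->domain_head, &smmu_domain->devices);
    spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);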
-
Committed by Jean-Philippe Brucker
As we're going to track domain-master links more closely for ATS and CD invalidation, add a pointer to the attached domain in struct arm_smmu_master. As a result, arm_smmu_strtab_ent is redundant and can be removed.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jean-Philippe Brucker
Simplify the attach/detach code a bit by keeping a pointer to the stream IDs in the master structure. Although not completely obvious here, it does make the subsequent support for ATS, PRI and PASID a bit simpler.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jean-Philippe Brucker
The arm_smmu_master_data structure already represents more than just the firmware data associated with a master, and will be used extensively to represent a device's state when implementing more SMMU features. Rename the structure to arm_smmu_master.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 February 2019 (1 commit)
-
-
Committed by Rob Herring
Move io-pgtable.h to include/linux/ and export alloc_io_pgtable_ops and free_io_pgtable_ops. This enables drivers outside drivers/iommu/ to use the page table library. Specifically, some ARM Mali GPUs use the ARM page table formats.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-mediatek@lists.infradead.org
Cc: linux-arm-msm@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
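Exporting the entry points is a two-line change on top of moving the header; a sketch:

    /* drivers/iommu/io-pgtable.c, header now living at <linux/io-pgtable.h> */
    #include <linux/io-pgtable.h>

    EXPORT_SYMBOL_GPL(alloc_io_pgtable_ops);
    EXPORT_SYMBOL_GPL(free_io_pgtable_ops);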
-
- 17 December 2018 (1 commit)
-
-
Committed by Joerg Roedel
Use the new helpers dev_iommu_fwspec_get()/set() to access the dev->iommu_fwspec pointer. This makes it easier to move that pointer later into another struct.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
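The helpers are thin accessors at this point, which is exactly what makes the later move possible; a sketch of their likely shape:

    static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
    {
        return dev->iommu_fwspec;
    }

    static inline void dev_iommu_fwspec_set(struct device *dev,
                                            struct iommu_fwspec *fwspec)
    {
        dev->iommu_fwspec = fwspec;
    }

Callers never touch dev->iommu_fwspec directly, so relocating the pointer later only requires updating these two functions.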
-
- 10 December 2018 (2 commits)
-
-
Committed by Will Deacon
After removing an entry from a queue (e.g. reading an event in arm_smmu_evtq_thread()) it is necessary to advance the MMIO consumer pointer to free the queue slot back to the SMMU. A memory barrier is required here so that all reads targeting the queue entry have completed before the consumer pointer is updated.

The implementation of queue_inc_cons() relies on a writel() to complete the previous reads, but this is incorrect because writel() is only guaranteed to complete prior writes. This patch replaces the call to writel() with an mb(); writel_relaxed() sequence, which gives us the read->write ordering we require.

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
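The fixed helper, in sketch form (the index arithmetic mirrors the driver's Q_* macros):

    static void queue_inc_cons(struct arm_smmu_queue *q)
    {
        u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;

        q->cons = Q_OVF(q, q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);

        /*
         * Ensure that all CPU accesses (reads and writes) to the queue
         * are complete before we update the cons pointer.
         */
        mb();
        writel_relaxed(q->cons, q->cons_reg);
    }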
-
Committed by Zhen Lei
The GITS_TRANSLATER MMIO doorbell register in the ITS hardware is architected to be 4 bytes in size, yet on hi1620 and earlier, Hisilicon have allocated the adjacent 4 bytes to carry some IMPDEF sideband information, which results in an 8-byte MSI payload being delivered when signalling an interrupt:

    MSIAddr:
             |----4bytes----|----4bytes----|
             |    MSIData   |    IMPDEF    |

This poses no problem for the ITS hardware because the adjacent 4 bytes are reserved in the memory map. However, when delivering MSIs to memory, as we do in the SMMUv3 driver for signalling the completion of a SYNC command, the extended payload will corrupt the 4 bytes adjacent to the "sync_count" member in struct arm_smmu_device. Fortunately, the current layout allocates these bytes to padding, but this is fragile and we should make this explicit.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[will: Rewrote commit message and comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
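One way to make the padding explicit (a sketch of the idea; the exact construct upstream may differ):

    struct arm_smmu_device {
        /* ... */
        /*
         * Hisilicon hi1620 and earlier write an 8-byte MSI payload, so
         * explicitly reserve the 4 bytes following sync_count rather than
         * relying on whatever field happens to be laid out next to it.
         */
        union {
            u32 sync_count;
            u64 padding;
        };
    };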
-