- 11 Aug, 2017 1 commit
-
-
Submitted by Artem Savkov
Commit c54451a5 ("iommu/arm-smmu: Fix the error path in arm_smmu_add_device") removed the fwspec assignment in the legacy_binding path as redundant, which is wrong. It needs to be updated after the fwspec initialisation in arm_smmu_register_legacy_master(), as it is dereferenced later. Without this there is a NULL-pointer dereference panic during boot on some hosts.

Signed-off-by: Artem Savkov <asavkov@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
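As a rough illustration of the failure mode, here is a minimal user-space C sketch of the stale-pointer pattern described above, using invented stand-in types rather than the kernel's: a local fwspec pointer captured before the legacy registration helper (re)initialises it is still NULL afterwards unless it is re-read.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in types with invented names; not the kernel's definitions. */
struct fwspec { int num_ids; };
struct dev_like { struct fwspec *fwspec; };

/* Models arm_smmu_register_legacy_master(): allocating and initialising
 * the device's fwspec is a side effect of the call. */
static int register_legacy_master(struct dev_like *dev)
{
	dev->fwspec = calloc(1, sizeof(*dev->fwspec));
	return dev->fwspec ? 0 : -1;
}

static int add_device(struct dev_like *dev)
{
	struct fwspec *fwspec = dev->fwspec;    /* still NULL at this point */

	if (register_legacy_master(dev))
		return -1;

	fwspec = dev->fwspec;   /* the fix: refresh the stale local copy */
	printf("num_ids = %d\n", fwspec->num_ids);  /* would crash otherwise */
	return 0;
}

int main(void)
{
	struct dev_like dev = { 0 };
	return add_device(&dev);
}
```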
-
- 20 Jul, 2017 2 commits
-
-
Submitted by Vivek Gautam
fwspec->iommu_priv is available only after the arm_smmu_master_cfg instance has been allocated; we shouldn't free it before that. It is also logical to free the master cfg itself without checking for fwspec.

Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
[will: remove redundant assignment to fwspec]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
Commit 523d7423 ("iommu/arm-smmu: Remove io-pgtable spinlock") removed the locking used to serialise map/unmap calls into the io-pgtable code from the ARM SMMU driver. This is good for performance, but opens us up to a nasty race with TLB syncs, because the TLB sync register is shared within a context bank (or even globally for stage-2 on SMMUv1).

There are two cases to consider:

1. A CPU can be spinning on the completion of a TLB sync, take an interrupt which issues a subsequent TLB sync, and then report a timeout on return from the interrupt.

2. A CPU can be spinning on the completion of a TLB sync, but other CPUs can continuously issue additional TLB syncs in such a way that the backoff logic reports a timeout.

Rather than fix this by spinning for completion of prior TLB syncs before issuing a new one (which may suffer from fairness issues on large systems), reintroduce locking around TLB sync operations in the ARM SMMU driver instead.

Fixes: 523d7423 ("iommu/arm-smmu: Remove io-pgtable spinlock")
Cc: Robin Murphy <robin.murphy@arm.com>
Reported-by: Ray Jui <ray.jui@broadcom.com>
Tested-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
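For illustration only, the following user-space sketch models the serialisation being reintroduced: a shared "sync pending" flag stands in for the per-context sync register, with a spinlock ensuring only one sync is in flight at a time. The names and the pthread-based setup are invented for the example; the real driver takes its own spinlocks around writes to the hardware TLB sync registers.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Models the sync/status "register" shared by all CPUs in a context bank. */
static atomic_int sync_pending;
static pthread_spinlock_t sync_lock;    /* the reintroduced serialisation */

static void issue_tlb_sync(void)
{
	/* Without this lock, another CPU could restart the shared sync
	 * (set sync_pending again) while we are still polling, making our
	 * wait appear to time out even though nothing is actually stuck. */
	pthread_spin_lock(&sync_lock);
	atomic_store(&sync_pending, 1);         /* kick off the sync        */
	atomic_store(&sync_pending, 0);         /* pretend HW completes it  */
	while (atomic_load(&sync_pending))
		;                               /* poll for completion      */
	pthread_spin_unlock(&sync_lock);
}

static void *worker(void *arg)
{
	for (int i = 0; i < 100000; i++)
		issue_tlb_sync();
	return arg;
}

int main(void)
{
	pthread_t t[4];

	pthread_spin_init(&sync_lock, PTHREAD_PROCESS_PRIVATE);
	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	puts("all syncs serialised without spurious timeouts");
	return 0;
}
```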
-
- 24 Jun, 2017 3 commits
-
-
Submitted by Robin Murphy
With the io-pgtable code now robust against (valid) races, we no longer need to serialise all operations with a lock. This might make broken callers who issue concurrent operations on overlapping addresses go even more wrong than before, but hey, they already had little hope of useful or deterministic results.

We do, however, still have to keep a lock around to serialise the ATS1* translation ops, as parallel iova_to_phys() calls could otherwise lead to unpredictable hardware behaviour.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
Once we remove the serialising spinlock, a potential race opens up for non-coherent IOMMUs, whereby a caller of .map() can be sure that cache maintenance has been performed on their new PTE, but will have no guarantee that such maintenance for table entries above it has actually completed (e.g. if another CPU took an interrupt immediately after writing the table entry, but before initiating the DMA sync).

Handling this race safely will add some potentially non-trivial overhead to installing a table entry, which we would much rather avoid on coherent systems, where it is unnecessary and where we are striving to minimise latency by removing the locking in the first place.

To that end, let's introduce an explicit notion of cache-coherency to io-pgtable, so that we can avoid penalising IOMMUs which know enough to know when they are coherent.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
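A minimal sketch of the idea, with invented flag and structure names (the real io-pgtable code has its own quirk/configuration fields): the table walker only issues explicit cache maintenance for a newly written descriptor when the IOMMU has not been declared cache-coherent.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in config flag: illustrative only, not the kernel's io_pgtable_cfg. */
#define PGTBL_COHERENT_WALK  (1u << 0)

struct pgtbl_cfg { unsigned int flags; };

/* Models the DMA sync that pushes a descriptor out to memory the IOMMU
 * can see; just a stub for the purposes of the example. */
static void sync_descriptor(uint64_t *ptep)
{
	printf("cache clean for descriptor at %p\n", (void *)ptep);
}

static void write_descriptor(struct pgtbl_cfg *cfg, uint64_t *ptep, uint64_t pte)
{
	*ptep = pte;
	/* Coherent walkers snoop the CPU caches, so the costly maintenance
	 * (and the ordering headaches it brings once the lock is gone) can
	 * be skipped entirely. */
	if (!(cfg->flags & PGTBL_COHERENT_WALK))
		sync_descriptor(ptep);
}

int main(void)
{
	uint64_t table[4] = { 0 };
	struct pgtbl_cfg coherent = { .flags = PGTBL_COHERENT_WALK };
	struct pgtbl_cfg noncoherent = { .flags = 0 };

	write_descriptor(&coherent, &table[0], 0x1003);     /* no sync */
	write_descriptor(&noncoherent, &table[1], 0x2003);  /* syncs   */
	return 0;
}
```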
-
Submitted by Robin Murphy
Revision C of IORT now allows us to identify ARM MMU-401 and the Cavium ThunderX implementation. Wire them up so that we can probe these models once firmware starts using the new codes in place of generic ones, and so that the appropriate features and quirks get enabled when we do.

For the sake of backports, and to mitigate synchronisation problems with the ACPICA headers, we'll carry a backup copy of the new definitions locally for the short term to make life simpler.

CC: stable@vger.kernel.org # 4.10
Acked-by: Robert Richter <rrichter@cavium.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 Apr, 2017 1 commit
-
-
Submitted by Sunil Goutham
For software-initiated address translation, when the domain type is IOMMU_DOMAIN_IDENTITY (i.e. the SMMU is bypassed), mimic the hardware behaviour and return the IOVA itself as the translated address. This patch is an extension to Will Deacon's patchset "Implement SMMU passthrough using the default domain".

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
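The behaviour described amounts to an early return in the iova_to_phys path. The sketch below uses stand-in types to show the shape of it; the real driver of course operates on its own domain structures and page tables.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;
typedef uint64_t phys_addr_t;

/* Stand-in domain type: illustrative only. */
enum domain_type { DOMAIN_IDENTITY, DOMAIN_PAGING };
struct domain { enum domain_type type; };

static phys_addr_t pgtable_walk(struct domain *d, dma_addr_t iova)
{
	(void)d;        /* placeholder for the real io-pgtable lookup */
	return 0x80000000ull + (iova & 0xfff);
}

static phys_addr_t iova_to_phys(struct domain *d, dma_addr_t iova)
{
	/* Bypass domains perform no translation, so the "physical"
	 * address is simply the IOVA that went in. */
	if (d->type == DOMAIN_IDENTITY)
		return iova;

	return pgtable_walk(d, iova);
}

int main(void)
{
	struct domain id = { DOMAIN_IDENTITY }, pg = { DOMAIN_PAGING };

	printf("identity: 0x%llx\n", (unsigned long long)iova_to_phys(&id, 0x1234));
	printf("paging:   0x%llx\n", (unsigned long long)iova_to_phys(&pg, 0x1234));
	return 0;
}
```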
-
- 25 Apr, 2017 1 commit
-
-
Submitted by Peng Fan
The message printed by the code is "SMR mask 0x%x out of range for SMMU", so we need to use the mask, not the sid.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 20 Apr, 2017 1 commit
-
-
Submitted by Robin Murphy
Now that the appropriate ordering is enforced via probe-deferral of masters in core code, rip it all out and bask in the simplicity.

Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[Sricharan: Rebased on top of ACPI IORT SMMU series]
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 Apr, 2017 9 commits
-
-
Submitted by Will Deacon
In preparation for allowing the default domain type to be overridden, this patch adds support for IOMMU_DOMAIN_IDENTITY domains to the ARM SMMU driver.

An identity domain is created by placing the corresponding S2CR registers into "bypass" mode, which allows transactions to flow through the SMMU without any translation.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
The ARM SMMU drivers provide a DOMAIN_ATTR_NESTING domain attribute, which allows callers of the IOMMU API to request that the page table for a domain is installed at stage-2, if supported by the hardware. Since setting this attribute only makes sense for UNMANAGED domains, this patch returns -ENODEV if the domain_{get,set}_attr operations are called on other domain types.

Signed-off-by: Will Deacon <will.deacon@arm.com>
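A hedged sketch of the guard being described, using stand-in enums and structures rather than the real IOMMU API types: the attribute handlers reject any domain that is not of the unmanaged type before looking at the attribute itself.

```c
#include <errno.h>
#include <stdio.h>

/* Stand-in types with invented names; not the kernel's IOMMU API. */
enum domain_type { DOMAIN_UNMANAGED, DOMAIN_DMA, DOMAIN_IDENTITY };
enum domain_attr { ATTR_NESTING };

struct domain {
	enum domain_type type;
	int nesting;
};

static int domain_get_attr(struct domain *d, enum domain_attr attr, void *data)
{
	/* Nesting only makes sense for unmanaged (e.g. VFIO-owned) domains,
	 * so refuse the request for every other domain type. */
	if (d->type != DOMAIN_UNMANAGED)
		return -ENODEV;

	switch (attr) {
	case ATTR_NESTING:
		*(int *)data = d->nesting;
		return 0;
	default:
		return -ENODEV;
	}
}

int main(void)
{
	struct domain dma = { DOMAIN_DMA, 0 };
	struct domain unmanaged = { DOMAIN_UNMANAGED, 1 };
	int val = 0, ret;

	printf("dma domain:       %d\n", domain_get_attr(&dma, ATTR_NESTING, &val));
	ret = domain_get_attr(&unmanaged, ATTR_NESTING, &val);
	printf("unmanaged domain: %d (nesting=%d)\n", ret, val);
	return 0;
}
```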
-
Submitted by Robin Murphy
The current SMR masking support, using a 2-cell iommu-specifier, is primarily intended to handle individual masters with large and/or complex Stream ID assignments; it quickly gets a bit clunky in other SMR use-cases where we just want to consistently mask out the same part of every Stream ID (e.g. for MMU-500 configurations where the appended TBU number gets in the way unnecessarily). Let's add a new property to allow a single global mask value, to better fit the latter situation.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
On relatively slow development platforms and software models, the inefficiency of our TLB sync loop tends not to show up - for instance, on a Juno r1 board I typically see that the TLBI has completed of its own accord by the time we get to the sync, such that the latter finishes instantly. However, on larger systems doing real I/O, it's less realistic for the TLBs to go idle immediately, and at that point falling into the 1MHz polling loop turns out to throw away performance drastically.

Let's strike a balance by polling more than once between pauses, so that we have a much better chance of catching normal operations completing before committing to the fixed delay, but also backing off exponentially, since if a sync really hasn't completed within one or two "reasonable time" periods, it becomes increasingly unlikely that it ever will.

Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
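The polling strategy translates into a nested loop: a burst of cheap polls, then a pause whose length doubles each round. Below is a self-contained sketch with made-up constants and a simulated status check; the real driver polls an SMMU status register and uses its own spin and timeout values.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define SPIN_ATTEMPTS  1000     /* cheap polls per round (made-up value) */
#define MAX_DELAY_US   1000     /* cap on the exponential backoff        */

static int polls_remaining = 2500;      /* simulated hardware: the sync
                                         * completes after this many reads */
static bool sync_complete(void)
{
	return --polls_remaining <= 0;
}

static void delay_us(unsigned int us)
{
	struct timespec ts = { 0, (long)us * 1000 };
	nanosleep(&ts, NULL);
}

static int wait_for_sync(void)
{
	unsigned int delay, spin;

	for (delay = 1; delay < MAX_DELAY_US; delay *= 2) {
		/* Poll a burst of times before conceding to a fixed pause,
		 * so fast completions never pay the delay at all. */
		for (spin = 0; spin < SPIN_ATTEMPTS; spin++)
			if (sync_complete())
				return 0;
		delay_us(delay);        /* back off: 1us, 2us, 4us, ... */
	}
	return -1;      /* timed out: the sync is probably never finishing */
}

int main(void)
{
	printf("sync %s\n", wait_for_sync() ? "timed out" : "completed");
	return 0;
}
```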
-
Submitted by Robin Murphy
TLB synchronisation typically involves the SMMU blocking all incoming transactions until the TLBs report completion of all outstanding operations. In the common SMMUv2 configuration of a single distributed SMMU serving multiple peripherals, that means that a single unmap request has the potential to bring the hammer down on the entire system if synchronised globally. Since stage 1 contexts, and stage 2 contexts under SMMUv2, offer local sync operations, let's make use of those wherever we can in the hope of minimising global disruption.

To that end, rather than add any more branches to the already unwieldy monolithic TLB maintenance ops, break them up into smaller, neater functions which we can then mix and match as appropriate.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
ARM_SMMU_CB() is calculated relative to ARM_SMMU_CB_BASE(), but the latter is never of use on its own, and what we end up with is the same ARM_SMMU_CB_BASE() + ARM_SMMU_CB() expression being duplicated at every callsite. Folding the two together gives us a self-contained context bank accessor which is much more pleasant to work with.

Secondly, we might as well simplify CB_BASE itself at the same time: we use the address space size for its own sake precisely once, at probe time, and every other usage is to dynamically calculate CB_BASE over and over again. Let's flip things around so that we just maintain the CB_BASE address directly.

Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
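The refactoring boils down to replacing two macros that every caller had to combine with one accessor that returns a context bank's base address directly, computed from a value cached at probe time. A sketch with invented names and a made-up register layout:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in device state: illustrative only. */
struct smmu_device {
	uintptr_t base;         /* start of the register space             */
	uintptr_t cb_base;      /* precomputed start of the context banks  */
	unsigned int pgshift;   /* log2 of the per-bank register stride    */
};

/* Before: callers had to write CB_BASE(smmu) + CB(smmu, n) themselves.
 * After: one self-contained accessor. */
static uintptr_t smmu_cb(struct smmu_device *smmu, unsigned int n)
{
	return smmu->cb_base + ((uintptr_t)n << smmu->pgshift);
}

int main(void)
{
	/* At "probe" time, compute cb_base once instead of re-deriving it
	 * from the address-space size at every call site. */
	struct smmu_device smmu = {
		.base = 0x40000000,
		.pgshift = 12,
	};
	smmu.cb_base = smmu.base + (1u << 20) / 2;   /* made-up layout */

	printf("context bank 3 lives at 0x%lx\n",
	       (unsigned long)smmu_cb(&smmu, 3));
	return 0;
}
```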
-
Submitted by Robin Murphy
Calculating ASIDs/VMIDs dynamically from arm_smmu_cfg was a neat trick, but the global uniqueness workaround makes it somewhat more awkward, and means we end up having to pass extra state around in certain cases just to keep a handle on the offset. We already have 16 bits going spare in arm_smmu_cfg; let's just precalculate an ASID/VMID, plop it in there, and tidy up the users accordingly. We'd also need something like this anyway if we ever get near to thinking about SVM, so it's no bad thing.

Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Sunil Goutham
The 16-bit ASID must be enabled before initialising TTBR0/1, otherwise only the lower 8 ASID bits will be considered. Hence, move the configuration of the TTBCR register ahead of TTBR0/1 when initialising a context bank.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
[will: rewrote comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robert Richter
Firmware is responsible for properly enabling SMMU workarounds. Print a message for better diagnostics when Cavium erratum 27704 is detected.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 Mar, 2017 2 commits
-
-
Submitted by Robin Murphy
Now that we're applying the IOMMU API reserved regions to our IOVA domains, we shouldn't need to privately special-case PCI windows, or indeed anything else which isn't specific to our iommu-dma layer. However, since those aren't IOMMU-specific either, rather than start duplicating code into IOMMU drivers, let's transform the existing function into an iommu_get_resv_regions() helper that they can share.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
The introduction of reserved regions has left a couple of rough edges which we could do with sorting out sooner rather than later. Since we are not yet addressing the potential dynamic aspect of software-managed reservations and presenting them at arbitrary fixed addresses, it is incongruous that we end up displaying hardware vs. software-managed MSI regions to userspace differently, especially since ARM-based systems may actually require one or the other, or even potentially both at once (which iommu-dma currently has no hope of dealing with at all). Let's resolve the former user-visible inconsistency ASAP before the ABI has been baked into a kernel release, in a way that also lays the groundwork for the latter shortcoming to be addressed by follow-up patches.

For clarity, rename the software-managed type to IOMMU_RESV_SW_MSI, use IOMMU_RESV_MSI to describe the hardware type, and document everything a little bit. Since the x86 MSI remapping hardware falls squarely under this meaning of IOMMU_RESV_MSI, apply that type to their regions as well, so that we tell the same story to userspace across all platforms.

Secondly, as the various region types require quite different handling, and it really makes little sense to ever try combining them, convert the bitfield-esque #defines to a plain enum in the process before anyone gets the wrong impression.

Fixes: d30ddcaa ("iommu: Add a new type field in iommu_resv_region")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
CC: Alex Williamson <alex.williamson@redhat.com>
CC: David Woodhouse <dwmw2@infradead.org>
CC: kvm@vger.kernel.org
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
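The distinction is easiest to see as a type definition. The sketch below is a simplified stand-in, not the kernel's actual enum (which has further members): a plain enum replaces bitfield-style defines, and hardware-managed and software-managed MSI windows become two distinct values that can still be presented uniformly to userspace.

```c
#include <stdio.h>

/* Simplified stand-in for a reserved-region type, as a plain enum rather
 * than bitfield-style #defines. */
enum resv_type {
	RESV_DIRECT,    /* must be identity-mapped for the device to work  */
	RESV_RESERVED,  /* must simply never be used for IOVA allocation   */
	RESV_MSI,       /* hardware MSI doorbell, remapping done by HW     */
	RESV_SW_MSI,    /* software-managed MSI window (e.g. ARM GIC ITS)  */
};

static const char *resv_name(enum resv_type t)
{
	switch (t) {
	case RESV_DIRECT:   return "direct";
	case RESV_RESERVED: return "reserved";
	case RESV_MSI:      return "msi (hardware)";
	case RESV_SW_MSI:   return "msi (software-managed)";
	}
	return "unknown";
}

int main(void)
{
	/* Userspace sees one consistent story: both flavours are "MSI",
	 * but the kernel can still tell them apart internally. */
	printf("%s / %s\n", resv_name(RESV_MSI), resv_name(RESV_SW_MSI));
	return 0;
}
```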
-
- 10 Feb, 2017 2 commits
-
-
Submitted by Joerg Roedel
And also move its remaining functionality to iommu_device_register() and 'struct iommu_device'.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: devicetree@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Joerg Roedel
Also add the SMMU devices to sysfs.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 Jan, 2017 2 commits
-
-
Submitted by Tomasz Nowicki
The goal of the erratum #27704 workaround was to make sure that ASIDs and VMIDs are unique across all SMMU instances on affected Cavium systems. Currently, the workaround code partitions ASIDs and VMIDs by increasing the global cavium_smmu_context_count, which in turn becomes the base ASID and VMID value for the given SMMU instance upon context bank initialisation. For systems with multiple SMMU instances this approach implies the risk of crossing the 8-bit ASID boundary, as on a 1-socket CN88xx capable of 4 SMMUv2 instances with 128 context banks each:

SMMU_0 (0-127 ASID RANGE)
SMMU_1 (128-255 ASID RANGE)
SMMU_2 (256-383 ASID RANGE)  <--- crossing 8-bit ASID
SMMU_3 (384-511 ASID RANGE)  <--- crossing 8-bit ASID

Since we currently use 8-bit ASIDs (SMMU_CBn_TCR2.AS = 0), we effectively misconfigure the ASID[15:8] bits of the SMMU_CBn_TTBRm register for SMMU_2/3. Moreover, we still assume non-zero ASID[15:8] bits upon context invalidation. In the end, except for devices behind SMMU_0/1, all devices behind the other SMMUs will fail on guest power off/on, since we try to invalidate the TLB with a 16-bit ASID while we actually have an 8-bit, zero-padded 16-bit entry.

This patch adds 16-bit ASID support for stage-1 AArch64 contexts, so that we use ASIDs consistently for all SMMU instances.

Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tirumalesh Chalamarla <Tirumalesh.Chalamarla@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
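The overflow is straightforward arithmetic. The sketch below reproduces the partitioning scheme described above - a global counter handing each SMMU instance a base ASID equal to the number of context banks already claimed - and flags the instances whose ASIDs no longer fit in 8 bits; the constants mirror the CN88xx example from the log.

```c
#include <stdio.h>

#define NUM_SMMUS      4
#define CTX_BANKS      128   /* context banks per SMMU on CN88xx (per the log) */
#define ASID_8BIT_MAX  255

int main(void)
{
	int context_count = 0;  /* models the global cavium_smmu_context_count */

	for (int i = 0; i < NUM_SMMUS; i++) {
		int base = context_count;           /* base ASID for this SMMU  */
		int last = base + CTX_BANKS - 1;    /* highest ASID it can use  */

		printf("SMMU_%d: ASIDs %3d-%3d%s\n", i, base, last,
		       last > ASID_8BIT_MAX ? "  <-- exceeds 8-bit ASID" : "");
		context_count += CTX_BANKS;
	}
	return 0;
}
```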
-
Submitted by Aleksey Makarov
It is time we had a real 16-bit Stream ID user, and that is the ThunderX. Its IO topology uses a 1:1 map from Requester ID to Stream ID for each root complex, which allows use of the full 16-bit Stream ID. Firmware assigns bus IDs that are greater than 128 (0x80) to some buses under the PEM (external PCIe interface). Eventually the SMMU drops devices on those buses because their Stream ID is out of range:

pci 0006:90:00.0: stream ID 0x9000 out of range for SMMU (0x7fff)

To fix this issue, enable the Extended Stream ID optional feature when available.

Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
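The out-of-range value in the quoted log line follows from the 1:1 Requester ID mapping: assuming the usual bus[15:8]/device[7:3]/function[2:0] layout, any bus number of 0x80 or above yields a Stream ID above the 15-bit limit that applies while Extended Stream IDs are disabled. A small sketch of that arithmetic (the layout assumption and names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Requester ID / Stream ID under a 1:1 mapping: bus[15:8] dev[7:3] fn[2:0]. */
static uint16_t rid_to_sid(uint8_t bus, uint8_t dev, uint8_t fn)
{
	return (uint16_t)((bus << 8) | ((dev & 0x1f) << 3) | (fn & 0x7));
}

int main(void)
{
	const uint16_t sid_limit_15bit = 0x7fff;  /* without Extended Stream ID */
	uint16_t sid = rid_to_sid(0x90, 0, 0);    /* the 0006:90:00.0 example   */

	printf("stream ID 0x%04x %s\n", sid,
	       sid > sid_limit_15bit ? "out of range for a 15-bit SMMU"
	                             : "fits");
	return 0;
}
```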
-
- 23 Jan, 2017 2 commits
-
-
Submitted by Eric Auger
IOMMU_CAP_INTR_REMAP has been advertised in arm-smmu(-v3), although on ARM this property is not attached to the IOMMU but rather is implemented in the MSI controller (GICv3 ITS). Now that vfio_iommu_type1 checks the MSI remapping capability at the MSI controller level, let's correct this.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Eric Auger
The get() callback populates the list with the MSI IOVA reserved window. At the moment an arbitrary MSI IOVA window is set at 0x8000000, with a size of 1MB. This will allow this information to be reported in the iommu-group sysfs.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
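Shape-wise, the callback appends a single descriptor for the fixed window to a caller-supplied list. The sketch below models that with a plain linked list and stand-in types; only the 0x8000000 base and 1MB length come from the message above, everything else is invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MSI_IOVA_BASE    0x8000000ull   /* base from the commit message */
#define MSI_IOVA_LENGTH  0x100000ull    /* 1MB */

/* Stand-in reserved-region descriptor and list node. */
struct resv_region {
	uint64_t start, length;
	const char *type;
	struct resv_region *next;
};

static void get_resv_regions(struct resv_region **head)
{
	struct resv_region *r = calloc(1, sizeof(*r));

	if (!r)
		return;
	r->start = MSI_IOVA_BASE;
	r->length = MSI_IOVA_LENGTH;
	r->type = "sw-msi";
	r->next = *head;
	*head = r;
}

int main(void)
{
	struct resv_region *head = NULL;

	get_resv_regions(&head);
	for (struct resv_region *r = head; r; r = r->next)
		printf("0x%llx-0x%llx (%s)\n",
		       (unsigned long long)r->start,
		       (unsigned long long)(r->start + r->length - 1),
		       r->type);
	free(head);
	return 0;
}
```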
-
- 19 Jan, 2017 1 commit
-
-
Submitted by Sricharan R
Currently the driver sets all device transaction privileges to UNPRIVILEGED, but there are cases where IOMMU masters want to isolate privileged supervisor and unprivileged user transactions. So don't override the privilege setting to unprivileged; instead set it to "default as incoming" and let it be controlled by the pagetable settings.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 29 Nov, 2016 5 commits
-
-
Submitted by Lorenzo Pieralisi
In ACPI-based systems, in order to create and initialise platform devices for ARM SMMU components, the IORT kernel implementation requires a set of static functions that the IORT kernel layer can use to configure platform devices for the ARM SMMU components.

Add static configuration functions to the IORT kernel layer for the ARM SMMU components, so that the ARM SMMU driver can initialise its respective platform device by relying on the IORT kernel infrastructure and by adding a corresponding ACPI device early probe section entry.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Lorenzo Pieralisi
The current ARM SMMU probe functions intermingle HW and DT probing in the initialisation functions used to detect and programme the ARM SMMU driver features. In order to allow probing the ARM SMMU with firmware other than DT, this patch splits the ARM SMMU init functions into DT and HW specific portions, so that other FW interfaces (i.e. ACPI) can reuse the HW probing functions and skip the DT portion accordingly.

This patch implements no functional change, only code reshuffling.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Lorenzo Pieralisi
The current ARM SMMU driver relies on the struct device.of_node pointer for device look-up and iommu_ops retrieval. In preparation for enabling ACPI probing, convert the driver to use the struct device.fwnode member for device and iommu_ops look-up, so that the driver infrastructure can also be used on systems that do not associate an of_node pointer with a struct device (e.g. ACPI), making device look-up and iommu_ops retrieval firmware agnostic.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Nipun Gupta
The SMTNMB_TLBEN bit in the Auxiliary Configuration Register (ACR) provides an option to enable TLB updates for transactions that bypass the SMMU due to no match in the stream match table. This reduces the latency of subsequent transactions with the same stream ID which bypass the SMMU, and provides a significant performance benefit for certain networking workloads. With this change, a substantial performance improvement of ~9% is observed with the DPDK l3fwd application (http://dpdk.org/doc/guides/sample_app_ug/l3_forward.html) on NXP's LS2088a platform.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Bhumika Goyal
Check for iommu_gather_ops structures that are only stored in the tlb field of an io_pgtable_cfg structure. The tlb field is of type const struct iommu_gather_ops *, so iommu_gather_ops structures having this property can be declared as const.

Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 Nov, 2016 1 commit
-
-
Submitted by Robin Murphy
When arm_smmu_device_group() finds an existing group due to Stream ID aliasing, it should take an additional reference on that group. Otherwise, the caller of iommu_group_get_for_dev() will inadvertently remove the reference taken by iommu_group_add_device(), and the group will be freed prematurely if any device is removed.

Reported-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 08 Nov, 2016 3 commits
-
-
Submitted by Robin Murphy
When we iterate a master's config entries, what we generally care about is the entry's stream map index rather than the entry index itself, so it's nice to have the iterator automatically assign the former from the latter. Unfortunately, booting with KASAN reveals the oversight that using a simple comma operator results in the entry index being dereferenced before being checked for validity, so we always access one element past the end of the fwspec array. Flip things around so that the check always happens before the index may be dereferenced.

Fixes: adfec2e7 ("iommu/arm-smmu: Convert to iommu_fwspec")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
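The bug class is easy to reproduce with any for-each macro whose step expression both advances the index and reads the element. The sketch below is a generic illustration with invented names, not the driver's actual macro: the broken variant uses the comma operator to read ids[i] before the bounds check, touching one element past the end on the final step, while the fixed variant checks the index first.

```c
#include <stdio.h>

struct fwspec_like { int num_ids; int ids[4]; };

/* Broken: the step assigns idx = s->ids[++i] before the (i < num_ids)
 * condition runs, so the out-of-bounds element ids[num_ids] is read. */
#define for_each_id_broken(s, i, idx) \
	for ((i) = 0, (idx) = (s)->ids[0]; \
	     (i) < (s)->num_ids; \
	     (idx) = (s)->ids[++(i)])

/* Fixed: advance and check the index first; only then read the element. */
#define for_each_id_fixed(s, i, idx) \
	for ((i) = 0; \
	     (i) < (s)->num_ids && ((idx) = (s)->ids[(i)], 1); \
	     (i)++)

int main(void)
{
	struct fwspec_like s = { .num_ids = 4, .ids = { 10, 11, 12, 13 } };
	int i, idx;

	for_each_id_fixed(&s, i, idx)
		printf("entry %d -> id %d\n", i, idx);
	/* Running the broken variant under ASan/KASAN-style checking would
	 * flag a read of s.ids[4] on the final iteration. */
	return 0;
}
```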
-
Submitted by Robin Murphy
We seem to have forgotten to check that iommu_fwspecs actually belong to us before going ahead and dereferencing their private data. Oops.

Fixes: 021bb842 ("iommu/arm-smmu: Wire up generic configuration support")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
The 32-bit ARM DMA configuration code predates the IOMMU core's default domain functionality, and instead relies on allocating its own domains and attaching any devices using the generic IOMMU binding to them. Unfortunately, it does this relatively early on in the creation of the device, before we've seen our add_device callback, which leads us to attempt to operate on a half-configured master. To avoid a crash, check for this situation on attach, but refuse to play, as there's nothing we can do.

This at least allows VFIO to keep working for people who update their 32-bit DTs to the generic binding, albeit with a few (innocuous) warnings from the DMA layer on boot.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 16 Sep, 2016 4 commits
-
-
Submitted by Robin Murphy
For non-aperture-based IOMMUs, the domain geometry seems to have become the de-facto way of indicating the input address space size. That is quite a useful thing from the users' perspective, so let's do the same.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
With everything else now in place, fill in an of_xlate callback and the appropriate registration to plumb into the generic configuration machinery, and watch everything just work.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
In the final step of preparation for full generic configuration support, swap our fixed-size master_cfg for the generic iommu_fwspec. For the legacy DT bindings, the driver simply gets to act as its own 'firmware'. Farewell, arbitrary MAX_MASTER_STREAMIDS!

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy
Stream Match Registers are one of the more awkward parts of the SMMUv2 architecture; there are typically never enough to assign one to each stream ID in the system, and configuring them such that a single ID matches multiple entries is catastrophically bad - at best, every transaction raises a global fault; at worst, they go *somewhere*.

To address the former issue, we can mask ID bits such that a single register may be used to match multiple IDs belonging to the same device or group, but doing so also heightens the risk of the latter problem (which can be nasty to debug). Tackle both problems at once by replacing the simple bitmap allocator with something much cleverer. Now that we have convenient in-memory representations of the stream mapping table, it becomes straightforward to properly validate new SMR entries against the current state, opening the door to arbitrary masking and SMR sharing.

Another feature which falls out of this is that, with IDs shared by separate devices being automatically accounted for, simply associating a group pointer with the S2CR offers appropriate group allocation almost for free, so hook that up in the process.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
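The core of the validation described above is a masked-match overlap test: two SMR entries conflict if some stream ID could match both, i.e. if their IDs agree on every bit that neither entry masks out. Below is a small, self-contained sketch of that check with invented names; the real allocator additionally handles exact-duplicate sharing, S2CR group pointers and the hardware valid bit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct smr { uint16_t id; uint16_t mask; bool valid; };

/* Two SMRs can both match some stream ID iff their IDs agree on all the
 * bits that are significant (i.e. unmasked) in *both* entries. */
static bool smrs_overlap(const struct smr *a, const struct smr *b)
{
	uint16_t significant = (uint16_t)~(a->mask | b->mask);

	return ((a->id ^ b->id) & significant) == 0;
}

/* Reject a new entry if it would ambiguously match alongside an existing,
 * different entry (the "catastrophically bad" case described above). */
static bool smr_insert_ok(const struct smr *table, int n, const struct smr *new)
{
	for (int i = 0; i < n; i++) {
		if (!table[i].valid)
			continue;
		if (table[i].id == new->id && table[i].mask == new->mask)
			return true;    /* identical entry: share it */
		if (smrs_overlap(&table[i], new))
			return false;   /* ambiguous match: refuse   */
	}
	return true;
}

int main(void)
{
	struct smr table[] = { { 0x0400, 0x00ff, true } };   /* IDs 0x0400-0x04ff */
	struct smr ok  = { 0x0500, 0x0000, true };           /* disjoint          */
	struct smr bad = { 0x0410, 0x0000, true };           /* inside 0x04xx     */

	printf("0x0500: %s\n", smr_insert_ok(table, 1, &ok)  ? "ok" : "conflict");
	printf("0x0410: %s\n", smr_insert_ok(table, 1, &bad) ? "ok" : "conflict");
	return 0;
}
```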
-