- 12 October 2019, 6 commits
-
-
Submitted by Andrew Murray

commit 5e731073bc0a4a53a213412dbd33982d829560f1 upstream

Simplify the code by removing an unnecessary wrapper function. This was left behind by commit 2f657add ("iommu/arm-smmu-v3: Specialise CMD_SYNC handling").

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Zhen Lei

commit 9662b99a19abccb0b7bfc91abb3fec1447c35bf0 upstream

Now that io-pgtable knows how to dodge strict TLB maintenance, all that's left to do is bridge the gap between the IOMMU core requesting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE for default domains, and showing the appropriate IO_PGTABLE_QUIRK_NON_STRICT flag to alloc_io_pgtable_ops().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[rm: convert to domain attribute, tweak commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Zhen Lei

commit 07fdef34d2be6811f00c6f9e4e2a1483cf86696c upstream

.flush_iotlb_all is currently stubbed to arm_smmu_iotlb_sync() since the only time it would ever need to actually do anything is for callers doing their own explicit batching, e.g.:

	iommu_unmap_fast(domain, ...);
	iommu_unmap_fast(domain, ...);
	iommu_iotlb_flush_all(domain, ...);

where since io-pgtable still issues the TLBI commands implicitly in the unmap instead of implementing .iotlb_range_add, the "flush" only needs to ensure completion of those already-in-flight invalidations. However, we're about to start using it in anger with flush queues, so let's get a proper implementation wired up.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
[rm: document why it wasn't a bug]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
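To make the "proper implementation" concrete: a full flush essentially issues one invalidate-everything command followed by a sync. A minimal sketch, assuming hypothetical helper names rather than the driver's actual API:

	/* Sketch only: the my_ and MY_ names are illustrative, not the
	 * real driver API. */
	static void my_smmu_flush_iotlb_all(struct iommu_domain *domain)
	{
		struct my_smmu_domain *smmu_domain = to_my_smmu_domain(domain);
		struct my_smmu_cmd cmd = {
			.opcode = MY_CMD_TLBI_ALL,   /* invalidate all TLB entries   */
			.asid   = smmu_domain->asid, /* scoped to this domain's ASID */
		};

		my_smmu_cmdq_issue_cmd(smmu_domain->smmu, &cmd);
		my_smmu_cmdq_issue_sync(smmu_domain->smmu); /* wait for completion */
	}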
-
Submitted by Zhen Lei

commit 901510ee32f7190902f6fe4affb463e5d86a804c upstream

Putting adjacent CMD_SYNCs into the command queue is nonsensical, but can happen when multiple CPUs are inserting commands. Rather than leave the poor old hardware to chew through these operations, we can instead drop the subsequent SYNCs and poll for completion of the first. This has been shown to improve IO performance under pressure, where the number of SYNC operations reduces by about a third:

	CMD_SYNCs reduced:  19542181
	CMD_SYNCs total:    58098548 (including reduced)
	CMDs total:        116197099 (TLBI:SYNC about 1:1)

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
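The elision itself is simple once insertion is serialised by the queue lock. A minimal sketch of the shape of the change, with hypothetical names:

	/* Sketch: skip inserting a CMD_SYNC when the previous command in
	 * the queue was already one; the in-flight sync covers us, so we
	 * only need to wait for it. */
	static int my_cmdq_issue_sync(struct my_smmu_device *smmu)
	{
		u64 cmd[MY_CMDQ_ENT_DWORDS];
		unsigned long flags;

		spin_lock_irqsave(&smmu->cmdq.lock, flags);
		if (smmu->prev_cmd_opcode != MY_CMD_SYNC) {
			my_build_sync_cmd(cmd);
			my_cmdq_insert_cmd(smmu, cmd); /* records prev_cmd_opcode */
		}
		spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

		return my_queue_poll_until_drained(smmu, &smmu->cmdq.q);
	}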
-
Submitted by Zhen Lei

commit 0f02477d16980938a84aba8688a4e3a303306116 upstream

The break condition of:

	(int)(VAL - sync_idx) >= 0

in the __arm_smmu_sync_poll_msi() polling loop requires that sync_idx be increased monotonically according to the sequence of the CMDs in the cmdq. However, since msidata is populated using atomic_inc_return_relaxed() before taking the command-queue spinlock, the following scenario can occur:

	CPU0                    CPU1
	msidata=0
	                        msidata=1
	                        insert cmd1
	insert cmd0
	                        smmu execute cmd1
	smmu execute cmd0
	                        poll timeout, because msidata=1 is
	                        overridden by cmd0, which means VAL=0,
	                        sync_idx=1.

This is not a functional problem, since the caller will eventually either time out or exit due to another CMD_SYNC, however it's clearly not what the code is supposed to be doing. Fix it by incrementing the sequence count with the command-queue lock held, allowing us to drop the atomic operations altogether.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[will: dropped the specialised cmd building routine for now]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
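The shape of the fix, sketched with hypothetical names: the sequence number is generated while the command-queue lock is held, so sequence order and queue order can no longer diverge.

	/* Sketch: sync_nr is now a plain counter guarded by cmdq.lock,
	 * replacing the earlier atomic_inc_return_relaxed() outside it. */
	spin_lock_irqsave(&smmu->cmdq.lock, flags);
	ent.sync.msidata = ++smmu->sync_nr;  /* monotonic in queue order */
	my_build_cmd(cmd, &ent);
	my_cmdq_insert_cmd(smmu, cmd);
	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

	ret = my_sync_poll_msi(smmu, ent.sync.msidata);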
-
Submitted by John Garry

commit 657135f3108122556c3cf60a78c6f0e76aeb60e6 upstream

Fix some comment typos that were spotted.

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
- 15 June 2019, 1 commit
-
-
Submitted by Will Deacon

[ Upstream commit 3f54c447df34ff9efac7809a4a80fd3208efc619 ]

Disabling the SMMU when probing from within a kdump kernel so that all incoming transactions are terminated can prevent the core of the crashed kernel from being transferred off the machine if all I/O devices are behind the SMMU.

Instead, continue to probe the SMMU after it is disabled so that we can reinitialise it entirely and re-attach the DMA masters as they are reset. Since the kdump kernel may not have drivers for all of the active DMA masters, we suppress fault reporting to avoid spamming the console and swamping the IRQ threads.

Reported-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 13 February 2019, 2 commits
-
-
Submitted by Will Deacon

[ Upstream commit a868e8530441286342f90c1fd9c5f24de3aa2880 ]

After removing an entry from a queue (e.g. reading an event in arm_smmu_evtq_thread()) it is necessary to advance the MMIO consumer pointer to free the queue slot back to the SMMU. A memory barrier is required here so that all reads targeting the queue entry have completed before the consumer pointer is updated.

The implementation of queue_inc_cons() relies on a writel() to complete the previous reads, but this is incorrect because writel() is only guaranteed to complete prior writes. This patch replaces the call to writel() with an mb(); writel_relaxed() sequence, which gives us the read->write ordering we require.

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
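The resulting pattern is roughly the following (a sketch; the queue helper name is hypothetical, while mb() and writel_relaxed() are the real kernel primitives named above):

	static void my_queue_inc_cons(struct my_queue *q)
	{
		q->cons = my_queue_next_index(q, q->cons); /* hypothetical helper */

		/* writel() only orders prior writes; force the reads of the
		 * just-consumed entry to complete before publishing cons. */
		mb();
		writel_relaxed(q->cons, q->cons_reg);
	}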
-
Submitted by Zhen Lei

[ Upstream commit 84a9a75774961612d0c7dd34a1777e8f98a65abd ]

The GITS_TRANSLATER MMIO doorbell register in the ITS hardware is architected to be 4 bytes in size, yet on hi1620 and earlier, HiSilicon have allocated the adjacent 4 bytes to carry some IMPDEF sideband information, which results in an 8-byte MSI payload being delivered when signalling an interrupt:

	MSIAddr:
		|----4bytes----|----4bytes----|
		|    MSIData   |    IMPDEF    |

This poses no problem for the ITS hardware because the adjacent 4 bytes are reserved in the memory map. However, when delivering MSIs to memory, as we do in the SMMUv3 driver for signalling the completion of a SYNC command, the extended payload will corrupt the 4 bytes adjacent to the "sync_count" member in struct arm_smmu_device. Fortunately, the current layout allocates these bytes to padding, but this is fragile and we should make it explicit.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[will: Rewrote commit message and comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
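Making the padding explicit can be as simple as the following sketch (field placement illustrative), where the union's second member exists solely to absorb the IMPDEF half of the 8-byte payload:

	struct my_smmu_device {
		/* ... */
		union {
			u32 sync_count; /* memory target of the CMD_SYNC MSI    */
			u64 padding;    /* absorbs the IMPDEF 4 bytes on hi1620 */
		};
		/* ... */
	};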
-
- 10 January 2019, 1 commit
-
-
Submitted by Robin Murphy

commit 3cd508a8c1379427afb5e16c2e0a7c986d907853 upstream

When we insert the sync sequence number into the CMD_SYNC.MSIData field, we do so in CPU-native byte order, before writing out the whole command as explicitly little-endian dwords. Thus on big-endian systems, the SMMU will receive and write back a byteswapped version of sync_nr, which would be perfect if it were targeting a similarly-little-endian ITS, but since it's actually writing back to memory being polled by the CPUs, they're going to end up seeing the wrong thing.

Since the SMMU doesn't care what the MSIData actually contains, the minimal-overhead solution is to simply add an extra byteswap initially, such that it then writes back the big-endian format directly.

Cc: <stable@vger.kernel.org>
Fixes: 37de98f8 ("iommu/arm-smmu-v3: Use CMD_SYNC completion MSI")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
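The extra byteswap reduces to a single conversion where the sequence number is generated, presumably along these lines (a sketch; variable names illustrative). On little-endian kernels cpu_to_le32() is a no-op, so only big-endian systems pay the (trivial) swap:

	/* Sketch: pre-swap so that the SMMU's little-endian write-back of
	 * MSIData lands in memory in CPU-native byte order again. */
	ent.sync.msidata = cpu_to_le32(++smmu->sync_nr);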
-
- 08 August 2018, 1 commit
-
-
Submitted by Christoph Hellwig

All iommu drivers use the default_iommu_map_sg implementation, and there is no good reason to ever override it. Just expose it as iommu_map_sg directly and remove the indirection, especially in our post-Spectre world where indirect calls are horribly expensive.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 July 2018, 2 commits
-
-
Submitted by Will Deacon

If we find that the SMMU is enabled during probe, we reset it by re-initialising its registers and either enabling translation or placing it into bypass based on the disable_bypass command-line option.

In the case of a kdump kernel, the SMMU won't have been shut down cleanly by the previous kernel and there may be concurrent DMA through the SMMU. Rather than reset the SMMU to bypass, which would likely lead to rampant data corruption, we can instead configure the SMMU to abort all incoming transactions when we find that it is enabled from within a kdump kernel.

Reported-by: Sameer Goel <sgoel@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Zhen Lei

Stream bypass is a potential security hole since a malicious device can be hotplugged in without matching any drivers, yet be granted the ability to access all of physical memory. Now that we attach devices to domains by default, we can toggle the disable_bypass default to "on", preventing DMA from unknown devices.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 July 2018, 1 commit
-
-
Submitted by Miao Zhong

When the PRI queue overflows, the driver should update OVACKFLG in the PRIQ consumer register, otherwise subsequent PRI requests will not be processed.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Miao Zhong <zhongmiao@hisilicon.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 July 2018, 1 commit
-
-
Submitted by Rob Herring

Now that we use the driver core to stop deferred probe for missing drivers, IOMMU_OF_DECLARE can be removed.

This is slightly less optimal than having a list of built-in drivers, in that we'll now defer probe twice before giving up. This shouldn't have a significant impact on boot times, as past discussions about deferred probe have given no evidence of deferred probe having a substantial impact.

Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Kukjin Kim <kgene@kernel.org>
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Heiko Stuebner <heiko@sntech.de>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org
Cc: linux-rockchip@lists.infradead.org
Cc: devicetree@vger.kernel.org
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 March 2018, 8 commits
-
-
Submitted by Robin Murphy

Stage 1 input addresses are effectively 64-bit in SMMUv3 anyway, so really all that's involved is letting io-pgtable know the appropriate upper bound for T0SZ.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

Implement SMMUv3.1 support for 52-bit physical addresses. Since a 52-bit OAS implies 64KB translation granule support, permitting level 1 block entries there is simple, and the rest is just extending address fields.

Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

As with registers and tables, use GENMASK and the bitfield accessors consistently for queue fields, to save some lines and ease maintenance a little. This now leaves everything in a nice state where all named field definitions expect to be used with bitfield accessors (although since single-bit fields can still be used directly we leave some of those uses as-is to avoid unnecessary churn), while the few remaining *_MASK definitions apply exclusively to in-place values.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

As with registers, use GENMASK and the bitfield accessors consistently for table fields, to save some lines and ease maintenance a little. This also catches a subtle off-by-one wherein bit 5 of CD.T0SZ was missing.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

The FIELD_{GET,PREP} accessors provided by linux/bitfield.h allow us to define multi-bit register fields solely in terms of their bit positions via GENMASK(), without needing explicit *_SHIFT and *_MASK definitions. As well as the immediate reduction in lines of code, this avoids the awkwardness of values sometimes being pre-shifted and sometimes not, which means we can factor out some common values like memory attributes. Furthermore, it also makes it trivial to verify the definitions against the architecture spec, on which note let's also fix up a few field names to properly match the current release (IHI0070B).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
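To illustrate the pattern outside the driver, here is a standalone sketch using plain-C stand-ins for the linux/bitfield.h helpers (the register field is invented for the example): a field is defined once by its bit positions and then packed and unpacked without any manual shifting.

	#include <stdint.h>
	#include <stdio.h>

	/* Plain-C stand-ins for the kernel's GENMASK()/FIELD_PREP()/FIELD_GET(). */
	#define GENMASK(h, l)         ((~0u >> (31 - (h))) & (~0u << (l)))
	#define FIELD_PREP(mask, val) (((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
	#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctz(mask))

	/* A hypothetical 8-bit register field occupying bits 15:8. */
	#define REG_EXAMPLE_FIELD     GENMASK(15, 8)

	int main(void)
	{
		uint32_t reg = FIELD_PREP(REG_EXAMPLE_FIELD, 0xAB);

		printf("reg = 0x%08x, field = 0x%02x\n",
		       (unsigned)reg, (unsigned)FIELD_GET(REG_EXAMPLE_FIELD, reg));
		return 0;
	}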
-
Submitted by Robin Murphy

Before trying to add the SMMUv3.1 support for 52-bit addresses, make things bearable by cleaning up the various address mask definitions to use GENMASK_ULL() consistently. The fact that doing so reveals (and fixes) a latent off-by-one in Q_BASE_ADDR_MASK only goes to show what a jolly good idea it is...

Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Nate Watterson

Currently, the arm-smmu-v3 driver expects to allocate MSIs for all SMMUs with FEAT_MSI set. This results in unwarranted "failed to allocate MSIs" warnings being printed on systems where firmware was either deliberately configured to force the use of SMMU wired interrupts, or is altogether incapable of describing SMMU MSI topology (ACPI IORT prior to rev. C). Remedy this by checking msi_domain before attempting to allocate SMMU MSIs.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

It is annoyingly non-obvious when DMA transactions silently go missing due to undetected SMMU faults. Help skip the first few debugging steps in those situations by making it clear when we have neither wired IRQs nor MSIs with which to raise error conditions.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 17 January 2018, 1 commit
-
-
Submitted by Robin Murphy

Now that no more drivers rely on arbitrary early initialisation via an of_iommu_init_fn hook, let's clean up the redundant remnants. The IOMMU_OF_DECLARE() macro needs to remain for now, as the probe-deferral mechanism has no other nice way to detect built-in drivers before they have registered themselves, such that it can make the right decision.

Reviewed-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 03 January 2018, 2 commits
-
-
Submitted by Robin Murphy

For PCI devices behind an aliasing PCIe-to-PCI/X bridge, the bridge alias to DevFn 0.0 on the subordinate bus may match the original RID of the device, resulting in the same SID being present in the device's fwspec twice. This causes trouble later in arm_smmu_write_strtab_ent() when we wind up visiting the STE a second time and find it already live.

Avoid the issue by giving arm_smmu_install_ste_for_dev() the cleverness to skip over duplicates. It seems mildly counterintuitive compared to preventing the duplicates from existing in the first place, but since the DT and ACPI probe paths build their fwspecs differently, this is actually the cleanest and most self-contained way to deal with it.

Cc: <stable@vger.kernel.org>
Fixes: 8f785154 ("iommu/arm-smmu: Implement of_xlate() for SMMUv3")
Reported-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com>
Tested-by: Jayachandran C. <jnair@caviumnetworks.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
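In sketch form (helper names hypothetical), the dedupe is just a scan of the IDs already visited before each STE write:

	/* Sketch: write each STE at most once, skipping stream IDs that
	 * already appeared earlier in the fwspec. */
	unsigned int i, j;

	for (i = 0; i < fwspec->num_ids; ++i) {
		u32 sid = fwspec->ids[i];

		for (j = 0; j < i; ++j)
			if (fwspec->ids[j] == sid)
				break;
		if (j < i)  /* duplicate of an earlier SID */
			continue;

		my_write_strtab_ent(smmu, sid, my_strtab_step(smmu, sid));
	}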
-
Submitted by Jean-Philippe Brucker

KASAN reports a double free when finalise_stage_fn fails: the io_pgtable ops are freed by arm_smmu_domain_finalise and then again by arm_smmu_domain_free. Prevent this by leaving pgtbl_ops empty on failure.

Cc: <stable@vger.kernel.org>
Fixes: 48ec83bc ("iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
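The fix follows a general ownership pattern, sketched below with hypothetical my_ names (alloc_io_pgtable_ops()/free_io_pgtable_ops() are the real io-pgtable API): keep the allocated ops local and publish them into the domain only once every fallible step has succeeded, so each path frees them exactly once.

	static int my_domain_finalise(struct my_smmu_domain *smmu_domain,
				      struct io_pgtable_cfg *cfg)
	{
		struct io_pgtable_ops *pgtbl_ops;
		int ret;

		pgtbl_ops = alloc_io_pgtable_ops(MY_PGTBL_FMT, cfg, smmu_domain);
		if (!pgtbl_ops)
			return -ENOMEM;

		ret = my_finalise_stage(smmu_domain, cfg);
		if (ret < 0) {
			free_io_pgtable_ops(pgtbl_ops); /* we are the sole owner here */
			return ret;
		}

		smmu_domain->pgtbl_ops = pgtbl_ops; /* domain_free owns it from now on */
		return 0;
	}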
-
- 20 October 2017, 10 commits
-
-
Submitted by Robin Murphy

While CMD_SYNC is unlikely to complete immediately, such that we never go round the polling loop, with a lightly-loaded queue it may still do so long before the delay period is up. If we have no better completion notifier, use similar logic to SMMUv2 and spin a number of times before each backoff, so that we have more chance of catching syncs which complete relatively quickly and avoid delaying unnecessarily.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
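A sketch of the spin-then-backoff shape (names and constants hypothetical): burn a bounded number of cheap cpu_relax() iterations first, and only fall back to sleeping delays once the sync has proven to be slow.

	static int my_sync_poll(struct my_smmu_device *smmu, u32 sync_idx)
	{
		ktime_t timeout = ktime_add_us(ktime_get(), MY_SYNC_TIMEOUT_US);
		unsigned int spin_cnt, delay = 1;

		for (;;) {
			for (spin_cnt = MY_SYNC_SPIN_COUNT; spin_cnt; spin_cnt--) {
				if (my_sync_complete(smmu, sync_idx))
					return 0;
				cpu_relax();
			}
			if (ktime_compare(ktime_get(), timeout) > 0)
				return -ETIMEDOUT;
			udelay(delay);
			delay *= 2; /* exponential backoff between spin bursts */
		}
	}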
-
Submitted by Will Deacon

We have separate (identical) timeout values for polling for a queue to drain and waiting for an MSI to signal CMD_SYNC completion. In reality, we only wait for the command queue to drain if we're waiting on a sync, so just merge these two timeouts into a single constant.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

arm_smmu_cmdq_issue_sync is a little unwieldy now that it supports both MSI and event-based polling, so split it into two functions to make things easier to follow.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

As an IRQ, the CMD_SYNC interrupt is not particularly useful, not least because we often need to wait for sync completion within someone else's IRQ handler anyway. However, when the SMMU is both coherent and supports MSIs, we can have a lot more fun by not using it as an interrupt at all. Following the example suggested in the architecture and using a write targeting normal memory, we can let callers wait on a status variable outside the lock instead of having to stall the entire queue or even touch MMIO registers. Since multiple sync commands are guaranteed to complete in order, a simple incrementing sequence count is all we need to unambiguously support any realistic number of overlapping waiters.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
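The waiter side can then be as simple as the sketch below (names hypothetical; smp_cond_load_acquire() is the real kernel primitive). The signed comparison (int)(VAL - sync_idx) >= 0 is the standard wrap-safe test: it stays correct across 32-bit overflow of the sequence counter, provided no waiter ever lags more than 2^31 syncs behind.

	/* Sketch: the SMMU's MSI write updates smmu->sync_count in normal
	 * memory; waiters poll that word outside any lock. */
	static int my_sync_poll_msi(struct my_smmu_device *smmu, u32 sync_idx)
	{
		ktime_t timeout = ktime_add_us(ktime_get(), MY_SYNC_TIMEOUT_US);
		u32 val = smp_cond_load_acquire(&smmu->sync_count,
						(int)(VAL - sync_idx) >= 0 ||
						!ktime_before(ktime_get(), timeout));

		return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
	}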
-
Submitted by Robin Murphy

The cmdq-sync interrupt is never going to be particularly useful, since for stage 1 DMA at least we'll often need to wait for sync completion within someone else's IRQ handler, and thus have to implement polling anyway. Beyond that, the overhead of taking an interrupt, then still having to grovel around in the queue to figure out *which* sync command completed, doesn't seem much more attractive than simple polling either.

Furthermore, if an implementation both has wired interrupts and supports MSIs, then we don't want to be taking the IRQ unnecessarily if we're using the MSI write to update memory. Let's just make life simpler by not even bothering to claim it in the first place.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

CMD_SYNC already has a bit of special treatment here and there, but as we're about to extend it with more functionality for completing outside the CMDQ lock, things are going to get rather messy if we keep trying to cram everything into a single generic command interface. Instead, let's break out the issuing of CMD_SYNC into its own specific helper, where upcoming changes will have room to breathe.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

Slightly confusingly, when reporting a mismatch of the ID register value, we still refer to the IORT COHACC override flag as the "dma-coherent property" if we booted with ACPI. Update the message to be firmware-agnostic, in line with SMMUv2.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Yisheng Xie

According to the spec, it is ILLEGAL to set STE.S1STALLD if STALL_MODEL is not 0b00, which means we should not disable stall mode if stall or terminate mode is not configurable. Meanwhile, it is also ILLEGAL when STALL_MODEL==0b10 && CD.S==0, which means that if stall mode is forced we should always set CD.S.

As per Jean-Philippe's suggestion, this patch introduces a feature bit, ARM_SMMU_FEAT_STALL_FORCE, which means the SMMU only supports forced stalling. We can therefore avoid the ILLEGAL setting of STE.S1STALLD by checking ARM_SMMU_FEAT_STALL_FORCE.

This patch keeps ARM_SMMU_FEAT_STALLS meaning that stalls are supported (forced or configurable), to make future functionality easy to extend; i.e. we can use ARM_SMMU_FEAT_STALLS alone to check whether we should register a fault handler or enable a master's can_stall, etc., to support platform SVM.

After this patch, the feature bits and the STE.S1STALLD and CD.S settings will be:

	STALL_MODEL  FEATURE                                              S1STALLD  CD.S
	0b00         ARM_SMMU_FEAT_STALLS                                 0b1       0b0
	0b01         !ARM_SMMU_FEAT_STALLS && !ARM_SMMU_FEAT_STALL_FORCE  0b0       0b0
	0b10         ARM_SMMU_FEAT_STALLS && ARM_SMMU_FEAT_STALL_FORCE    0b0       0b1

Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

The SMMUv3 architecture permits caching of data structures deemed to be "reachable" by the SMMU, which includes STEs marked as invalid. When transitioning an STE to a bypass/fault configuration at init or detach time, we mistakenly elide the CMDQ_OP_CFGI_STE operation in some cases, therefore potentially leaving the old STE state cached in the SMMU.

This patch fixes the problem by ensuring that we perform the CMDQ_OP_CFGI_STE operation irrespective of the validity of the previous STE.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reported-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

Now that the kernel headers have synced with the relevant upstream ACPICA updates, it's time to clean up the temporary local definitions.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 October 2017, 1 commit
-
-
Submitted by Robin Murphy

Now that the core API issues its own post-unmap TLB sync call, push that operation out from the io-pgtable-arm internals into the users. For now, we leave the invalidation implicit in the unmap operation, since none of the current users would benefit much from any change to that.

CC: Magnus Damm <damm+renesas@opensource.se>
CC: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 17 August 2017, 1 commit
-
-
Submitted by Nate Watterson

The shutdown method disables the SMMU to avoid corrupting a new kernel started with kexec.

Signed-off-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 June 2017, 2 commits
-
-
Submitted by Geetha Sowjanya

The Cavium ThunderX2 SMMU doesn't support MSIs and also doesn't have unique irq lines for gerror, eventq and cmdq-sync. A new named irq, "combined", is set up as an errata workaround, which allows the irq line to be shared by registering a single irq handler for all the interrupts.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Geetha sowjanya <gakula@caviumnetworks.com>
[will: reworked irq equality checking and added SPI check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by shameer

HiSilicon SMMUv3 on Hip06/Hip07 platforms doesn't support the CMD_PREFETCH command. The DT-based support for this quirk is already present in the driver (hisilicon,broken-prefetch-cmd). This adds ACPI support for the quirk using the IORT SMMU model number.

Signed-off-by: shameer <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: hanjun <guohanjun@huawei.com>
[will: rewrote patch]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-