- 30 July 2019, 5 commits
-
-
Submitted by Will Deacon

Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather pointer as an argument, and update the callers as appropriate.

Signed-off-by: Will Deacon <will@kernel.org>
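For context, a hedged sketch of the reshaped callback; only the relevant member is shown and the rest of the structure is elided, so treat the exact layout as illustrative:

    struct io_pgtable_ops {
            /* ... map, iova_to_phys, etc. ... */

            /*
             * ->unmap() now receives the caller's gather state, so the
             * invalidations it generates can be queued and flushed later
             * in one batch instead of being issued synchronously.
             */
            size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
                            size_t size, struct iommu_iotlb_gather *gather);
    };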
-
Submitted by Will Deacon

The ->tlb_sync() callback is no longer used, so it can be removed.

Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Will Deacon

The ->tlb_add_flush() callback in the io-pgtable API now looks a bit silly:

- It takes a size and a granule, which are always the same
- It takes a 'bool leaf', which is always true
- It only ever flushes a single page

With that in mind, replace it with an optional ->tlb_add_page() callback that drops the useless parameters.

Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Will Deacon

Hook up ->tlb_flush_walk() and ->tlb_flush_leaf() in drivers using the io-pgtable API so that we can start making use of them in the page-table code. For now, they can just wrap the implementations of ->tlb_add_flush() and ->tlb_sync(), pending future optimisation in each driver.

Signed-off-by: Will Deacon <will@kernel.org>
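For orientation, a hedged sketch of the callback structure these hooks live in; this mirrors the iommu_flush_ops of that era, but treat the exact member layout as illustrative:

    struct iommu_flush_ops {
            void (*tlb_flush_all)(void *cookie);
            /* invalidate a range that may include table (non-leaf) entries */
            void (*tlb_flush_walk)(unsigned long iova, size_t size,
                                   size_t granule, void *cookie);
            /* invalidate a range known to contain only leaf entries */
            void (*tlb_flush_leaf)(unsigned long iova, size_t size,
                                   size_t granule, void *cookie);
            /* optional: queue a single leaf invalidation for a later sync */
            void (*tlb_add_page)(struct iommu_iotlb_gather *gather,
                                 unsigned long iova, size_t granule,
                                 void *cookie);
    };

As the commit says, a driver can initially implement the two new flush hooks by calling its existing range-invalidate helper followed by its sync routine.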
-
Submitted by Will Deacon

To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now.

Signed-off-by: Will Deacon <will@kernel.org>
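For reference, a minimal sketch of the batched unmap flow this enables from a caller's point of view; the gather/sync helper names follow current mainline (older trees spell the sync wrapper iommu_tlb_sync()), so treat the exact spelling as an assumption:

    struct iommu_iotlb_gather gather;

    iommu_iotlb_gather_init(&gather);

    /* ->unmap() implementations record the ranges they invalidate here */
    iommu_unmap_fast(domain, iova, size, &gather);
    /* ... possibly more unmaps on the same domain ... */

    /* one ->iotlb_sync() call then flushes everything that was gathered */
    iommu_iotlb_sync(domain, &gather);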
-
- 24 July 2019, 1 commit
-
-
Submitted by Will Deacon

In preparation for TLB flush gathering in the IOMMU API, rename the iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which better describes its purpose and avoids the potential for confusion between different levels of the API.

    $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g'

Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 July 2019, 1 commit
-
-
Submitted by Jean-Philippe Brucker

We make the invalid assumption in arm_smmu_detach_dev() that the ATC is clear after calling pci_disable_ats(). For one thing, only enabling the PCIe ATS capability constitutes an implicit invalidation event, so the comment was wrong. More importantly, the ATS capability isn't necessarily disabled by pci_disable_ats() in a PF, if the associated VFs have ATS enabled. Explicitly invalidate all ATC entries in arm_smmu_detach_dev(). The endpoint cannot form new ATC entries because STE.EATS is clear.

Fixes: 9ce27afc ("iommu/arm-smmu-v3: Add support for PCI ATS")
Reported-by: Manoj Kumar <Manoj.Kumar3@arm.com>
Reported-by: Robin Murphy <Robin.Murphy@arm.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 02 July 2019, 1 commit
-
-
Submitted by Will Deacon

When compiling a kernel without support for CMA, CONFIG_CMA_ALIGNMENT is not defined, which results in the following build failure:

    In file included from ./include/linux/list.h:9:0,
                     from ./include/linux/kobject.h:19,
                     from ./include/linux/of.h:17,
                     from ./include/linux/irqdomain.h:35,
                     from ./include/linux/acpi.h:13,
                     from drivers/iommu/arm-smmu-v3.c:12:
    drivers/iommu/arm-smmu-v3.c: In function ‘arm_smmu_device_hw_probe’:
    drivers/iommu/arm-smmu-v3.c:194:40: error: ‘CONFIG_CMA_ALIGNMENT’ undeclared (first use in this function)
     #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)

Fix the breakage by capping the maximum queue size based on MAX_ORDER when CMA is not enabled.

Reported-by: Zhangshaokun <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
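The shape of the fix, as described; this mirrors the fallback the commit mentions, but treat it as a sketch rather than the exact in-tree macro:

    /* cap the queue size by CMA alignment when CMA is built in, otherwise
     * by the largest allocation the page allocator will hand us */
    #ifdef CONFIG_CMA_ALIGNMENT
    #define Q_MAX_SZ_SHIFT  (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
    #else
    #define Q_MAX_SZ_SHIFT  (PAGE_SHIFT + MAX_ORDER - 1)
    #endif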
-
- 25 June 2019, 1 commit
-
-
Submitted by Will Deacon

IO_PGTABLE_QUIRK_NO_DMA is a bit of a misnomer, since it's really just an indication of whether or not the page-table walker for the IOMMU is coherent with the CPU caches. Since cache coherency is more than just a quirk, replace the flag with its own field in the io_pgtable_cfg structure.

Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
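Sketch of the resulting configuration structure, with unrelated fields elided; treat the ordering and surrounding members as illustrative:

    struct io_pgtable_cfg {
            unsigned long   quirks;         /* IO_PGTABLE_QUIRK_NO_DMA is gone */
            unsigned long   pgsize_bitmap;
            unsigned int    ias;
            unsigned int    oas;
            bool            coherent_walk;  /* is the table walker cache-coherent? */
            /* ... TLB ops, device pointer, per-format configuration ... */
    };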
-
- 24 June 2019, 1 commit
-
-
Submitted by Suzuki K Poulose

driver_find_device() accepts a match function pointer to filter the devices for lookup, similar to bus/class_find_device(). However, there is a minor difference in the prototype of the match parameter: unlike the now-unified version accepted by {bus/class}_find_device(), it doesn't accept a "const" qualifier for the data argument. This prevents us from reusing the generic match functions for driver_find_device(). For this reason, change the prototype of driver_find_device() to bring the "match" parameter in line with {bus/class}_find_device() and adjust its callers to use the const qualifier. Also, we can now promote the "data" parameter to const as we pass it down as a const parameter to the match functions.

Cc: Corey Minyard <minyard@acm.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Oberparleiter <oberpar@linux.ibm.com>
Cc: Sebastian Ott <sebott@linux.ibm.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Nehal Shah <nehal-bakulchandra.shah@amd.com>
Cc: Shyam Sundar S K <shyam-sundar.s-k@amd.com>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
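The resulting prototype, for reference (a sketch of the signature as described):

    struct device *driver_find_device(struct device_driver *drv,
                                      struct device *start, const void *data,
                                      int (*match)(struct device *dev,
                                                   const void *data));

With the match-function type unified across bus_find_device(), class_find_device() and driver_find_device(), generic helpers such as device_match_name() can be shared by all three lookups.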
-
- 19 June 2019, 1 commit
-
-
Submitted by Will Deacon

We've been artificially limiting the size of our queues to 4k so that we don't end up allocating huge amounts of physically-contiguous memory at probe time. However, 4k is only enough for 256 commands in the command queue, so instead let's try to allocate the largest queue that the SMMU supports, retrying with a smaller size if the allocation fails. The caveat here is that we have to limit our upper bound based on CONFIG_CMA_ALIGNMENT to ensure that our queue allocations remain naturally aligned, which is required by the SMMU architecture.

Signed-off-by: Will Deacon <will.deacon@arm.com>
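A hedged sketch of the allocate-then-shrink loop the commit describes; variable and macro names here (hw_max_n_shift, Q_MAX_SZ_SHIFT, base_dma) are illustrative, not lifted verbatim from the driver:

    /* start from the hardware's maximum queue size, capped by the
     * CMA-derived limit, and shrink until the allocation succeeds */
    u32 max_n_shift = min_t(u32, hw_max_n_shift,
                            Q_MAX_SZ_SHIFT - ilog2(dwords << 3));
    size_t qsz;
    void *base;

    do {
            qsz = ((size_t)1 << max_n_shift) * dwords << 3;
            base = dmam_alloc_coherent(smmu->dev, qsz, &base_dma, GFP_KERNEL);
            if (base || qsz < PAGE_SIZE)
                    break;

            max_n_shift--;          /* halve the queue and try again */
    } while (1);

Because dmam_alloc_coherent() returns naturally aligned buffers only up to the CMA alignment, capping the upper bound keeps the queue base aligned to its size, as the architecture requires.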
-
- 23 April 2019, 7 commits
-
-
Submitted by Will Deacon

Disabling the SMMU when probing from within a kdump kernel so that all incoming transactions are terminated can prevent the core of the crashed kernel from being transferred off the machine if all I/O devices are behind the SMMU. Instead, continue to probe the SMMU after it is disabled so that we can reinitialise it entirely and re-attach the DMA masters as they are reset. Since the kdump kernel may not have drivers for all of the active DMA masters, we suppress fault reporting to avoid spamming the console and swamping the IRQ threads.

Reported-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker

The ARM architecture has a "Top Byte Ignore" (TBI) option that makes the MMU mask out bits [63:56] of an address, allowing a userspace application to store data in its pointers. This option is incompatible with PCI ATS. If TBI is enabled in the SMMU and userspace triggers DMA transactions on tagged pointers, the endpoint might create ATC entries for addresses that include a tag. Software would then have to send ATC invalidation packets for each of the 255 possible aliases of an address, or just wipe the whole address space. This is not a viable option, so disable TBI. The impact of this change is unclear, since there are very few users of tagged pointers, much less SVA. But the requirement introduced by this patch doesn't seem excessive: a userspace application using both tagged pointers and SVA should now sanitize addresses (clear the tag) before using them for device DMA.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker

PCIe devices can implement their own TLB, named Address Translation Cache (ATC). Enable Address Translation Service (ATS) for devices that support it and send them invalidation requests whenever we invalidate the IOTLBs. ATC invalidation is allowed to take up to 90 seconds, according to the PCIe spec, so it is possible to get an SMMU command queue timeout during normal operations. However, we expect implementations to complete invalidation in reasonable time.

We only enable ATS for "trusted" devices, and currently rely on the pci_dev->untrusted bit. For ATS we have to trust that:

(a) The device doesn't issue "translated" memory requests for addresses that weren't returned by the SMMU in a Translation Completion. In particular, if we give control of a device or device partition to a VM or userspace, software cannot program the device to access arbitrary "translated" addresses.

(b) The device follows permissions granted by the SMMU in a Translation Completion. If the device requested read+write permission and only got read, then it doesn't write.

(c) The device doesn't send Translated transactions for an address that was invalidated by an ATC invalidation.

Note that the PCIe specification explicitly requires all of these, so we can assume that implementations will cleanly shield ATCs from software. All ATS translated requests still go through the SMMU, to walk the stream table and check that the device is actually allowed to send translated requests.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker

When removing a mapping from a domain, we need to send an invalidation to all devices that might have stored it in their Address Translation Cache (ATC). In addition, when updating the context descriptor of a live domain, we'll need to send invalidations for all devices attached to it. Maintain a list of devices in each domain, protected by a spinlock. It is updated every time we attach or detach devices to and from domains. It needs to be a spinlock because we'll invalidate ATC entries from within hardirq-safe contexts, but it may be possible to relax the read side with RCU later.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
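A sketch of the bookkeeping this adds; the field names follow the commit's description, but consider them illustrative:

    struct arm_smmu_domain {
            /* ... */
            struct list_head        devices;        /* masters attached to this domain */
            spinlock_t              devices_lock;   /* taken from hardirq-safe contexts */
    };

    struct arm_smmu_master {
            /* ... */
            struct arm_smmu_domain  *domain;
            struct list_head        domain_head;    /* entry in domain->devices */
    };

Attach adds the master to domain->devices under devices_lock; detach removes it; the ATC invalidation path walks the list under the same lock.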
-
Submitted by Jean-Philippe Brucker

As we're going to track domain-master links more closely for ATS and CD invalidation, add a pointer to the attached domain in struct arm_smmu_master. As a result, arm_smmu_strtab_ent is redundant and can be removed.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker

Simplify the attach/detach code a bit by keeping a pointer to the stream IDs in the master structure. Although not completely obvious here, it does make the subsequent support for ATS, PRI and PASID a bit simpler.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker

The arm_smmu_master_data structure already represents more than just the firmware data associated with a master, and will be used extensively to represent a device's state when implementing more SMMU features. Rename the structure to arm_smmu_master.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 February 2019, 1 commit
-
-
Submitted by Rob Herring

Move io-pgtable.h to include/linux/ and export alloc_io_pgtable_ops and free_io_pgtable_ops. This enables drivers outside drivers/iommu/ to use the page table library. Specifically, some ARM Mali GPUs use the ARM page table formats.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-mediatek@lists.infradead.org
Cc: linux-arm-msm@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 17 December 2018, 1 commit
-
-
Submitted by Joerg Roedel

Use the new helpers dev_iommu_fwspec_get()/set() to access the dev->iommu_fwspec pointer. This makes it easier to move that pointer into another struct later.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 10 December 2018, 3 commits
-
-
Submitted by Will Deacon

After removing an entry from a queue (e.g. reading an event in arm_smmu_evtq_thread()), it is necessary to advance the MMIO consumer pointer to free the queue slot back to the SMMU. A memory barrier is required here so that all reads targeting the queue entry have completed before the consumer pointer is updated. The implementation of queue_inc_cons() relies on a writel() to complete the previous reads, but this is incorrect because writel() is only guaranteed to complete prior writes. This patch replaces the call to writel() with an mb(); writel_relaxed() sequence, which gives us the read->write ordering we require.

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
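The fixed helper, roughly as described; the Q_* index-manipulation macros are the driver's own and are shown here schematically:

    static void queue_inc_cons(struct arm_smmu_queue *q)
    {
            u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;

            q->cons = Q_OVF(q, q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);

            /*
             * Ensure that all CPU accesses (reads and writes) to the queue
             * entry are complete before we update the cons pointer; writel()
             * alone only orders prior writes, not prior reads.
             */
            mb();
            writel_relaxed(q->cons, q->cons_reg);
    }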
-
Submitted by Zhen Lei

The GITS_TRANSLATER MMIO doorbell register in the ITS hardware is architected to be 4 bytes in size, yet on hi1620 and earlier, HiSilicon have allocated the adjacent 4 bytes to carry some IMPDEF sideband information, which results in an 8-byte MSI payload being delivered when signalling an interrupt:

    MSIAddr:
             |----4bytes----|----4bytes----|
             |    MSIData   |    IMPDEF    |

This poses no problem for the ITS hardware because the adjacent 4 bytes are reserved in the memory map. However, when delivering MSIs to memory, as we do in the SMMUv3 driver for signalling the completion of a SYNC command, the extended payload will corrupt the 4 bytes adjacent to the "sync_count" member in struct arm_smmu_device. Fortunately, the current layout allocates these bytes to padding, but this is fragile and we should make this explicit.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[will: Rewrote commit message and comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
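One way to spell out that padding, per the approach described; an illustrative excerpt rather than the full structure:

    struct arm_smmu_device_excerpt {            /* illustrative */
            /* ... */
            union {
                    u32     sync_count;
                    u64     padding;    /* hi16xx MSIs write a full 8 bytes */
            };
            /* ... */
    };

The union keeps sync_count at the same offset while explicitly reserving the second dword that the oversized MSI write will touch, so nothing meaningful can ever be placed there.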
-
Submitted by Robin Murphy

When we insert the sync sequence number into the CMD_SYNC.MSIData field, we do so in CPU-native byte order, before writing out the whole command as explicitly little-endian dwords. Thus on big-endian systems, the SMMU will receive and write back a byteswapped version of sync_nr, which would be perfect if it were targeting a similarly little-endian ITS, but since it's actually writing back to memory being polled by the CPUs, they're going to end up seeing the wrong thing. Since the SMMU doesn't care what the MSIData actually contains, the minimal-overhead solution is to simply add an extra byteswap initially, such that it then writes back the big-endian format directly.

Cc: <stable@vger.kernel.org>
Fixes: 37de98f8 ("iommu/arm-smmu-v3: Use CMD_SYNC completion MSI")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
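Conceptually the fix is one byteswap at the point where the sequence number is packed into the CMD_SYNC command; a hedged sketch, with the field macro name recalled from that era of the driver and best treated as illustrative:

    /*
     * Commands are written out little-endian, but the SMMU writes MSIData
     * back to memory verbatim, so pre-swapping here makes the value that
     * lands in sync_count appear in CPU byte order on big-endian systems.
     */
    cmd[1] |= FIELD_PREP(CMDQ_SYNC_1_MSIDATA, cpu_to_le32(ent->sync.msidata));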
-
- 03 December 2018, 1 commit
-
-
Submitted by Paul Gortmaker

The Kconfig entry currently controlling compilation of this code is:

    drivers/iommu/Kconfig:config ARM_SMMU_V3
    drivers/iommu/Kconfig:  bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"

...meaning that it currently is not being built as a module by anyone. Let's remove the modular code that is essentially orphaned, so that when reading the driver there is no doubt it is builtin-only. Since module_platform_driver() uses the same init level priority as builtin_platform_driver(), the init ordering remains unchanged with this commit. We explicitly disallow a driver unbind, since that doesn't have a sensible use case anyway, but unlike most drivers, we can't delete the function tied to the ".remove" field. This is because, as of commit 7aa8619a ("iommu/arm-smmu-v3: Implement shutdown method"), the .remove function was given a one-line wrapper and re-used to provide a .shutdown service. So we delete the wrapper and rename the function from remove to shutdown. We add a moduleparam.h include since the file does actually declare some module parameters, and leaving them as such is the easiest way currently to remain backwards compatible with existing use cases. Also note that MODULE_DEVICE_TABLE is a no-op for non-modular code. We also delete the MODULE_LICENSE tag etc. since all that information is already contained at the top of the file in the comments.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Nate Watterson <nwatters@codeaurora.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 11 October 2018, 2 commits
-
-
Submitted by Andrew Murray

Simplify the code by removing an unnecessary wrapper function. This was left behind by commit 2f657add ("iommu/arm-smmu-v3: Specialise CMD_SYNC handling").

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Andrew Murray

Replace the license text with an SPDX header.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 01 October 2018, 5 commits
-
-
Submitted by Zhen Lei

Now that io-pgtable knows how to dodge strict TLB maintenance, all that's left to do is bridge the gap between the IOMMU core requesting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE for default domains, and showing the appropriate IO_PGTABLE_QUIRK_NON_STRICT flag to alloc_io_pgtable_ops().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[rm: convert to domain attribute, tweak commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
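The bridge itself is small; a hedged sketch of the plumbing in the domain-finalisation path, where the non_strict flag name is assumed from context:

    /* set via DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE before the domain is finalised */
    if (smmu_domain->non_strict)
            pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;

    pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);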
-
Submitted by Zhen Lei

.flush_iotlb_all is currently stubbed to arm_smmu_iotlb_sync(), since the only time it would ever need to actually do anything is for callers doing their own explicit batching, e.g.:

    iommu_unmap_fast(domain, ...);
    iommu_unmap_fast(domain, ...);
    iommu_iotlb_flush_all(domain, ...);

where, since io-pgtable still issues the TLBI commands implicitly in the unmap instead of implementing .iotlb_range_add, the "flush" only needs to ensure completion of those already-in-flight invalidations. However, we're about to start using it in anger with flush queues, so let's get a proper implementation wired up.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
[rm: document why it wasn't a bug]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Zhen Lei

Putting adjacent CMD_SYNCs into the command queue is nonsensical, but can happen when multiple CPUs are inserting commands. Rather than leave the poor old hardware to chew through these operations, we can instead drop the subsequent SYNCs and poll for completion of the first. This has been shown to improve IO performance under pressure, where the number of SYNC operations reduces by about a third:

    CMD_SYNCs reduced:  19542181
    CMD_SYNCs total:    58098548 (including reduced)
    CMDs total:        116197099 (TLBI:SYNC about 1:1)

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Zhen Lei

The break condition:

    (int)(VAL - sync_idx) >= 0

in the __arm_smmu_sync_poll_msi() polling loop requires that sync_idx be increased monotonically according to the sequence of the CMDs in the cmdq. However, since the msidata is populated using atomic_inc_return_relaxed() before taking the command-queue spinlock, the following scenario can occur:

    CPU0:  msidata=0
    CPU1:  msidata=1
    CPU1:  insert cmd1
    CPU0:  insert cmd0
    SMMU:  execute cmd1
    SMMU:  execute cmd0
    CPU1:  poll timeout, because msidata=1 is overridden by cmd0,
           which means VAL=0, sync_idx=1.

This is not a functional problem, since the caller will eventually either time out or exit due to another CMD_SYNC; however, it's clearly not what the code is supposed to be doing. Fix it by incrementing the sequence count with the command-queue lock held, allowing us to drop the atomic operations altogether.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[will: dropped the specialised cmd building routine for now]
Signed-off-by: Will Deacon <will.deacon@arm.com>
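A hedged sketch of the fixed issue path, with helper names shown for illustration only; the point is that the sequence number is assigned only while the command-queue lock is held, so it is monotonic in queue-insertion order:

    spin_lock_irqsave(&smmu->cmdq.lock, flags);

    ent.sync.msidata = ++smmu->sync_nr;         /* was atomic_inc_return_relaxed() */
    arm_smmu_cmdq_build_cmd(cmd, &ent);
    arm_smmu_cmdq_insert_cmd(smmu, cmd);

    spin_unlock_irqrestore(&smmu->cmdq.lock, flags);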
-
Submitted by John Garry

Fix some comment typos spotted.

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 08 August 2018, 1 commit
-
-
Submitted by Christoph Hellwig

All IOMMU drivers use the default_iommu_map_sg implementation, and there is no good reason to ever override it. Just expose it as iommu_map_sg directly and remove the indirection, especially in our post-Spectre world where indirect calls are horribly expensive.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 July 2018, 2 commits
-
-
Submitted by Will Deacon

If we find that the SMMU is enabled during probe, we reset it by re-initialising its registers and either enabling translation or placing it into bypass based on the disable_bypass command-line option. In the case of a kdump kernel, the SMMU won't have been shut down cleanly by the previous kernel and there may be concurrent DMA through the SMMU. Rather than reset the SMMU to bypass, which would likely lead to rampant data corruption, we can instead configure the SMMU to abort all incoming transactions when we find that it is enabled from within a kdump kernel.

Reported-by: Sameer Goel <sgoel@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Zhen Lei

Stream bypass is a potential security hole, since a malicious device can be hotplugged in without matching any drivers, yet be granted the ability to access all of physical memory. Now that we attach devices to domains by default, we can toggle the disable_bypass default to "on", preventing DMA from unknown devices.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
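The change itself amounts to flipping a module parameter default; a sketch, with the description string paraphrased rather than quoted from the driver:

    static bool disable_bypass = true;      /* default flipped from off to on */
    module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
    MODULE_PARM_DESC(disable_bypass,
            "Abort DMA from devices that are not attached to an iommu domain instead of letting it bypass the SMMU.");

Booting with arm-smmu-v3.disable_bypass=0 restores the old behaviour for systems that rely on unmatched streams passing through.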
-
- 26 July 2018, 1 commit
-
-
Submitted by Miao Zhong

When the PRI queue overflows, the driver should update OVACKFLG in the PRIQ consumer register, otherwise subsequent PRI requests will not be processed.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Miao Zhong <zhongmiao@hisilicon.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 July 2018, 1 commit
-
-
Submitted by Rob Herring

Now that we use the driver core to stop deferred probe for missing drivers, IOMMU_OF_DECLARE can be removed. This is slightly less optimal than having a list of built-in drivers, in that we'll now defer probe twice before giving up. This shouldn't have a significant impact on boot times, as past discussions about deferred probe have given no evidence of deferred probe having a substantial impact.

Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Kukjin Kim <kgene@kernel.org>
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Heiko Stuebner <heiko@sntech.de>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org
Cc: linux-rockchip@lists.infradead.org
Cc: devicetree@vger.kernel.org
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 March 2018, 4 commits
-
-
Submitted by Robin Murphy

Stage 1 input addresses are effectively 64-bit in SMMUv3 anyway, so really all that's involved is letting io-pgtable know the appropriate upper bound for T0SZ.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

Implement SMMUv3.1 support for 52-bit physical addresses. Since a 52-bit OAS implies 64KB translation granule support, permitting level 1 block entries there is simple, and the rest is just extending address fields.

Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

As with registers and tables, use GENMASK and the bitfield accessors consistently for queue fields, to save some lines and ease maintenance a little. This now leaves everything in a nice state where all named field definitions expect to be used with bitfield accessors (although since single-bit fields can still be used directly, we leave some of those uses as-is to avoid unnecessary churn), while the few remaining *_MASK definitions apply exclusively to in-place values.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

As with registers, use GENMASK and the bitfield accessors consistently for table fields, to save some lines and ease maintenance a little. This also catches a subtle off-by-one wherein bit 5 of CD.T0SZ was missing.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
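The idiom being standardised on, shown with a deliberately made-up field so as not to misquote the real STE/CD definitions:

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    /* hypothetical 16-bit field occupying bits [47:32] of a 64-bit descriptor word */
    #define EXAMPLE_DESC_FIELD      GENMASK_ULL(47, 32)

    static inline u64 example_pack(u64 word, u16 val)
    {
            word &= ~EXAMPLE_DESC_FIELD;                    /* clear the field */
            return word | FIELD_PREP(EXAMPLE_DESC_FIELD, val);
    }

    static inline u16 example_unpack(u64 word)
    {
            return FIELD_GET(EXAMPLE_DESC_FIELD, word);     /* shift and mask */
    }

Defining each field once with GENMASK and building or extracting values with FIELD_PREP()/FIELD_GET() removes the hand-rolled shift-and-mask pairs that made off-by-one mistakes like the CD.T0SZ one easy to write.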
-