- 23 December 2019, 2 commits
-
-
By Qian Cai
When running heavy memory pressure workloads, this 5+ year-old system throws endless warnings (below) because disk I/O is too slow to recover from swapping. Since the volume from alloc_iova_fast() could be large, once it calls printk() it will trigger disk I/O (writing to the log files) and pending softirqs, which can cause an infinite loop and make no progress for days under the ongoing memory reclaim. This is the counterpart for Intel; the AMD part has already been merged, see commit 3d708895 ("iommu/amd: Silence warnings under memory pressure"). Since the allocation failure will be reported in intel_alloc_iova(), just call dev_err_once() there, because even "ratelimited" is too much, and silence the one in alloc_iova_mem() to avoid the expensive warn_alloc().

hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
slab_out_of_memory: 66 callbacks suppressed
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
node 0: slabs: 1822, objs: 16398, free: 0
node 1: slabs: 2051, objs: 18459, free: 31
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
node 0: slabs: 1822, objs: 16398, free: 0
node 1: slabs: 2051, objs: 18459, free: 31
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
node 0: slabs: 697, objs: 4182, free: 0
node 0: slabs: 697, objs: 4182, free: 0
node 0: slabs: 697, objs: 4182, free: 0
node 0: slabs: 697, objs: 4182, free: 0
node 1: slabs: 381, objs: 2286, free: 27
node 1: slabs: 381, objs: 2286, free: 27
node 1: slabs: 381, objs: 2286, free: 27
node 1: slabs: 381, objs: 2286, free: 27
node 0: slabs: 1822, objs: 16398, free: 0
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
node 1: slabs: 2051, objs: 18459, free: 31
node 0: slabs: 697, objs: 4182, free: 0
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
node 1: slabs: 381, objs: 2286, free: 27
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
node 0: slabs: 697, objs: 4182, free: 0
node 1: slabs: 381, objs: 2286, free: 27
hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
warn_alloc: 96 callbacks suppressed
kworker/11:1H: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0-1
CPU: 11 PID: 1642 Comm: kworker/11:1H Tainted: G B
Hardware name: HP ProLiant XL420 Gen9/ProLiant XL420 Gen9, BIOS U19 12/27/2015
Workqueue: kblockd blk_mq_run_work_fn
Call Trace:
 dump_stack+0xa0/0xea
 warn_alloc.cold.94+0x8a/0x12d
 __alloc_pages_slowpath+0x1750/0x1870
 __alloc_pages_nodemask+0x58a/0x710
 alloc_pages_current+0x9c/0x110
 alloc_slab_page+0xc9/0x760
 allocate_slab+0x48f/0x5d0
 new_slab+0x46/0x70
 ___slab_alloc+0x4ab/0x7b0
 __slab_alloc+0x43/0x70
 kmem_cache_alloc+0x2dd/0x450
SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
 alloc_iova+0x33/0x210
cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
node 0: slabs: 697, objs: 4182, free: 0
 alloc_iova_fast+0x62/0x3d1
node 1: slabs: 381, objs: 2286, free: 27
 intel_alloc_iova+0xce/0xe0
 intel_map_sg+0xed/0x410
 scsi_dma_map+0xd7/0x160
 scsi_queue_rq+0xbf7/0x1310
 blk_mq_dispatch_rq_list+0x4d9/0xbc0
 blk_mq_sched_dispatch_requests+0x24a/0x300
 __blk_mq_run_hw_queue+0x156/0x230
 blk_mq_run_work_fn+0x3b/0x40
 process_one_work+0x579/0xb90
 worker_thread+0x63/0x5b0
 kthread+0x1e6/0x210
 ret_from_fork+0x3a/0x50
Mem-Info:
active_anon:2422723 inactive_anon:361971 isolated_anon:34403
 active_file:2285 inactive_file:1838 isolated_file:0
 unevictable:0 dirty:1 writeback:5 unstable:0
 slab_reclaimable:13972 slab_unreclaimable:453879
 mapped:2380 shmem:154 pagetables:6948 bounce:0
 free:19133 free_pcp:7363 free_cma:0
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
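A minimal sketch of the two changes described above, using the function names visible in the backtrace (illustrative, not the verbatim patch):

    /* iova.c: let a failed GFP_ATOMIC slab allocation fail quietly
     * instead of invoking the expensive warn_alloc() path. */
    struct iova *alloc_iova_mem(void)
    {
            return kmem_cache_alloc(iova_cache, GFP_ATOMIC | __GFP_NOWARN);
    }

    /* intel-iommu.c, intel_alloc_iova(): report the failure once per
     * device; even a ratelimited message was considered too much here. */
            if (!iova_pfn) {
                    dev_err_once(dev, "Allocating %ld-page iova failed\n",
                                 nrpages);
                    return 0;
            }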
-
By Krzysztof Kozlowski
Adjust indentation from spaces to a tab (plus optional two spaces), as in the coding style, with a command like:

    $ sed -e 's/^        /\t/' -i */Kconfig

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 19 December 2019, 1 commit
-
-
By Robin Murphy
Since commit ece6e6f0 ("iommu/dma-iommu: Split iommu_dma_map_msi_msg() in two parts"), iommu_dma_prepare_msi() should no longer have to worry about preempting itself, nor being called in atomic context at all. Thus we can downgrade the IRQ-safe locking to a simple mutex to avoid angering the new might_sleep() check in iommu_map().
Reported-by: Qian Cai <cai@lca.pw>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
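A rough before/after sketch of the lock downgrade, assuming a lock guarding iommu_dma_get_msi_page() as in drivers/iommu/dma-iommu.c (names illustrative):

    /* Before: IRQ-safe spinlock, needed while this path could run in
     * atomic context. */
            spin_lock_irqsave(&cookie->msi_lock, flags);
            msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
            spin_unlock_irqrestore(&cookie->msi_lock, flags);

    /* After: a plain mutex suffices now that iommu_dma_prepare_msi()
     * only runs in sleepable context, so the might_sleep() check in
     * iommu_map() is no longer a problem. */
    static DEFINE_MUTEX(msi_prepare_lock);

            mutex_lock(&msi_prepare_lock);
            msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
            mutex_unlock(&msi_prepare_lock);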
-
- 18 December 2019, 1 commit
-
-
By Lu Baolu
The PSI (Page Selective Invalidation) bit in the capability register is only valid for second-level translation. Intel IOMMUs supporting scalable mode must support page/address selective IOTLB invalidation for first-level translation. Remove the PSI capability check in the SVA cache invalidation code.
Fixes: 8744daf4 ("iommu/vt-d: Remove global page flush support")
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 17 December 2019, 7 commits
-
-
By Jerry Snitselaar
Currently the reserved region for ISA is allocated with no permissions. If a DMA domain is being used, mapping this region will fail. Set the permissions to DMA_PTE_READ|DMA_PTE_WRITE.
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: stable@vger.kernel.org # v5.3+
Fixes: d850c2ee ("iommu/vt-d: Expose ISA direct mapping region via iommu_get_resv_regions")
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
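The change amounts to passing real permissions when the ISA region is allocated; a sketch against the iommu_alloc_resv_region() API (the exact call site in intel-iommu.c may differ):

    /* Reserve the ISA range (first 16MB) readable and writable so it
     * can actually be mapped into a DMA domain; prot used to be 0. */
            reg = iommu_alloc_resv_region(0, 1UL << 24,
                                          DMA_PTE_READ | DMA_PTE_WRITE,
                                          IOMMU_RESV_DIRECT);
            if (!reg)
                    return;
            list_add_tail(&reg->list, head);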
-
By Jerry Snitselaar
iommu_group_create_direct_mappings uses group->default_domain, but right after it is called, request_default_domain_for_dev calls iommu_domain_free for the default domain, and sets the group default domain to a different domain. Move the iommu_group_create_direct_mappings call to after the group default domain is set, so the direct mappings get associated with that domain.
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: stable@vger.kernel.org
Fixes: 7423e017 ("iommu: Add API to request DMA domain for device")
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
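A simplified sketch of the reordering in request_default_domain_for_dev() (details elided; treat as illustrative):

    /* Make the new domain the group default first... */
            if (group->default_domain)
                    iommu_domain_free(group->default_domain);
            group->default_domain = domain;

    /* ...and only then create the direct (e.g. RMRR) mappings, so
     * they are installed in the domain the group will actually use,
     * not in the one that was just freed. */
            iommu_group_create_direct_mappings(group, dev);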
-
By Lu Baolu
If the default DMA domain of a group doesn't fit a device, the device will still sit in the group but use a private identity domain. When map/unmap/iova_to_phys calls come through the IOMMU API, the driver should still serve them; otherwise, other devices in the same group will be impacted. Since the identity domain has been mapped with the whole available memory space and RMRRs, we don't need to worry about the impact on it.
Link: https://www.spinics.net/lists/iommu/msg40416.html
Cc: Jerry Snitselaar <jsnitsel@redhat.com>
Reported-by: Jerry Snitselaar <jsnitsel@redhat.com>
Fixes: 942067f1 ("iommu/vt-d: Identify default domains replaced with private")
Cc: stable@vger.kernel.org # v5.3+
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Alex Williamson
Commit d850c2ee ("iommu/vt-d: Expose ISA direct mapping region via iommu_get_resv_regions") created a direct-mapped reserved memory region in order to replace the static identity mapping of the ISA address space, where the latter was then removed in commit df4f3c60 ("iommu/vt-d: Remove static identity map code"). According to the history of this code and the Kconfig option surrounding it, this direct mapping exists for the benefit of legacy ISA drivers that are not compatible with the DMA API. In conjunction with commit 9b77e5c7 ("vfio/type1: check dma map request is within a valid iova range"), this change introduced a regression where the vfio IOMMU backend enforces reserved memory regions per IOMMU group, preventing userspace from creating IOMMU mappings conflicting with prescribed reserved regions. A necessary prerequisite for the vfio change was the introduction of "relaxable" direct mappings in commit adfd3738 ("iommu: Introduce IOMMU_RESV_DIRECT_RELAXABLE reserved memory regions"). These relaxable direct mappings provide the same identity mapping support in the default domain, but also indicate that the reservation is software-imposed and may be relaxed under some conditions, such as device assignment. Convert the ISA bridge direct-mapped reserved region to relaxable to reflect that the restriction is self-imposed and need not be enforced by drivers such as vfio.
Fixes: 1c5c59fb ("iommu/vt-d: Differentiate relaxable and non relaxable RMRRs")
Cc: stable@vger.kernel.org # v5.3+
Link: https://lore.kernel.org/linux-iommu/20191211082304.2d4fab45@x1.home
Reported-by: cprt <cprt@protonmail.com>
Tested-by: cprt <cprt@protonmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
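Building on the permission fix above, the region type flips from enforced to relaxable; a sketch (same hypothetical call site as before):

    /* Userspace (e.g. vfio) may now override this software-imposed
     * identity mapping when the device is assigned. */
            reg = iommu_alloc_resv_region(0, 1UL << 24,
                                          DMA_PTE_READ | DMA_PTE_WRITE,
                                          IOMMU_RESV_DIRECT_RELAXABLE);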
-
By Robin Murphy
Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus limit, it makes more sense to pass them around in their native u64 rather than converting to dma_addr_t early. Do that, and resolve the remaining type discrepancy against the domain geometry with a cheeky cast to keep things simple.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Nathan Chancellor <natechancellor@gmail.com> # build
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Xiaotao Yin
During an ethernet (Marvell octeontx2) set ring buffer test:

    ethtool -G eth1 rx <rx ring size> tx <tx ring size>

the following kmemleak will happen sometimes:

unreferenced object 0xffff000b85421340 (size 64):
  comm "ethtool", pid 867, jiffies 4295323539 (age 550.500s)
  hex dump (first 64 bytes):
    80 13 42 85 0b 00 ff ff ff ff ff ff ff ff ff ff  ..B.............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000001b204ddf>] kmem_cache_alloc+0x1b0/0x350
    [<00000000d9ef2e50>] alloc_iova+0x3c/0x168
    [<00000000ea30f99d>] alloc_iova_fast+0x7c/0x2d8
    [<00000000b8bb2f1f>] iommu_dma_alloc_iova.isra.0+0x12c/0x138
    [<000000002f1a43b5>] __iommu_dma_map+0x8c/0xf8
    [<00000000ecde7899>] iommu_dma_map_page+0x98/0xf8
    [<0000000082004e59>] otx2_alloc_rbuf+0xf4/0x158
    [<000000002b107f6b>] otx2_rq_aura_pool_init+0x110/0x270
    [<00000000c3d563c7>] otx2_open+0x15c/0x734
    [<00000000a2f5f3a8>] otx2_dev_open+0x3c/0x68
    [<00000000456a98b5>] otx2_set_ringparam+0x1ac/0x1d4
    [<00000000f2fbb819>] dev_ethtool+0xb84/0x2028
    [<0000000069b67c5a>] dev_ioctl+0x248/0x3a0
    [<00000000af38663a>] sock_ioctl+0x280/0x638
    [<000000002582384c>] do_vfs_ioctl+0x8b0/0xa80
    [<000000004e1a2c02>] ksys_ioctl+0x84/0xb8

The reason: since alloc_iova_mem() does not zero-initialize the allocation, pfn_lo will sometimes equal IOVA_ANCHOR by chance, so when __alloc_and_insert_iova_range() returns -ENOMEM (iova32_full), the new_iova will not be freed in free_iova_mem().
Fixes: bb68b2fb ("iommu/iova: Add rbtree anchor node")
Signed-off-by: Xiaotao Yin <xiaotao.yin@windriver.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
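The fix itself is tiny: zero-initialize the iova so a stale pfn_lo can never alias IOVA_ANCHOR by accident. A sketch of the allocator after the change (keeping the __GFP_NOWARN from the warning-silencing patch above; illustrative):

    static struct iova *alloc_iova_mem(void)
    {
            /* zeroed allocation: a garbage pfn_lo == IOVA_ANCHOR made
             * free_iova_mem() skip freeing on the iova32_full path */
            return kmem_cache_zalloc(iova_cache, GFP_ATOMIC | __GFP_NOWARN);
    }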
-
By Eric Auger
In case the new region gets merged into another one, the nr list node is freed. Checking its type while completing the merge algorithm leads to a use-after-free. Use new->type instead.
Fixes: 4dbd258f ("iommu: Revisit iommu_insert_resv_region() implementation")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Qian Cai <cai@lca.pw>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: stable@vger.kernel.org # v5.3+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
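The one-line nature of the fix is easiest to see as a diff; the surrounding merge loop in iommu_insert_resv_region() is paraphrased here, not quoted:

    	list_for_each_entry_safe(iter, tmp, regions, list) {
    		/* no merge needed on elements of a different type than
    		 * @new; @nr may already have been merged and freed here */
    -		if (iter->type != nr->type) {
    +		if (iter->type != new->type) {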
-
- 22 November 2019, 2 commits
-
-
By Boqun Feng
Currently hyperv-iommu is implemented in an x86-specific way; for example, the APIC is used. So make the HYPERV_IOMMU Kconfig option depend on X86, as a preparation for enabling Hyper-V on architectures other than x86.
Cc: Lan Tianyu <Tianyu.Lan@microsoft.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: linux-hyperv@vger.kernel.org
Signed-off-by: Boqun Feng (Microsoft) <boqun.feng@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
By Nicolas Saenz Julienne
Using a mask to represent bus DMA constraints has a set of limitations, the biggest one being that it can only hold a power of two (minus one). The DMA mapping code is already aware of this and treats dev->bus_dma_mask as a limit. This quirk is already used by some architectures, although it is still rare. With the introduction of the Raspberry Pi 4 we've found a new contender for the use of bus DMA limits, as its PCIe bus can only address the lower 3GB of memory (of a total of 4GB). This is impossible to represent with a mask. To make things worse, the device-tree code rounds non-power-of-two bus DMA limits up to the next power of two, which is unacceptable in this case. In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all over the tree and treat it as such. Note that dev->bus_dma_limit should contain the highest accessible DMA address.
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
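For illustration, using the Raspberry Pi 4 numbers from above: a limit expresses the 3GB boundary exactly, where a mask cannot, and the mapping code can then simply clamp against it (a sketch under that assumption, not a quote of the patch):

    /* highest DMA-capable bus address: 3GB - 1, which is not a power
     * of two minus one and therefore inexpressible as a mask */
            dev->bus_dma_limit = 0xbfffffff;

    /* the mapping path then takes it as an upper bound, e.g.: */
            dma_limit = min_not_zero(dma_limit, (u64)dev->bus_dma_limit);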
-
- 21 November 2019, 2 commits
-
-
By Krzysztof Wilczynski
Remove <linux/pci.h> and <linux/msi.h> from being included directly as part of include/linux/of_pci.h, and remove the superfluous declaration of struct of_phandle_args. Move users of <linux/of_pci.h> to include <linux/pci.h> and <linux/msi.h> directly rather than relying on both being included transitively through <linux/of_pci.h>.
Link: https://lore.kernel.org/r/20190903113059.2901-1-kw@linux.com
Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rob Herring <robh@kernel.org>
-
By Christoph Hellwig
These are pure cache maintenance routines, so drop the unused struct device argument.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 November 2019, 1 commit
-
-
By Robin Murphy
Although we don't generally expect IRQs to fire for a suspended IOMMU, there are certain situations (particularly with debug options) where we might legitimately end up with the pm_runtime_get_if_in_use() call from rk_iommu_irq() returning 0. Since this doesn't represent an actual error, follow the other parts of the driver and save the WARN_ON() condition for a genuine negative value. Even if we do have spurious IRQs due to a wedged VOP asserting the shared line, it's not this driver's job to try to second-guess the IRQ core to warn about that.
Reported-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
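The resulting check, roughly (0 just means "IOMMU not powered up, nothing to do"; only a genuine negative return warrants a warning) — a sketch of rk_iommu_irq(), not the full handler:

    static irqreturn_t rk_iommu_irq(int irq, void *dev_id)
    {
            struct rk_iommu *iommu = dev_id;
            irqreturn_t ret = IRQ_NONE;
            int err;

            err = pm_runtime_get_if_in_use(iommu->dev);
            if (!err || WARN_ON_ONCE(err < 0))
                    return ret;     /* suspended: benign, no WARN */

            /* ... read and clear the fault status, set ret ... */

            pm_runtime_put(iommu->dev);
            return ret;
    }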
-
- 11 November 2019, 17 commits
-
-
By Deepa Dinamani
The intel-iommu driver assumes that the IOMMU state is cleaned up at the start of the new kernel. But when we try to kexec-boot something other than the Linux kernel, that cleanup cannot be relied upon. Hence, clean up before we go down for reboot. The cleanup at initialization is kept as well, in case the BIOS leaves the IOMMU enabled. I considered turning off the IOMMU only during kexec reboot, but a clean shutdown always seems a good idea. If someone wants to make it conditional, such as for VMM live update, we can do that later; there doesn't seem to be such a condition at this time. Tested that before this change, the info message 'DMAR: Translation was enabled for <iommu> but we are not in kdump mode' would be reported for each IOMMU. The message does not appear when DMA remapping is not enabled on entry to the kernel.
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yian Chen
VT-d RMRR (Reserved Memory Region Reporting) regions are reserved for device use only and should not be part of the allocable memory pool of the OS. The BIOS e820_table reports the complete memory map to the OS, including OS-usable memory ranges and BIOS-reserved memory ranges. x86 BIOS may not be trusted to include RMRR regions as reserved-type memory in its e820 memory map, hence validate every RMRR entry against the e820 memory map to make sure the RMRR regions will not be used by the OS for any other purpose. ia64 EFI is working fine, so implement RMRR validation there as a dummy function.
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Signed-off-by: Yian Chen <yian.chen@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
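On the x86 side this boils down to a helper along these lines (a sketch of an arch_rmrr_sanity_check()-style hook using the e820 API; details may differ from the merged version):

    static inline int __init
    arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr)
    {
            u64 start = rmrr->base_address;
            u64 end = rmrr->end_address + 1;

            /* trust the RMRR only if e820 marks the whole range reserved */
            if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
                    return 0;

            pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",
                   start, end - 1);
            return -EINVAL;
    }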
-
By Jean-Philippe Brucker
Since commit 7723f4c5 ("driver core: platform: Add an error message to platform_get_irq*()"), platform_get_irq() displays an error when the IRQ isn't found. Remove the error print from the SMMU driver. Note the slight change of behaviour: no message is printed if platform_get_irq() returns -EPROBE_DEFER, which probably doesn't concern the SMMU.
Fixes: 7723f4c5 ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Jean-Philippe Brucker
Since commit 7723f4c5 ("driver core: platform: Add an error message to platform_get_irq*()"), platform_get_irq_byname() displays an error when the IRQ isn't found. Since the SMMUv3 driver uses that function to query which interrupt method is available, the message is now displayed during boot for any SMMUv3 that doesn't implement the combined interrupt, or that implements MSIs:

[ 20.700337] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ combined not found
[ 20.706508] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ eventq not found
[ 20.712503] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ priq not found
[ 20.718325] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ gerror not found

Use platform_get_irq_byname_optional() to avoid displaying a spurious error.
Fixes: 7723f4c5 ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
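The fix is a straight substitution per IRQ name; a sketch from the SMMUv3 probe path (field names illustrative):

    /* the _optional variant stays silent when the IRQ simply isn't wired */
            irq = platform_get_irq_byname_optional(pdev, "combined");
            if (irq > 0)
                    smmu->combined_irq = irq;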
-
By Yoshihiro Shimoda
Since the memory mapping of the IPMMU will change in the future, this patch adds a utlb_offset_base field to struct ipmmu_features for the IMUCTR and IMUASID registers. No behavior change.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yoshihiro Shimoda
Since the memory mapping of the IPMMU will change in the future, this patch adds helper functions ipmmu_utlb_reg() and ipmmu_imu{asid,ctr}_write() for the "uTLB" registers. No behavior change.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
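A sketch of the helpers, shown here already using the utlb_offset_base feature field from the companion patch listed above (illustrative, not verbatim):

    static u32 ipmmu_utlb_reg(struct ipmmu_vmsa_device *mmu, unsigned int reg)
    {
            return mmu->features->utlb_offset_base + reg;
    }

    static void ipmmu_imuasid_write(struct ipmmu_vmsa_device *mmu,
                                    unsigned int utlb, u32 data)
    {
            ipmmu_write(mmu, ipmmu_utlb_reg(mmu, IMUASID(utlb)), data);
    }

    static void ipmmu_imuctr_write(struct ipmmu_vmsa_device *mmu,
                                   unsigned int utlb, u32 data)
    {
            ipmmu_write(mmu, ipmmu_utlb_reg(mmu, IMUCTR(utlb)), data);
    }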
-
By Yoshihiro Shimoda
Since the memory mapping of the IPMMU will change in the future, this patch uses ipmmu_features values instead of a macro to calculate the context register offsets. No behavior change.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yoshihiro Shimoda
Since the memory mapping of the IPMMU will change in the future, this patch adds helper functions ipmmu_ctx_{reg,read,write}() for the MMU "context" registers. No behavior change.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yoshihiro Shimoda
To easily support hardware with different register memory maps in the future, this patch tidies up the register definitions as follows:
- Add comments stating which SoCs or SoC families they apply to
- Add categories for the MMU "context" and uTLB registers
No behavior change.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yoshihiro Shimoda
To easily support hardware with different register memory maps in the future, this patch removes all unused register definitions.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yong Wu
Reduce the TLB timeout value from 100000us to 1000us. The original value would leave the kernel stuck for 100ms with interrupts disabled, which could have other side effects. The flush is expected to always take much less than 1ms, so use that instead.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
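A sketch of the polling change in the MediaTek flush path (register and variable names as in mtk_iommu.c; illustrative):

    /* poll every 10us; give up after 1000us instead of 100000us */
            ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
                                            tmp, tmp != 0, 10, 1000);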
-
By Yong Wu
Now that we have tlb_lock for the HW TLB flush, the pgtable code hasn't needed the external "pgtlock" for a while. This patch removes the "pgtlock".
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yong Wu
Right now, tlb_add_flush_nosync and tlb_sync always appear together, so merge the two functions into one (and move the tlb_lock into the new function). No functional change.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
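Roughly, the merged helper ends up shaped like this (a sketch assuming the mtk_iommu.c register layout and the 1000us poll from the patch above; not verbatim):

    static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
                                               size_t granule, void *cookie)
    {
            struct mtk_iommu_data *data = cookie;
            unsigned long flags;
            int ret;
            u32 tmp;

            spin_lock_irqsave(&data->tlb_lock, flags);
            writel_relaxed(iova, data->base + REG_MMU_INVLD_START_A);
            writel_relaxed(iova + size - 1, data->base + REG_MMU_INVLD_END_A);
            writel_relaxed(F_MMU_INV_RANGE, data->base + REG_MMU_INVALIDATE);

            /* flush and sync now happen under one lock, strictly paired */
            ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
                                            tmp, tmp != 0, 10, 1000);
            if (ret)
                    mtk_iommu_tlb_flush_all(cookie); /* full-flush fallback */
            writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
            spin_unlock_irqrestore(&data->tlb_lock, flags);
    }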
-
By Yong Wu
In our TLB range flush, we don't care about the "leaf" argument, so remove it to simplify the code. No functional change. The "granule" is also unnecessary for us, but keep it to satisfy the signature of tlb_flush_walk.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yong Wu
Use the iommu_gather mechanism to achieve the TLB range flush: gather the IOVA range in tlb_add_page, then flush the merged IOVA range in iotlb_sync.
Suggested-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
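A sketch of the gather-driven sync, using the fields of struct iommu_iotlb_gather (simplified; reuses the merged flush helper sketched earlier):

    static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
                                     struct iommu_iotlb_gather *gather)
    {
            struct mtk_iommu_data *data = mtk_iommu_get_m4u_data();
            size_t length = gather->end - gather->start;

            if (gather->start == ULONG_MAX)
                    return;         /* nothing was gathered */

            /* one flush for the whole merged range instead of per page */
            mtk_iommu_tlb_flush_range_sync(gather->start, length,
                                           gather->pgsize, data);
    }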
-
By Yong Wu
Commit 4d689b61 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync") moved the tlb_sync of unmap out of v7s into the IOMMU framework, adding the new function mtk_iommu_iotlb_sync. But that function lacked locking, so the variable "tlb_flush_active" could change unexpectedly, and this warning log would appear at random:

mtk-iommu 10205000.iommu: Partial TLB flush timed out, falling back to full flush

The HW strictly requires tlb_flush/tlb_sync in pairs, so this patch adds a new tlb_lock for TLB operations to fix this issue.
Fixes: 4d689b61 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Yong Wu
Use the correct tlb_flush_all instead of the original one.
Fixes: 4d689b61 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 07 November 2019, 1 commit
-
-
By Will Deacon
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all users of the IOMMU API. Despite its name, the idea behind it isn't especially tied to Qualcomm implementations and could conceivably be used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to better describe the idea behind it.
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 05 November 2019, 6 commits
-
-
By Robin Murphy
Between VMSAv8-64 and the various 32-bit formats, there is either one 64-bit MAIR or a pair of 32-bit MAIR0/MAIR1 or NMRR/PMRR registers. As such, keeping two 64-bit values in io_pgtable_cfg has always been overkill.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Robin Murphy
The nature of the LPAE format means that data->pg_shift is always redundant with data->bits_per_level, since they represent the size of a page and the number of PTEs per page respectively, and the size of a PTE is constant. Thus it works out more efficient to only store the latter, and derive the former via a trivial addition where necessary.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: Reworked granule check in iopte_to_paddr()]
Signed-off-by: Will Deacon <will@kernel.org>
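The relationship is fixed because an LPAE PTE is always 8 bytes, so the granule can be derived instead of stored; in macro form (a sketch matching the description above):

    /* page size = PTEs per page * bytes per PTE
     *           = (1 << bits_per_level) * sizeof(arm_lpae_iopte) */
    #define ARM_LPAE_GRANULE(d) \
            (sizeof(arm_lpae_iopte) << (d)->bits_per_level)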
-
By Robin Murphy
We use data->pgd_size directly for the one-off allocation and freeing of the top-level table, but otherwise it serves for ARM_LPAE_PGD_IDX() to repeatedly re-calculate the effective number of top-level address bits it represents. Flip this around so we store the form we most commonly need, and derive the lesser-used one instead. This cuts a whole bunch of code out of the map/unmap/iova_to_phys fast paths.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Robin Murphy
Beyond a couple of allocation-time calculations, data->levels is only ever used to derive the start level. Storing the start level directly leads to a small reduction in object code, which should help eke out a little more efficiency, and slightly more readable source to boot.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Robin Murphy
We're merely checking that the relevant upper bits of each address are all zero, so there are cheaper ways to achieve that.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Robin Murphy
It makes little sense to only validate the requested size after we think we've found a matching block size - making the check up-front is simple, and far more logical than waiting to walk off the bottom of the table to infer that we must have been passed a bogus size to start with. We're missing an equivalent check on the unmap path, so add that as well for consistency.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
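The added guard, as a sketch (the same check appears in the map and unmap paths, differing only in the failure return value):

    /* in arm_lpae_map(): size must be exactly one supported page/block
     * size, i.e. a single bit that is set in cfg->pgsize_bitmap */
            if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
                    return -EINVAL; /* arm_lpae_unmap() returns 0 instead */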
-