- 27 May 2019, 7 commits
-
-
By Robin Murphy

Since we duplicate the find_vm_area() logic a few times in places where we only care about the pages, factor out a helper to abstract it.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: don't warn when not finding a region, as we'll rely on that later]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
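As a rough illustration of the kind of helper this change describes (a hedged sketch; the function name and exact shape are assumptions, not a quote of the patch), the repeated lookups collapse into one place that only returns the backing pages and leaves NULL handling to the caller:

```c
#include <linux/vmalloc.h>

/*
 * Hedged sketch: look up the vmalloc region backing a remapped DMA buffer
 * and return only its page array. Callers decide what to do on NULL.
 */
static struct page **iommu_dma_get_pages(void *cpu_addr)
{
	struct vm_struct *area = find_vm_area(cpu_addr);

	/* Not a remapped buffer, or no page array recorded for it. */
	if (!area || !area->pages)
		return NULL;
	return area->pages;
}
```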
-
By Robin Murphy

The remaining internal callsites don't care about having prototypes compatible with the relevant dma_map_ops callbacks, so the extra level of indirection just wastes space and complicates things.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Robin Murphy

Most of the callers don't care, and the couple that do already have the domain to hand for other reasons are in slow paths where the (trivial) overhead of a repeated lookup will be utterly immaterial.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: dropped the hunk touching iommu_dma_get_msi_page to avoid a conflict with another series]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

Moving this function up to its unmap counterpart helps to keep related code together for the following changes.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

There is nothing really arm64-specific in the iommu_dma_ops implementation, so move it to dma-iommu.c and keep a lot of symbols self-contained. Note that the implementation does depend on the DMA_DIRECT_REMAP infrastructure for now, so we'll have to make DMA_IOMMU support depend on it, but this will be relaxed soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

arch_dma_prep_coherent can handle physically contiguous ranges larger than PAGE_SIZE just fine, which means we don't need a page-based iterator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

We now have an arch_dma_prep_coherent architecture hook that is used for the generic DMA remap allocator, and we should use the same interface for the dma-iommu code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 21 May 2019, 2 commits
-
-
By Thomas Gleixner

Add SPDX license identifiers to all Make/Kconfig files which:

- Have no license information of any form

These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Thomas Gleixner

Add SPDX license identifiers to all files which:

- Have no license information of any form
- Have EXPORT_.*_SYMBOL_GPL inside, which was used in the initial scan/conversion to ignore the file

These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 15 May 2019, 2 commits
-
-
By Masahiro Yamada

Since commit dccd2304 ("ARM: 7430/1: sizes.h: move from asm-generic to <linux/sizes.h>"), <asm/sizes.h> and <asm-generic/sizes.h> are just wrappers of <linux/sizes.h>. This commit replaces all uses of <asm/sizes.h> and <asm-generic/sizes.h> with <linux/sizes.h> to prepare for their removal.

Link: http://lkml.kernel.org/r/1553267665-27228-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Souptick Joarder

Convert to use vm_map_pages() to map a range of kernel memory to a user vma.

Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Heiko Stuebner <heiko@sntech.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Pawel Osciak <pawel@osciak.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Sandy Huang <hjc@rock-chips.com>
Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Thierry Reding <treding@nvidia.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
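A hedged sketch of the conversion pattern (the buffer type and function names below are invented for illustration, not taken from the driver): instead of inserting pages one at a time in an mmap handler, the whole kernel page array is handed to vm_map_pages(), which also validates vm_pgoff and the VMA size.

```c
#include <linux/mm.h>

/* Illustrative buffer type; real drivers have their own. */
struct demo_buffer {
	struct page **pages;
	unsigned long page_count;
};

static int demo_mmap(struct demo_buffer *buf, struct vm_area_struct *vma)
{
	/*
	 * Replaces an open-coded loop of vm_insert_page() calls; the helper
	 * checks that the requested range fits the buffer.
	 */
	return vm_map_pages(vma, buf->pages, buf->page_count);
}
```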
-
- 07 May 2019, 2 commits
-
-
By Joerg Roedel

This reverts commit 1a107901. That commit caused a NULL-pointer dereference bug and must be reverted for now.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Srinath Mannam

The dma_ranges list field of the PCI host bridge structure has resource entries, in sorted order, representing the address ranges allowed for DMA transfers. Process the list and reserve the IOVA addresses that are not present in its resource entries (i.e. DMA memory holes) to prevent allocating IOVA addresses that cannot be accessed by PCI devices.

Based-on-a-patch-by: Oza Pawandeep <oza.oza@broadcom.com>
Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Oza Pawandeep <poza@codeaurora.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
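A hedged sketch of the idea, heavily simplified from the real code (offset handling, sorting checks and the trailing window are omitted; the function name is invented): walk the bridge's dma_ranges and reserve every gap between consecutive windows so the IOVA allocator can never hand them out.

```c
#include <linux/pci.h>
#include <linux/iova.h>
#include <linux/resource_ext.h>

/* Hedged sketch: reserve the holes between a host bridge's DMA windows. */
static void demo_reserve_pci_dma_holes(struct pci_host_bridge *bridge,
				       struct iova_domain *iovad)
{
	struct resource_entry *window;
	dma_addr_t start = 0;

	resource_list_for_each_entry(window, &bridge->dma_ranges) {
		/* Hole from the end of the previous window to this one. */
		if (window->res->start > start)
			reserve_iova(iovad, iova_pfn(iovad, start),
				     iova_pfn(iovad, window->res->start - 1));
		start = window->res->end + 1;
	}
}
```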
-
- 06 May 2019, 1 commit
-
-
By Joerg Roedel

This reverts commit 7a5dbf3a. The reverted commit not only removes the leftovers of bypass support, it also mostly removes the checking of the return value of the get_domain() function. This can lead to silent data-corruption bugs when a device is not attached to its dma_ops domain and a DMA-API function is called for that device.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 03 May 2019, 7 commits
-
-
By Eric Auger

If alloc_pages_node() fails, pasid_table is leaked. Free it.

Fixes: cc580e41 ("iommu/vt-d: Per PCI device pasid table interfaces")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
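The shape of such a fix, as a hedged sketch (structure and names below are simplified stand-ins, not the driver's real ones): when the second allocation fails, the first one must be freed before returning.

```c
#include <linux/gfp.h>
#include <linux/slab.h>

struct demo_pasid_table {
	struct page *pages;
};

static struct demo_pasid_table *demo_alloc_pasid_table(int node, int order)
{
	struct demo_pasid_table *pasid_table;
	struct page *pages;

	pasid_table = kzalloc(sizeof(*pasid_table), GFP_KERNEL);
	if (!pasid_table)
		return NULL;

	pages = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, order);
	if (!pages) {
		kfree(pasid_table);	/* the fix: don't leak pasid_table */
		return NULL;
	}

	pasid_table->pages = pages;
	return pasid_table;
}
```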
-
By Lu Baolu

The kernel parameter igfx_off is used to disable DMA remapping for the Intel integrated graphics device. It was designed for bare-metal cases where a dedicated IOMMU is used for graphics. This doesn't apply to the virtual IOMMU case, where an include-all IOMMU is used. Make the kernel parameter work with a virtual IOMMU as well.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Fixes: c0771df8 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Lu Baolu

The intel_iommu_gfx_mapped flag is exported by the Intel IOMMU driver to indicate whether an IOMMU is used for the graphics device. In a virtualized IOMMU environment (e.g. QEMU), an include-all IOMMU is used for the graphics device, yet this flag is found to be clear even when that IOMMU is in use.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Reported-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fixes: c0771df8 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Tom Murphy

Check whether there is a not-present cache and flush it if there is.

Signed-off-by: Tom Murphy <tmurphy@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Lu Baolu

Replace the whitespace at the start of a line with tabs. No functional changes.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Julien Grall

A recent change split iommu_dma_map_msi_msg() into two new functions. The old function was kept around to avoid modifying all the callers at once. Now that all the callers have been reworked, iommu_dma_map_msi_msg() can be removed.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Julien Grall

On RT, iommu_dma_map_msi_msg() may be called from non-preemptible context. This leads to a splat with CONFIG_DEBUG_ATOMIC_SLEEP, as the function uses spinlocks (which can sleep on RT).

iommu_dma_map_msi_msg() is used to map the MSI page in the IOMMU page tables and update the MSI message with the IOVA. Only the lookup of the MSI page needs to be done from preemptible context. As the MSI page cannot change over the lifecycle of the MSI interrupt, the lookup can be cached and re-used later on.

iommu_dma_map_msi_msg() is now split into two functions:

- iommu_dma_prepare_msi(): prepares the mapping in the IOMMU and stores the cookie in struct msi_desc. This function should be called from preemptible context.
- iommu_dma_compose_msi_msg(): updates the MSI message with the IOVA when the device is behind an IOMMU.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
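A hedged sketch of how an MSI controller driver might use the new pair (the surrounding wrapper functions are invented; only the two helper names and their split come from the description above): the potentially sleeping preparation happens once when the interrupt is set up, and the message is composed later, possibly from atomic context.

```c
#include <linux/dma-iommu.h>
#include <linux/msi.h>

/* Hedged sketch: preemptible path, run while the MSI is being allocated. */
static int demo_msi_prepare(struct msi_desc *desc, phys_addr_t doorbell)
{
	/* May map the MSI doorbell page in the IOMMU, so it can sleep. */
	return iommu_dma_prepare_msi(desc, doorbell);
}

/* Hedged sketch: atomic path, run when the message is actually written. */
static void demo_msi_compose(struct msi_desc *desc, struct msi_msg *msg)
{
	/* Only patches the address with the cached IOVA; never sleeps. */
	iommu_dma_compose_msi_msg(desc, msg);
}
```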
-
- 30 April 2019, 2 commits
-
-
By Heiner Kallweit

Use the new helper pci_dev_id() to simplify the code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
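The helper simply packs bus number and devfn into the 16-bit requester ID. A hedged sketch of the simplification it enables in this and the following commit (the wrapper functions are illustrative):

```c
#include <linux/pci.h>

/* Before: open-coded in several IOMMU drivers. */
static u16 demo_devid_old(struct pci_dev *pdev)
{
	return PCI_DEVID(pdev->bus->number, pdev->devfn);
}

/* After: use the new helper. */
static u16 demo_devid_new(struct pci_dev *pdev)
{
	return pci_dev_id(pdev);
}
```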
-
By Heiner Kallweit

Use the new helper pci_dev_id() to simplify the code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
-
- 26 April 2019, 4 commits
-
-
By Lu Baolu

Requesting the page request irq under dmar_global_lock could cause a potential lock race condition (caught by lockdep).

[ 4.100055] ======================================================
[ 4.100063] WARNING: possible circular locking dependency detected
[ 4.100072] 5.1.0-rc4+ #2169 Not tainted
[ 4.100078] ------------------------------------------------------
[ 4.100086] swapper/0/1 is trying to acquire lock:
[ 4.100094] 000000007dcbe3c3 (dmar_lock){+.+.}, at: dmar_alloc_hwirq+0x35/0x140
[ 4.100112] but task is already holding lock:
[ 4.100120] 0000000060bbe946 (dmar_global_lock){++++}, at: intel_iommu_init+0x191/0x1438
[ 4.100136] which lock already depends on the new lock.
[ 4.100146] the existing dependency chain (in reverse order) is:
[ 4.100155] -> #2 (dmar_global_lock){++++}:
[ 4.100169]        down_read+0x44/0xa0
[ 4.100178]        intel_irq_remapping_alloc+0xb2/0x7b0
[ 4.100186]        mp_irqdomain_alloc+0x9e/0x2e0
[ 4.100195]        __irq_domain_alloc_irqs+0x131/0x330
[ 4.100203]        alloc_isa_irq_from_domain.isra.4+0x9a/0xd0
[ 4.100212]        mp_map_pin_to_irq+0x244/0x310
[ 4.100221]        setup_IO_APIC+0x757/0x7ed
[ 4.100229]        x86_late_time_init+0x17/0x1c
[ 4.100238]        start_kernel+0x425/0x4e3
[ 4.100247]        secondary_startup_64+0xa4/0xb0
[ 4.100254] -> #1 (irq_domain_mutex){+.+.}:
[ 4.100265]        __mutex_lock+0x7f/0x9d0
[ 4.100273]        __irq_domain_add+0x195/0x2b0
[ 4.100280]        irq_domain_create_hierarchy+0x3d/0x40
[ 4.100289]        msi_create_irq_domain+0x32/0x110
[ 4.100297]        dmar_alloc_hwirq+0x111/0x140
[ 4.100305]        dmar_set_interrupt.part.14+0x1a/0x70
[ 4.100314]        enable_drhd_fault_handling+0x2c/0x6c
[ 4.100323]        apic_bsp_setup+0x75/0x7a
[ 4.100330]        x86_late_time_init+0x17/0x1c
[ 4.100338]        start_kernel+0x425/0x4e3
[ 4.100346]        secondary_startup_64+0xa4/0xb0
[ 4.100352] -> #0 (dmar_lock){+.+.}:
[ 4.100364]        lock_acquire+0xb4/0x1c0
[ 4.100372]        __mutex_lock+0x7f/0x9d0
[ 4.100379]        dmar_alloc_hwirq+0x35/0x140
[ 4.100389]        intel_svm_enable_prq+0x61/0x180
[ 4.100397]        intel_iommu_init+0x1128/0x1438
[ 4.100406]        pci_iommu_init+0x16/0x3f
[ 4.100414]        do_one_initcall+0x5d/0x2be
[ 4.100422]        kernel_init_freeable+0x1f0/0x27c
[ 4.100431]        kernel_init+0xa/0x110
[ 4.100438]        ret_from_fork+0x3a/0x50
[ 4.100444] other info that might help us debug this:
[ 4.100454] Chain exists of: dmar_lock --> irq_domain_mutex --> dmar_global_lock
[ 4.100469] Possible unsafe locking scenario:
[ 4.100476]        CPU0                    CPU1
[ 4.100483]        ----                    ----
[ 4.100488]   lock(dmar_global_lock);
[ 4.100495]                                lock(irq_domain_mutex);
[ 4.100503]                                lock(dmar_global_lock);
[ 4.100512]   lock(dmar_lock);
[ 4.100518]  *** DEADLOCK ***

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Reported-by: Dave Jiang <dave.jiang@intel.com>
Fixes: a222a7f0 ("iommu/vt-d: Implement page request handling")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Gustavo A. R. Silva

Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes, in particular in the context in which this code is being used.

So, replace code of the following form:

    size = sizeof(*info) + level * sizeof(info->path[0]);

with:

    size = struct_size(info, path, level);

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
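For reference, a hedged, self-contained illustration of what struct_size() computes for a struct with a flexible array member like the one touched here (the struct below is a stand-in, not the driver's real definition):

```c
#include <linux/overflow.h>
#include <linux/slab.h>

struct demo_info {
	int level;
	/* flexible array member, analogous to info->path[] in the driver */
	struct { u8 device; u8 function; } path[];
};

static struct demo_info *demo_alloc(int level)
{
	/*
	 * Equivalent to sizeof(*info) + level * sizeof(info->path[0]),
	 * but with overflow checking on the multiply and add.
	 */
	struct demo_info *info = kzalloc(struct_size(info, path, level),
					 GFP_KERNEL);

	if (info)
		info->level = level;
	return info;
}
```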
-
By Wen Yang

The call to of_parse_phandle() returns a node pointer with its refcount incremented, so it must be explicitly decremented after the last usage.

    581 static int mtk_iommu_probe(struct platform_device *pdev)
    582 {
    ...
    626     for (i = 0; i < larb_nr; i++) {
    627         struct device_node *larbnode;
    ...
    631         larbnode = of_parse_phandle(...);
    632         if (!larbnode)
    633             return -EINVAL;
    634
    635         if (!of_device_is_available(larbnode))
    636             continue;                   ---> leaked here
    637
    ...
    643         if (!plarbdev)
    644             return -EPROBE_DEFER;       ---> leaked here
    ...
    647         component_match_add_release(dev, &match, release_of,
    648                                     compare_of, larbnode);
                                            ---> release_of will call of_node_put
    649     }
    ...
    650

Detected by coccinelle with the following warning:

    ./drivers/iommu/mtk_iommu.c:644:3-9: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 631, but without a corresponding object release within this function.

Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mediatek@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
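A hedged sketch of the fixed loop shape, condensed from the excerpt above (not the literal hunk; error handling and the phandle arguments are simplified): the reference taken by of_parse_phandle() is dropped explicitly on the continue and error paths, while the success path still hands it to component_match_add_release(), whose release_of() callback calls of_node_put() later.

```c
	for (i = 0; i < larb_nr; i++) {
		struct device_node *larbnode;
		struct platform_device *plarbdev;

		larbnode = of_parse_phandle(dev->of_node, "mediatek,larbs", i);
		if (!larbnode)
			return -EINVAL;

		if (!of_device_is_available(larbnode)) {
			of_node_put(larbnode);		/* fix: drop the ref */
			continue;
		}

		plarbdev = of_find_device_by_node(larbnode);
		if (!plarbdev) {
			of_node_put(larbnode);		/* fix: drop the ref */
			return -EPROBE_DEFER;
		}

		/* On success, release_of() drops the reference later. */
		component_match_add_release(dev, &match, release_of,
					    compare_of, larbnode);
	}
```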
-
By Joerg Roedel

This variable holds a global list of allocated protection domains in the AMD IOMMU driver. By now this list is never traversed anymore, so the list and the lock protecting it can be removed.

Cc: Tom Murphy <tmurphy@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 23 April 2019, 8 commits
-
-
By Vivek Gautam

Bits [15:0] of the CBFRSYNRA register contain information about the StreamID of the incoming transaction that generated the fault. Dump the CBFRSYNRA register to get this information. This is especially useful in a distributed SMMU architecture where multiple masters are connected to the SMMU: the SID information helps to quickly identify the faulting master device.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

Disabling the SMMU when probing from within a kdump kernel, so that all incoming transactions are terminated, can prevent the core of the crashed kernel from being transferred off the machine if all I/O devices are behind the SMMU. Instead, continue to probe the SMMU after it is disabled so that we can reinitialise it entirely and re-attach the DMA masters as they are reset. Since the kdump kernel may not have drivers for all of the active DMA masters, we suppress fault reporting to avoid spamming the console and swamping the IRQ threads.

Reported-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jean-Philippe Brucker

The ARM architecture has a "Top Byte Ignore" (TBI) option that makes the MMU mask out bits [63:56] of an address, allowing a userspace application to store data in its pointers. This option is incompatible with PCI ATS.

If TBI is enabled in the SMMU and userspace triggers DMA transactions on tagged pointers, the endpoint might create ATC entries for addresses that include a tag. Software would then have to send ATC invalidation packets for each of the 255 possible aliases of an address, or just wipe the whole address space. This is not a viable option, so disable TBI.

The impact of this change is unclear, since there are very few users of tagged pointers, much less SVA. But the requirement introduced by this patch doesn't seem excessive: a userspace application using both tagged pointers and SVA should now sanitize addresses (clear the tag) before using them for device DMA.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jean-Philippe Brucker

PCIe devices can implement their own TLB, named Address Translation Cache (ATC). Enable Address Translation Service (ATS) for devices that support it and send them invalidation requests whenever we invalidate the IOTLBs. ATC invalidation is allowed to take up to 90 seconds, according to the PCIe spec, so it is possible to get an SMMU command queue timeout during normal operation. However, we expect implementations to complete invalidation in reasonable time.

We only enable ATS for "trusted" devices, and currently rely on the pci_dev->untrusted bit. For ATS we have to trust that:

(a) The device doesn't issue "translated" memory requests for addresses that weren't returned by the SMMU in a Translation Completion. In particular, if we give control of a device or device partition to a VM or userspace, software cannot program the device to access arbitrary "translated" addresses.

(b) The device follows permissions granted by the SMMU in a Translation Completion. If the device requested read+write permission and only got read, then it doesn't write.

(c) The device doesn't send Translated transactions for an address that was invalidated by an ATC invalidation.

Note that the PCIe specification explicitly requires all of these, so we can assume that implementations will cleanly shield ATCs from software. All ATS translated requests still go through the SMMU, to walk the stream table and check that the device is actually allowed to send translated requests.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jean-Philippe Brucker

When removing a mapping from a domain, we need to send an invalidation to all devices that might have stored it in their Address Translation Cache (ATC). In addition, when updating the context descriptor of a live domain, we'll need to send invalidations for all devices attached to it.

Maintain a list of devices in each domain, protected by a spinlock. It is updated every time we attach or detach devices to and from domains. It needs to be a spinlock because we'll invalidate ATC entries from within hardirq-safe contexts, but it may be possible to relax the read side with RCU later.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
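A hedged structural sketch of what "a list of devices in each domain, protected by a spinlock" looks like (field, struct and function names are illustrative, not the driver's):

```c
#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_domain {
	spinlock_t devices_lock;	/* taken from hardirq-safe contexts too */
	struct list_head devices;	/* of demo_master::domain_head */
};

struct demo_master {
	struct demo_domain *domain;
	struct list_head domain_head;
};

/* On attach: add the master to the domain's list under the lock. */
static void demo_attach(struct demo_domain *dom, struct demo_master *master)
{
	unsigned long flags;

	master->domain = dom;
	spin_lock_irqsave(&dom->devices_lock, flags);
	list_add(&master->domain_head, &dom->devices);
	spin_unlock_irqrestore(&dom->devices_lock, flags);
}

/* On detach: remove it again, also under the lock. */
static void demo_detach(struct demo_master *master)
{
	struct demo_domain *dom = master->domain;
	unsigned long flags;

	spin_lock_irqsave(&dom->devices_lock, flags);
	list_del(&master->domain_head);
	spin_unlock_irqrestore(&dom->devices_lock, flags);
	master->domain = NULL;
}
```

ATC invalidation then simply walks dom->devices under the same lock, which is why a sleeping lock would not do here.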
-
By Jean-Philippe Brucker

As we're going to track domain-master links more closely for ATS and CD invalidation, add a pointer to the attached domain in struct arm_smmu_master. As a result, arm_smmu_strtab_ent is redundant and can be removed.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jean-Philippe Brucker

Simplify the attach/detach code a bit by keeping a pointer to the stream IDs in the master structure. Although not completely obvious here, it does make the subsequent support for ATS, PRI and PASID a bit simpler.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jean-Philippe Brucker

The arm_smmu_master_data structure already represents more than just the firmware data associated with a master, and will be used extensively to represent a device's state when implementing more SMMU features. Rename the structure to arm_smmu_master.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 13 April 2019, 1 commit
-
-
By Rob Herring

The ARM Mali Midgard GPU uses page tables similar to the standard 64-bit stage 1 format, but with a few differences. Add a new format type to represent it. The input address size is 48 bits and the output address size is 40 bits (and possibly less?). Note that the later Bifrost GPUs follow the standard 64-bit stage 1 format.

The differences compared to the 64-bit stage 1 format are:

- The 3rd-level page entry bits are 0x1 instead of 0x3 for page entries.
- The access flags are not read-only and unprivileged, but read and write. This is similar to stage 2 entries, but the memory attributes field matches stage 1 in being an index.
- The nG bit is not set by the vendor driver. This one didn't seem to matter, but we'll keep it aligned with the vendor driver.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190409205427.6943-2-robh@kernel.org
-
- 12 April 2019, 2 commits
-
-
By Lu Baolu

By default, for performance reasons, the Intel IOMMU driver won't flush the IOTLB immediately after a buffer is unmapped; it schedules a thread and flushes the IOTLB in batched mode. This isn't suitable for untrusted devices, since such a device can still access the memory even when it isn't supposed to do so.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Joerg Roedel

The exclusion range limit register needs to contain the base address of the last page that is part of the range, as bits 0-11 of this register are treated as 0xfff by the hardware for comparisons. So correctly set the exclusion range in the hardware to the last page which is _in_ the range.

Fixes: b2026aa2 ('x86, AMD IOMMU: add functions for programming IOMMU MMIO space')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
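In concrete terms, a hedged sketch of the arithmetic (not the literal hunk; the macro and function names are illustrative): because the hardware treats bits 0-11 of the limit register as 0xfff, the programmed value must be the page-aligned base of the last page inside the range, not the first page after it.

```c
#include <linux/types.h>

#define DEMO_PAGE_MASK	(~0xfffULL)	/* 4K exclusion-range granularity */

/* Compute the exclusion-range limit from its base address and length. */
static u64 demo_exclusion_limit(u64 start, u64 length)
{
	/* Base address of the last page that is still inside the range. */
	return (start + length - 1) & DEMO_PAGE_MASK;
}
```

For example, a 4K range starting at 0x1000 yields a limit of 0x1000, which the hardware then compares as 0x1fff.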
-
- 11 April 2019, 2 commits
-
-
By Christoph Hellwig

We already do this in the caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Christoph Hellwig

The intel-iommu driver currently has a partial reimplementation of the direct mapping code for devices that use pass-through mode. Replace that code with calls to the relevant dma_direct routines at the highest level. This means we have exactly the same behavior as the dma-direct code itself, and can prepare for eventually only attaching the intel_iommu ops to devices that actually need dynamic IOMMU mappings.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
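A hedged sketch of the dispatch pattern being described (the identity-map predicate and the dynamic-mapping stub are invented placeholders; the real driver has its own way of deciding that a device is in pass-through mode):

```c
#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>

/* Illustrative stand-ins for the driver's own checks and IOMMU path. */
static bool demo_uses_identity_map(struct device *dev)
{
	return true;	/* placeholder: "device is in pass-through mode" */
}

static dma_addr_t demo_iommu_map_page(struct device *dev, struct page *page,
				      unsigned long offset, size_t size,
				      enum dma_data_direction dir,
				      unsigned long attrs)
{
	return DMA_MAPPING_ERROR;	/* placeholder for the dynamic IOMMU path */
}

/* Pass-through devices go straight to the generic dma-direct code. */
static dma_addr_t demo_map_page(struct device *dev, struct page *page,
				unsigned long offset, size_t size,
				enum dma_data_direction dir,
				unsigned long attrs)
{
	if (demo_uses_identity_map(dev))
		return dma_direct_map_page(dev, page, offset, size, dir, attrs);

	return demo_iommu_map_page(dev, page, offset, size, dir, attrs);
}
```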
-