- 06 Jul 2018, 1 commit
-
By yzhai003@ucr.edu
The argument "page_size" passed to fetch_pte() could be left uninitialized if the function returns NULL. The caller, iommu_unmap_page(), checks the return value, but page_size is also used outside that if block.

Signed-off-by: yzhai003@ucr.edu <yzhai003@ucr.edu>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
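A minimal sketch of the bug pattern under simplified signatures (the real fetch_pte()/iommu_unmap_page() in the driver are more involved; the bodies here are illustrative only):

```c
#include <linux/types.h>

/* Hedged sketch, not the actual driver code. */
static u64 *fetch_pte(u64 *root, unsigned long iova, unsigned long *page_size)
{
	if (!root)
		return NULL;		/* bug path: *page_size never written */

	*page_size = 4096;		/* illustrative */
	return root;
}

static unsigned long iommu_unmap_page_sketch(u64 *root, unsigned long iova)
{
	unsigned long page_size = 0;	/* the fix: initialize before the call */
	u64 *pte = fetch_pte(root, iova, &page_size);

	if (pte)
		*pte = 0;		/* use inside the check is fine */

	return page_size;		/* read even when pte is NULL */
}
```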
-
- 12 Jun 2018, 1 commit
-
By Linus Torvalds
This reverts commit b468620f. It turns out that this broke drm on AMD platforms.

Quoting Gabriel C: "I can confirm reverting b468620f fixes that issue for me. The GPU is working fine with SME enabled. Now with working GPU :) I can also confirm performance is back to normal without doing any other workarounds."

Christian König analyzed it partially: "As far as I analyzed it we now get an -ENOMEM from dma_alloc_attrs() in drivers/gpu/drm/ttm/ttm_page_alloc_dma.c when IOMMU is enabled."

Christoph Hellwig responded: "I think the prime issue is that dma_direct_alloc respects the dma mask. Which we don't need if actually using the iommu. This would be mostly harmless except for the SEV bit high in the address that makes the checks fail. For now I'd say revert this commit for 4.17/4.18-rc and I'll look into addressing these issues properly."

Reported-and-bisected-by: Gabriel C <nix.or.die@gmail.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@kernel.org # v4.17
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Jun 2018, 1 commit
-
By Thomas Gleixner
To address the EBUSY failure of interrupt affinity settings in case the previous setting has not been cleaned up yet, use the new apic_ack_irq() function instead of the special ir_ack_apic_edge() implementation, which is merely a wrapper around ack_APIC_irq(). Preparatory change for the real fix.

Fixes: dccfe314 ("x86/vector: Simplify vector move cleanup")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Song Liu <songliubraving@fb.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <liu.song.a23@gmail.com>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: stable@vger.kernel.org
Cc: Mike Travis <mike.travis@hpe.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Tariq Toukan <tariqt@mellanox.com>
Link: https://lkml.kernel.org/r/20180604162224.555716895@linutronix.de
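The mechanical shape of the change, sketched against the driver's irq_chip (other fields elided):

```c
/* Hedged sketch; only the .irq_ack swap is the point here. */
static struct irq_chip amd_ir_chip = {
	/* ... */
	.irq_ack = apic_ack_irq,	/* was: ir_ack_apic_edge */
	/* ... */
};
```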
-
- 15 May 2018, 2 commits
-
By Anna-Maria Gleixner
The check for !dev_data->domain in __detach_device() emits a warning and returns. The calling code in detach_device() then dereferences dev_data->domain unconditionally, so if dev_data->domain is NULL the warning is immediately followed by a NULL pointer dereference. The calling code in cleanup_domain() loops infinitely when !dev_data->domain, because the check in __detach_device() returns immediately and dev_list is never changed. do_detach() duplicates this check without emitting a warning.

Move the check, along with the explanation from the do_detach() code, into the caller detach_device() and return immediately. Throw an error when hitting the condition in cleanup_domain().

Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
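A hedged sketch of the reordered caller (helper names follow the driver; the locking detail is simplified):

```c
/* Illustrative sketch, not the exact driver code. */
static void detach_device(struct device *dev)
{
	struct iommu_dev_data *dev_data = get_dev_data(dev);
	unsigned long flags;

	/*
	 * Moved check: the device may already have been detached by the
	 * generic iommu code; bail out before dev_data->domain is
	 * dereferenced below.
	 */
	if (WARN_ON(!dev_data->domain))
		return;

	spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
	__detach_device(dev_data);
	spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);

	/* ... subsequent teardown uses dev_data->domain ... */
}
```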
-
By Anna-Maria Gleixner
Suggested-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 11 May 2018, 1 commit
-
By Gil Kupfer
Adds a "pci=noats" boot parameter. When supplied, all ATS-related functions fail immediately and the IOMMU is configured to not use the device IOTLB. Any function that checks for ATS capabilities directly against the devices should also check this flag. Currently, such functions exist only in IOMMU drivers, and they are covered by this patch.

The motivation behind this patch is the existence of malicious devices. Lots of research has been done on how to use the IOMMU as protection from such devices. When ATS is supported, any I/O device can access any physical address by faking device-IOTLB entries. Adding the ability to ignore these entries lets sysadmins enhance system security.

Signed-off-by: Gil Kupfer <gilkup@cs.technion.ac.il>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
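A hedged sketch of the required pattern: any code that probes ATS capabilities directly must also honor "pci=noats" via the new pci_ats_disabled() helper (device_may_use_ats() is a hypothetical name for illustration):

```c
#include <linux/pci.h>

/* Sketch only; the driver checks vary per call site. */
static bool device_may_use_ats(struct pci_dev *pdev)
{
	/* Administrative kill switch set by the "pci=noats" parameter. */
	if (pci_ats_disabled())
		return false;

	return pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) > 0;
}
```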
-
- 03 May 2018, 3 commits
-
By Gary R Hook
A new event type has been defined in the AMD IOMMU spec: 0x09, "invalid PPR request". Add support for logging this type of event.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
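A hedged sketch of the shape of the addition; the event-type value (0x09) comes from the spec, while the decoder's format string here is an assumption:

```c
#define EVENT_TYPE_INV_PPR_REQ	0x9	/* new in the spec update */

	/* Inside the event-log decoder's switch on the event type (sketch). */
	case EVENT_TYPE_INV_PPR_REQ:
		pr_err("INVALID_PPR_REQUEST device=%02x:%02x.%x pasid=0x%05x address=0x%016llx flags=0x%04x\n",
		       PCI_BUS_NUM(devid), PCI_SLOT(devid), PCI_FUNC(devid),
		       pasid, address, flags);
		break;
```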
-
By Gary R Hook
Provide detailed data for each event, as appropriate.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Arnd Bergmann
The newly introduced lock is only used when CONFIG_IRQ_REMAP is enabled:

    drivers/iommu/amd_iommu.c:86:24: error: 'iommu_table_lock' defined but not used [-Werror=unused-variable]
    static DEFINE_SPINLOCK(iommu_table_lock);

This moves the definition next to the user, within the #ifdef-protected section of the file.

Fixes: ea6166f4 ("iommu/amd: Split irq_lookup_table out of the amd_iommu_devtable_lock")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
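The resulting placement, sketched:

```c
#ifdef CONFIG_IRQ_REMAP
/* Only the interrupt-remapping code takes this lock. */
static DEFINE_SPINLOCK(iommu_table_lock);
#endif
```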
-
- 29 Mar 2018, 10 commits
-
In the unlikely case that alloc_irq_table() is not able to return a remap table, "ret" will be assigned an error code. Later the code checks `index', and since it is negative (it is initialized to -1) the function properly aborts, but it returns -1 instead of the intended -ENOMEM. Correct this by assigning -ENOMEM to index.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
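A hedged sketch of the corrected error path (variable names from the commit text; the surrounding function is elided):

```c
	struct irq_remap_table *table;
	int index = -ENOMEM;	/* was -1, which is not a meaningful errno */

	table = alloc_irq_table(devid);
	if (!table)
		goto out;	/* `index' is what the caller ends up seeing */
```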
-
Before commit 0bb6e243 ("iommu/amd: Support IOMMU_DOMAIN_DMA type allocation") amd_iommu_devtable_lock had a read_lock() user, but now there are none. In fact, after the mentioned commit we have only write_lock() users of the lock. Since there is no reason to keep it as a writer lock, change its type to a spin_lock. I *think* that we might even be able to remove the lock, because all its current users seem to have their own protection.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
The irq_remap_table is allocated while the iommu_table_lock is held with interrupts disabled. From looking at the call sites, all callers are in early device initialisation (apic_bsp_setup(), pci_enable_device(), pci_enable_msi()), so it makes sense to drop the lock, which also enables interrupts, and try to allocate that memory with GFP_KERNEL instead of GFP_ATOMIC. Since the iommu_table_lock is dropped during the allocation, we need to recheck whether the table exists after the lock has been reacquired. I *think* it is impossible for the "devid" entry to appear in irq_lookup_table while the lock is dropped, since the same device can only be probed once. However, I check for both cases, just to be sure.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
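A hedged sketch of the drop-and-recheck pattern (lock and table names follow the commit text; get_or_alloc_table() is a hypothetical wrapper, and the DTE setup steps are omitted):

```c
static struct irq_remap_table *get_or_alloc_table(u16 devid)
{
	struct irq_remap_table *table, *new_table;
	unsigned long flags;

	spin_lock_irqsave(&iommu_table_lock, flags);
	table = irq_lookup_table[devid];
	spin_unlock_irqrestore(&iommu_table_lock, flags);
	if (table)
		return table;

	/* Lock dropped, interrupts enabled: a sleeping allocation is fine. */
	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
	if (!new_table)
		return NULL;

	spin_lock_irqsave(&iommu_table_lock, flags);
	table = irq_lookup_table[devid];
	if (!table) {
		/* Recheck under the lock: nobody raced us, install ours. */
		irq_lookup_table[devid] = new_table;
		table = new_table;
		new_table = NULL;
	}
	spin_unlock_irqrestore(&iommu_table_lock, flags);

	kfree(new_table);	/* only non-NULL if we lost the (unlikely) race */
	return table;
}
```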
-
Setting the IRQ remap table for a specific devid (or its alias devid) involves three steps, and those three steps are repeated each time this is done. Introduce a new helper function, move those steps there, and use that function instead. The compiler can still decide whether it is worth inlining.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
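A sketch of the factored-out helper, assuming the three steps are: install the lookup-table entry, set the DTE IRQ entry, and flush the DTE (helper names hedged):

```c
/* Hedged sketch of the new helper. */
static void set_remap_table_entry(struct amd_iommu *iommu, u16 devid,
				  struct irq_remap_table *table)
{
	irq_lookup_table[devid] = table;	/* step 1: publish the table   */
	set_dte_irq_entry(devid, table);	/* step 2: point the DTE at it */
	iommu_flush_dte(iommu, devid);		/* step 3: flush the DTE       */
}
```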
-
The variable of type struct irq_remap_table is always named `table', except in amd_iommu_update_ga(), where it is called `irt'. Make it consistent and name it `table' there as well.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
alloc_irq_table() has a special ioapic argument. If set, it will pre-allocate / reserve the first 32 indexes. The argument is true at only one call site, and alloc_irq_table() becomes a little simpler if we extract the special bits to the caller. The caller of irq_remapping_alloc() is holding irq_domain_mutex, so the initialization via iommu->irte_ops->set_allocated() should not race against other users.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
The function get_irq_table() reads/writes irq_lookup_table while holding the amd_iommu_devtable_lock. It also modifies amd_iommu_dev_table[].data[2]. set_dte_entry() uses amd_iommu_dev_table[].data[0|1] (under the domain->lock), so that should be okay. The access to the iommu is serialized with the iommu's own lock. So split get_irq_table() out from under amd_iommu_devtable_lock.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
domain_id_alloc() and domain_id_free() are used for id management. The two functions share a bitmap (amd_iommu_pd_alloc_bitmap) and set/clear bits based on id allocation. There is no need to share this with amd_iommu_devtable_lock; they can use their own lock for this operation.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
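A hedged sketch of the two functions under their own lock (pd_bitmap_lock is the assumed name for the new lock; amd_iommu_pd_alloc_bitmap and MAX_DOMAIN_ID are the driver's existing symbols):

```c
#include <linux/bitmap.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pd_bitmap_lock);	/* assumed name for the new lock */

static u16 domain_id_alloc(void)
{
	unsigned long flags;
	int id;

	spin_lock_irqsave(&pd_bitmap_lock, flags);
	id = find_first_zero_bit(amd_iommu_pd_alloc_bitmap, MAX_DOMAIN_ID);
	BUG_ON(id == 0);		/* id 0 is reserved */
	if (id > 0 && id < MAX_DOMAIN_ID)
		__set_bit(id, amd_iommu_pd_alloc_bitmap);
	else
		id = 0;
	spin_unlock_irqrestore(&pd_bitmap_lock, flags);

	return id;
}

static void domain_id_free(int id)
{
	unsigned long flags;

	spin_lock_irqsave(&pd_bitmap_lock, flags);
	if (id > 0 && id < MAX_DOMAIN_ID)
		__clear_bit(id, amd_iommu_pd_alloc_bitmap);
	spin_unlock_irqrestore(&pd_bitmap_lock, flags);
}
```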
-
alloc_dev_data() adds new items to dev_data_list and search_dev_data() searches for items in this list. Both protect access to the list with a spinlock. There is no need to navigate back and forth within the list, and there is also no deleting of a specific item. This qualifies the list to become a lock-less list, and as part of this the spinlock can be removed. With this change the ordering of items within the list changes: before the change, new items were added to the end of the list; now they are added to the front. I don't think it matters, but I wanted to mention it.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
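A hedged sketch of the lock-less variant using the kernel's llist API (struct layout simplified):

```c
#include <linux/llist.h>
#include <linux/slab.h>

struct iommu_dev_data {
	struct llist_node dev_data_list;	/* was: struct list_head */
	u16 devid;
	/* ... other fields elided ... */
};

static LLIST_HEAD(dev_data_list);		/* no spinlock needed anymore */

static struct iommu_dev_data *alloc_dev_data(u16 devid)
{
	struct iommu_dev_data *dev_data;

	dev_data = kzalloc(sizeof(*dev_data), GFP_KERNEL);
	if (!dev_data)
		return NULL;

	dev_data->devid = devid;
	/* New items now go on the front of the list. */
	llist_add(&dev_data->dev_data_list, &dev_data_list);

	return dev_data;
}

static struct iommu_dev_data *search_dev_data(u16 devid)
{
	struct iommu_dev_data *dev_data;
	struct llist_node *node;

	if (llist_empty(&dev_data_list))
		return NULL;

	node = dev_data_list.first;
	llist_for_each_entry(dev_data, node, dev_data_list) {
		if (dev_data->devid == devid)
			return dev_data;
	}

	return NULL;
}
```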
-
find_dev_data() does not check whether the return value of alloc_dev_data() is NULL. This used to be okay because the pointer was simply returned as-is. Since commit df3f7a6e ("iommu/amd: Use is_attach_deferred call-back") the pointer may be used within find_dev_data(), so a NULL check is required.

Cc: Baoquan He <bhe@redhat.com>
Fixes: df3f7a6e ("iommu/amd: Use is_attach_deferred call-back")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 20 Mar 2018, 2 commits
-
By Christoph Hellwig
This cleans up the code a lot by removing duplicate logic.

Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Muli Ben-Yehuda <mulix@mulix.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: iommu@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20180319103826.12853-8-hch@lst.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Christoph Hellwig
The generic DMA-direct (CONFIG_DMA_DIRECT_OPS=y) implementation is now functionally equivalent to the x86 nommu dma_map implementation, so switch over to using it. That includes switching from using x86_dma_supported in various IOMMU drivers to using dma_direct_supported instead, which provides the same functionality.

Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Muli Ben-Yehuda <mulix@mulix.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: iommu@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20180319103826.12853-4-hch@lst.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 15 Mar 2018, 2 commits
-
By Gary R Hook
Remove printk and use a preferable error logging function.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Suravee Suthikulpanit
Since the AMD IOMMU driver currently flushes all TLB entries whenever the page size is more than one page, use the same interface for both iommu_ops.flush_iotlb_all() and iommu_ops.iotlb_sync().

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
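The wiring this implies, sketched (the exact assignment shape in the driver is an assumption):

```c
/* Hedged sketch: one flush routine registered for both callbacks. */
static const struct iommu_ops amd_iommu_ops_sketch = {
	/* ... */
	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
	.iotlb_sync	 = amd_iommu_flush_iotlb_all,
	/* ... */
};
```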
-
- 15 Feb 2018, 1 commit
-
By Scott Wood
get_irq_table() previously acquired amd_iommu_devtable_lock, which is not a raw lock and thus cannot be acquired from atomic context on PREEMPT_RT. Many calls to modify_irte*() come from atomic context due to the IRQ desc->lock, as does amd_iommu_update_ga() due to the preemption disabling in vcpu_load/put().

The only difference between calling get_irq_table() and reading from irq_lookup_table[] directly, other than the lock acquisition and the amd_iommu_rlookup_table[] check, is the case where the table entry is unpopulated. That should never happen when looking up a devid that came from an irq_2_irte struct, as get_irq_table() would already have been called on that devid during irq_remapping_alloc().

The lock acquisition is not needed in these cases because entries in irq_lookup_table[] never change once non-NULL -- nor would the amd_iommu_devtable_lock usage in get_irq_table() provide meaningful protection if they did, since the lock is released before the looked-up table is used by the caller.

Rename the old get_irq_table() to alloc_irq_table(), and create a new lockless get_irq_table() to be used in non-allocating contexts, which WARNs if it doesn't find what it's looking for.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
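A hedged sketch of the new lock-less lookup (array names follow the commit message):

```c
/* Sketch; safe without a lock because entries never change once non-NULL. */
static struct irq_remap_table *get_irq_table(u16 devid)
{
	struct irq_remap_table *table;

	if (WARN_ONCE(!amd_iommu_rlookup_table[devid],
		      "%s: no iommu for devid %x\n", __func__, devid))
		return NULL;

	table = irq_lookup_table[devid];
	if (WARN_ONCE(!table, "%s: no table for devid %x\n", __func__, devid))
		return NULL;

	return table;
}
```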
-
- 14 Feb 2018, 1 commit
-
By Suravee Suthikulpanit
Currently, iommu_unmap(), iommu_unmap_fast() and iommu_map_sg() return size_t. However, some of the return values are error codes (< 0), which can be misinterpreted as large sizes. Therefore, return size 0 instead to signify failure to map/unmap.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
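A hedged sketch of the convention change in a driver unmap path (the function name is hypothetical, mirroring the AMD driver's unmap callback):

```c
/*
 * Sketch: with a size_t return type, a negative errno such as -EINVAL
 * would be misread as an enormous size, so failure becomes size 0.
 */
static size_t sketch_unmap(struct iommu_domain *dom, unsigned long iova,
			   size_t page_size)
{
	struct protection_domain *domain = to_pdomain(dom);

	if (domain->mode == PAGE_MODE_NONE)
		return 0;	/* was: return -EINVAL; */

	return iommu_unmap_page(domain, iova, page_size);
}
```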
-
- 13 Feb 2018, 2 commits
-
By Scott Wood
search_dev_data() acquires a non-raw lock, which can't be done from atomic context on PREEMPT_RT. There is no need to look at dev_data because guest_mode should never be set if use_vapic is not set.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Scott Wood
Several functions in this driver are called from atomic context, and thus raw locks must be used in order to be safe on PREEMPT_RT. This includes paths that must wait for command completion, which is a potential PREEMPT_RT latency concern but not easily avoidable.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
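A hedged sketch of the conversion pattern (the struct and function here are illustrative stand-ins for the driver's iommu lock users):

```c
#include <linux/spinlock.h>

struct amd_iommu_sketch {
	raw_spinlock_t lock;		/* was: spinlock_t */
};

static void iommu_do_something(struct amd_iommu_sketch *iommu)
{
	unsigned long flags;

	/* raw_spin_lock_* stays a true spinlock even on PREEMPT_RT. */
	raw_spin_lock_irqsave(&iommu->lock, flags);
	/* ... issue a command, possibly poll for completion ... */
	raw_spin_unlock_irqrestore(&iommu->lock, flags);
}
```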
-
- 12 Jan 2018, 1 commit
-
By Sinan Kaya
pci_get_bus_and_slot() is restrictive in that it assumes domain=0 for the location of a PCI device, which prevents the device drivers from being reused for other domain numbers. Getting ready to remove pci_get_bus_and_slot() in favor of pci_get_domain_bus_and_slot(), hard-code the domain number as 0 for the AMD IOMMU driver.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
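The replacement call, sketched with the driver's devid decoding (hedged; individual call sites may differ):

```c
/* was: pdev = pci_get_bus_and_slot(PCI_BUS_NUM(devid), devid & 0xff); */
pdev = pci_get_domain_bus_and_slot(0, PCI_BUS_NUM(devid), devid & 0xff);
```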
-
- 30 Dec 2017, 1 commit
-
By Thomas Gleixner
The 'early' argument of irq_domain_activate_irq() is actually used to denote reservation mode. To avoid confusion, rename it before abuse happens. No functional change.

Fixes: 72491643 ("genirq/irqdomain: Update irq_domain_ops.activate() signature")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexandru Chirvasitu <achirvasub@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Mikael Pettersson <mikpelinux@gmail.com>
Cc: Josh Poulson <jopoulso@microsoft.com>
Cc: Mihai Costache <v-micos@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-pci@vger.kernel.org
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Simon Xiao <sixiao@microsoft.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Jork Loeser <Jork.Loeser@microsoft.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: devel@linuxdriverproject.org
Cc: KY Srinivasan <kys@microsoft.com>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Sakari Ailus <sakari.ailus@intel.com>
Cc: linux-media@vger.kernel.org
-
- 21 Dec 2017, 2 commits
-
By Gary R Hook
The AMD IOMMU specification Rev 3.00 (December 2016) introduces a new Enhanced PPR Handling Support (EPHSup) bit in the MMIO register offset 0030h (IOMMU Extended Feature Register). When EPHSup=1, the IOMMU hardware requires the PPR bit of the device table entry (DTE) to be set in order to support PPR for a particular endpoint device. Please see https://support.amd.com/TechDocs/48882_IOMMU.pdf for this revision of the AMD IOMMU specification.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
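A hedged sketch of the resulting DTE setup (constant names follow the commit/spec and are assumptions; `ppr` stands for the caller's request to enable PPR for the device):

```c
	/* Sketch: request PPR in the DTE only when the IOMMU has EPHSup. */
	if (ppr) {
		if (iommu_feature(iommu, FEATURE_EPHSUP))
			pte_root |= 1ULL << DEV_ENTRY_PPR;
	}
```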
-
By Gary R Hook
When an event of unknown type occurs, the default information written to the syslog should dump the raw event data. This could provide insight into the event that occurred.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
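A hedged sketch of the default case in the event-log decoder (the format string is an assumption; `event` is the raw 4-dword log entry):

```c
	default:
		pr_err("UNKNOWN event[0]=0x%08x event[1]=0x%08x event[2]=0x%08x event[3]=0x%08x\n",
		       event[0], event[1], event[2], event[3]);
```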
-
- 04 Nov 2017, 3 commits
-
By Gary R Hook
The extent of pages specified when applying a reserved region should include up to the last page of the range, but not the page following the range.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Fixes: 8d54d6c8 ('iommu/amd: Implement apply_dm_region call-back')
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
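The off-by-one, sketched with the driver's IOVA_PFN() helper (hedged):

```c
	/* The end PFN must cover the last page of the range, not the
	 * page after it -- hence the "- 1". */
	start = IOVA_PFN(region->start);
	end   = IOVA_PFN(region->start + region->length - 1);
```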
-
By Colin Ian King
Variable flush_addr is being assigned but is never read; it is redundant and can be removed. This cleans up the clang warning:

    drivers/iommu/amd_iommu.c:2388:2: warning: Value stored to 'flush_addr' is never read

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
By Alex Williamson
On an is_allocated() interrupt index, we ALIGN() the current index and then increment it via the for loop, guaranteeing that it is no longer aligned for alignments >1. We instead need to align the next index, to guarantee forward progress, moving the increment-only to the case where the index was found to be unallocated.

Fixes: 37946d95 ('iommu/amd: Add align parameter to alloc_irq_index()')
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
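A hedged sketch of the forward-progress fix (not the exact driver loop; table, iommu, and MAX_IRQS_PER_TABLE are the driver's existing symbols):

```c
/* Sketch: find `count` consecutive free slots starting on an aligned index. */
static int find_free_run(struct irq_remap_table *table,
			 struct amd_iommu *iommu, int count, int align)
{
	int index, c;

	for (index = ALIGN(table->min_index, align), c = 0;
	     index < MAX_IRQS_PER_TABLE;) {
		if (!iommu->irte_ops->is_allocated(table, index)) {
			c += 1;
			index++;			 /* advance only while free */
		} else {
			c = 0;
			index = ALIGN(index + 1, align); /* align the *next* slot */
		}

		if (c == count)				 /* aligned free run found */
			return index - count;
	}

	return -ENOSPC;
}
```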
-
- 13 Oct 2017, 1 commit
-
By Joerg Roedel
The function only sends the flush command to the IOMMU(s), but does not wait for its completion when it returns. Fix that.

Fixes: 601367d7 ('x86/amd-iommu: Remove iommu_flush_domain function')
Cc: stable@vger.kernel.org # >= 2.6.33
Signed-off-by: Joerg Roedel <jroedel@suse.de>
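A hedged sketch of the flush-then-wait pattern; the commit message does not name the fixed function, and the helpers below are the driver's usual ones:

```c
	domain_flush_tlb_pde(dom);	/* sends the flush commands ...        */
	domain_flush_complete(dom);	/* ... added: wait until they complete */
```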
-
- 12 Oct 2017, 1 commit
-
By Tomasz Nowicki
Since IOVA allocation failure is not an unusual case, we need to flush the CPUs' rcache in the hope that we will succeed in the next round. However, it is useful to decide whether we actually need the rcache flush step, for two reasons:

- Scalability. On a large system with ~100 CPUs, iterating over and flushing the rcache for each CPU becomes a serious bottleneck, so we may want to defer it.
- free_cpu_cached_iovas() does not care about the max PFN we are interested in. Thus we may flush our rcaches and still get no new IOVA, as in the commonly used scenario:

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
        iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);

    if (!iova)
        iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);

  1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to give PCI devices a SAC address.
  2. alloc_iova() fails due to a full 32-bit space.
  3. The rcaches contain PFNs out of the 32-bit space, so free_cpu_cached_iovas() throws entries away for nothing and alloc_iova() fails again.
  4. The next alloc_iova_fast() call cannot take advantage of the rcache, since we have just defeated the caches. In this case we pick the slowest option to proceed.

This patch reworks the flushed_rcache local flag to be an additional function argument instead, controlling the rcache flush step, and updates all users to do the flush as the last chance.

Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
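With the new argument, the two-stage pattern above becomes (a sketch of the reworked callers; the final boolean says whether a failed allocation may flush the per-CPU rcaches and retry):

```c
	/* First try without flushing; only the last-chance call may flush. */
	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
		iova = alloc_iova_fast(iovad, iova_len,
				       DMA_BIT_MASK(32) >> shift, false);

	if (!iova)
		iova = alloc_iova_fast(iovad, iova_len,
				       dma_limit >> shift, true);
```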
-
- 11 Oct 2017, 1 commit
-
By Tom Lendacky
When SME memory encryption is active, it relies on SWIOTLB to handle DMA for devices that cannot support the addressing requirements of having the encryption mask set in the physical address. The IOMMU currently disables SWIOTLB if it is not running in passthrough mode. This is not desired, as non-PCI devices attempting DMA may fail. Update the code so SWIOTLB is not disabled when SME is active.

Fixes: 2543a786 ("iommu/amd: Allow the AMD IOMMU to work with memory encryption")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
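A hedged sketch of the adjusted decision (swiotlb and sme_me_mask are existing kernel symbols; the exact predicate in the driver may differ):

```c
	/* Keep SWIOTLB around when SME is active, even outside passthrough
	 * mode, so devices not behind the IOMMU can still bounce their DMA. */
	swiotlb = (iommu_pass_through || sme_me_mask) ? 1 : 0;
```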
-
- 10 Oct 2017, 2 commits
-
By Joerg Roedel
Make use of the new alignment capability of alloc_irq_index() to enforce IRQ index alignment for MSI.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2b324506 ('iommu/amd: Add routines to manage irq remapping tables')
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
By Joerg Roedel
For multi-MSI IRQ ranges the IRQ index needs to be aligned to the power-of-two of the requested IRQ count. Extend the alloc_irq_index() function to allow such an allocation.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2b324506 ('iommu/amd: Add routines to manage irq remapping tables')
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
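A hedged sketch of the alignment rule used by both of the commits above (first_aligned_index() is a hypothetical helper for illustration):

```c
#include <linux/log2.h>

/* A multi-MSI block of `count` IRQs must start on an index aligned to
 * the next power of two of the count. */
static int first_aligned_index(int index, int count, bool align)
{
	unsigned int alignment = align ? roundup_pow_of_two(count) : 1;

	return ALIGN(index, alignment);
}
```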
-
- 27 Sep 2017, 1 commit
-
By Zhen Lei
Now that the cached node optimisation can apply to all allocations, the couple of users which were playing tricks with dma_32bit_pfn in order to benefit from it can stop doing so. Conversely, there is also no need for all the other users to explicitly calculate a 'real' 32-bit PFN, when init_iova_domain() can happily do that itself from the page granularity.

CC: Thierry Reding <thierry.reding@gmail.com>
CC: Jonathan Hunter <jonathanh@nvidia.com>
CC: David Airlie <airlied@linux.ie>
CC: Sudeep Dutt <sudeep.dutt@intel.com>
CC: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
[rm: use iova_shift(), rewrote commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
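The resulting call shape in the AMD driver, sketched (hedged; other users drop their 32-bit PFN argument the same way):

```c
	/* was: init_iova_domain(&dma_dom->iovad, PAGE_SIZE,
	 *			 IOVA_START_PFN, DMA_32BIT_PFN); */
	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);
```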
-