- 29 March 2018, 7 commits
-
-
Setting the IRQ remap table for a specific devid (or its alias devid) involves three steps, and those steps are repeated each time this is done. Introduce a new helper function, move the steps there, and use that function instead. The compiler can still decide whether it is worth inlining.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
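The three repeated steps are: publish the table in the per-devid lookup array, point the device table entry (DTE) at it, and flush the DTE. A minimal sketch of what such a helper can look like (the body is illustrative, built from function names the driver already uses elsewhere):

```c
static void set_remap_table_entry(struct amd_iommu *iommu, u16 devid,
				  struct irq_remap_table *table)
{
	irq_lookup_table[devid] = table;	/* publish for later lookups */
	set_dte_irq_entry(devid, table);	/* point the DTE at the table */
	iommu_flush_dte(iommu, devid);		/* make the hardware see it */
}
```

Callers would then invoke it once for the devid and once for its alias instead of open-coding the three steps.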
-
The variable of type struct irq_remap_table is always named `table', except in amd_iommu_update_ga() where it is called `irt'. Make it consistent and name it `table' there as well.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
alloc_irq_table() has a special ioapic argument. If set, it pre-allocates / reserves the first 32 indexes. The argument is true at only one call site, and alloc_irq_table() becomes a little simpler if the special bits are extracted to the caller. The caller of irq_remapping_alloc() holds irq_domain_mutex, so the initialization via iommu->irte_ops->set_allocated() should not race against other users.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
The function get_irq_table() reads/writes irq_lookup_table while holding amd_iommu_devtable_lock. It also modifies amd_iommu_dev_table[].data[2]. set_dte_entry() uses amd_iommu_dev_table[].data[0|1] (under domain->lock), so that should be okay. Access to the IOMMU itself is serialized with its own (the IOMMU's) lock. So move get_irq_table() out from under amd_iommu_devtable_lock.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
domain_id_alloc() and domain_id_free() are used for ID management. The two functions share a bitmap (amd_iommu_pd_alloc_bitmap) and set/clear bits based on ID allocation. There is no need to share this with amd_iommu_devtable_lock; the bitmap can use its own lock for this operation.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
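A sketch of the allocation/free pair with a dedicated lock (the lock name and bounds handling are illustrative):

```c
static DEFINE_SPINLOCK(pd_bitmap_lock);

static u16 domain_id_alloc(void)
{
	unsigned long flags;
	int id;

	spin_lock_irqsave(&pd_bitmap_lock, flags);
	id = find_first_zero_bit(amd_iommu_pd_alloc_bitmap, MAX_DOMAIN_ID);
	if (id > 0 && id < MAX_DOMAIN_ID)
		__set_bit(id, amd_iommu_pd_alloc_bitmap);
	else
		id = 0;		/* 0 means allocation failed; ID 0 is never handed out */
	spin_unlock_irqrestore(&pd_bitmap_lock, flags);

	return id;
}

static void domain_id_free(int id)
{
	unsigned long flags;

	spin_lock_irqsave(&pd_bitmap_lock, flags);
	if (id > 0 && id < MAX_DOMAIN_ID)
		__clear_bit(id, amd_iommu_pd_alloc_bitmap);
	spin_unlock_irqrestore(&pd_bitmap_lock, flags);
}
```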
-
alloc_dev_data() adds new items to dev_data_list and search_dev_data() searches for items in this list. Both protect access to the list with a spinlock. There is no need to walk the list in both directions and no specific item is ever deleted, which qualifies the list to become a lock-less list; as part of this, the spinlock can be removed. With this change the ordering of items within the list changes: before, new items were added to the end of the list, now they are added to the front. I don't think it matters, but wanted to mention it.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
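A sketch of the lock-less list pattern described here, using the kernel's llist API (struct and field names follow the driver's existing naming; treat the bodies as illustrative):

```c
static LLIST_HEAD(dev_data_list);

static struct iommu_dev_data *alloc_dev_data(u16 devid)
{
	struct iommu_dev_data *dev_data;

	dev_data = kzalloc(sizeof(*dev_data), GFP_KERNEL);
	if (!dev_data)
		return NULL;

	dev_data->devid = devid;
	/* New entries now go to the front of the lock-less list. */
	llist_add(&dev_data->dev_data_list, &dev_data_list);

	return dev_data;
}

static struct iommu_dev_data *search_dev_data(u16 devid)
{
	struct iommu_dev_data *dev_data;
	struct llist_node *node;

	if (llist_empty(&dev_data_list))
		return NULL;

	node = dev_data_list.first;
	llist_for_each_entry(dev_data, node, dev_data_list) {
		if (dev_data->devid == devid)
			return dev_data;
	}

	return NULL;
}
```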
-
find_dev_data() does not check whether the return value of alloc_dev_data() is NULL. This used to be okay because the pointer was simply returned as-is. Since commit df3f7a6e ("iommu/amd: Use is_attach_deferred call-back") the pointer may be dereferenced within find_dev_data(), so a NULL check is required.

Cc: Baoquan He <bhe@redhat.com>
Fixes: df3f7a6e ("iommu/amd: Use is_attach_deferred call-back")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
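A sketch of the fix, assuming the defer_attach handling introduced by the referenced commit (details illustrative):

```c
static struct iommu_dev_data *find_dev_data(u16 devid)
{
	struct iommu_dev_data *dev_data;
	struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];

	dev_data = search_dev_data(devid);

	if (dev_data == NULL) {
		dev_data = alloc_dev_data(devid);
		if (!dev_data)		/* the missing check */
			return NULL;

		if (translation_pre_enabled(iommu))
			dev_data->defer_attach = true;	/* would oops without the check above */
	}

	return dev_data;
}
```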
-
- 15 March 2018, 2 commits
-
-
Submitted by Gary R Hook
Remove printk and use a preferable error logging function.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Suravee Suthikulpanit
Since the AMD IOMMU driver currently flushes all TLB entries whenever more than one page is involved, use the same implementation for both iommu_ops.flush_iotlb_all() and iommu_ops.iotlb_sync().

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 February 2018, 1 commit
-
-
Submitted by Scott Wood
get_irq_table() previously acquired amd_iommu_devtable_lock, which is not a raw lock and thus cannot be acquired from atomic context on PREEMPT_RT. Many calls to modify_irte*() come from atomic context due to the IRQ desc->lock, as does amd_iommu_update_ga() due to the preemption disabling in vcpu_load/put(). The only difference between calling get_irq_table() and reading from irq_lookup_table[] directly, other than the lock acquisition and the amd_iommu_rlookup_table[] check, is the handling of an unpopulated table entry, which should never happen when looking up a devid that came from an irq_2_irte struct, as get_irq_table() would have already been called on that devid during irq_remapping_alloc(). The lock acquisition is not needed in these cases because entries in irq_lookup_table[] never change once non-NULL -- nor would the amd_iommu_devtable_lock usage in get_irq_table() provide meaningful protection if they did, since it is released before the looked-up table is used by the caller. Rename the old get_irq_table() to alloc_irq_table(), and create a new lockless get_irq_table() to be used in non-allocating contexts that WARNs if it doesn't find what it's looking for.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
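A sketch of what the new lockless lookup can look like (WARN messages are illustrative):

```c
static struct irq_remap_table *get_irq_table(u16 devid)
{
	struct irq_remap_table *table;

	/* Entries in irq_lookup_table[] never change once non-NULL,
	 * so a pure lookup needs no lock. */
	if (WARN_ONCE(!amd_iommu_rlookup_table[devid],
		      "%s: no iommu for devid %x\n", __func__, devid))
		return NULL;

	table = irq_lookup_table[devid];
	if (WARN_ONCE(!table, "%s: no table for devid %x\n", __func__, devid))
		return NULL;

	return table;
}
```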
-
- 13 February 2018, 2 commits
-
-
Submitted by Scott Wood
search_dev_data() acquires a non-raw lock, which can't be done from atomic context on PREEMPT_RT. There is no need to look at dev_data because guest_mode should never be set if use_vapic is not set.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Scott Wood
Several functions in this driver are called from atomic context and thus raw locks must be used in order to be safe on PREEMPT_RT. This includes paths that must wait for command completion, which is a potential PREEMPT_RT latency concern but not easily avoidable.

Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
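The conversion itself is mechanical. A sketch of the pattern (the struct field and the function shown are illustrative, not the driver's exact code):

```c
/* On PREEMPT_RT a spinlock_t becomes a sleeping lock, so it must not be
 * taken from hard-IRQ or other atomic context; raw_spinlock_t keeps the
 * classic spinning behaviour. */
struct amd_iommu {
	/* ... */
	raw_spinlock_t lock;		/* was: spinlock_t lock; */
};

static void iommu_send_command(struct amd_iommu *iommu)	/* hypothetical helper */
{
	unsigned long flags;

	raw_spin_lock_irqsave(&iommu->lock, flags);	/* was: spin_lock_irqsave() */
	/* queue the command and, where required, poll for completion */
	raw_spin_unlock_irqrestore(&iommu->lock, flags);
}
```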
-
- 12 January 2018, 1 commit
-
-
Submitted by Sinan Kaya
pci_get_bus_and_slot() is restrictive in that it assumes a PCI device lives in domain 0. This prevents device drivers from being reused for other domain numbers. In preparation for removing pci_get_bus_and_slot() in favor of pci_get_domain_bus_and_slot(), hard-code the domain number as 0 for the AMD IOMMU driver.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
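The change at each call site is a one-liner; a before/after sketch (variable names illustrative):

```c
/* Before: the PCI domain (segment) is implicitly 0. */
pdev = pci_get_bus_and_slot(bus, devfn);

/* After: the domain is explicit; the AMD IOMMU driver passes 0. */
pdev = pci_get_domain_bus_and_slot(0, bus, devfn);
```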
-
- 30 December 2017, 1 commit
-
-
Submitted by Thomas Gleixner
The 'early' argument of irq_domain_activate_irq() is actually used to denote reservation mode. To avoid confusion, rename it before abuse happens. No functional change.

Fixes: 72491643 ("genirq/irqdomain: Update irq_domain_ops.activate() signature")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexandru Chirvasitu <achirvasub@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Mikael Pettersson <mikpelinux@gmail.com>
Cc: Josh Poulson <jopoulso@microsoft.com>
Cc: Mihai Costache <v-micos@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-pci@vger.kernel.org
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Simon Xiao <sixiao@microsoft.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Jork Loeser <Jork.Loeser@microsoft.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: devel@linuxdriverproject.org
Cc: KY Srinivasan <kys@microsoft.com>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Sakari Ailus <sakari.ailus@intel.com>
Cc: linux-media@vger.kernel.org
-
- 21 December 2017, 2 commits
-
-
Submitted by Gary R Hook
The AMD IOMMU specification Rev 3.00 (December 2016) introduces a new Enhanced PPR Handling Support (EPHSup) bit in the MMIO register at offset 0030h (IOMMU Extended Feature Register). When EPHSup=1, the IOMMU hardware requires the PPR bit of the device table entry (DTE) to be set in order to support PPR for a particular endpoint device. Please see https://support.amd.com/TechDocs/48882_IOMMU.pdf for this revision of the AMD IOMMU specification.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
Submitted by Gary R Hook
When an event of unknown type occurs, the default information written to the syslog should dump the raw event data. This could provide insight into the event that occurred.

Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
- 04 November 2017, 3 commits
-
-
Submitted by Gary R Hook
The extent of pages specified when applying a reserved region should include up to the last page of the range, but not the page following the range.

Fixes: 8d54d6c8 ('iommu/amd: Implement apply_dm_region call-back')
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
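A sketch of the off-by-one being described (names approximate): the end PFN has to be derived from the last byte of the region, not from the first byte after it, otherwise one page too many is reserved.

```c
start = IOVA_PFN(region->start);
end   = IOVA_PFN(region->start + region->length - 1);	/* previously without the "- 1" */

WARN_ON_ONCE(reserve_iova(&dma_dom->iovad, start, end) == NULL);
```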
-
Submitted by Colin Ian King
Variable flush_addr is being assigned but is never read; it is redundant and can be removed. This cleans up the clang warning:

drivers/iommu/amd_iommu.c:2388:2: warning: Value stored to 'flush_addr' is never read

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
Submitted by Alex Williamson
When an interrupt index is found to be is_allocated(), we ALIGN() the current index and then increment it via the for loop, guaranteeing that it is no longer aligned for alignments >1. We instead need to align the next index to guarantee forward progress, moving the increment-only to the case where the index was found to be unallocated.

Fixes: 37946d95 ('iommu/amd: Add align parameter to alloc_irq_index()')
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
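A simplified before/after of the loop body (helper name approximate), showing why the old code could stall for alignments >1:

```c
/* before: ALIGN() the current index, then the loop's index++ misaligns it again */
if (is_allocated(table, index)) {
	c = 0;
	index = ALIGN(index, alignment);
}
index++;

/* after: align the *next* candidate and restart the scan there; index++ is
 * only used while extending a run of unallocated slots */
if (is_allocated(table, index)) {
	c = 0;
	index = ALIGN(index + 1, alignment);
	continue;
}
c++;
index++;
```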
-
- 13 October 2017, 1 commit
-
-
Submitted by Joerg Roedel
The function only sends the flush command to the IOMMU(s) but does not wait for its completion when it returns. Fix that.

Fixes: 601367d7 ('x86/amd-iommu: Remove iommu_flush_domain function')
Cc: stable@vger.kernel.org # >= 2.6.33
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 12 October 2017, 1 commit
-
-
Submitted by Tomasz Nowicki
Since IOVA allocation failure is not an unusual case, we need to flush the CPUs' rcaches in the hope that we will succeed in the next round. However, it is useful to be able to decide whether the rcache flush step is needed, for two reasons:

- Scalability: on a large system with ~100 CPUs, iterating over and flushing the rcache of each CPU becomes a serious bottleneck, so we may want to defer it.
- free_cpu_cached_iovas() does not care about the max PFN we are interested in. Thus we may flush our rcaches and still get no new IOVA, as in the commonly used scenario:

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
        iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);

    if (!iova)
        iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);

1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get PCI devices a SAC address.
2. alloc_iova() fails due to the full 32-bit space.
3. The rcaches contain PFNs outside the 32-bit space, so free_cpu_cached_iovas() throws entries away for nothing and alloc_iova() fails again.
4. The next alloc_iova_fast() call cannot take advantage of the rcache, since we have just defeated the caches. In this case we pick the slowest option to proceed.

This patch reworks the flushed_rcache local flag into an additional function argument that controls the rcache flush step. It also updates all users to do the flush as the last chance.

Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
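With the flag exposed, the retry pattern above becomes something like the following sketch (the final boolean parameter is the new rcache-flush control):

```c
/* First try without flushing the per-CPU rcaches... */
if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
	iova = alloc_iova_fast(iovad, iova_len,
			       DMA_BIT_MASK(32) >> shift, false);

/* ...and only flush them as the last chance, for the full limit. */
if (!iova)
	iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift, true);
```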
-
- 11 October 2017, 1 commit
-
-
Submitted by Tom Lendacky
When SME memory encryption is active, the kernel will rely on SWIOTLB to handle DMA for devices that cannot support the addressing requirements of having the encryption mask set in the physical address. The IOMMU currently disables SWIOTLB if it is not running in passthrough mode. This is not desired, as DMA attempted by non-PCI devices may fail. Update the code to check whether SME is active and, if so, not disable SWIOTLB.

Fixes: 2543a786 ("iommu/amd: Allow the AMD IOMMU to work with memory encryption")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 10 October 2017, 2 commits
-
-
Submitted by Joerg Roedel
Make use of the new alignment capability of alloc_irq_index() to enforce IRQ index alignment for MSI.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2b324506 ('iommu/amd: Add routines to manage irq remapping tables')
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Joerg Roedel
For multi-MSI IRQ ranges the IRQ index needs to be aligned to the power of two of the requested IRQ count. Extend the alloc_irq_index() function to allow such an allocation.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2b324506 ('iommu/amd: Add routines to manage irq remapping tables')
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
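A sketch of the two sides of the interface (variable names illustrative): the callee rounds the requested count up to a power of two and scans on that alignment, and MSI callers pass the new flag.

```c
/* callee: compute the required alignment of the first index */
int alignment = align ? roundup_pow_of_two(count) : 1;
index = ALIGN(table->min_index, alignment);	/* first candidate to scan */

/* caller: a multi-MSI request for nr_irqs vectors asks for alignment,
 * e.g. 6 vectors must start on an 8-aligned index */
index = alloc_irq_index(devid, nr_irqs, true);
```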
-
- 27 September 2017, 1 commit
-
-
Submitted by Zhen Lei
Now that the cached node optimisation can apply to all allocations, the couple of users which were playing tricks with dma_32bit_pfn in order to benefit from it can stop doing so. Conversely, there is also no need for all the other users to explicitly calculate a 'real' 32-bit PFN, when init_iova_domain() can happily do that itself from the page granularity.

CC: Thierry Reding <thierry.reding@gmail.com>
CC: Jonathan Hunter <jonathanh@nvidia.com>
CC: David Airlie <airlied@linux.ie>
CC: Sudeep Dutt <sudeep.dutt@intel.com>
CC: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
[rm: use iova_shift(), rewrote commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 26 September 2017, 2 commits
-
-
Submitted by Thomas Gleixner
With the upcoming reservation/management scheme, early activation will assign a special vector. The final activation at request_irq() assigns a real vector, which needs to be updated in the tables. Split out the reconfiguration code in set_affinity and use it for reactivation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Yu Chen <yu.c.chen@intel.com>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: iommu@lists.linux-foundation.org
Cc: Borislav Petkov <bp@alien8.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Rui Zhang <rui.zhang@intel.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Len Brown <lenb@kernel.org>
Link: https://lkml.kernel.org/r/20170913213155.944883733@linutronix.de
-
Submitted by Thomas Gleixner
The irq_domain_ops.activate() callback has no return value and no way to tell the function that the activation is early. The upcoming changes to support a reservation scheme, which allows interrupt vectors on x86 to be assigned only when the interrupt is actually requested, require:

- A return value, so activation can fail at request_irq() time
- Information that the activate invocation is early, i.e. before request_irq()

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Yu Chen <yu.c.chen@intel.com>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Rui Zhang <rui.zhang@intel.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Len Brown <lenb@kernel.org>
Link: https://lkml.kernel.org/r/20170913213152.848490816@linutronix.de
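In signature terms, the change being prepared amounts to the following sketch (the new parameter was introduced as 'early' and renamed to 'reserve' by the later commit listed above):

```c
/* before: activation can neither fail nor distinguish early activation */
void (*activate)(struct irq_domain *d, struct irq_data *irq_data);

/* after: a return value for failure at request_irq() time, plus a flag
 * telling the callback this is the early, reservation-time activation */
int (*activate)(struct irq_domain *d, struct irq_data *irq_data, bool reserve);
```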
-
- 28 August 2017, 2 commits
-
-
Submitted by Joerg Roedel
Rename a few IOMMU cache-flush functions that start with iommu_ to start with amd_iommu now. This is to prevent name collisions with generic iommu code later on.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
In get_domain(), 'domain' could be NULL before it is passed to dma_ops_domain(), which dereferences it. The current callers of get_domain() also cannot deal well with a returned 'domain' that is NULL. So check whether 'domain' is NULL before calling dma_ops_domain(), and if so return ERR_PTR(-EBUSY) directly.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: df3f7a6e ('iommu/amd: Use is_attach_deferred call-back')
Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
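A sketch of the fixed function shape (the deferred-attach branch is elided; error codes as described above):

```c
static struct protection_domain *get_domain(struct device *dev)
{
	struct protection_domain *domain;

	if (!check_device(dev))
		return ERR_PTR(-EINVAL);

	domain = get_dev_data(dev)->domain;
	/* ... deferred-attach handling may still populate 'domain' here ... */

	if (domain == NULL)			/* the added check */
		return ERR_PTR(-EBUSY);

	if (!dma_ops_domain(domain))		/* now safe to dereference */
		return ERR_PTR(-EBUSY);

	return domain;
}
```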
-
- 16 August 2017, 7 commits
-
-
Submitted by Joerg Roedel
Rip out the implementation in the AMD IOMMU driver and use the one in the common iova code instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
When the IOMMU is pre-enabled in a kdump kernel and a device is set up with guest translations (DTE.GV=1), don't copy the GCR3 table root pointer; instead move the device over to an empty guest-cr3 table and handle the faults in the PPR log (answering them with INVALID). After all, these PPR faults are recoverable for the device, and we should not allow the device to change the old kernel's data when we don't have to.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
Implement the is_attach_deferred call-back and use it to defer the domain attach from IOMMU driver init to device driver init when the IOMMU is pre-enabled in a kdump kernel.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
First, split the device table entry copy into an address translation part and an IRQ remapping part, because these two parts can be enabled independently. Second, sanity-check the address translation and IRQ remapping parts of an old device table entry separately.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
Add a function copy_dev_tables() to copy the old device table entries of the panicked kernel to the newly allocated device table. Since all IOMMUs share the same device table, the copy only needs to be done once. Add a new global old_dev_tbl_cpy to point to the newly allocated device table into which the contents of the old device table will be copied. Besides that, we also need to:

- Check whether all IOMMUs actually use the same device table with the same size.
- Verify that the size of the old device table is the expected size.
- Reserve the old domain IDs occupied in the first kernel to avoid touching the old IO page tables, so that in-flight DMA can continue looking them up.

Also define the macro DEV_DOMID_MASK to replace the magic number 0xffffULL; it can be reused in copy_dev_tables().

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
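A sketch of how the domain-ID reservation can use the new macro (the exact device-table field layout shown here is illustrative):

```c
#define DEV_DOMID_MASK	0xffffULL

/* For every device table entry copied from the old kernel, pick up the
 * domain ID it used and mark that ID as allocated, so the new kernel never
 * hands it out again and never touches the old IO page tables. */
old_domid = old_devtb[devid].data[1] & DEV_DOMID_MASK;
__set_bit(old_domid, amd_iommu_pd_alloc_bitmap);
```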
-
Submitted by Baoquan He
In the AMD-Vi spec several bits of the IO PTE fields and DTE fields are similar, so that both of them could share the same macro definitions. However, defining them separately makes the code more readable. Do that now.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Baoquan He
This reverts commit 54bd6357. We still need the IO_PAGE_FAULT message to warn about errors once the issue of in-flight DMA in the kdump kernel is fixed.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 25 July 2017, 1 commit
-
-
Submitted by Suravee Suthikulpanit
The IRTE[GALogIntr] bit should be set when enabling guest_mode; it allows the IOMMU to generate an entry in the GA log when IRTE[IsRun] is not set, and to send an interrupt to notify the IOMMU driver.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: stable@vger.kernel.org # v4.9+
Fixes: d98de49a ('iommu/amd: Enable vAPIC interrupt remapping mode by default')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 18 July 2017, 1 commit
-
-
Submitted by Tom Lendacky
The IOMMU is programmed with physical addresses for the various tables and buffers that are used to communicate between the device and the driver. When the driver allocates this memory it is encrypted. In order for the IOMMU to access the memory as encrypted, the encryption mask needs to be included in these physical addresses during configuration. The PTE entries created by the IOMMU should also include the encryption mask so that when the device behind the IOMMU performs a DMA, the DMA will be performed to encrypted memory.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
Cc: <iommu@lists.linux-foundation.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Toshimitsu Kani <toshi.kani@hpe.com>
Cc: kasan-dev@googlegroups.com
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/3053631ea25ba8b1601c351cb7c541c496f6d9bc.1500319216.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
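One way to express the rule is a small wrapper around virt_to_phys() that ORs in the SME mask everywhere an address is handed to the hardware (a sketch; the helper name is approximate):

```c
#include <linux/mem_encrypt.h>

static inline u64 iommu_virt_to_phys(void *vaddr)
{
	/* __sme_set() ORs in the encryption mask when SME is active, so the
	 * device sees the same (encrypted) view of memory as the CPU. */
	return (u64)__sme_set(virt_to_phys(vaddr));
}
```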
-
- 28 June 2017, 2 commits
-
-
Submitted by Christoph Hellwig
And instead wire it up as a method for all the dma_map_ops instances. Note that this also means the arch-specific check will be fully instead of partially applied in the AMD IOMMU driver.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-