- 15 Oct 2019, 3 commits
-
-
Committed by Tom Murphy
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Tom Murphy
Add a gfp_t parameter to the iommu_ops::map function. Remove the needless locking in the AMD iommu driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so should probably have had a "might_sleep()" since it was written. However, currently the dma-iommu api can call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems because any iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its iommu_ops::map function. But doing this wastes the memory allocator's atomic pools. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
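As a sketch of the interface change described here, the ->map() callback gains a gfp_t argument so sleepable callers can pass GFP_KERNEL; the surrounding fields of struct iommu_ops are elided, and the caller below is illustrative only:

```c
/* Sketch: ->map() now receives the caller's allocation flags. */
struct iommu_ops {
	/* ... other callbacks elided ... */
	int (*map)(struct iommu_domain *domain, unsigned long iova,
		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
	/* ... */
};

/* Illustrative caller: a sleepable mapping path can now avoid
 * GFP_ATOMIC and stop draining the allocator's atomic pools. */
static int example_map(struct iommu_domain *domain, unsigned long iova,
		       phys_addr_t paddr, size_t size, int prot)
{
	might_sleep();	/* iommu_map() was always meant to be sleepable */
	return domain->ops->map(domain, iova, paddr, size, prot, GFP_KERNEL);
}
```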
-
Committed by Tom Murphy
With or without locking, it doesn't make sense for two writers to be writing to the same IOVA range at the same time. Even with locking we still have a race condition on who gets the lock first, so we still can't be sure what the result will be. With locking the result will be more sane: it will be correct for the last writer, but still useless because we can't be sure which writer will get the lock last. It's a fundamentally broken design to have two writers writing to the same IOVA range at the same time, so we can remove the locking and work on the assumption that no two writers will be writing to the same IOVA range at the same time. The only exception is when we have to allocate a middle page in the page tables; the middle page can cover more than just the IOVA range a writer has been allocated. However, this isn't an issue in the AMD driver because it can atomically allocate middle pages using "cmpxchg64()". Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
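A minimal sketch of the lock-free middle-page allocation the message refers to, assuming the AMD driver's PM_LEVEL_PDE() macro for building a page-directory entry; treat this as illustrative rather than the exact driver code:

```c
/*
 * Sketch: install a new page-table page without a lock. If another
 * writer raced us and published a table first, cmpxchg64() fails and
 * we free our copy and keep using theirs.
 */
static u64 *alloc_middle_page(u64 *pte, int level, gfp_t gfp)
{
	struct page *page;
	u64 __pte, __npte;

	page = alloc_page(gfp | __GFP_ZERO);
	if (!page)
		return NULL;

	__pte  = *pte;					/* snapshot */
	__npte = PM_LEVEL_PDE(level, page_to_phys(page));

	/* Publish the new table only if *pte is still unchanged. */
	if (cmpxchg64(pte, __pte, __npte) != __pte)
		__free_page(page);	/* lost the race; reuse the winner's */

	return pte;
}
```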
-
- 28 Sep 2019, 6 commits
-
-
Committed by Joerg Roedel
The traversing of this list requires protection_domain->lock to be taken to avoid nasty races with attach/detach code. Make sure the lock is held on all code-paths traversing this list. Reported-by: Filippo Sironi <sironi@amazon.de> Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Make sure that attaching and detaching a device can't race against each other, and protect the iommu_dev_data with a spin_lock in these code paths. Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
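A hedged sketch of the locking pattern described; the per-device lock on iommu_dev_data and the helpers get_dev_data()/do_attach() follow the AMD driver's naming, but the exact shape is an assumption:

```c
/* Sketch: serialize attach against detach on the per-device lock. */
static int attach_device_locked(struct device *dev,
				struct protection_domain *domain)
{
	struct iommu_dev_data *dev_data = get_dev_data(dev);
	unsigned long flags;
	int ret = -EBUSY;

	spin_lock_irqsave(&dev_data->lock, flags);

	if (!dev_data->domain) {	/* not attached yet */
		do_attach(dev_data, domain);
		ret = 0;
	}

	spin_unlock_irqrestore(&dev_data->lock, flags);
	return ret;
}
```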
-
Committed by Joerg Roedel
Check early in attach_device whether the device is already attached to a domain. This also simplifies the code path so that __attach_device() can be removed. Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The code-paths from which __attach_device() and __detach_device() are called also access and modify domain state, so take the domain lock there too. This allows us to get rid of the __detach_device() function. Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The lock is not necessary because the device table does not contain shared state that needs protection. Locking is only needed on an individual entry basis, and that needs to happen at the iommu_dev_data level. Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This struct member was used to track whether a domain change requires updates to the device-table and IOMMU cache flushes. The problem is that access to this field is racy since locking in the common mapping code-paths has been eliminated. Move the updated field to the stack to get rid of all potential races and remove the field from the struct. Fixes: 92d420ec ("iommu/amd: Relax locking in dma_ops path") Reviewed-by: Filippo Sironi <sironi@amazon.de> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
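A sketch of the stack-based tracking described above; alloc_pte() reporting growth through an out-parameter and update_domain() doing the device-table update plus flush mirror the driver's conventions, but the exact signatures here are assumptions:

```c
/* Sketch: the "needs flush" state lives on the mapper's stack, so
 * concurrent mappers cannot race on a shared struct field. */
static int map_one_page(struct protection_domain *domain, unsigned long iova,
			phys_addr_t paddr, gfp_t gfp)
{
	bool updated = false;
	u64 *pte;

	/* alloc_pte() sets *updated when it had to grow the table. */
	pte = alloc_pte(domain, iova, gfp, &updated);
	if (!pte)
		return -ENOMEM;

	/* ... install the mapping in *pte ... */

	if (updated)
		update_domain(domain);	/* device-table update + TLB flush */

	return 0;
}
```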
-
- 24 Sep 2019, 5 commits
-
-
Committed by Filippo Sironi
To make sure the domain tlb flush completes before the function returns, explicitly wait for its completion. Signed-off-by: Filippo Sironi <sironi@amazon.de> Fixes: 42a49f96 ("amd-iommu: flush domain tlb when attaching a new device") [joro: Added commit message and fixes tag] Signed-off-by: Joerg Roedel <jroedel@suse.de>
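In sketch form, using the AMD driver's internal flush helpers (the wrapping function is illustrative):

```c
/* Sketch: queue the domain TLB flush, then wait for it to complete. */
static void flush_domain_and_wait(struct protection_domain *domain)
{
	domain_flush_tlb_pde(domain);	/* queue INVALIDATE_IOMMU_PAGES */
	domain_flush_complete(domain);	/* wait on the COMPLETION_WAIT */
}
```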
-
Committed by Andrei Dulea
When replacing a large mapping created with page-mode 7 (i.e. non-default page size), tear down the entire series of replicated PTEs. Besides leaving the old mapping accessible, this issue can also cause the fetch_pte() code path to return a PDE entry of the newly re-mapped range. While at it, make sure that we flush the TLB in case alloc_pte() fails and returns NULL at a lower level. Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()") Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
Committed by Andrei Dulea
Given an arbitrary pte that is part of a large mapping, this function returns the first pte of the series (and optionally the mapped size and number of PTEs). It will be re-used in a subsequent patch to replace an existing L7 mapping. Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()") Signed-off-by: Andrei Dulea <adulea@amazon.de>
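A hedged reconstruction of what such a helper can look like; the PTE_PAGE_SIZE()/PAGE_SIZE_PTE_COUNT() macros mirror the AMD driver's conventions:

```c
/*
 * Sketch: PTEs of one large (page-mode 7) mapping are replicated and
 * naturally aligned, so masking the PTE's address by the number of
 * replicated entries yields the first PTE of the series.
 */
static u64 *first_pte_l7(u64 *pte, unsigned long *page_size,
			 unsigned long *count)
{
	unsigned long pg_size, cnt, pte_mask;
	u64 *fpte;

	pg_size  = PTE_PAGE_SIZE(*pte);		 /* mapped size from the PTE */
	cnt      = PAGE_SIZE_PTE_COUNT(pg_size); /* replicated entries */
	pte_mask = ~((cnt << 3) - 1);		 /* 8 bytes per entry */
	fpte     = (u64 *)(((unsigned long)pte) & pte_mask);

	if (page_size)
		*page_size = pg_size;
	if (count)
		*count = cnt;

	return fpte;
}
```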
-
Committed by Andrei Dulea
Downgrading an existing large mapping to a mapping using smaller page-sizes works only for mappings created with page-mode 7 (i.e. non-default page size). Treat large mappings created with page-mode 0 (i.e. default page size) like a non-present mapping and allow overwriting them in alloc_pte(). While at it, make sure that we flush the TLB only if we change an existing mapping, otherwise we might end up acting on garbage PTEs. Fixes: 6d568ef9 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()") Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
Committed by Andrei Dulea
Take into account the gathered freelist in free_sub_pt(), otherwise we end up leaking all those pages. Fixes: 409afa44 ("iommu/amd: Introduce free_sub_pt() function") Signed-off-by: Andrei Dulea <adulea@amazon.de>
-
- 06 Sep 2019, 2 commits
-
-
Committed by Joerg Roedel
After the conversion to lock-less dma-api calls, the increase_address_space() function can be called without any locking. Multiple CPUs could potentially race for increasing the address space, leading to invalid domain->mode settings and invalid page-tables. This has been happening in the wild under high IO load and memory pressure. Fix the race by locking this operation. The function is called infrequently, so this does not introduce a performance regression in the dma-api path again. Reported-by: Qian Cai <cai@lca.pw> Fixes: 256e4621 ('iommu/amd: Make use of the generic IOVA allocator') Signed-off-by: Joerg Roedel <jroedel@suse.de>
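A sketch of the locked version, using the AMD driver's naming conventions (PM_LEVEL_PDE, iommu_virt_to_phys); read it as illustrative of the fix rather than the exact driver code:

```c
/* Sketch of the fix: serialize address-space growth on the domain lock. */
static void increase_address_space(struct protection_domain *domain, gfp_t gfp)
{
	unsigned long flags;
	u64 *pte;

	spin_lock_irqsave(&domain->lock, flags);

	if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
		goto out;	/* address space already at maximum */

	pte = (u64 *)get_zeroed_page(gfp);
	if (!pte)
		goto out;

	/* Re-checked under the lock, so two CPUs can't both grow. */
	*pte = PM_LEVEL_PDE(domain->mode, iommu_virt_to_phys(domain->pt_root));
	domain->pt_root = pte;
	domain->mode   += 1;

out:
	spin_unlock_irqrestore(&domain->lock, flags);
}
```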
-
Committed by Stuart Hayes
When devices are attached to the amd_iommu in a kdump kernel, the old device table entries (DTEs), which were copied from the crashed kernel, will be overwritten with a new domain number. When the new DTE is written, the IOMMU is told to flush the DTE from its internal cache, but it is not told to flush the translation cache entries for the old domain number. Without this patch, AMD systems using the tg3 network driver fail when kdump tries to save the vmcore to a network system, showing network timeouts and (sometimes) IOMMU errors in the kernel log. This patch will flush IOMMU translation cache entries for the old domain when a DTE gets overwritten with a new domain number. Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com> Fixes: 3ac3e5ee ('iommu/amd: Copy old trans table from old kernel') Signed-off-by: Joerg Roedel <jroedel@suse.de>
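A hedged sketch of the flushing described; build_inv_iommu_pages() and CMD_INV_IOMMU_ALL_PAGES_ADDRESS follow the AMD driver's internal naming, but the surrounding function is illustrative:

```c
/* Sketch: after overwriting a DTE copied from the crashed kernel,
 * also drop translations cached under the old domain number. */
static void flush_old_domain(struct amd_iommu *iommu, u16 old_domid)
{
	struct iommu_cmd cmd;

	if (!old_domid)
		return;

	/* Invalidate every cached page translation for old_domid. */
	build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
			      old_domid, 1);
	iommu_queue_command(iommu, &cmd);
}
```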
-
- 04 Sep 2019, 1 commit
-
-
Committed by Christoph Hellwig
While the default ->mmap and ->get_sgtable implementations work for the majority of our dma_map_ops implementations, they are inherently unsafe for others that don't use the page allocator or CMA and/or use their own way of remapping not covered by the common code. So remove the defaults if these methods are not wired up, but instead wire up the default implementations for all safe instances. Fixes: e1c7e324 ("dma-mapping: always provide the dma_map_ops based implementation") Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 30 Aug 2019, 1 commit
-
-
Committed by Qian Cai
When running heavy memory pressure workloads, the system is throwing endless warnings: smartpqi 0000:23:00.0: AMD-Vi: IOMMU mapping error in map_sg (io-pages: 5 reason: -12) Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019 swapper/10: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0,4 Call Trace: <IRQ> dump_stack+0x62/0x9a warn_alloc.cold.43+0x8a/0x148 __alloc_pages_nodemask+0x1a5c/0x1bb0 get_zeroed_page+0x16/0x20 iommu_map_page+0x477/0x540 map_sg+0x1ce/0x2f0 scsi_dma_map+0xc6/0x160 pqi_raid_submit_scsi_cmd_with_io_request+0x1c3/0x470 [smartpqi] do_IRQ+0x81/0x170 common_interrupt+0xf/0xf </IRQ> This happens because the allocation could fail from iommu_map_page(), and the volume of this call could be huge, which may generate a lot of serial console output and consume all CPUs. Fix it by silencing the warning in this call site; there is still a dev_err() later to notify the failure. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Joerg Roedel <jroedel@suse.de>
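One plausible shape of the fix (assuming the failing allocation is the page-table page allocated on the iommu_map_page() path shown in the trace):

```c
/*
 * Sketch: allocate the page-table page without the allocation-failure
 * splat. Under memory pressure the caller still sees NULL, and map_sg
 * reports the failure once via dev_err() instead of flooding the
 * console from every CPU.
 */
pte = (u64 *)get_zeroed_page(gfp | __GFP_NOWARN);
if (!pte)
	return NULL;
```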
-
- 23 Aug 2019, 1 commit
-
-
Committed by Joerg Roedel
Get rid of the iommu_pass_through variable and request passthrough mode via the new iommu core function. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 Aug 2019, 1 commit
-
-
Committed by Suthikulpanit, Suravee
Refactor the logic for activating/deactivating guest virtual APIC mode (GAM) into helper functions, and export them so that other drivers (e.g. SVM) can use them to support run-time activation/deactivation of SVM AVIC. Cc: Joerg Roedel <joro@8bytes.org> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 30 Jul 2019, 1 commit
-
-
Committed by Will Deacon
To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now. Signed-off-by: Will Deacon <will@kernel.org>
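The extended callback prototypes, sketched (other iommu_ops members elided):

```c
/* Sketch: both callbacks now take the caller's gather token, which
 * accumulates the IOVA range to flush until ->iotlb_sync() runs. */
struct iommu_ops {
	/* ... */
	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
			size_t size, struct iommu_iotlb_gather *iotlb_gather);
	void (*iotlb_sync)(struct iommu_domain *domain,
			   struct iommu_iotlb_gather *iotlb_gather);
	/* ... */
};
```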
-
- 24 Jul 2019, 1 commit
-
-
Committed by Will Deacon
Commit add02cfd ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU drivers (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues: 1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner. 2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes. Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack. Signed-off-by: Will Deacon <will@kernel.org>
-
- 22 Jul 2019, 1 commit
-
-
Committed by Qian Cai
Commit b3aa14f0 ("iommu: remove the mapping_error dma_map_ops method") incorrectly changed the checking of the dma_ops_alloc_iova() return value in map_sg(), which causes a crash under memory pressure: dma_ops_alloc_iova() never returns DMA_MAPPING_ERROR on failure but 0, so the error handling is all wrong. kernel BUG at drivers/iommu/iova.c:801! Workqueue: kblockd blk_mq_run_work_fn RIP: 0010:iova_magazine_free_pfns+0x7d/0xc0 Call Trace: free_cpu_cached_iovas+0xbd/0x150 alloc_iova_fast+0x8c/0xba dma_ops_alloc_iova.isra.6+0x65/0xa0 map_sg+0x8c/0x2a0 scsi_dma_map+0xc6/0x160 pqi_aio_submit_io+0x1f6/0x440 [smartpqi] pqi_scsi_queue_command+0x90c/0xdd0 [smartpqi] scsi_queue_rq+0x79c/0x1200 blk_mq_dispatch_rq_list+0x4dc/0xb70 blk_mq_sched_dispatch_requests+0x249/0x310 __blk_mq_run_hw_queue+0x128/0x200 blk_mq_run_work_fn+0x27/0x30 process_one_work+0x522/0xa10 worker_thread+0x63/0x5b0 kthread+0x1d2/0x1f0 ret_from_fork+0x22/0x40 Fixes: b3aa14f0 ("iommu: remove the mapping_error dma_map_ops method") Signed-off-by: Qian Cai <cai@lca.pw> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
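A sketch of the corrected check, as a fragment of map_sg() (variable names assumed from the trace above):

```c
/* dma_ops_alloc_iova() returns 0 on failure, not DMA_MAPPING_ERROR. */
address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);
if (!address)			/* was: address == DMA_MAPPING_ERROR */
	goto out_err;
```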
-
- 01 Jul 2019, 1 commit
-
-
Committed by Tom Murphy
Check whether a not-present cache is present and flush it if there is. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 05 Jun 2019, 1 commit
-
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa Extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 136 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190530000436.384967451@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 28 May 2019, 1 commit
-
-
Committed by YueHaibing
Fixes gcc '-Wunused-but-set-variable' warning: drivers/iommu/amd_iommu.c: In function 'iommu_print_event': drivers/iommu/amd_iommu.c:550:33: warning: variable 'tag' set but not used [-Wunused-but-set-variable] The variable was introduced in e7f63ffc ("iommu/amd: Update logging information for new event type") but seems to have been missed in the error message; add it there, as suggested by Joerg. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 May 2019, 1 commit
-
-
Committed by Colin Ian King
The variable npages is initialized, but that value is never read; it is later reassigned a new value. The initialization is redundant and hence can be removed. Addresses-Coverity: ("Unused Value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 07 May 2019, 1 commit
-
-
Committed by Joerg Roedel
This reverts commit 1a107901. That commit caused a NULL-pointer dereference bug and must be reverted for now. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 May 2019, 1 commit
-
-
Committed by Joerg Roedel
This reverts commit 7a5dbf3a. That commit not only removes the leftovers of bypass support, it also mostly removes the checking of the return value of the get_domain() function. This can lead to silent data corruption bugs when a device is not attached to its dma_ops domain and a DMA-API function is called for that device. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 03 May 2019, 1 commit
-
-
Committed by Tom Murphy
Check whether a not-present cache is present and flush it if there is. Signed-off-by: Tom Murphy <tmurphy@arista.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 30 Apr 2019, 1 commit
-
-
Committed by Heiner Kallweit
Use the new helper pci_dev_id() to simplify the code. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Joerg Roedel <jroedel@suse.de>
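The simplification, sketched as a before/after fragment (pci_dev_id() combines bus number and devfn exactly like the open-coded PCI_DEVID()):

```c
/* Before: open-coded combination of bus number and devfn. */
devid = PCI_DEVID(pdev->bus->number, pdev->devfn);

/* After: the new helper from include/linux/pci.h. */
devid = pci_dev_id(pdev);
```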
-
- 26 Apr 2019, 1 commit
-
-
Committed by Joerg Roedel
This variable holds a global list of allocated protection domains in the AMD IOMMU driver. By now this list is never traversed anymore, so the list and the lock protecting it can be removed. Cc: Tom Murphy <tmurphy@arista.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 11 Apr 2019, 2 commits
-
-
Committed by Christoph Hellwig
The AMD iommu dma_ops are only attached on a per-device basis when an actual translation is needed. Remove the leftover bypass support, which in parts was already broken (e.g. it always returns 0 from ->map_sg). Use the opportunity to remove a few local variables and move assignments into the declaration line where they were previously separated by the bypass check. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Jean-Philippe Brucker
Commit e5567f5f ("PCI/ATS: Add pci_prg_resp_pasid_required() interface.") added a common interface to check the PASID bit in the PRI capability. Use it in the AMD driver. Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
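In sketch form (pci_pri_tlp_required() is the AMD driver's existing predicate; the one-line body below is an assumption about how the common helper slots in):

```c
/* Sketch: ask the PCI core instead of poking the PRI capability. */
static bool pci_pri_tlp_required(struct pci_dev *pdev)
{
	return pci_prg_resp_pasid_required(pdev);
}
```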
-
- 30 Mar 2019, 1 commit
-
-
Committed by Joerg Roedel
If a device has an exclusion range specified in the IVRS table, this region needs to be reserved in the iova-domain of that device. This hasn't happened until now and can cause data corruption on data transferred with these devices. Treat exclusion ranges as reserved regions in the iommu-core to fix the problem. Fixes: be2a022c ('x86, AMD IOMMU: add functions to parse IOMMU memory mapping requirements for devices') Signed-off-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Gary R Hook <gary.hook@amd.com>
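A sketch of how an IVRS exclusion range can be reported through the iommu-core's reserved-region mechanism; the wrapper and its arguments are illustrative:

```c
/* Sketch: expose an exclusion range as a direct-mapped reserved
 * region so the iova allocator never hands out addresses from it. */
static void add_exclusion_range(struct list_head *head, u64 start, u64 end)
{
	struct iommu_resv_region *region;

	region = iommu_alloc_resv_region(start, end - start,
					 IOMMU_READ | IOMMU_WRITE,
					 IOMMU_RESV_DIRECT);
	if (region)
		list_add_tail(&region->list, head);
}
```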
-
- 18 Mar 2019, 1 commit
-
-
Committed by Stanislaw Gruszka
Take into account that sg->offset can be bigger than PAGE_SIZE when setting the segment's sg->dma_address. Otherwise sg->dma_address will point at a different page, which makes DMA impossible, with errors like this: xhci_hcd 0000:38:00.3: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000fdaa70c0 flags=0x0020] xhci_hcd 0000:38:00.3: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000fdaa7040 flags=0x0020] xhci_hcd 0000:38:00.3: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000fdaa7080 flags=0x0020] xhci_hcd 0000:38:00.3: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000fdaa7100 flags=0x0020] xhci_hcd 0000:38:00.3: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000fdaa7000 flags=0x0020] Additionally, with a wrong sg->dma_address, unmap_sg will free the wrong pages, which can cause crashes like this: Feb 28 19:27:45 kernel: BUG: Bad page state in process cinnamon pfn:39e8b1 Feb 28 19:27:45 kernel: Disabling lock debugging due to kernel taint Feb 28 19:27:45 kernel: flags: 0x2ffff0000000000() Feb 28 19:27:45 kernel: raw: 02ffff0000000000 0000000000000000 ffffffff00000301 0000000000000000 Feb 28 19:27:45 kernel: raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000 Feb 28 19:27:45 kernel: page dumped because: nonzero _refcount Feb 28 19:27:45 kernel: Modules linked in: ccm fuse arc4 nct6775 hwmon_vid amdgpu nls_iso8859_1 nls_cp437 edac_mce_amd vfat fat kvm_amd ccp rng_core kvm mt76x0u mt76x0_common mt76x02_usb irqbypass mt76_usb mt76x02_lib mt76 crct10dif_pclmul crc32_pclmul chash mac80211 amd_iommu_v2 ghash_clmulni_intel gpu_sched i2c_algo_bit ttm wmi_bmof snd_hda_codec_realtek snd_hda_codec_generic drm_kms_helper snd_hda_codec_hdmi snd_hda_intel drm snd_hda_codec aesni_intel snd_hda_core snd_hwdep aes_x86_64 crypto_simd snd_pcm cfg80211 cryptd mousedev snd_timer glue_helper pcspkr r8169 input_leds realtek agpgart libphy rfkill snd syscopyarea sysfillrect sysimgblt fb_sys_fops soundcore sp5100_tco k10temp i2c_piix4 wmi evdev gpio_amdpt pinctrl_amd mac_hid pcc_cpufreq acpi_cpufreq sg ip_tables x_tables ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) fscrypto(E) sd_mod(E) hid_generic(E) usbhid(E) hid(E) dm_mod(E) serio_raw(E) atkbd(E) libps2(E) crc32c_intel(E) ahci(E) libahci(E) libata(E) xhci_pci(E) xhci_hcd(E) Feb 28 19:27:45 kernel: scsi_mod(E) i8042(E) serio(E) bcache(E) crc64(E) Feb 28 19:27:45 kernel: CPU: 2 PID: 896 Comm: cinnamon Tainted: G B W E 4.20.12-arch1-1-custom #1 Feb 28 19:27:45 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./B450M Pro4, BIOS P1.20 06/26/2018 Feb 28 19:27:45 kernel: Call Trace: Feb 28 19:27:45 kernel: dump_stack+0x5c/0x80 Feb 28 19:27:45 kernel: bad_page.cold.29+0x7f/0xb2 Feb 28 19:27:45 kernel: __free_pages_ok+0x2c0/0x2d0 Feb 28 19:27:45 kernel: skb_release_data+0x96/0x180 Feb 28 19:27:45 kernel: __kfree_skb+0xe/0x20 Feb 28 19:27:45 kernel: tcp_recvmsg+0x894/0xc60 Feb 28 19:27:45 kernel: ? reuse_swap_page+0x120/0x340 Feb 28 19:27:45 kernel: ? ptep_set_access_flags+0x23/0x30 Feb 28 19:27:45 kernel: inet_recvmsg+0x5b/0x100 Feb 28 19:27:45 kernel: __sys_recvfrom+0xc3/0x180 Feb 28 19:27:45 kernel: ? handle_mm_fault+0x10a/0x250 Feb 28 19:27:45 kernel: ? syscall_trace_enter+0x1d3/0x2d0 Feb 28 19:27:45 kernel: ? __audit_syscall_exit+0x22a/0x290 Feb 28 19:27:45 kernel: __x64_sys_recvfrom+0x24/0x30 Feb 28 19:27:45 kernel: do_syscall_64+0x5b/0x170 Feb 28 19:27:45 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9 Cc: stable@vger.kernel.org Reported-and-tested-by: Jan Viktorin <jan.viktorin@gmail.com> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Fixes: 80187fd3 ('iommu/amd: Optimize map_sg and unmap_sg') Signed-off-by: Joerg Roedel <jroedel@suse.de>
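One plausible shape of the fix, as a one-line fragment of the map_sg() address fix-up loop: only the sub-page part of sg->offset belongs in dma_address, since whole pages of the offset are already covered by the page count:

```c
/* Keep only the in-page offset; full pages are counted elsewhere. */
s->dma_address += address + (s->offset & ~PAGE_MASK);
```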
-
- 15 Mar 2019, 1 commit
-
-
Committed by Aaron Ma
Add a non-NULL check to fix a potential NULL pointer dereference, and clean up the code so the function is called only once. Signed-off-by: Aaron Ma <aaron.ma@canonical.com> Fixes: 2bf9a0a1 ('iommu/amd: Add iommu support for ACPI HID devices') Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 11 Feb 2019, 1 commit
-
-
Committed by Bjorn Helgaas
Use dev_printk() when possible so the IOMMU messages are more consistent with other messages related to the device. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 31 Jan 2019, 1 commit
-
-
Committed by Jerry Snitselaar
Since there are multiple possible failures in iommu_map_page, it would be useful to know which case is being hit when the error message is printed in map_sg. While here, fix up a checkpatch complaint about using the function name in a string instead of __func__. Cc: Joerg Roedel <joro@8bytes.org> Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 24 Jan 2019, 1 commit
-
-
Committed by Suravee Suthikulpanit
When a VM is terminated, the VFIO driver detaches all pass-through devices from the VFIO domain by clearing the domain id and page table root pointer from each device table entry (DTE), and then invalidates the DTE. Then, the VFIO driver unmaps pages and invalidates IOMMU pages. Currently, the IOMMU driver keeps track of which IOMMU and how many devices are attached to the domain. When invalidating IOMMU pages, the driver checks if the IOMMU is still attached to the domain before issuing the invalidate page command. However, since VFIO has already detached all devices from the domain, the subsequent INVALIDATE_IOMMU_PAGES commands are being skipped as there is no IOMMU attached to the domain. This results in data corruption and could cause the PCI device to end up in an indeterminate state. Fix this by invalidating IOMMU pages when a device is detached, before decrementing the per-domain device reference counts. Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Suggested-by: Joerg Roedel <joro@8bytes.org> Co-developed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Fixes: 6de8ad9b ('x86/amd-iommu: Make iommu_flush_pages aware of multiple IOMMUs') Signed-off-by: Joerg Roedel <jroedel@suse.de>
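A hedged sketch of the ordering fix; helper and field names mirror the AMD driver's conventions but should be read as illustrative:

```c
/* Sketch: invalidate while the IOMMU is still accounted to the
 * domain, then drop the per-domain reference counts. */
static void do_detach(struct iommu_dev_data *dev_data)
{
	struct protection_domain *domain = dev_data->domain;
	struct amd_iommu *iommu = amd_iommu_rlookup_table[dev_data->devid];

	/* ... clear domain id and page-table root from the DTE ... */

	/* Flush before the refcounts drop, or the flush gets skipped. */
	domain_flush_tlb_pde(domain);
	domain_flush_complete(domain);

	domain->dev_iommu[iommu->index] -= 1;
	domain->dev_cnt                 -= 1;
}
```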
-