- 25 November 2020, 1 commit
-
-
Submitted by Tom Murphy
Allow iommu_unmap_fast() to return newly freed page table pages and pass the freelist to queue_iova() in the dma-iommu ops path. This is useful for IOMMU drivers (in this case the Intel IOMMU driver) which need to wait for the IOTLB to be flushed before newly unmapped page table pages can be freed. This way we can still batch IOTLB flush operations and handle the freelists.
Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20201124082057.2614359-2-baolu.lu@linux.intel.com
Signed-off-by: Will Deacon <will@kernel.org>
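Conceptually, the unmap path chains the page-table pages it releases onto a freelist and hands that list to the flush queue, so the pages are only returned to the allocator once the deferred IOTLB invalidation has run. The sketch below is illustrative only: unmap_and_collect() is a hypothetical helper, and passing the freelist as the last queue_iova() argument is an assumption based on the description above, not the exact driver code.

/* illustrative sketch: defer freeing of page-table pages until the
 * IOTLB flush for the unmapped range has completed
 */
static void unmap_and_defer_free(struct iommu_domain *domain,
				 struct iova_domain *iovad,
				 unsigned long iova, size_t size)
{
	struct page *freelist = NULL;
	unsigned long pfn = iova >> iova_shift(iovad);
	unsigned long pages = size >> iova_shift(iovad);

	/* hypothetical helper: unmap and chain freed table pages */
	unmap_and_collect(domain, iova, size, &freelist);

	/* the freelist is released only after the deferred IOTLB flush */
	queue_iova(iovad, pfn, pages, (unsigned long)freelist);
}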
-
- 18 November 2020, 1 commit
-
-
Submitted by Lukas Bulwahn
Commit 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function") introduced intel_iommu_sva_invalidate() when CONFIG_INTEL_IOMMU_SVM is enabled. This function uses the dedicated static variable inv_type_granu_table and the functions to_vtd_granularity() and to_vtd_size(). These parts are unused when !CONFIG_INTEL_IOMMU_SVM, and hence a make CC=clang W=1 build warns with an -Wunused-function warning. Include these parts conditionally on CONFIG_INTEL_IOMMU_SVM.
Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20201115205951.20698-1-lukas.bulwahn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
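The shape of the fix is plain conditional compilation; a minimal sketch, with table contents abridged and names taken from the description above:

#ifdef CONFIG_INTEL_IOMMU_SVM
/* only intel_iommu_sva_invalidate() uses these, so hide them when SVM
 * support is not built in and keep !CONFIG_INTEL_IOMMU_SVM builds free
 * of -Wunused-function warnings
 */
static const int inv_type_granu_table[IOMMU_CACHE_INV_TYPE_NR][IOMMU_INV_GRANU_NR] = {
	/* ... invalidation-type to VT-d granularity mapping ... */
};

static inline int to_vtd_granularity(int type, int granu)
{
	return inv_type_granu_table[type][granu];
}
#endif /* CONFIG_INTEL_IOMMU_SVM */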
-
- 03 November 2020, 1 commit
-
-
Submitted by Lu Baolu
If find_domain() is called for a device which hasn't been probed by the iommu core, the kernel NULL pointer dereference below happens.
[ 362.736947] BUG: kernel NULL pointer dereference, address: 0000000000000038
[ 362.743953] #PF: supervisor read access in kernel mode
[ 362.749115] #PF: error_code(0x0000) - not-present page
[ 362.754278] PGD 0 P4D 0
[ 362.756843] Oops: 0000 [#1] SMP NOPTI
[ 362.760528] CPU: 0 PID: 844 Comm: cat Not tainted 5.9.0-rc4-intel-next+ #1
[ 362.767428] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3384.A02.1909200816 09/20/2019
[ 362.781109] RIP: 0010:find_domain+0xd/0x40
[ 362.785234] Code: 48 81 fb 60 28 d9 b2 75 de 5b 41 5c 41 5d 5d c3 0f 1f 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 8b 87 e0 02 00 00 55 <48> 8b 40 38 48 89 e5 48 83 f8 fe 0f 94 c1 48 85 ff 0f 94 c2 08 d1
[ 362.804041] RSP: 0018:ffffb09cc1f0bd38 EFLAGS: 00010046
[ 362.809292] RAX: 0000000000000000 RBX: ffff905b98e4fac8 RCX: 0000000000000000
[ 362.816452] RDX: 0000000000000001 RSI: ffff905b98e4fac8 RDI: ffff905b9ccd40d0
[ 362.823617] RBP: ffffb09cc1f0bda0 R08: ffffb09cc1f0bd48 R09: 000000000000000f
[ 362.830778] R10: ffffffffb266c080 R11: ffff905b9042602d R12: ffff905b98e4fac8
[ 362.837944] R13: ffffb09cc1f0bd48 R14: ffff905b9ccd40d0 R15: ffff905b98e4fac8
[ 362.845108] FS: 00007f8485460740(0000) GS:ffff905b9fc00000(0000) knlGS:0000000000000000
[ 362.853227] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 362.858996] CR2: 0000000000000038 CR3: 00000004627a6003 CR4: 0000000000770ef0
[ 362.866161] PKRU: fffffffc
[ 362.868890] Call Trace:
[ 362.871363] ? show_device_domain_translation+0x32/0x100
[ 362.876700] ? bind_store+0x110/0x110
[ 362.880387] ? klist_next+0x91/0x120
[ 362.883987] ? domain_translation_struct_show+0x50/0x50
[ 362.889237] bus_for_each_dev+0x79/0xc0
[ 362.893121] domain_translation_struct_show+0x36/0x50
[ 362.898204] seq_read+0x135/0x410
[ 362.901545] ? handle_mm_fault+0xeb8/0x1750
[ 362.905755] full_proxy_read+0x5c/0x90
[ 362.909526] vfs_read+0xa6/0x190
[ 362.912782] ksys_read+0x61/0xe0
[ 362.916037] __x64_sys_read+0x1a/0x20
[ 362.919725] do_syscall_64+0x37/0x80
[ 362.923329] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 362.928405] RIP: 0033:0x7f84855c5e95
Filter out those devices to avoid such errors.
Fixes: e2726dae ("iommu/vt-d: debugfs: Add support to show page table internals")
Reported-and-tested-by: Xu Pengfei <pengfei.xu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: stable@vger.kernel.org # v5.6+
Link: https://lore.kernel.org/r/20201028070725.24979-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
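A simplified sketch of the guard, not the exact driver code (the per-device info layout is abbreviated): bail out early for devices the IOMMU core never probed, before any per-device data is dereferenced.

static struct dmar_domain *find_domain(struct device *dev)
{
	struct device_domain_info *info;

	/* skip devices that were never probed by the iommu core */
	if (unlikely(!dev || !dev->iommu))
		return NULL;

	info = dev_iommu_priv_get(dev);
	if (!info)
		return NULL;

	return info->domain;
}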
-
- 02 November 2020, 1 commit
-
-
Submitted by Christoph Hellwig
The tbl_dma_addr argument is used to check the DMA boundary for the allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead passed a physical address, which could lead to incorrect results for strange offsets. Fix this by removing the parameter entirely and hard-coding the DMA address for io_tlb_start instead.
Fixes: 91ffe4ad ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 06 October 2020, 2 commits
-
-
Submitted by Christoph Hellwig
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture-specific <asm/dma-mapping.h> is no longer pulled in by <linux/dma-mapping.h>, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not-overly-portable drivers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 01 October 2020, 3 commits
-
-
Submitted by Lu Baolu
Taking iommu->lock without disabling interrupts causes the lockdep warning below.
[ 12.703950] ========================================================
[ 12.703962] WARNING: possible irq lock inversion dependency detected
[ 12.703975] 5.9.0-rc6+ #659 Not tainted
[ 12.703983] --------------------------------------------------------
[ 12.703995] systemd-udevd/284 just changed the state of lock:
[ 12.704007] ffffffffbd6ff4d8 (device_domain_lock){..-.}-{2:2}, at: iommu_flush_dev_iotlb.part.57+0x2e/0x90
[ 12.704031] but this lock took another, SOFTIRQ-unsafe lock in the past:
[ 12.704043]  (&iommu->lock){+.+.}-{2:2}
[ 12.704045] and interrupts could create inverse lock ordering between them.
[ 12.704073] other info that might help us debug this:
[ 12.704085] Possible interrupt unsafe locking scenario:
[ 12.704097]        CPU0                    CPU1
[ 12.704106]        ----                    ----
[ 12.704115]   lock(&iommu->lock);
[ 12.704123]                                local_irq_disable();
[ 12.704134]                                lock(device_domain_lock);
[ 12.704146]                                lock(&iommu->lock);
[ 12.704158]   <Interrupt>
[ 12.704164]     lock(device_domain_lock);
[ 12.704174]  *** DEADLOCK ***
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200927062428.13713-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
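The fix direction is standard kernel locking practice; a minimal sketch with the critical section abridged:

static void update_with_iommu_lock(struct intel_iommu *iommu)
{
	unsigned long flags;

	/* take iommu->lock with interrupts disabled so it can never be
	 * requested from a context that interrupts a holder of
	 * device_domain_lock, removing the inversion lockdep reports
	 */
	spin_lock_irqsave(&iommu->lock, flags);
	/* ... update context/PASID table state ... */
	spin_unlock_irqrestore(&iommu->lock, flags);
}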
-
Submitted by Jacob Pan
The IOMMU generic layer already does sanity checks on UAPI data for version match and argsz range based on generic information. This patch adjusts the following data-checking responsibilities:
- removes the redundant version check from the VT-d driver
- removes the check for vendor-specific data size
- adds a check for the use of reserved/undefined flags
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/1601051567-54787-7-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jacob Pan
The IOMMU UAPI data size is filled in by user space and must be validated by the kernel. To ensure backward compatibility, user data can only be extended by either re-purposing padding bytes or extending the variable-sized union at the end. No size change is allowed before the union; therefore, the minimum size is the offset of the union. To use offsetof() on the union, we must make it named.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/linux-iommu/20200611145518.0c2817d6@x1.home/
Link: https://lore.kernel.org/r/1601051567-54787-4-git-send-email-jacob.jun.pan@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
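As an illustration (the field list is heavily abridged and the macro name is hypothetical), naming the trailing union lets the kernel compute the minimum acceptable argsz directly:

struct iommu_gpasid_bind_data {
	__u32	argsz;		/* filled in by user space */
	__u32	version;
	__u32	format;
	__u64	flags;
	/* ... fixed fields; no size change allowed before the union ... */
	union {
		struct iommu_gpasid_bind_data_vtd vtd;
	} vendor;		/* named, so offsetof() works */
};

/* smallest argsz user space may pass for this structure */
#define IOMMU_BIND_DATA_MIN_SZ	offsetof(struct iommu_gpasid_bind_data, vendor)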
-
- 25 September 2020, 1 commit
-
-
Submitted by Christoph Hellwig
This API is the equivalent of alloc_pages, except that the returned memory is guaranteed to be DMA addressable by the passed-in device. The implementation will also be used to provide a more sensible replacement for the DMA_ATTR_NON_CONSISTENT flag. Additionally, dma_alloc_noncoherent is switched over to use dma_alloc_pages as its backend.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
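A minimal driver-side usage sketch, assuming a valid struct device *dev and a device-bound (DMA_TO_DEVICE) transfer:

static int example_dma_buffer(struct device *dev)
{
	struct page *page;
	dma_addr_t dma_handle;

	/* the returned memory is guaranteed to be DMA-addressable by @dev */
	page = dma_alloc_pages(dev, PAGE_SIZE, &dma_handle, DMA_TO_DEVICE, GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* ... fill the page, hand dma_handle to the device ... */

	dma_free_pages(dev, PAGE_SIZE, page, dma_handle, DMA_TO_DEVICE);
	return 0;
}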
-
- 24 September 2020, 1 commit
-
-
Submitted by Lu Baolu
If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR table, we can fall back to the device's NUMA domain by default.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Link: https://lore.kernel.org/r/20200904010303.2961-1-baolu.lu@linux.intel.com
Link: https://lore.kernel.org/r/20200922060843.31546-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
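The fallback amounts to a one-line decision; a simplified sketch (helper name hypothetical, not the exact driver code):

static int iommu_node_for_dev(struct intel_iommu *iommu, struct device *dev)
{
	int nid = iommu->node;	/* NUMA_NO_NODE when the RHSA entry is missing */

	if (nid == NUMA_NO_NODE)
		nid = dev_to_node(dev);	/* fall back to the device's NUMA node */

	return nid;
}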
-
- 18 September 2020, 1 commit
-
-
Submitted by Fenghua Yu
PASID is defined as a few different types in the iommu code, including "int", "u32", and "unsigned int". To be consistent and to match the uapi definitions, define PASID and its variations (e.g. max PASID) as "u32". "u32" is also shorter and a little more explicit than "unsigned int". There is no PASID type change in uapi, although it defines PASID as __u64 in some places.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Link: https://lkml.kernel.org/r/1600187413-163670-2-git-send-email-fenghua.yu@intel.com
-
- 11 September 2020, 1 commit
-
-
Submitted by Christoph Hellwig
The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try to improve the situation by renaming __phys_to_dma to phys_to_dma_unencrypted, and by not forcing architectures that want to override phys_to_dma to also provide __phys_to_dma.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 04 September 2020, 2 commits
-
-
Submitted by Chris Wilson
Beware that the DMA address width on x86-32 may exceed the width of unsigned long.
[ 0.368971] UBSAN: shift-out-of-bounds in drivers/iommu/intel/iommu.c:128:14
[ 0.369055] shift exponent 36 is too large for 32-bit type 'long unsigned int'
If we don't handle the wide addresses, the pages are mismapped and the device reads/writes go astray, detected as DMAR faults and leading to device failure. The behaviour changed (from working to broken) in commit fa954e68 ("iommu/vt-d: Delegate the dma domain to upper layer"), but the error looks older.
Fixes: fa954e68 ("iommu/vt-d: Delegate the dma domain to upper layer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: James Sewart <jamessewart@arista.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: <stable@vger.kernel.org> # v5.3+
Link: https://lore.kernel.org/r/20200822160209.28512-1-chris@chris-wilson.co.uk
Signed-off-by: Joerg Roedel <jroedel@suse.de>
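Illustratively (not the driver's exact expression), the problem and the fix direction look like this on a 32-bit build:

int addr_width = 36;	/* address widths beyond 32 bits are legal on x86-32 */

/* (1UL << addr_width) would be undefined: unsigned long is 32 bits here */
u64 last_addr = ((u64)1 << addr_width) - 1;	/* do the math in a 64-bit type */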
-
Submitted by Lu Baolu
The dev_iommu_priv_set() must be called after probe_device(). This fixes a NULL pointer dereference bug when booting a system with the kernel cmdline "intel_iommu=on,igfx_off", where dev_iommu_priv_set() is abused. The following stacktrace was produced:
Command line: BOOT_IMAGE=/isolinux/bzImage console=tty1 intel_iommu=on,igfx_off
...
DMAR: Host address width 39
DMAR: DRHD base: 0x000000fed90000 flags: 0x0
DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
DMAR: DRHD base: 0x000000fed91000 flags: 0x1
DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
DMAR: RMRR base: 0x0000009aa9f000 end: 0x0000009aabefff
DMAR: RMRR base: 0x0000009d000000 end: 0x0000009f7fffff
DMAR: No ATSR found
BUG: kernel NULL pointer dereference, address: 0000000000000038
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP PTI
CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.9.0-devel+ #2
Hardware name: LENOVO 20HGS0TW00/20HGS0TW00, BIOS N1WET46S (1.25s ) 03/30/2018
RIP: 0010:intel_iommu_init+0xed0/0x1136
Code: fe e9 61 02 00 00 bb f4 ff ff ff e9 57 02 00 00 48 63 d1 48 c1 e2 04 48 03 50 20 48 8b 12 48 85 d2 74 0b 48 8b 92 d0 02 00 00 48 89 7a 38 ff c1 e9 15 f5 ff ff 48 c7 c7 60 99 ac a7 49 c7 c7 a0
RSP: 0000:ffff96d180073dd0 EFLAGS: 00010282
RAX: ffff8c91037a7d20 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffffffffff
RBP: ffff96d180073e90 R08: 0000000000000001 R09: ffff8c91039fe3c0
R10: 0000000000000226 R11: 0000000000000226 R12: 000000000000000b
R13: ffff8c910367c650 R14: ffffffffa8426d60 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8c9107480000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000038 CR3: 00000004b100a001 CR4: 00000000003706e0
Call Trace:
? _raw_spin_unlock_irqrestore+0x1f/0x30
? call_rcu+0x10e/0x320
? trace_hardirqs_on+0x2c/0xd0
? rdinit_setup+0x2c/0x2c
? e820__memblock_setup+0x8b/0x8b
pci_iommu_init+0x16/0x3f
do_one_initcall+0x46/0x1e4
kernel_init_freeable+0x169/0x1b2
? rest_init+0x9f/0x9f
kernel_init+0xa/0x101
ret_from_fork+0x22/0x30
Modules linked in:
CR2: 0000000000000038
---[ end trace 3653722a6f936f18 ]---
Fixes: 01b9d4e2 ("iommu/vt-d: Use dev_iommu_priv_get/set()")
Reported-by: Torsten Hilbrich <torsten.hilbrich@secunet.com>
Reported-by: Wendy Wang <wendy.wang@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Torsten Hilbrich <torsten.hilbrich@secunet.com>
Link: https://lore.kernel.org/linux-iommu/96717683-70be-7388-3d2f-61131070a96a@secunet.com/
Link: https://lore.kernel.org/r/20200903065132.16879-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 24 August 2020, 1 commit
-
-
Submitted by Gustavo A. R. Silva
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.
[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
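The conversion itself is mechanical; a minimal sketch (case values and helper names are made up for illustration):

static void handle_capability(int cap)
{
	switch (cap) {
	case 1:
		setup_common();		/* hypothetical helper */
		fallthrough;		/* previously a "fall through" comment */
	case 2:
		setup_extra();		/* hypothetical helper */
		break;
	default:
		break;
	}
}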
-
- 24 July 2020, 8 commits
-
-
Submitted by Ashok Raj
For SR-IOV, the PF's PRI is shared between the PF and any associated VFs, and the PRI Capability is allowed for PFs but not for VFs. Searching for the PRI Capability on a VF always fails, even if its associated PF supports PRI. Add pci_pri_supported() to check whether a device or its associated PF supports PRI. [bhelgaas: commit log, avoid "!!"]
Fixes: b16d0cb9 ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
Link: https://lore.kernel.org/r/1595543849-19692-1-git-send-email-ashok.raj@intel.com
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Cc: stable@vger.kernel.org # v4.4+
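A sketch of the helper's intent (simplified, not the exact implementation): a VF carries no PRI Capability of its own, so look at the associated PF, which pci_physfn() returns (or the device itself for non-VFs).

static bool pri_supported(struct pci_dev *pdev)
{
	/* for a VF this inspects the parent PF's capability list */
	return pci_find_ext_capability(pci_physfn(pdev), PCI_EXT_CAP_ID_PRI) != 0;
}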
-
Submitted by Lu Baolu
The VT-d spec requires (10.4.4 Global Command Register, TE field) that: Hardware implementations supporting DMA draining must drain any in-flight DMA read/write requests queued within the Root-Complex before completing the translation enable command and reflecting the status of the command through the TES field in the Global Status register. Unfortunately, some integrated graphics devices fail to do so after some kind of power state transition. As a result, the system might get stuck in iommu_disable_translation(), waiting for the completion of the TE transition. This provides a quirk list for those devices and skips TE disabling if the quirk hits.
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=208363
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=206571
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Koba Ko <koba.ko@canonical.com>
Tested-by: Jun Miao <jun.miao@windriver.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200723013437.2268-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
As the Intel VT-d files have been moved to their own subdirectory, the prefix makes no sense any more. No functional changes.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-13-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
After page requests are handled, software must respond to the device which raised the page request with the result. This is done through the iommu_ops.page_response callback if the request was reported outside of the vendor IOMMU driver through iommu_report_device_fault(). This adds the VT-d implementation of the page_response ops.
Co-developed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Co-developed-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-12-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
The device_to_iommu() helper is refactored in two ways:
- make it global so that it can be used in other files;
- make bus/devfn optional so that callers can ignore these two returned values when they only want the corresponding iommu pointer.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
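A usage sketch of the refactored helper, assuming the behaviour described above (callers that only need the IOMMU pointer pass NULL for bus/devfn):

struct intel_iommu *iommu;

iommu = device_to_iommu(dev, NULL, NULL);	/* bus/devfn not needed here */
if (!iommu)
	return -ENODEV;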
-
Submitted by Jacob Pan
For guest-requested IOTLB invalidation, the address and mask are provided as part of the invalidation data. VT-d hardware silently ignores any address bits below the mask. Software shall also allow such cases, but give a warning if the address does not align with the mask. This patch relaxes the fault handling from an error to a warning and proceeds with the invalidation request using the given mask.
Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Liu Yi L
For guest SVA usage, in order to optimize for fewer VMEXITs, a guest request for an IOTLB flush also includes the device TLB. On the host side, the IOMMU driver performs the IOTLB and the implicit devTLB invalidation. When PASID-selective granularity is requested by the guest, we need to derive the equivalent address range for the devTLB instead of using the address information in the UAPI data. The reason is that, unlike an IOTLB flush, a devTLB flush does not support PASID-selective granularity. That is to say, we need to set the following in the PASID-based devTLB invalidation descriptor:
- the entire 64-bit range in the address, ~(0x1 << 63)
- S bit = 1 (VT-d CH 6.5.2.6)
Without this fix, the device TLB flush range is not set properly for PASID-selective granularity. This patch also merges the devTLB flush code for both the implicit and explicit cases.
Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jacob Pan
Global pages support was removed from the VT-d spec 3.0 for dev TLB invalidation. This patch removes the bits for vSVA. A similar change was already made for native SVA; see the link below.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/linux-iommu/20190830142919.GE11578@8bytes.org/T/
Link: https://lore.kernel.org/r/20200724014925.15523-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 17 July 2020, 1 commit
-
-
Submitted by Kees Cook
Using uninitialized_var() is dangerous as it papers over real bugs [1] (or can in the future), and suppresses unrelated compiler warnings (e.g. "unused variable"). If the compiler thinks it is uninitialized, either simply initialize the variable or make compiler changes. In preparation for removing [2] the [3] macro [4], remove all remaining needless uses with the following script:
git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
	xargs perl -pi -e \
		's/\buninitialized_var\(([^\)]+)\)/\1/g; s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'
drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid pathological white-space. No outstanding warnings were found building allmodconfig with GCC 9.3.0 for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64, alpha, and m68k.
[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/
Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 11 July 2020, 1 commit
-
-
Submitted by Rajat Jain
"External-facing" devices are internal devices that expose PCIe hierarchies such as Thunderbolt outside the platform [1]. Previously these internal devices were marked as "untrusted", the same as devices downstream from them. Use the ACPI or DT information to identify external-facing devices, but only mark the devices *downstream* from them as "untrusted" [2]. The external-facing device itself is no longer marked as untrusted.
[1] https://docs.microsoft.com/en-us/windows-hardware/drivers/pci/dsd-for-pcie-root-ports#identifying-externally-exposed-pcie-root-ports
[2] https://lore.kernel.org/linux-pci/20200610230906.GA1528594@bjorn-Precision-5520/
Link: https://lore.kernel.org/r/20200707224604.3737893-3-rajatja@google.com
Signed-off-by: Rajat Jain <rajatja@google.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
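A sketch of the policy described in the commit above (simplified from the PCI core; treat it as illustrative rather than the exact patch):

static void mark_untrusted(struct pci_dev *dev)
{
	struct pci_dev *parent = pci_upstream_bridge(dev);

	/* only devices *downstream* of an external-facing or already
	 * untrusted port become untrusted; the port itself stays trusted
	 */
	if (parent && (parent->untrusted || parent->external_facing))
		dev->untrusted = true;
}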
- 30 June 2020, 1 commit
-
-
Submitted by Joerg Roedel
Remove the use of dev->archdata.iommu and use the private per-device pointer provided by the IOMMU core code instead.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200625130836.1916-3-joro@8bytes.org
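The conversion pattern looks roughly like this (a before/after sketch; info stands for the driver's per-device data):

/* before: arch-specific pointer */
dev->archdata.iommu = info;
info = dev->archdata.iommu;

/* after: per-device pointer owned by the IOMMU core */
dev_iommu_priv_set(dev, info);
info = dev_iommu_priv_get(dev);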
-
- 23 June 2020, 4 commits
-
-
Submitted by Lu Baolu
The iommu_domain_identity_map() helper takes start/end PFNs as arguments. Fix a misuse where the start and end addresses were passed instead.
Fixes: e70b081c ("iommu/vt-d: Remove IOVA handling code from the non-dma_ops path")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Tom Murphy <murphyt7@tcd.ie>
Link: https://lore.kernel.org/r/20200622231345.29722-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
The Scalable-mode Page-walk Coherency (SMPWC) field in the VT-d extended capability register indicates the hardware coherency behavior for paging structures accessed through a PASID table entry. The current code ignores it and uses ECAP.C instead, which is only valid in legacy mode. Fix this so that paging-structure updates can be manually flushed from the cache line if hardware page walking is not snooped.
Fixes: 765b6a98 ("iommu/vt-d: Enumerate the scalable mode capability")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
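The gist of the fix is to consult the mode-appropriate capability bit; a sketch (the helper name is hypothetical):

static bool paging_structure_coherent(struct intel_iommu *iommu)
{
	/* scalable mode: ECAP.SMPWC; legacy mode: ECAP.C */
	return sm_supported(iommu) ? ecap_smpwc(iommu->ecap)
				   : ecap_coherent(iommu->ecap);
}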
-
Submitted by Rajat Jain
Currently, an external malicious PCI device can masquerade as the VID:PID of a faulty gfx device and thus have the iommu quirks applied to it, effectively disabling the IOMMU restrictions for itself. Thus we need to ensure that the device we are applying quirks to is indeed an internal trusted device.
Signed-off-by: Rajat Jain <rajatja@google.com>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
When using first-level translation for IOVA, the U/S bit in the page table is currently cleared, which implies that DMA requests with user privilege are blocked. As a result, the following error messages might be observed when passing a device through to user level:
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Read] Request device [41:00.0] PASID 1 fault addr 7ecdcd000 [fault reason 129] SM: U/S set 0 for first-level translation with user privilege
This fixes it by setting the U/S bit in the first-level page table, which makes IOVA over first level compatible with the previous second-level translation.
Fixes: b802d070 ("iommu/vt-d: Use iova over first level")
Reported-by: Xin Zeng <xin.zeng@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200622231345.29722-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 10 June 2020, 1 commit
-
-
Submitted by Joerg Roedel
Move all files related to the Intel IOMMU driver into its own subdirectory.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200609130303.26974-3-joro@8bytes.org
-
- 29 May 2020, 3 commits
-
-
Submitted by Jon Derrick
By removing the real DMA indirection in find_domain(), we can allow sub-devices of a real DMA device to have their own valid device_domain_info. The dmar lookup and context entry removal paths have been fixed to account for sub-devices.
Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-4-jonathan.derrick@intel.com
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207575
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jon Derrick
Sub-devices of a real DMA device might exist on a different segment than the real DMA device and its IOMMU. These devices should still have a valid device_domain_info, but the current DMA alias model won't allocate info for the sub-device. This patch adds a segment member to struct device_domain_info and uses the sub-device's BDF so that these sub-devices won't alias to other devices.
Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-3-jonathan.derrick@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jon Derrick
Domain context mapping can encounter issues with sub-devices of a real DMA device. A sub-device cannot have a valid context entry due to it potentially aliasing another device's 16-bit ID. It's expected that sub-devices of the real DMA device use the real DMA device's requester ID when context mapping. This is an issue when a sub-device is removed, because the context entry is cleared for all aliases. Other sub-devices are still valid, resulting in those sub-devices being stranded without valid context entries. The correct approach is to use the real DMA device when programming the context entries. The insertion path is correct because device_to_iommu() will return the bus and devfn of the real DMA device. The removal path needs to operate only on the real DMA device, otherwise the entire context entry would be cleared for all sub-devices of the real DMA device. This patch also adds a helper to determine whether a struct device is a sub-device of a real DMA device.
Fixes: 2b0140c6 ("iommu/vt-d: Use pci_real_dma_dev() for mapping")
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200527165617.297470-2-jonathan.derrick@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
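The helper mentioned above boils down to a pci_real_dma_dev() comparison; a simplified sketch:

static bool is_real_dma_subdevice(struct device *dev)
{
	/* a sub-device routes its DMA through a different "real" PCI device */
	return dev_is_pci(dev) &&
	       pci_real_dma_dev(to_pci_dev(dev)) != to_pci_dev(dev);
}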
-
- 27 May 2020, 1 commit
-
-
Submitted by Jean-Philippe Brucker
The pci_ats_supported() helper checks whether a device supports ATS and is allowed to use it. By checking the ATS capability it also integrates the pci_ats_disabled() check from pci_ats_init(). Simplify the VT-d checks.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200520152201.3309416-5-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 25 May 2020, 1 commit
-
-
Submitted by Qian Cai
The commit 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function") introduced a GCC warning:
drivers/iommu/intel-iommu.c:5330:1: warning: 'static' is not at beginning of declaration [-Wold-style-declaration]
 const static int
 ^~~~~
Fixes: 6ee1b77b ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200521215030.16938-1-cai@lca.pw
Signed-off-by: Joerg Roedel <jroedel@suse.de>
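The warning is purely about declaration order: storage-class specifiers such as static should come first. Illustratively (array dimensions made up):

/* warned form: triggers -Wold-style-declaration */
const static int inv_type_granu_map_old[4];

/* preferred form */
static const int inv_type_granu_map[4];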
-
- 18 May 2020, 3 commits
-
-
Submitted by Tom Murphy
There's no need for the non-dma_ops path to keep track of IOVAs. The whole point of the non-dma_ops path is that it allows the IOVAs to be handled separately. The IOVA handling code removed in this patch is pointless.
Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200516062101.29541-19-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Lu Baolu
When a PASID is used for SVA by a device, it's possible that the PASID entry is cleared before the device flushes all ongoing DMA requests. The IOMMU should tolerate and ignore the non-recoverable faults caused by the untranslated requests from this device. For example, when an exception happens, the process terminates before the device driver stops DMA and calls the IOMMU driver to unbind the PASID. The process exit flow is as follows:
do_exit() {
     exit_mm() {
          mm_put();
          exit_mmap() {
               intel_invalidate_range() //mmu notifier
               tlb_finish_mmu()
               mmu_notifier_release(mm) {
                    intel_iommu_release() {
[2]                      intel_iommu_teardown_pasid();
                         intel_iommu_flush_tlbs();
                    }
               }
               unmap_vmas();
               free_pgtables();
          };
     }
     exit_files(tsk) {
          close_files() {
               dsa_close();
[1]            dsa_stop_dma();
               intel_svm_unbind_pasid();
          }
     }
}
Care must be taken on VT-d to avoid unrecoverable faults in the time window between [1] and [2]. [The process exit flow was contributed by Jacob Pan.] Intel VT-d provides such a function through the FPD bit of the PASID entry. This sets the FPD bit when the PASID entry is changing from present to nonpresent in the mmu notifier, and clears it when the PASID is unbound.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Link: https://lore.kernel.org/r/20200516062101.29541-15-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Jacob Pan
This patch is an initial step toward replacing the Intel SVM code with the following IOMMU SVA ops:
intel_svm_bind_mm()        => iommu_sva_bind_device()
intel_svm_unbind_mm()      => iommu_sva_unbind_device()
intel_svm_is_pasid_valid() => iommu_sva_get_pasid()
The features below will continue to work but are not included in this patch, in that they are handled mostly within the IOMMU subsystem:
- IO page fault
- mmu notifier
Consolidation of the above will come after merging the generic IOMMU SVA code [1]. There should not be any changes needed for SVA users such as accelerator device drivers during this time.
[1] http://jpbrucker.net/sva/
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200516062101.29541-12-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
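A usage sketch of the generic SVA API that replaces the Intel-specific entry points (error handling abridged; the exact return conventions are assumptions based on the mapping above):

struct iommu_sva *handle;
u32 pasid;

handle = iommu_sva_bind_device(dev, current->mm, NULL);
if (IS_ERR(handle))
	return PTR_ERR(handle);

pasid = iommu_sva_get_pasid(handle);
/* ... program @pasid into the device's work descriptors ... */

iommu_sva_unbind_device(handle);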
-