- 28 April 2022, 3 commits
-
-
Committed by Jason Gunthorpe

This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY and IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU.

Currently only Intel and AMD IOMMUs are known to support this feature. They both implement it as an IOPTE bit that, when set, causes PCIe TLPs to that IOVA with the no-snoop bit set to be treated as though the no-snoop bit was clear.

The new API is triggered by calling enforce_cache_coherency() before mapping any IOVA into the domain, which globally switches on no-snoop blocking. This allows other implementations that might block no-snoop globally, outside the IOPTE - AMD also documents such a HW capability.

Leave AMD out of sync with Intel and have it block no-snoop even for in-kernel users. This can be trivially resolved in a follow-up patch.

Only VFIO needs to call this API because it does not have detailed control over the device to avoid requesting no-snoop behavior at the device level. Other places using domains with real kernel drivers should simply avoid asking their devices to set the no-snoop bit.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/1-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
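For orientation, a minimal sketch of how a VFIO-style consumer would opt in through the iommu_enforce_cache_coherency() wrapper this series adds; the surrounding helper is illustrative, not the actual VFIO code:

```c
#include <linux/iommu.h>

/* Sketch: must run before any IOVA is mapped into the domain; a false
 * return means this IOMMU cannot block no-snoop TLPs for the domain. */
static int attach_with_no_snoop_blocking(struct iommu_domain *domain,
					 struct device *dev)
{
	if (!iommu_enforce_cache_coherency(domain))
		return -EINVAL;	/* fall back to IOMMU_CACHE semantics */

	return iommu_attach_device(domain, dev);
}
```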
-
Committed by Mario Limonciello

Bit 1 of the IVRS IVinfo field indicates that the IOMMU has been used for pre-boot DMA protection. Export this capability to allow other places in the kernel to check for it on AMD systems.

Link: https://www.amd.com/system/files/TechDocs/48882_IOMMU.pdf
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ce7627fa1c596878ca6515dd9d4381a45b6ee38c.1650878781.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
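A hedged sketch of how another subsystem could consume the exported capability; IOMMU_CAP_PRE_BOOT_PROTECTION is the capability name this series is understood to introduce:

```c
#include <linux/iommu.h>

/* Sketch: ask the IOMMU layer whether firmware enabled pre-boot DMA
 * protection for this device's bus (on AMD, bit 1 of IVinfo). */
static bool preboot_dma_protected(struct device *dev)
{
	return iommu_capable(dev->bus, IOMMU_CAP_PRE_BOOT_PROTECTION);
}
```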
-
Committed by Mario Limonciello

Previously the AMD IOMMU would only enable SWIOTLB in certain circumstances:
* IOMMU in passthrough mode
* SME enabled

This logic however doesn't work when an untrusted device that doesn't do page-aligned DMA transactions is plugged in. The expectation is that a bounce buffer is used for those transactions. This fails like this:

  swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)

That happens because the bounce buffers were allocated and then freed during startup, but the bounce buffering code expects that all IOMMUs have left them enabled.

Remove the criteria for setting up bounce buffers on AMD systems to ensure they're always available for supporting untrusted devices.

Fixes: 82612d66 ("iommu: Allow the dma-iommu api to use bounce buffers")
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220404204723.9767-2-mario.limonciello@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
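A sketch of the behavioural change, assuming the pre-patch init logic looked roughly like the condition below (names approximate, not a verbatim diff):

```c
/* Before the fix (approximate): SWIOTLB was kept only for passthrough
 * mode or active SME; otherwise its bounce buffers were freed at boot. */
static void __init amd_swiotlb_policy_old(void)
{
	swiotlb = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
}

/* After the fix: the condition is simply removed, so the bounce buffers
 * stay available for untrusted devices doing unaligned DMA. */
```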
-
- 28 February 2022, 2 commits
-
-
Committed by Lu Baolu

Move the domain-specific operations out of struct iommu_ops into a new structure that only has domain-specific operations. This solves the problem of needing to know if the method vector for a given operation needs to be retrieved from the device or the domain. Logically the domain ops are the ones that make sense for external subsystems and endpoint drivers to use, while device ops, with the sole exception of domain_alloc, are IOMMU API internals.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-10-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
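The new vector, abbreviated for orientation (fields trimmed; see include/linux/iommu.h for the full set):

```c
/* Runtime, domain-scoped methods now live here; device-scoped methods
 * (probe_device, release_device, domain_alloc, ...) stay in iommu_ops. */
struct iommu_domain_ops {
	int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
	int (*map)(struct iommu_domain *domain, unsigned long iova,
		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
			size_t size, struct iommu_iotlb_gather *gather);
	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
				    dma_addr_t iova);
	void (*free)(struct iommu_domain *domain);
	/* ... flush and sync callbacks elided ... */
};
```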
-
Committed by Lu Baolu

The is_attach_deferred iommu_ops callback is a device op. The domain argument is unnecessary and never used. Remove it to keep the code clean.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20220216025249.3459465-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
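The resulting signature change, shown side by side (before/after, not one compilable unit):

```c
/* before: the domain argument was carried but never used */
bool (*is_attach_deferred)(struct iommu_domain *domain, struct device *dev);

/* after: a pure device op */
bool (*is_attach_deferred)(struct device *dev);
```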
-
- 14 February 2022, 1 commit
-
-
Committed by Lennert Buytenhek

The AMD IOMMU logs I/O page faults and such to a ring buffer in system memory, and this ring buffer can overflow. The AMD IOMMU spec has the following to say about the interrupt status bit that signals this overflow condition:

	EventOverflow: Event log overflow. RW1C. Reset 0b. 1 = IOMMU
	event log overflow has occurred. This bit is set when a new
	event is to be written to the event log and there is no usable
	entry in the event log, causing the new event information to
	be discarded. An interrupt is generated when EventOverflow = 1b
	and MMIO Offset 0018h[EventIntEn] = 1b. No new event log
	entries are written while this bit is set.

	Software Note: To resume logging, clear EventOverflow (W1C),
	and write a 1 to MMIO Offset 0018h[EventLogEn].

The AMD IOMMU driver doesn't currently implement this recovery sequence, meaning that if a ring buffer overflow occurs, logging of EVT/PPR/GA events will cease entirely.

This patch implements the spec-mandated reset sequence, with the minor tweak that the hardware seems to want a 0 written to MMIO Offset 0018h[EventLogEn] first, before writing a 1 into this field, or the IOMMU won't actually resume logging events.

Signed-off-by: Lennert Buytenhek <buytenh@arista.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/YVrSXEdW2rzEfOvk@wantstofly.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
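A condensed sketch of that recovery sequence, including the 0-then-1 EventLogEn tweak; the helper and register names approximate the driver's and are not guaranteed verbatim:

```c
static void restart_event_logging(struct amd_iommu *iommu)
{
	/* Stop logging first: HW wants EventLogEn written 0 before 1 */
	iommu_feature_disable(iommu, CONTROL_EVT_LOG_EN);

	/* W1C the EventOverflow status bit so logging may resume */
	writel(MMIO_STATUS_EVT_OVERFLOW_INT_MASK,
	       iommu->mmio_base + MMIO_STATUS_OFFSET);

	/* Re-arm the event log */
	iommu_feature_enable(iommu, CONTROL_EVT_LOG_EN);
}
```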
-
- 04 October 2021, 1 commit
-
-
Committed by Tom Lendacky

Replace uses of mem_encrypt_active() with calls to cc_platform_has() with the CC_ATTR_MEM_ENCRYPT attribute.

Remove the implementation of mem_encrypt_active() across all arches.

For s390, since the default implementation of cc_platform_has() matches the s390 implementation of mem_encrypt_active(), cc_platform_has() does not need to be implemented in s390 (the config option ARCH_HAS_CC_PLATFORM is not set).

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210928191009.32551-9-bp@alien8.de
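The conversion is mechanical; a representative before/after under the new attribute API:

```c
#include <linux/cc_platform.h>

static bool memory_encryption_active(void)
{
	/* was: return mem_encrypt_active(); */
	return cc_platform_has(CC_ATTR_MEM_ENCRYPT);
}
```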
-
- 29 September 2021, 1 commit
-
-
Committed by Lennert Buytenhek

This patch makes iommu/amd call report_iommu_fault() when an I/O page fault occurs, which has two effects:

1) It allows device drivers to register a callback to be notified of I/O page faults, via the iommu_set_fault_handler() API.

2) It triggers the io_page_fault tracepoint in report_iommu_fault() when an I/O page fault occurs.

The latter point is the main aim of this patch, as it allows rasdaemon-like daemons to be notified of I/O page faults, and to possibly initiate corrective action in response.

A number of other IOMMU drivers already use report_iommu_fault(), and I/O page faults on those IOMMUs therefore already trigger this tracepoint -- but this isn't yet the case for AMD-Vi and Intel DMAR.

The AMD IOMMU specification suggests that the bit in an I/O page fault event log entry that signals whether an I/O page fault was for a read request or for a write request is only meaningful when the faulting access was to a present page, but some testing on a Ryzen 3700X suggests that this bit encodes the correct value even for I/O page faults to non-present pages. Therefore, this patch passes the R/W information up the stack even for I/O page faults to non-present pages.

Signed-off-by: Lennert Buytenhek <buytenh@arista.com>
Link: https://lore.kernel.org/r/YVLyBW97vZLpOaAp@wantstofly.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
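For effect 1), a minimal sketch of a driver-side consumer of the existing fault-handler API that this change now reaches on AMD hardware:

```c
#include <linux/iommu.h>

static int my_iopf_handler(struct iommu_domain *dom, struct device *dev,
			   unsigned long iova, int flags, void *token)
{
	dev_warn(dev, "I/O page fault at %#lx (%s)\n", iova,
		 (flags & IOMMU_FAULT_WRITE) ? "write" : "read");
	return -ENOSYS;	/* unhandled: let the core ratelimit/log it */
}

/* registration, e.g. after attaching the domain:
 *	iommu_set_fault_handler(domain, my_iopf_handler, NULL);
 */
```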
-
- 18 August 2021, 2 commits
-
-
Committed by Robin Murphy

The DMA ops reset/setup can simply be unconditional, since iommu-dma already knows only to touch DMA domains.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/6450b4f39a5a086d505297b4a53ff1e4a7a0fe7c.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy

The core code bakes its own cookies now.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/648e74e7422caa6a7db7fb0c36813c7bd2007af8.1628682048.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 02 August 2021, 4 commits
-
-
Committed by Nadav Amit

When running on an AMD vIOMMU, it is better to avoid TLB flushes of unmodified PTEs. vIOMMUs require the hypervisor to synchronize the virtualized IOMMU's PTEs with the physical ones. This process induces overhead.

The AMD IOMMU allows us to flush any range that is aligned to a power of 2. So, when running on top of a vIOMMU, break the range into sub-ranges that are naturally aligned, and flush each one separately. This approach is better when running with a vIOMMU, but on physical IOMMUs the penalty of IOTLB misses due to unnecessarily flushed entries is likely to be low.

Repurpose domain_flush_pages() (i.e., keep the name, change the logic) so it is used to choose whether to perform one flush of the whole range or multiple ones, to avoid flushing unnecessary ranges. Use NpCache, as usual, to infer whether the IOMMU is physical or virtual.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-8-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
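The splitting idea reads roughly like the sketch below; flush_one() is a hypothetical primitive standing in for the driver's per-range flush:

```c
/* Flush [address, address + size) as a series of naturally aligned
 * power-of-2 chunks (a sketch, not the driver's exact helper). */
static void flush_naturally_aligned(u64 address, u64 size)
{
	while (size) {
		/* Largest chunk that is both aligned at 'address' and
		 * no bigger than what remains. */
		unsigned long align = address ? __ffs64(address) : 63;
		unsigned long fit   = fls64(size) - 1;
		u64 chunk = 1ULL << min(align, fit);

		flush_one(address, chunk);	/* hypothetical */
		address += chunk;
		size    -= chunk;
	}
}
```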
-
Committed by Nadav Amit

On virtual machines, software must flush the IOTLB after each page table entry update. The iommu_map_sg() code iterates through the given scatter-gather list and invokes iommu_map() for each element in the list, which calls into the vendor IOMMU driver through an iommu_ops callback. As a result, a single sg mapping may lead to multiple IOTLB flushes.

Fix this by adding an amd_iotlb_sync_map() callback and flushing at this point, after all sg mappings have been set. This commit follows and is inspired by commit 933fcd01 ("iommu/vt-d: Add iotlb_sync_map callback").

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-7-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
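A sketch of the callback being added, with the signature of the iotlb_sync_map hook of that era; the flush helper shown is approximate:

```c
static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
				     unsigned long iova, size_t size)
{
	struct protection_domain *domain = to_pdomain(dom);

	/* One flush for the whole scatter-gather mapping, instead of
	 * one per iommu_map() call. */
	domain_flush_pages(domain, iova, size);
}
```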
-
Committed by Nadav Amit

AMD's IOMMU can flush any range efficiently (i.e., in a single flush). This is in contrast, for instance, to Intel IOMMUs, which have a limit on the number of pages that can be flushed in a single flush. In addition, AMD's IOMMU does not care about the page size, so changes of the page size do not need to trigger a TLB flush.

So in most cases, a TLB flush due to a disjoint range is not needed for AMD. Yet, vIOMMUs require the hypervisor to synchronize the virtualized IOMMU's PTEs with the physical ones. This process induces overhead, so it is better not to cause unnecessary flushes, i.e., flushes of PTEs that were not modified.

Implement amd_iommu_iotlb_gather_add_page() and use it instead of the generic iommu_iotlb_gather_add_page(). Ignore disjoint regions unless the "non-present cache" feature is reported by the IOMMU capabilities, as this is an indication that we are running on a physical IOMMU. A similar indication is used by VT-d (see "caching mode"). The new logic retains the same flushing behavior that we had before the introduction of page-selective IOTLB flushes for AMD.

On virtualized environments, check whether the newly flushed region and the gathered one are disjoint, and flush if they are.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-6-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
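A sketch of the gather policy described above: on bare metal (no NpCache) keep merging ranges as before; under a vIOMMU, flush the gathered range early rather than widen it across a disjoint gap. Helper names follow the series' description but are approximate:

```c
static void amd_iommu_iotlb_gather_add_page(struct iommu_domain *domain,
					    struct iommu_iotlb_gather *gather,
					    unsigned long iova, size_t size)
{
	/* amd_iommu_np_cache mirrors the NpCache capability bit */
	if (amd_iommu_np_cache &&
	    iommu_iotlb_gather_is_disjoint(gather, iova, size))
		iommu_iotlb_sync(domain, gather);	/* flush, then restart */

	iommu_iotlb_gather_add_range(gather, iova, size);
}
```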
-
Committed by Nadav Amit

A recent patch attempted to enable selective page flushes on the AMD IOMMU but neglected to adapt amd_iommu_iotlb_sync() to use the selective flushes.

Adapt amd_iommu_iotlb_sync() to use selective flushes and change amd_iommu_unmap() to collect the flushes. As a defensive measure, to avoid potential issues such as those the Intel IOMMU driver encountered recently, flush the page-walk caches by always setting the "pde" parameter. This can be removed later.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-2-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 26 July 2021, 2 commits
-
-
Committed by Lennert Buytenhek

For the printing of RMP_HW_ERROR / RMP_PAGE_FAULT / IO_PAGE_FAULT events, the AMD IOMMU code uses the following logic:

	if (pdev)
		dev_data = dev_iommu_priv_get(&pdev->dev);

	if (dev_data && __ratelimit(&dev_data->rs)) {
		pci_err(pdev, ...
	} else {
		printk_ratelimit() / pr_err{,_ratelimited}(...
	}

This means that if we receive an event for a PCI devid which actually does have a struct pci_dev and an attached struct iommu_dev_data, but rate limiting kicks in, we'll fall back to the non-PCI branch of the test, and print the event in a different format.

Fix this by changing the logic to:

	if (dev_data) {
		if (__ratelimit(&dev_data->rs)) {
			pci_err(pdev, ...
		}
	} else {
		pr_err_ratelimited(...
	}

Suggested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/YPgk1dD1gPMhJXgY@wantstofly.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Zhen Lei

Make IOMMU_DEFAULT_LAZY the default when the AMD_IOMMU config is set, which matches current behaviour.

For the "fullflush" param, just call iommu_set_dma_strict(true) directly.

Since we already get a strict vs lazy mode print in iommu_subsys_init(), and maintain a deprecation print when the "fullflush" param is passed, drop the prints in amd_iommu_init_dma_ops().

Finally, drop the global flag amd_iommu_unmap_flush, as it no longer has any purpose.

[jpg: Rebase for relocated file and drop amd_iommu_unmap_flush]
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1626088340-5838-6-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
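The "fullflush" handling then reduces to roughly this, inside the amd_iommu= option parser (parser structure simplified, not the verbatim patch):

```c
static int __init parse_amd_iommu_options(char *str)
{
	if (strncmp(str, "fullflush", 9) == 0) {
		pr_warn("amd_iommu=fullflush deprecated, use iommu.strict=1 instead\n");
		iommu_set_dma_strict(true);	/* replaces amd_iommu_unmap_flush */
	}
	return 1;
}
```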
-
- 25 June 2021, 1 commit
-
-
Committed by Jean-Philippe Brucker

Passing a 64-bit address width to iommu_setup_dma_ops() is valid on virtual platforms, but isn't currently possible. The overflow check in iommu_dma_init_domain() prevents this even when @dma_base isn't 0. Pass a limit address instead of a size, so callers don't have to fake a size to work around the check.

The base and limit parameters are being phased out, because:
* they are redundant for x86 callers. dma-iommu already reserves the first page, and the upper limit is already in domain->geometry.
* they can now be obtained from dev->dma_range_map on Arm.

But removing them on Arm isn't completely straightforward, so that is left for future work. As an intermediate step, simplify the x86 callers by passing dummy limits, as in the sketch below.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210618152059.1194210-5-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
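The reworked call for an x86 probe_finalize path looks about like this (dummy base and limit, since dma-iommu reserves page 0 itself and reads the upper bound from domain->geometry; function name illustrative):

```c
static void probe_finalize_x86(struct device *dev)
{
	/* was: iommu_setup_dma_ops(dev, base, size); now a limit address */
	iommu_setup_dma_ops(dev, 0, U64_MAX);
}
```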
-
- 07 June 2021, 2 commits
-
-
Committed by Shaokun Zhang

'err' will be initialized later anyway, so clean up the redundant initialization.

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1621395447-34738-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy

Now that DMA ops are part of the core API via iommu-dma, fold the vestigial remains of the IOMMU_DMA_OPS init state into the IOMMU API phase, and clean up a few other leftovers. This should also close the race window wherein bus_set_iommu() effectively makes the DMA ops state visible before its nominal initialisation - it seems this was previously fairly benign, but since commit a250c23f ("iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE") it can now lead to the strict flush-queue policy inadvertently being picked for default domains allocated during that window, with a corresponding unexpected performance impact.

Reported-by: Jussi Maki <joamaki@gmail.com>
Tested-by: Jussi Maki <joamaki@gmail.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Fixes: a250c23f ("iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE")
Link: https://lore.kernel.org/r/665db61e23ff8d54ac5eb391bef520b3a803fcb9.1622727974.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 18 May 2021, 2 commits
-
-
Committed by Nadav Amit

The logic to determine the mask of page-specific invalidations was tested in userspace. As the code was copied into the kernel, the parentheses were mistakenly set in the wrong place, resulting in the wrong mask. Fix it.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Fixes: 268aa454 ("iommu/amd: Page-specific invalidations for more than one page")
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210502070001.1559127-2-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
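An illustration of the class of bug (not the driver's exact line): moving the parentheses turns a fill-all-low-bits mask into a single set bit:

```c
#include <linux/types.h>

static u64 fill_bits_below(u64 address, unsigned int msb_diff)
{
	/* intended: set every bit below position msb_diff */
	return address | ((1ULL << msb_diff) - 1);

	/* misplaced parentheses instead set exactly one bit:
	 *	address | (1ULL << (msb_diff - 1));
	 */
}
```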
-
Committed by Jean-Philippe Brucker

Since commit 08a27c1c ("iommu: Add support to change default domain of an iommu group") a user can switch a device between IOMMU and direct DMA through sysfs. This doesn't work for the AMD IOMMU at the moment because dev->dma_ops is not cleared when switching from a DMA to an identity IOMMU domain. The DMA layer thus attempts to use the dma-iommu ops on an identity domain, causing an oops:

	# echo 0000:00:05.0 > /sys/bus/pci/drivers/e1000e/unbind
	# echo identity > /sys/bus/pci/devices/0000:00:05.0/iommu_group/type
	# echo 0000:00:05.0 > /sys/bus/pci/drivers/e1000e/bind
	...
	BUG: kernel NULL pointer dereference, address: 0000000000000028
	...
	Call Trace:
	 iommu_dma_alloc
	 e1000e_setup_tx_resources
	 e1000e_open

Since iommu_change_dev_def_domain() calls probe_finalize() again, clear the dma_ops there like VT-d does.

Fixes: 08a27c1c ("iommu: Add support to change default domain of an iommu group")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20210422094216.2282097-1-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
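A sketch of the fix in the AMD probe_finalize path, mirroring what the commit says VT-d already does (shape approximate, not a verbatim diff):

```c
static void amd_iommu_probe_finalize(struct device *dev)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (domain && domain->type == IOMMU_DOMAIN_DMA)
		iommu_setup_dma_ops(dev, 0, U64_MAX);
	else
		set_dma_ops(dev, NULL);	/* identity: back to direct DMA */
}
```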
-
- 15 April 2021, 1 commit
-
-
Committed by Shaokun Zhang

'devid' has already been checked in check_device(), so there is no need to double-check it; clean this up.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1617939040-35579-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 08 April 2021, 1 commit
-
-
Committed by Nadav Amit

Currently, IOMMU invalidations and device-IOTLB invalidations using the AMD IOMMU fall back to full address-space invalidation if more than a single page needs to be flushed. Full flushes are especially inefficient when the IOMMU is virtualized by a hypervisor, since they require the hypervisor to synchronize the entire address space.

AMD IOMMUs allow providing a mask to perform page-specific invalidations for multiple pages that match the address. The mask is encoded as part of the address, and the first zero bit in the address (in bits [51:12]) indicates the mask size.

Use this hardware feature to perform selective IOMMU and IOTLB flushes, and combine the logic of both for better code reuse.

The IOMMU invalidations passed a smoke test. The device IOTLB invalidations are untested.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210323210619.513069-1-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
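The encoding can be sketched for a naturally aligned run of 2^k pages as follows (single pages use a plain, unmasked invalidation; the helper name is hypothetical):

```c
/* 'base' must be aligned to (1ULL << (k + 12)), with k >= 1. Bits 12
 * through (12 + k - 2) are set and bit (12 + k - 1) stays zero, so the
 * first zero bit in [51:12] tells the IOMMU the mask size. */
static u64 encode_inv_address(u64 base, unsigned int k)
{
	return base | (((1ULL << (k - 1)) - 1) << 12);
}
```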
-
- 07 April 2021, 4 commits
-
-
Committed by Christoph Hellwig

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210402143312.372386-3-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Christoph Hellwig

The device errata mechanism is entirely unused, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210402143312.372386-2-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy

Instead, make the global iommu_dma_strict parameter in iommu.c canonical by exporting helpers to get and set it, and use those directly in the drivers. This makes sure that the iommu.strict parameter also works for the AMD and Intel IOMMU drivers on x86. As those default to lazy flushing, a new IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate representing the default when not overridden by an explicit parameter.

[ported on top of the other iommu_attr changes and added a few small missing bits]

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210401155256.298656-19-hch@lst.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Qi Liu

Remove the duplicate check of pasids in amd_iommu_domain_enable_v2(), as it is already guaranteed in amd_iommu_init_device().

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1617275956-4467-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 28 January 2021, 11 commits
-
-
Committed by Suravee Suthikulpanit

Switch to using the IO page table framework for the AMD IOMMU v1 page table.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-14-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

These implement map and unmap for the AMD IOMMU v1 pagetable, which will be used by the IO pagetable framework. Also clean up unused extern function declarations.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-13-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

This implements iova_to_phys for the AMD IOMMU v1 pagetable, which will be used by the IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-12-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Simplify the fetch_pte() function. There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-11-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Since the IO page table root and mode parameters have been moved into struct amd_io_pgtable, the function is no longer needed. Therefore, remove it along with struct domain_pgtable.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-9-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Consolidate the logic into a v1_free_pgtable() helper function, which is called from the IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-8-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Prepare for migrating to the IO page table framework. There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-7-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Move the declarations to a header file so that they can be included across multiple files. There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-6-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Make use of the new struct amd_io_pgtable in preparation for removing struct domain_pgtable.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-5-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

This better organizes the data structure, since it contains IO page table related information.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-4-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Suravee Suthikulpanit

Add initial hook-up code to implement the generic IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-3-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
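The hook-up boils down to registering per-format init functions with the framework; a trimmed sketch (the alloc/free bodies are elided here):

```c
#include <linux/io-pgtable.h>

/* Dispatched via the framework's format table under AMD_IOMMU_V1 */
struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns = {
	.alloc	= v1_alloc_pgtable,	/* build struct io_pgtable + ops */
	.free	= v1_free_pgtable,	/* tear the v1 table down */
};
```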
-
- 27 January 2021, 1 commit
-
-
Committed by Suravee Suthikulpanit

Move the function to a header file to allow inclusion in other files.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-2-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 January 2021, 1 commit
-
-
Committed by David Woodhouse

The AMD IOMMU initialisation registers the IRQ remapping domain for each IOMMU before doing the final sanity check that every I/OAPIC is covered. This means that the AMD irq_remapping_select() function gets invoked even when IRQ remapping has been disabled, eventually leading to a NULL pointer dereference in alloc_irq_table().

Unfortunately, the IVRS isn't fully parsed early enough for the sanity check to be done before registering the IRQ domain in the first place. Doing that would be nice, but is a larger and more error-prone task. The simple fix is just for irq_remapping_select() to refuse to report a match when IRQ remapping has been disabled.

Link: https://lore.kernel.org/lkml/ed4be9b4-24ac-7128-c522-7ef359e8185d@gmx.at
Fixes: a1a785b5 ("iommu/amd: Implement select() method on remapping irqdomain")
Reported-by: Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lore.kernel.org/r/04bbe8bca87f81a3cfa93ec4299e53f47e00e5b3.camel@infradead.org
Signed-off-by: Will Deacon <will@kernel.org>
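The shape of the fix: bail out of the match early when remapping is off, so the core never routes IRQ allocation into a dead domain (guard as described; the surrounding lookup code is elided):

```c
static int irq_remapping_select(struct irq_domain *d,
				struct irq_fwspec *fwspec,
				enum irq_domain_bus_token bus_token)
{
	if (!amd_iommu_irq_remap)
		return 0;	/* never claim a match when disabled */

	/* ... existing devid lookup and per-IOMMU match ... */
	return 0;
}
```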
-