- 29 March 2018, 7 commits
-
-
Setting the IRQ remap table for a specific devid (or its alias devid) involves three steps, and those three steps are repeated each time this is done. Introduce a new helper function, move those steps into it, and use that function instead. The compiler can still decide whether it is worth inlining. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
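A minimal sketch of what such a helper could look like, based only on the description above (the helper name and the exact DTE-update calls are illustrative, not necessarily the upstream code):

    static void set_remap_table_entry(struct amd_iommu *iommu, u16 devid,
                                      struct irq_remap_table *table)
    {
            irq_lookup_table[devid] = table;   /* publish the per-devid table */
            set_dte_irq_entry(devid, table);   /* point the DTE at it */
            iommu_flush_dte(iommu, devid);     /* make the IOMMU pick it up */
    }

    /* Callers then become: */
    set_remap_table_entry(iommu, devid, table);
    if (alias != devid)
            set_remap_table_entry(iommu, alias, table);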
-
The variable of type struct irq_remap_table is always named `table' except in amd_iommu_update_ga(), where it is called `irt'. Make it consistent and name it `table' there as well. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
alloc_irq_table() has a special ioapic argument. If set, it will pre-allocate / reserve the first 32 indexes. The argument is true in only one case, and alloc_irq_table() becomes a little simpler if we extract the special bits to the caller. The caller of irq_remapping_alloc() is holding irq_domain_mutex, so the initialization via iommu->irte_ops->set_allocated() should not race against other users. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
The function get_irq_table() reads/writes irq_lookup_table while holding amd_iommu_devtable_lock. It also modifies amd_iommu_dev_table[].data[2]. set_dte_entry() uses amd_iommu_dev_table[].data[0|1] (under the domain->lock), so that should be okay. The access to the iommu itself is serialized with the iommu's own lock. So move get_irq_table() out from under amd_iommu_devtable_lock. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
domain_id_alloc() and domain_id_free() are used for id management. Those two functions share a bitmap (amd_iommu_pd_alloc_bitmap) and set/clear bits based on id allocation. There is no need to share this with amd_iommu_devtable_lock; the bitmap can use its own lock for this operation. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
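A minimal sketch of a dedicated bitmap lock, assuming the identifier names used in the description (the bounds checking is illustrative):

    static DEFINE_SPINLOCK(pd_bitmap_lock);

    static u16 domain_id_alloc(void)
    {
            unsigned long flags;
            int id;

            spin_lock_irqsave(&pd_bitmap_lock, flags);
            id = find_first_zero_bit(amd_iommu_pd_alloc_bitmap, MAX_DOMAIN_ID);
            if (id > 0 && id < MAX_DOMAIN_ID)
                    __set_bit(id, amd_iommu_pd_alloc_bitmap);
            else
                    id = 0;                 /* out of domain ids */
            spin_unlock_irqrestore(&pd_bitmap_lock, flags);

            return id;
    }

    static void domain_id_free(int id)
    {
            unsigned long flags;

            spin_lock_irqsave(&pd_bitmap_lock, flags);
            if (id > 0 && id < MAX_DOMAIN_ID)
                    __clear_bit(id, amd_iommu_pd_alloc_bitmap);
            spin_unlock_irqrestore(&pd_bitmap_lock, flags);
    }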
-
alloc_dev_data() adds new items to dev_data_list and search_dev_data() searches for items in this list. Both protect access to the list with a spinlock. There is no need to navigate back and forth within the list, and there is also no deletion of a specific item. This qualifies the list to become a lockless list, and as part of this the spinlock can be removed. With this change the ordering of items within the list changes: before the change new items were added to the end of the list, now they are added to the front. I don't think it matters, but wanted to mention it. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
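A minimal sketch of such an llist conversion, with the surrounding driver details elided:

    static LLIST_HEAD(dev_data_list);

    struct iommu_dev_data {
            struct llist_node dev_data_list;   /* was: list_head + spinlock */
            u16 devid;
            /* ... */
    };

    /* alloc_dev_data(): lockless add; new items land at the front */
    llist_add(&dev_data->dev_data_list, &dev_data_list);

    /* search_dev_data(): lockless traversal of the current snapshot */
    struct llist_node *node = dev_data_list.first;

    llist_for_each_entry(dev_data, node, dev_data_list) {
            if (dev_data->devid == devid)
                    return dev_data;
    }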
-
find_dev_data() does not check whether the return value of alloc_dev_data() is NULL. This was okay once, because the pointer was simply returned as-is. Since commit df3f7a6e ("iommu/amd: Use is_attach_deferred call-back") the pointer may be used within find_dev_data(), so a NULL check is required. Cc: Baoquan He <bhe@redhat.com> Fixes: df3f7a6e ("iommu/amd: Use is_attach_deferred call-back") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
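A sketch of the shape of the fix; the deferred-attach dereference and the translation_pre_enabled() helper are illustrative stand-ins for the use introduced by df3f7a6e:

    static struct iommu_dev_data *find_dev_data(u16 devid)
    {
            struct iommu_dev_data *dev_data = search_dev_data(devid);

            if (dev_data == NULL) {
                    dev_data = alloc_dev_data(devid);
                    if (!dev_data)          /* the previously missing check */
                            return NULL;

                    /* the pointer is now dereferenced here, which is what
                     * made the NULL check necessary */
                    if (translation_pre_enabled(iommu))
                            dev_data->defer_attach = true;
            }

            return dev_data;
    }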
-
- 15 March 2018, 2 commits
-
-
Submitted by Gary R Hook
Remove printk and use a preferable error logging function. Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Suravee Suthikulpanit
Since the AMD IOMMU driver currently flushes all TLB entries when the page size is more than one, use the same interface for both iommu_ops.flush_iotlb_all() and iommu_ops.iotlb_sync(). Cc: Joerg Roedel <joro@8bytes.org> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 February 2018, 1 commit
-
-
Submitted by Scott Wood
get_irq_table() previously acquired amd_iommu_devtable_lock, which is not a raw lock and thus cannot be acquired from atomic context on PREEMPT_RT. Many calls to modify_irte*() come from atomic context due to the IRQ desc->lock, as does amd_iommu_update_ga() due to the preemption disabling in vcpu_load/put().

The only difference between calling get_irq_table() and reading from irq_lookup_table[] directly, other than the lock acquisition and the amd_iommu_rlookup_table[] check, is the case where the table entry is unpopulated. That should never happen when looking up a devid that came from an irq_2_irte struct, as get_irq_table() would have already been called on that devid during irq_remapping_alloc().

The lock acquisition is not needed in these cases because entries in irq_lookup_table[] never change once non-NULL -- nor would the amd_iommu_devtable_lock usage in get_irq_table() provide meaningful protection if they did, since the lock is released before the looked-up table is used by the caller.

Rename the old get_irq_table() to alloc_irq_table(), and create a new lockless get_irq_table() for use in non-allocating contexts that WARNs if it doesn't find what it's looking for.

Signed-off-by: Scott Wood <swood@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
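A minimal sketch of the lockless lookup as described above:

    static struct irq_remap_table *get_irq_table(u16 devid)
    {
            struct irq_remap_table *table;

            if (WARN_ONCE(!amd_iommu_rlookup_table[devid],
                          "%s: no iommu for devid %x\n", __func__, devid))
                    return NULL;

            table = irq_lookup_table[devid];
            if (WARN_ONCE(!table, "%s: no table for devid %x\n",
                          __func__, devid))
                    return NULL;

            /* entries never change once non-NULL, so no lock is needed */
            return table;
    }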
-
- 13 February 2018, 2 commits
-
-
Submitted by Scott Wood
search_dev_data() acquires a non-raw lock, which can't be done from atomic context on PREEMPT_RT. There is no need to look at dev_data because guest_mode should never be set if use_vapic is not set. Signed-off-by: Scott Wood <swood@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Scott Wood
Several functions in this driver are called from atomic context, and thus raw locks must be used in order to be safe on PREEMPT_RT. This includes paths that must wait for command completion, which is a potential PREEMPT_RT latency concern but not easily avoidable. Signed-off-by: Scott Wood <swood@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
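An illustrative sketch of the kind of change involved (the field name is an assumption; on PREEMPT_RT an ordinary spinlock_t sleeps, a raw_spinlock_t does not):

    struct amd_iommu {
            /* was: spinlock_t lock; -- a sleeping lock on PREEMPT_RT */
            raw_spinlock_t lock;    /* safe to take from atomic context */
            /* ... */
    };

    raw_spin_lock_irqsave(&iommu->lock, flags);
    /* queue a command, possibly spin-waiting for its completion */
    raw_spin_unlock_irqrestore(&iommu->lock, flags);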
-
- 01 February 2018, 1 commit
-
-
Submitted by David Rientjes
Commit 4d4bbd85 ("mm, oom_reaper: skip mm structs with mmu notifiers") prevented the oom reaper from unmapping private anonymous memory when the oom victim mm had mmu notifiers registered. The rationale is that doing mmu_notifier_invalidate_range_{start,end}() around the unmap_page_range(), which is needed, can block, and the oom killer will then stall forever waiting for the victim to exit, which may not be possible without reaping.

That concern is real, but it is only true for mmu notifiers that have blockable invalidate_range_{start,end}() callbacks. This patch adds a "flags" field to mmu notifier ops that can set a bit to indicate that these callbacks do not block. The implementation is steered toward an expensive slowpath, such as after the oom reaper has grabbed mm->mmap_sem of a still alive oom victim.

[rientjes@google.com: mmu_notifier_invalidate_range_end() can also call the invalidate_range() callback, which must not block; fix comment] Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1801091339570.240101@chino.kir.corp.google.com
[akpm@linux-foundation.org: make mm_has_blockable_invalidate_notifiers() return bool, use rwsem_is_locked()] Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1712141329500.74052@chino.kir.corp.google.com

Signed-off-by: David Rientjes <rientjes@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Christian König <christian.koenig@amd.com> Acked-by: Dimitri Sivanich <sivanich@hpe.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Oded Gabbay <oded.gabbay@gmail.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Joerg Roedel <joro@8bytes.org> Cc: Doug Ledford <dledford@redhat.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Jérôme Glisse <jglisse@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
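A sketch of the mechanism described above, a paraphrase rather than the exact upstream code; the my_* names are hypothetical:

    /* A notifier whose invalidate callbacks never block sets a flag: */
    #define MMU_INVALIDATE_DOES_NOT_BLOCK   (0x01)

    static const struct mmu_notifier_ops my_ops = {
            .flags                  = MMU_INVALIDATE_DOES_NOT_BLOCK,
            .invalidate_range_start = my_invalidate_range_start,
            .invalidate_range_end   = my_invalidate_range_end,
    };

    /* The oom path then asks whether any registered notifier may block: */
    bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
    {
            struct mmu_notifier *mn;
            bool ret = false;
            int id;

            if (!mm_has_notifiers(mm))
                    return false;

            id = srcu_read_lock(&srcu);
            hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
                    if (mn->ops->invalidate_range_start &&
                        !(mn->ops->flags & MMU_INVALIDATE_DOES_NOT_BLOCK)) {
                            ret = true;  /* at least one callback may block */
                            break;
                    }
            }
            srcu_read_unlock(&srcu, id);

            return ret;
    }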
-
- 17 January 2018, 10 commits
-
-
Submitted by Robin Murphy
Now that no more drivers rely on arbitrary early initialisation via an of_iommu_init_fn hook, let's clean up the redundant remnants. The IOMMU_OF_DECLARE() macro needs to remain for now, as the probe-deferral mechanism has no other nice way to detect built-in drivers before they have registered themselves, such that it can make the right decision. Reviewed-by: Sricharan R <sricharan@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
Having of_iommu_init() call ipmmu_init() via ipmmu_vmsa_iommu_of_setup() does nothing that the subsys_initcall wouldn't do slightly later anyway, since probe-deferral of masters means it is no longer critical to register the driver super-early. Clean it up. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
Since the MSM IOMMU driver now probes via DT exclusively rather than platform data, dependent masters should be deferred until the IOMMU itself is ready. Thus we can do away with the early initialisation hook to unconditionally claim the bus ops, and instead do that only once an IOMMU is actually probed. Furthermore, this should also make the driver safe for multiplatform kernels on non-MSM SoCs. Reviewed-by: Sricharan R <sricharan@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Sohil Mehta
If the CPU has support for 5-level paging enabled and the IOMMU also supports 5-level paging, then enable the 5-level paging mode for first-level translations, which are used when SVM is enabled. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Sohil Mehta
Add a check to verify IOMMU 5-level paging support. If the CPU supports 5-level paging but the IOMMU does not support it, then disable SVM by not allocating PASID tables. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Sohil Mehta
Add a check to verify IOMMU 1GB page support. If the CPU supports 1GB pages but the IOMMU does not support it, then disable SVM by not allocating PASID tables. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
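A minimal sketch of how this and the previous commit's checks could look in the SVM setup path; the capability-helper names are assumptions based on the descriptions:

    /* Refuse to enable SVM when CPU and IOMMU paging capabilities disagree */
    if (cpu_feature_enabled(X86_FEATURE_GBPAGES) &&
        !cap_fl1gp_support(iommu->cap)) {
            pr_err("%s: SVM disabled, incompatible 1GB page capability\n",
                   iommu->name);
            return -EINVAL;         /* skip PASID table allocation */
    }

    if (cpu_feature_enabled(X86_FEATURE_LA57) &&
        !cap_5lp_support(iommu->cap)) {
            pr_err("%s: SVM disabled, incompatible paging mode\n",
                   iommu->name);
            return -EINVAL;
    }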
-
Submitted by Sohil Mehta
Update the IOMMU default domain address width to 57 bits. This would enable the IOMMU to do up to 5 levels of paging for second-level translations (IOVA translation requests without PASID). Even though the maximum supported address width is being increased to 57, __iommu_calculate_agaw() would still set the actual supported address width to the maximum supported by the IOMMU hardware. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Peter Xu
After commit a1ddcbe9 ("iommu/vt-d: Pass dmar_domain directly into iommu_flush_iotlb_psi", 2015-08-12), we have the domain pointer as a parameter to iommu_flush_iotlb_psi(), so there is no need to fetch it from the cache again.

More importantly, a NULL pointer dereference bug is reported on RHEL7 (and it can be reproduced on some old upstream kernels too, e.g., v4.13) by unplugging a 40g nic from a VM (hard to test unplug on a real host, but it should be the same): https://bugzilla.redhat.com/show_bug.cgi?id=1531367

[   24.391863] pciehp 0000:00:03.0:pcie004: Slot(0): Attention button pressed
[   24.393442] pciehp 0000:00:03.0:pcie004: Slot(0): Powering off due to button press
[   29.721068] i40evf 0000:01:00.0: Unable to send opcode 2 to PF, err I40E_ERR_QUEUE_EMPTY, aq_err OK
[   29.783557] iommu: Removing device 0000:01:00.0 from group 3
[   29.784662] BUG: unable to handle kernel NULL pointer dereference at 0000000000000304
[   29.785817] IP: iommu_flush_iotlb_psi+0xcf/0x120
[   29.786486] PGD 0
[   29.786487] P4D 0
[   29.786812]
[   29.787390] Oops: 0000 [#1] SMP
[   29.787876] Modules linked in: ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_ng
[   29.795371] CPU: 0 PID: 156 Comm: kworker/0:2 Not tainted 4.13.0 #14
[   29.796366] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.11.0-1.el7 04/01/2014
[   29.797593] Workqueue: pciehp-0 pciehp_power_thread
[   29.798328] task: ffff94f5745b4a00 task.stack: ffffb326805ac000
[   29.799178] RIP: 0010:iommu_flush_iotlb_psi+0xcf/0x120
[   29.799919] RSP: 0018:ffffb326805afbd0 EFLAGS: 00010086
[   29.800666] RAX: ffff94f5bc56e800 RBX: 0000000000000000 RCX: 0000000200000025
[   29.801667] RDX: ffff94f5bc56e000 RSI: 0000000000000082 RDI: 0000000000000000
[   29.802755] RBP: ffffb326805afbf8 R08: 0000000000000000 R09: ffff94f5bc86bbf0
[   29.803772] R10: ffffb326805afba8 R11: 00000000000ffdc4 R12: ffff94f5bc86a400
[   29.804789] R13: 0000000000000000 R14: 00000000ffdc4000 R15: 0000000000000000
[   29.805792] FS:  0000000000000000(0000) GS:ffff94f5bfc00000(0000) knlGS:0000000000000000
[   29.806923] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   29.807736] CR2: 0000000000000304 CR3: 000000003499d000 CR4: 00000000000006f0
[   29.808747] Call Trace:
[   29.809156]  flush_unmaps_timeout+0x126/0x1c0
[   29.809800]  domain_exit+0xd6/0x100
[   29.810322]  device_notifier+0x6b/0x70
[   29.810902]  notifier_call_chain+0x4a/0x70
[   29.812822]  __blocking_notifier_call_chain+0x47/0x60
[   29.814499]  blocking_notifier_call_chain+0x16/0x20
[   29.816137]  device_del+0x233/0x320
[   29.817588]  pci_remove_bus_device+0x6f/0x110
[   29.819133]  pci_stop_and_remove_bus_device+0x1a/0x20
[   29.820817]  pciehp_unconfigure_device+0x7a/0x1d0
[   29.822434]  pciehp_disable_slot+0x52/0xe0
[   29.823931]  pciehp_power_thread+0x8a/0xa0
[   29.825411]  process_one_work+0x18c/0x3a0
[   29.826875]  worker_thread+0x4e/0x3b0
[   29.828263]  kthread+0x109/0x140
[   29.829564]  ? process_one_work+0x3a0/0x3a0
[   29.831081]  ? kthread_park+0x60/0x60
[   29.832464]  ret_from_fork+0x25/0x30
[   29.833794] Code: 85 ed 74 0b 5b 41 5c 41 5d 41 5e 41 5f 5d c3 49 8b 54 24 60 44 89 f8 0f b6 c4 48 8b 04 c2 48 85 c0 74 49 45 0f b6 ff 4a 8b 3c f8 <80> bf
[   29.838514] RIP: iommu_flush_iotlb_psi+0xcf/0x120 RSP: ffffb326805afbd0
[   29.840362] CR2: 0000000000000304
[   29.841716] ---[ end trace b10ec0d6900868d3 ]---

This patch fixes that problem if applied to the v4.13 kernel. The bug does not exist on the latest upstream kernel since it was fixed as a side effect of commit 13cf0174 ("iommu/vt-d: Make use of iova deferred flushing", 2017-08-15). But IMHO it's still good to have this patch upstream.

CC: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Fixes: a1ddcbe9 ("iommu/vt-d: Pass dmar_domain directly into iommu_flush_iotlb_psi") Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
Removing the early device registration hook overlooked the fact that it only ran conditionally on a compatible device being present in the DT. With exynos_iommu_init() now running as an unconditional initcall, problems arise on non-Exynos systems when other IOMMU drivers find themselves unable to install their ops on the platform bus, or at worst the Exynos ops get called with someone else's domain and all hell breaks loose. The global ops/cache setup could probably all now be triggered from the first IOMMU probe, as with dma_dev assignment, but for the time being the simplest fix is to resurrect the logic from commit a7b67cd5 ("iommu/exynos: Play nice in multi-platform builds") to explicitly check the DT for the presence of an Exynos IOMMU before trying anything. Fixes: 928055a0 ("iommu/exynos: Remove custom platform device registration code") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
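A sketch of the resurrected DT presence check, following the pattern the commit describes:

    static int __init exynos_iommu_init(void)
    {
            struct device_node *np;

            /* Do nothing on non-Exynos systems: no SysMMU node in the DT */
            np = of_find_matching_node(NULL, sysmmu_of_match);
            if (!np)
                    return 0;
            of_node_put(np);

            /* ... global kmem_cache / iommu ops setup continues here ... */
            return 0;
    }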
-
Submitted by Geert Uytterhoeven
When exposing data access through debugfs, the correct debugfs_create_*() functions must be used, depending on data type. Remove all casts from data pointers passed to debugfs_create_*() functions, as such casts prevent the compiler from flagging bugs. omap_iommu.nr_tlb_entries is "int", hence casting to "u8 *" exposes only a part of it. Fix this by using debugfs_create_u32() instead. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Joerg Roedel <jroedel@suse.de>
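A before/after sketch of the bug described; the dentry and field names are illustrative:

    /* Before: the cast hides a width mismatch -- nr_tlb_entries is an
     * int, so only one byte of it is exposed, and no warning is emitted */
    debugfs_create_u8("nr_tlb_entries", 0400, dir,
                      (u8 *)&obj->nr_tlb_entries);

    /* After: use the helper matching the 32-bit width of the field */
    debugfs_create_u32("nr_tlb_entries", 0400, dir,
                       (u32 *)&obj->nr_tlb_entries);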
-
- 15 January 2018, 1 commit
-
-
Submitted by Christoph Hellwig
Move the few remaining bits of swiotlb glue towards their callers, and remove the pointless swiotlb variable on ia64. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Christian König <christian.koenig@amd.com>
-
- 12 January 2018, 1 commit
-
-
Submitted by Sinan Kaya
pci_get_bus_and_slot() is restrictive in that it assumes domain=0 wherever a PCI device is present. This restricts device drivers from being reused for other domain numbers. Getting ready to remove pci_get_bus_and_slot() in favor of pci_get_domain_bus_and_slot(), hard-code the domain number as 0 for the AMD IOMMU driver. Signed-off-by: Sinan Kaya <okaya@codeaurora.org> Signed-off-by: Bjorn Helgaas <helgaas@kernel.org> Reviewed-by: Gary R Hook <gary.hook@amd.com> Acked-by: Joerg Roedel <jroedel@suse.de>
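The mechanical shape of the conversion (a sketch; the variable names are illustrative):

    /* Before: the domain is implicitly 0 */
    pdev = pci_get_bus_and_slot(bus, devfn);

    /* After: the domain is an explicit argument -- hard-coded to 0 here */
    pdev = pci_get_domain_bus_and_slot(0, bus, devfn);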
-
- 03 January 2018, 2 commits
-
-
Submitted by Robin Murphy
For PCI devices behind an aliasing PCIe-to-PCI/X bridge, the bridge alias to DevFn 0.0 on the subordinate bus may match the original RID of the device, resulting in the same SID being present in the device's fwspec twice. This causes trouble later in arm_smmu_write_strtab_ent() when we wind up visiting the STE a second time and find it already live. Avoid the issue by giving arm_smmu_install_ste_for_dev() the cleverness to skip over duplicates. It seems mildly counterintuitive compared to preventing the duplicates from existing in the first place, but since the DT and ACPI probe paths build their fwspecs differently, this is actually the cleanest and most self-contained way to deal with it. Cc: <stable@vger.kernel.org> Fixes: 8f785154 ("iommu/arm-smmu: Implement of_xlate() for SMMUv3") Reported-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Tested-by: Jayachandran C. <jnair@caviumnetworks.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jean-Philippe Brucker
Kasan reports a double free when finalise_stage_fn fails: the io_pgtable ops are freed by arm_smmu_domain_finalise and then again by arm_smmu_domain_free. Prevent this by leaving pgtbl_ops empty on failure. Cc: <stable@vger.kernel.org> Fixes: 48ec83bc ("iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices") Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
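A sketch of the fix's shape: publish the ops only once the domain has been fully finalised, so the error path frees them exactly once.

    pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
    if (!pgtbl_ops)
            return -ENOMEM;

    ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
    if (ret < 0) {
            free_io_pgtable_ops(pgtbl_ops);
            return ret;             /* smmu_domain->pgtbl_ops stays NULL */
    }

    smmu_domain->pgtbl_ops = pgtbl_ops;     /* assign only on success */
    return 0;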
-
- 30 December 2017, 1 commit
-
-
Submitted by Thomas Gleixner
The 'early' argument of irq_domain_activate_irq() is actually used to denote reservation mode. To avoid confusion, rename it before abuse happens. No functional change. Fixes: 72491643 ("genirq/irqdomain: Update irq_domain_ops.activate() signature") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Alexandru Chirvasitu <achirvasub@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Dou Liyang <douly.fnst@cn.fujitsu.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: Mikael Pettersson <mikpelinux@gmail.com> Cc: Josh Poulson <jopoulso@microsoft.com> Cc: Mihai Costache <v-micos@microsoft.com> Cc: Stephen Hemminger <sthemmin@microsoft.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-pci@vger.kernel.org Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Dexuan Cui <decui@microsoft.com> Cc: Simon Xiao <sixiao@microsoft.com> Cc: Saeed Mahameed <saeedm@mellanox.com> Cc: Jork Loeser <Jork.Loeser@microsoft.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: devel@linuxdriverproject.org Cc: KY Srinivasan <kys@microsoft.com> Cc: Alan Cox <alan@linux.intel.com> Cc: Sakari Ailus <sakari.ailus@intel.com> Cc: linux-media@vger.kernel.org
-
- 21 December 2017, 5 commits
-
-
Submitted by Wei Yongjun
In case of error, the function iommu_group_alloc() returns ERR_PTR() and never returns NULL. The NULL test in the return value check should be replaced with IS_ERR(). Fixes: 7f4c9176 ("iommu/tegra: Allow devices to be grouped") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Thierry Reding <treding@nvidia.com>
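The shape of the fix (a sketch):

    group = iommu_group_alloc();
    if (IS_ERR(group))  /* iommu_group_alloc() returns ERR_PTR(), never NULL */
            return NULL;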
-
Submitted by Jerry Snitselaar
It is unlikely request_threaded_irq() will fail, but if it does for some reason we should clear iommu->pr_irq in the error path. Also, intel_svm_finish_prq() shouldn't try to clean up the page request interrupt if pr_irq is 0. Without these, if request_threaded_irq() were to fail the following occurs.

Failure with neither fix:

[    0.683147] ------------[ cut here ]------------
[    0.683148] NULL pointer, cannot free irq
[    0.683158] WARNING: CPU: 1 PID: 1 at kernel/irq/irqdomain.c:1632 irq_domain_free_irqs+0x126/0x140
[    0.683160] Modules linked in:
[    0.683163] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2 #3
[    0.683165] Hardware name: /NUC7i3BNB, BIOS BNKBL357.86A.0036.2017.0105.1112 01/05/2017
[    0.683168] RIP: 0010:irq_domain_free_irqs+0x126/0x140
[    0.683169] RSP: 0000:ffffc90000037ce8 EFLAGS: 00010292
[    0.683171] RAX: 000000000000001d RBX: ffff880276283c00 RCX: ffffffff81c5e5e8
[    0.683172] RDX: 0000000000000001 RSI: 0000000000000096 RDI: 0000000000000246
[    0.683174] RBP: ffff880276283c00 R08: 0000000000000000 R09: 000000000000023c
[    0.683175] R10: 0000000000000007 R11: 0000000000000000 R12: 000000000000007a
[    0.683176] R13: 0000000000000001 R14: 0000000000000000 R15: 0000010010000000
[    0.683178] FS:  0000000000000000(0000) GS:ffff88027ec80000(0000) knlGS:0000000000000000
[    0.683180] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.683181] CR2: 0000000000000000 CR3: 0000000001c09001 CR4: 00000000003606e0
[    0.683182] Call Trace:
[    0.683189]  intel_svm_finish_prq+0x3c/0x60
[    0.683191]  free_dmar_iommu+0x1ac/0x1b0
[    0.683195]  init_dmars+0xaaa/0xaea
[    0.683200]  ? klist_next+0x19/0xc0
[    0.683203]  ? pci_do_find_bus+0x50/0x50
[    0.683205]  ? pci_get_dev_by_id+0x52/0x70
[    0.683208]  intel_iommu_init+0x498/0x5c7
[    0.683211]  pci_iommu_init+0x13/0x3c
[    0.683214]  ? e820__memblock_setup+0x61/0x61
[    0.683217]  do_one_initcall+0x4d/0x1a0
[    0.683220]  kernel_init_freeable+0x186/0x20e
[    0.683222]  ? set_debug_rodata+0x11/0x11
[    0.683225]  ? rest_init+0xb0/0xb0
[    0.683226]  kernel_init+0xa/0xff
[    0.683229]  ret_from_fork+0x1f/0x30
[    0.683259] Code: 89 ee 44 89 e7 e8 3b e8 ff ff 5b 5d 44 89 e7 44 89 ee 41 5c 41 5d 41 5e e9 a8 84 ff ff 48 c7 c7 a8 71 a7 81 31 c0 e8 6a d3 f9 ff <0f> ff 5b 5d 41 5c 41 5d 41 5e c3 0f 1f 44 00 00 66 2e 0f 1f 84
[    0.683285] ---[ end trace f7650e42792627ca ]---

With iommu->pr_irq = 0, but no check in intel_svm_finish_prq():

[    0.669561] ------------[ cut here ]------------
[    0.669563] Trying to free already-free IRQ 0
[    0.669573] WARNING: CPU: 3 PID: 1 at kernel/irq/manage.c:1546 __free_irq+0xa4/0x2c0
[    0.669574] Modules linked in:
[    0.669577] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2 #4
[    0.669579] Hardware name: /NUC7i3BNB, BIOS BNKBL357.86A.0036.2017.0105.1112 01/05/2017
[    0.669581] RIP: 0010:__free_irq+0xa4/0x2c0
[    0.669582] RSP: 0000:ffffc90000037cc0 EFLAGS: 00010082
[    0.669584] RAX: 0000000000000021 RBX: 0000000000000000 RCX: ffffffff81c5e5e8
[    0.669585] RDX: 0000000000000001 RSI: 0000000000000086 RDI: 0000000000000046
[    0.669587] RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000000023c
[    0.669588] R10: 0000000000000007 R11: 0000000000000000 R12: ffff880276253960
[    0.669589] R13: ffff8802762538a4 R14: ffff880276253800 R15: ffff880276283600
[    0.669593] FS:  0000000000000000(0000) GS:ffff88027ed80000(0000) knlGS:0000000000000000
[    0.669594] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.669596] CR2: 0000000000000000 CR3: 0000000001c09001 CR4: 00000000003606e0
[    0.669602] Call Trace:
[    0.669616]  free_irq+0x30/0x60
[    0.669620]  intel_svm_finish_prq+0x34/0x60
[    0.669623]  free_dmar_iommu+0x1ac/0x1b0
[    0.669627]  init_dmars+0xaaa/0xaea
[    0.669631]  ? klist_next+0x19/0xc0
[    0.669634]  ? pci_do_find_bus+0x50/0x50
[    0.669637]  ? pci_get_dev_by_id+0x52/0x70
[    0.669639]  intel_iommu_init+0x498/0x5c7
[    0.669642]  pci_iommu_init+0x13/0x3c
[    0.669645]  ? e820__memblock_setup+0x61/0x61
[    0.669648]  do_one_initcall+0x4d/0x1a0
[    0.669651]  kernel_init_freeable+0x186/0x20e
[    0.669653]  ? set_debug_rodata+0x11/0x11
[    0.669656]  ? rest_init+0xb0/0xb0
[    0.669658]  kernel_init+0xa/0xff
[    0.669661]  ret_from_fork+0x1f/0x30
[    0.669662] Code: 7a 08 75 0e e9 c3 01 00 00 4c 39 7b 08 74 57 48 89 da 48 8b 5a 18 48 85 db 75 ee 89 ee 48 c7 c7 78 67 a7 81 31 c0 e8 4c 37 fa ff <0f> ff 48 8b 34 24 4c 89 ef e8 0e 4c 68 00 49 8b 46 40 48 8b 80
[    0.669688] ---[ end trace 58a470248700f2fc ]---

Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
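A sketch of the two-part fix as described; the surrounding code is abbreviated:

    /* intel_svm_enable_prq(), error path: */
    ret = request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT,
                               iommu->prq_name, iommu);
    if (ret) {
            pr_err("IOMMU: %s: failed to request page request IRQ\n",
                   iommu->name);
            dmar_free_hwirq(irq);
            iommu->pr_irq = 0;      /* don't leave a stale irq number */
            /* ... free the PRQ pages and bail out ... */
    }

    /* intel_svm_finish_prq(): */
    if (iommu->pr_irq) {            /* only clean up a live interrupt */
            free_irq(iommu->pr_irq, iommu);
            dmar_free_hwirq(iommu->pr_irq);
            iommu->pr_irq = 0;
    }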
-
Submitted by Jordan Crouse
The result of iommu_group_get() was being blindly used in both attach and detach, which results in a dereference when trying to work with an unknown device. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
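A sketch of the guard (the attach path shown; surrounding code abbreviated):

    struct iommu_group *group = iommu_group_get(dev);

    if (!group)                 /* unknown device: nothing to attach/detach */
            return -EINVAL;

    /* ... attach or detach using the group ... */
    iommu_group_put(group);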
-
Submitted by Gary R Hook
The AMD IOMMU specification Rev 3.00 (December 2016) introduces a new Enhanced PPR Handling Support (EPHSup) bit in the MMIO register offset 0030h (IOMMU Extended Feature Register). When EPHSup=1, the IOMMU hardware requires the PPR bit of the device table entry (DTE) to be set in order to support PPR for a particular endpoint device. Please see https://support.amd.com/TechDocs/48882_IOMMU.pdf for this revision of the AMD IOMMU specification. Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
Submitted by Gary R Hook
When an unknown type event occurs, the default information written to the syslog should dump raw event data. This could provide insight into the event that occurred. Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
- 15 December 2017, 1 commit
-
-
Submitted by Thierry Reding
Implement the ->device_group() and ->of_xlate() callbacks which are used in order to group devices. Each group can then share a single domain. This is implemented primarily in order to achieve the same semantics on Tegra210 and earlier as on Tegra186, where the Tegra SMMU was replaced by an ARM SMMU. Users of the IOMMU API can now use the same code to share domains between devices, whereas previously they used to attach each device individually. Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 22 November 2017, 1 commit
-
-
Submitted by Kees Cook
This converts all remaining cases of the old setup_timer() API into using timer_setup(), where the callback argument is the structure already holding the struct timer_list. These should have no behavioral changes, since they just change which pointer is passed into the callback with the same available pointers after conversion. It handles the following examples, in addition to some other variations.

Casting from unsigned long:

    void my_callback(unsigned long data)
    {
        struct something *ptr = (struct something *)data;
        ...
    }
    ...
    setup_timer(&ptr->my_timer, my_callback, ptr);

and forced object casts:

    void my_callback(struct something *ptr)
    {
        ...
    }
    ...
    setup_timer(&ptr->my_timer, my_callback, (unsigned long)ptr);

become:

    void my_callback(struct timer_list *t)
    {
        struct something *ptr = from_timer(ptr, t, my_timer);
        ...
    }
    ...
    timer_setup(&ptr->my_timer, my_callback, 0);

Direct function assignments:

    void my_callback(unsigned long data)
    {
        struct something *ptr = (struct something *)data;
        ...
    }
    ...
    ptr->my_timer.function = my_callback;

have a temporary cast added, along with converting the args:

    void my_callback(struct timer_list *t)
    {
        struct something *ptr = from_timer(ptr, t, my_timer);
        ...
    }
    ...
    ptr->my_timer.function = (TIMER_FUNC_TYPE)my_callback;

And finally, callbacks without a data assignment:

    void my_callback(unsigned long data)
    {
        ...
    }
    ...
    setup_timer(&ptr->my_timer, my_callback, 0);

have their argument renamed to verify they're unused during conversion:

    void my_callback(struct timer_list *unused)
    {
        ...
    }
    ...
    timer_setup(&ptr->my_timer, my_callback, 0);

The conversion is done with the following Coccinelle script:

    spatch --very-quiet --all-includes --include-headers \
        -I ./arch/x86/include -I ./arch/x86/include/generated \
        -I ./include -I ./arch/x86/include/uapi \
        -I ./arch/x86/include/generated/uapi -I ./include/uapi \
        -I ./include/generated/uapi --include ./include/linux/kconfig.h \
        --dir . \
        --cocci-file ~/src/data/timer_setup.cocci

    @fix_address_of@ expression e; @@
    setup_timer( -&(e) +&e , ...)

    // Update any raw setup_timer() usages that have a NULL callback, but
    // would otherwise match change_timer_function_usage, since the latter
    // will update all function assignments done in the face of a NULL
    // function initialization in setup_timer().
    @change_timer_function_usage_NULL@ expression _E; identifier _timer; type _cast_data; @@
    ( -setup_timer(&_E->_timer, NULL, _E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E->_timer, NULL, (_cast_data)_E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E._timer, NULL, &_E); +timer_setup(&_E._timer, NULL, 0); | -setup_timer(&_E._timer, NULL, (_cast_data)&_E); +timer_setup(&_E._timer, NULL, 0); )

    @change_timer_function_usage@ expression _E; identifier _timer; struct timer_list _stl; identifier _callback; type _cast_func, _cast_data; @@
    ( -setup_timer(&_E->_timer, _callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | _E->_timer@_stl.function = _callback; | _E->_timer@_stl.function = &_callback; | _E->_timer@_stl.function = (_cast_func)_callback; | _E->_timer@_stl.function = (_cast_func)&_callback; | _E._timer@_stl.function = _callback; | _E._timer@_stl.function = &_callback; | _E._timer@_stl.function = (_cast_func)_callback; | _E._timer@_stl.function = (_cast_func)&_callback; )

    // callback(unsigned long arg)
    @change_callback_handle_cast depends on change_timer_function_usage@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; identifier _handle; @@
    void _callback( -_origtype _origarg +struct timer_list *t ) { ( ... when != _origarg _handletype *_handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg ) }

    // callback(unsigned long arg) without existing variable
    @change_callback_handle_cast_no_arg depends on change_timer_function_usage && !change_callback_handle_cast@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; @@
    void _callback( -_origtype _origarg +struct timer_list *t ) { + _handletype *_origarg = from_timer(_origarg, t, _timer); + ... when != _origarg - (_handletype *)_origarg + _origarg ... when != _origarg }

    // Avoid already converted callbacks.
    @match_callback_converted depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier t; @@
    void _callback(struct timer_list *t) { ... }

    // callback(struct something *handle)
    @change_callback_handle_arg depends on change_timer_function_usage && !match_callback_converted && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; @@
    void _callback( -_handletype *_handle +struct timer_list *t ) { + _handletype *_handle = from_timer(_handle, t, _timer); ... }

    // If change_callback_handle_arg ran on an empty function, remove
    // the added handler.
    @unchange_callback_handle_arg depends on change_timer_function_usage && change_callback_handle_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; identifier t; @@
    void _callback(struct timer_list *t) { - _handletype *_handle = from_timer(_handle, t, _timer); }

    // We only want to refactor the setup_timer() data argument if we've found
    // the matching callback. This undoes changes in change_timer_function_usage.
    @unchange_timer_function_usage depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg && !change_callback_handle_arg@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type change_timer_function_usage._cast_data; @@
    ( -timer_setup(&_E->_timer, _callback, 0); +setup_timer(&_E->_timer, _callback, (_cast_data)_E); | -timer_setup(&_E._timer, _callback, 0); +setup_timer(&_E._timer, _callback, (_cast_data)&_E); )

    // If we fixed a callback from a .function assignment, fix the
    // assignment cast now.
    @change_timer_function_assignment depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_func; typedef TIMER_FUNC_TYPE; @@
    ( _E->_timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -&_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)_callback; +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -&_callback; +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; )

    // Sometimes timer functions are called directly. Replace matched args.
    @change_timer_function_calls depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression _E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_data; @@
    _callback( ( -(_cast_data)_E +&_E->_timer | -(_cast_data)&_E +&_E._timer | -_E +&_E->_timer ) )

    // If a timer has been configured without a data argument, it can be
    // converted without regard to the callback argument, since it is unused.
    @match_timer_function_unused_data@ expression _E; identifier _timer; identifier _callback; @@
    ( -setup_timer(&_E->_timer, _callback, 0); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0L); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0UL); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0L); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0UL); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_timer, _callback, 0); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0L); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0UL); +timer_setup(&_timer, _callback, 0); | -setup_timer(_timer, _callback, 0); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0L); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0UL); +timer_setup(_timer, _callback, 0); )

    @change_callback_unused_data depends on match_timer_function_unused_data@ identifier match_timer_function_unused_data._callback; type _origtype; identifier _origarg; @@
    void _callback( -_origtype _origarg +struct timer_list *unused ) { ... when != _origarg }

Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 18 November 2017, 1 commit
-
-
Submitted by Robin Murphy
The intel-iommu DMA ops fail to correctly handle scatterlists where sg->offset is greater than PAGE_SIZE - the IOVA allocation is computed appropriately based on the page-aligned portion of the offset, but the mapping is set up relative to sg->page, which means it fails to actually cover the whole buffer (and in the worst case doesn't cover it at all):

        (sg->dma_address + sg->dma_len) ----+
        sg->dma_address ---------+          |
        iov_pfn------+           |          |
                     |           |          |
                     v           v          v
  iova:    a        b        c        d        e        f
           |--------|--------|--------|--------|--------|
                    <...calculated....>
           [_____mapped______]
  pfn:     0        1        2        3        4        5
           |--------|--------|--------|--------|--------|
           ^                 ^                 ^
           |                 |                 |
     sg->page ----+          |                 |
     sg->offset --------------+                |
     (sg->offset + sg->length) ----------+

As a result, the caller ends up overrunning the mapping into whatever lies beyond, which usually goes badly:

    [ 429.645492] DMAR: DRHD: handling fault status reg 2
    [ 429.650847] DMAR: [DMA Write] Request device [02:00.4] fault addr f2682000 ...

Whilst this is a fairly rare occurrence, it can happen from the result of intermediate scatterlist processing such as scatterwalk_ffwd() in the crypto layer. Whilst that particular site could be fixed up, it still seems worthwhile to bring intel-iommu in line with other DMA API implementations in handling this robustly. To that end, fix the intel_map_sg() path to line up the mapping correctly (in units of MM pages rather than VT-d pages to match the aligned_nrpages() calculation) regardless of the offset, and use sg_phys() consistently for clarity.

Reported-by: Harsh Jain <Harsh@chelsio.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Tested-by: Jacob Pan <jacob.jun.pan@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
- 07 November 2017, 4 commits
-
-
get_cpu_ptr() disables preemption and returns the ->fq object of the current CPU. raw_cpu_ptr() does the same except that it does not disable preemption, which means the scheduler can move the task to another CPU after it has obtained the per-CPU object. In this case that is not bad, because the data structure itself is protected with a spin_lock. This change shouldn't matter in general; however, on RT it does, because a sleeping lock can't be acquired with preemption disabled. Cc: Joerg Roedel <joro@8bytes.org> Cc: iommu@lists.linux-foundation.org Reported-by: vinadhy@gmail.com Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
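A sketch of the difference, assuming an iova domain with a per-CPU flush queue protected by its own lock:

    /* Before: preemption stays disabled until put_cpu_ptr() -- on
     * PREEMPT_RT the sleeping spin_lock below is then not allowed */
    fq = get_cpu_ptr(iovad->fq);
    spin_lock_irqsave(&fq->lock, flags);
    /* ... queue the deferred flush entry ... */
    spin_unlock_irqrestore(&fq->lock, flags);
    put_cpu_ptr(iovad->fq);

    /* After: no preemption disabling; migrating to another CPU is
     * harmless because fq->lock protects the structure either way */
    fq = raw_cpu_ptr(iovad->fq);
    spin_lock_irqsave(&fq->lock, flags);
    /* ... */
    spin_unlock_irqrestore(&fq->lock, flags);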
-
Submitted by Matthias Brugger
There exist two Mediatek iommu drivers for the two different generations of the device. But both drivers have the same name "mtk-iommu". This breaks the registration of the second driver: Error: Driver 'mtk-iommu' is already registered, aborting... Fix this by changing the name for first generation to "mtk-iommu-v1". Fixes: b17336c5 ("iommu/mediatek: add support for mtk iommu generation one HW") Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
Submitted by Magnus Damm
Tie in r8a7795 features and update the IOMMU_OF_DECLARE compat string to include the updated compat string. Signed-off-by: Magnus Damm <damm+renesas@opensource.se> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-
Submitted by Magnus Damm
Introduce support for a two-bit SL0 bitfield in IMTTBCR by using a separate feature flag. Signed-off-by: Magnus Damm <damm+renesas@opensource.se> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
-