- 06 May 2014, 3 commits
-
-
Submitted by Andy Lutomirski
This code is used during CPU setup, and it isn't strictly speaking related to the 32-bit vdso. It's easier to understand how this works when the code is closer to its callers. This also lets syscall32_cpu_init be static, which might save a trivial amount of kernel text.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/4e466987204e232d7b55a53ff6b9739f12237461.1399317206.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Andy Lutomirski
Rather than using 'vdso_enabled' and an awful #define, just call the parameters vdso32_enabled and vdso64_enabled.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/87913de56bdcbae3d93917938302fc369b05caee.1399317206.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Andy Lutomirski
The early_ioremap code requires that its buffers not span a PMD boundary. The logic for ensuring that only works if the fixmap is aligned, so assert that it's aligned correctly. To make this work reliably, reserve_top_address needs to be adjusted.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/e59a5f4362661f75dd4841fa74e1f2448045e245.1399317206.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 04 May 2014, 5 commits
-
-
Submitted by Catalin Marinas
Since the default DMA ops for arm64 are non-coherent, mark the X-Gene controller explicitly as dma-coherent to avoid additional cache maintenance.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Loc Ho <lho@apm.com>
-
Submitted by Catalin Marinas
Recently, the default DMA ops have been changed to non-coherent, for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers so that the coherent DMA ops (with no cache maintenance) can be set for devices explicitly marked as coherent via the "dma-coherent" DT property.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
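For illustration, a minimal sketch of such a notifier, assuming a platform-bus device whose DT node carries "dma-coherent"; the coherent_dma_ops structure is a placeholder, and the exact helper used to install the ops varies by kernel version, so this is not the code from the patch itself:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/platform_device.h>

extern struct dma_map_ops coherent_dma_ops;	/* placeholder for the arch's coherent ops */

static int dma_coherent_notifier(struct notifier_block *nb,
				 unsigned long action, void *data)
{
	struct device *dev = data;

	if (action != BUS_NOTIFY_ADD_DEVICE)
		return NOTIFY_DONE;

	/* Switch to cache-maintenance-free ops if the DT marks the device coherent. */
	if (dev->of_node && of_find_property(dev->of_node, "dma-coherent", NULL))
		set_dma_ops(dev, &coherent_dma_ops);

	return NOTIFY_OK;
}

static struct notifier_block dma_coherent_nb = {
	.notifier_call = dma_coherent_notifier,
};

static int __init dma_coherent_notifier_init(void)
{
	return bus_register_notifier(&platform_bus_type, &dma_coherent_nb);
}
arch_initcall(dma_coherent_notifier_init);
```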
-
Submitted by Ritesh Harjani
Currently the arm64 dma_ops are coherent by default, which makes the default policy the opposite of arm's. Make the default dma_ops non-coherent (same as arm), as there currently aren't any DMA-capable drivers that assume coherent ops.
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier
Commit d57c33c5 (add generic fixmap.h) added (among other similar things) set_fixmap_io to deal with early ioremap of devices. More recently, commit bf4b558e (arm64: add early_ioremap support) converted the arm64 earlyprintk to use set_fixmap_io. A side effect of this conversion is that my virtual machines have stopped booting when I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel. It turns out that the new earlyprintk code doesn't care at all about sub-page offsets, and just assumes that the earlyprintk device will be page-aligned. Obviously, that doesn't play well with the above example. Further investigation shows that set_fixmap_io uses __set_fixmap instead of __set_fixmap_offset. The fix is to introduce a set_fixmap_offset_io that uses the latter, and to remove the superfluous call to fix_to_virt (which only returns the value that set_fixmap_io has already given us). With this applied, my VMs are back in business. Tested on a Cortex-A57 platform with kvmtool as platform emulation.
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
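A sketch of what such a helper can look like on top of the generic <asm-generic/fixmap.h> macros named above; treating FIXMAP_PAGE_IO as the I/O page protection used by set_fixmap_io is an assumption:

```c
/*
 * Like set_fixmap_io(), but keep the sub-page offset of 'phys' in the
 * returned virtual address, so "0x3f8"-style addresses work.
 */
#define set_fixmap_offset_io(idx, phys) \
	__set_fixmap_offset(idx, phys, FIXMAP_PAGE_IO)
```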
-
Submitted by Dave Anderson
Fix the arm64 kern_addr_valid() function so it recognizes virtual addresses in the kernel logical memory map. As written, the function fails because it does not check whether addresses in that region are mapped at the pmd level to 2MB or 512MB pages; it continues the page table walk to the pte level and passes a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels.
Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
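An illustrative fragment of the kind of check described, assuming the usual arm64 page-table accessors; the wrapper function is hypothetical and only shows where the pmd-level test fits:

```c
#include <linux/mm.h>

/* Hypothetical wrapper: where a pmd-level block-mapping check belongs. */
static int pmd_level_addr_valid(pud_t *pud, unsigned long addr)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte;

	if (pmd_none(*pmd))
		return 0;
	if (pmd_sect(*pmd))			/* 2MB/512MB section mapping */
		return pfn_valid(pmd_pfn(*pmd));

	pte = pte_offset_kernel(pmd, addr);
	if (pte_none(*pte))
		return 0;
	return pfn_valid(pte_pfn(*pte));
}
```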
-
- 02 May 2014, 3 commits
-
-
Submitted by Helge Deller
Signed-off-by: Helge Deller <deller@gmx.de>
-
Submitted by John David Anglin
Only a couple of architectures override _STK_LIM_MAX to a non-infinity value. This changes the stack allocation semantics in subtle ways. For example, GNU make changes its stack allocation to the hard maximum defined by _STK_LIM_MAX. As a result, threads executed by processes running under make are allocated a stack size of _STK_LIM_MAX rather than a sensible default value. This causes various thread stress tests to fail when they can't muster more than about 50 threads. This change implements the default behavior used by the majority of architectures.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Reviewed-by: Carlos O'Donell <carlos@systemhalted.org>
Cc: stable@vger.kernel.org # 3.14
Signed-off-by: Helge Deller <deller@gmx.de>
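For reference, the generic default that the majority of architectures pick up is simply an infinite hard limit, along these lines:

```c
/* Generic default: an "infinite" hard stack limit, as most architectures use. */
#define _STK_LIM_MAX	RLIM_INFINITY
```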
-
Submitted by Vineet Gupta
Commit 93ea02bb ("arch: Clean up asm/barrier.h implementations") wired up the generic barrier.h for hexagon, but failed to delete the existing file.
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Compile-tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 30 April 2014, 1 commit
-
-
Submitted by Vineet Gupta
There was a very small race window where resume to kernel mode from an exception path (or pure kernel mode, which is true for most ARC exceptions anyway) did not disable interrupts in restore_regs, clobbering the exception regs. Anton found the culprit call flow (after many sleepless nights):
| 1. We got a Trap from user land.
| 2. Started to service it.
| 3. While doing some stuff on user-land memory (I think it is padzero()), we got a DataTlbMiss.
| 4. On return from it we take the "resume_kernel_mode" path.
| 5. NEED_RESCHED is not set, so we go to the "return from exception" path in restore_regs.
| 6. There seems to be an IRQ happening.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: <stable@vger.kernel.org> #3.10, 3.12, 3.13, 3.14
Cc: Anton Kolesov <Anton.Kolesov@synopsys.com>
Cc: Francois Bedard <Francois.Bedard@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 April 2014, 28 commits
-
-
Submitted by Mark Salter
The kvm/mmu code shared by arm and arm64 uses kalloc() to allocate a bounce page (if hypervisor init code crosses a page boundary) and hypervisor PGDs. The problem is that kalloc() does not guarantee the proper alignment. In the case of the bounce page, the page-sized buffer allocated may also cross a page boundary, negating its purpose and leading to a hang during kvm initialization. Likewise, the PGDs allocated may not meet the minimum alignment requirements of the underlying MMU. This patch uses __get_free_page() to guarantee the worst-case alignment needs of the bounce page and PGDs on both arm and arm64.
Cc: <stable@vger.kernel.org> # 3.10+
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
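A hedged illustration of the allocation switch; the variable and function names are hypothetical:

```c
#include <linux/errno.h>
#include <linux/gfp.h>

static unsigned long hyp_bounce_page;	/* name hypothetical */

static int alloc_hyp_bounce_page(void)
{
	/* __get_free_page() is page aligned by construction; a kmalloc-style
	 * allocation of PAGE_SIZE bytes carries no such guarantee. */
	hyp_bounce_page = __get_free_page(GFP_KERNEL);
	if (!hyp_bounce_page)
		return -ENOMEM;
	return 0;
}
```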
-
Submitted by Thomas Gleixner
On x86 the allocation of irq descriptors may allocate interrupts which are in the range of the GSI interrupts. That's wrong because those interrupts are hardwired and we don't have the irq domain translation like PPC. So one of these interrupts can be hooked up later to one of the devices which are hard wired to it, and the io_apic init code for that particular interrupt line happily reuses that descriptor with a completely different configuration, so hell breaks loose. Inside x86 we allocate dynamic interrupts from above nr_gsi_irqs, except for a few usage sites which have not yet blown up in our face for whatever reason. But for drivers which need an irq range, like the GPIO drivers, we have no limit in place and we don't want to expose such a detail to a driver. To cure this, introduce a function which an architecture can implement to impose a lower bound on dynamic interrupt allocations. Implement it for x86 and set the lower bound to nr_gsi_irqs, which is the end of the hardwired interrupt space, so all dynamic allocations happen above it. That not only allows the GPIO driver to work sanely, it also protects the bogus callsites of create_irq_nr() in hpet, uv, irq_remapping and htirq code. They need to be cleaned up as well, but that's a separate issue.
Reported-by: Jin Yao <yao.jin@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Mathias Nyman <mathias.nyman@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Krogerus Heikki <heikki.krogerus@intel.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1404241617360.28206@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
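The commit message does not spell out the hook, but a minimal sketch of the idea is a weak generic function that architectures override; the function name and the nr_gsi_irqs bound below are taken from the text and should be treated as illustrative:

```c
#include <linux/irq.h>

/*
 * Weak default: no lower bound on dynamically allocated interrupts.
 * An architecture like x86 overrides this to return its hardwired GSI
 * count (nr_gsi_irqs in the text above), so dynamic allocations can
 * never collide with hardwired interrupt lines.
 */
unsigned int __weak arch_dynirq_lower_bound(unsigned int from)
{
	return from;
}
```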
-
Submitted by Bandan Das
We track shadow vmcs fields through two static lists, one for read-only and another for r/w fields. However, with the addition of new vmcs fields, not all fields may be supported on all hosts. If so, copy_vmcs12_to_shadow() trying to vmwrite on unsupported hosts will result in a vmwrite error. For example, commit 36be0b9d introduced GUEST_BNDCFGS, which is not supported by all processors. Filter out host-unsupported fields before letting guests use the shadow vmcs.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
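A hedged sketch of such a filtering pass; the list and helper names (shadow_read_write_fields, max_shadow_read_write_fields, vmx_mpx_supported) are assumptions about the surrounding vmx code, not verified against this patch:

```c
/* Walk the static field list once at init time and drop entries the host
 * cannot vmwrite; GUEST_BNDCFGS is the example from the text. */
static void init_shadow_vmcs_fields(void)
{
	int i, j;

	for (i = j = 0; i < max_shadow_read_write_fields; i++) {
		switch (shadow_read_write_fields[i]) {
		case GUEST_BNDCFGS:
			if (!vmx_mpx_supported())
				continue;	/* host can't shadow this field */
			break;
		default:
			break;
		}

		if (j < i)	/* compact the list over dropped entries */
			shadow_read_write_fields[j] = shadow_read_write_fields[i];
		j++;
	}
	max_shadow_read_write_fields = j;
}
```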
-
Submitted by Oren Twaig
Correct IRQ routing in case a vSMP box is detected but the Interrupt Routing Comply (IRC) value is set to "comply", which leads to incorrect IRQ routing. Before the patch: when a vSMP box was detected and IRC was set to "comply", users (and the kernel) couldn't effectively set the destination of the IRQs. This is because the hook inside vsmp_64.c always set up all CPUs as the IRQ destination, using cpumask_setall() as the return value for the IRQ allocation mask. Later, this overridden mask caused the kernel to set the IRQ destination to the lowest online CPU in the mask (usually CPU0). After the patch: when the IRC is set to "comply", users (and the kernel) can control the destination of the IRQs, as we no longer change the default "apic->vector_allocation_domain".
Signed-off-by: Oren Twaig <oren@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Link: http://lkml.kernel.org/r/1398669697-2123-1-git-send-email-oren@scalemp.com
[ Minor readability edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Alistair Popple
This patch fixes this section mismatch:
WARNING: vmlinux.o(.text+0x1efc4): Section mismatch in reference from the function apm821xx_pciex_init_port_hw() to the function .init.text:ppc4xx_pciex_wait_on_sdr.isra.9()
The function apm821xx_pciex_init_port_hw() references the function __init ppc4xx_pciex_wait_on_sdr.isra.9(). This is often because apm821xx_pciex_init_port_hw lacks a __init annotation or the annotation of ppc4xx_pciex_wait_on_sdr.isra.9 is wrong.
apm821xx_pciex_init_port_hw is only referenced by a struct in __initdata, so it should be safe to add __init to apm821xx_pciex_init_port_hw.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Preeti U Murthy
When the guest cedes the vcpu, or the vcpu has no guest to run, it naps. Clear the runlatch bit of the vcpu before napping to indicate an idle CPU.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Preeti U Murthy
The secondary threads in the core are kept offline before launching guests in kvm on powerpc ("371fefd6: KVM: PPC: Allow book3s_hv guests to use SMT processor modes"), hence their runlatch bits are cleared. When the secondary threads are called in to start a guest, their runlatch bits need to be set to indicate that they are busy. The primary thread already has its runlatch bit set, but there is no harm in setting it again. Hence set the runlatch bit for all threads before they start the guest.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Preeti U Murthy
Up until now we have been setting the runlatch bit for a busy CPU and clearing it when the CPU enters an idle state. The runlatch bit has thus been consistent with the utilization of a CPU as long as the CPU is online. However, when a CPU is hotplugged out the runlatch bit is not cleared. It needs to be cleared to indicate an unused CPU. Hence this patch clears the runlatch bit for an offline CPU just before it enters an idle state and sets it immediately after it exits the idle state.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
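A minimal sketch of the runlatch pattern behind this and the two commits above, using the existing ppc64_runlatch_off()/ppc64_runlatch_on() helpers; the idle-entry call and the wrapper name are placeholders:

```c
#include <asm/runlatch.h>

extern void enter_low_power_state(void);	/* placeholder: nap/sleep/cede */

static void offline_cpu_idle(void)		/* name illustrative */
{
	ppc64_runlatch_off();			/* CPU is about to sit unused */
	enter_low_power_state();
	ppc64_runlatch_on();			/* back to useful work */
}
```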
-
Submitted by Li Zhong
While testing memory hot-remove, I found the following deadlock:
Process #1141 is drmgr, trying to remove some memory, i.e. memory499. It holds the memory_hotplug_mutex, and blocks when trying to remove the file "online" under dir memory499, in kernfs_drain(), at
wait_event(root->deactivate_waitq, atomic_read(&kn->active) == KN_DEACTIVATED_BIAS);
Process #1120 is trying to online memory499 by
echo 1 > memory499/online
In .kernfs_fop_write, it uses kernfs_get_active() to increase &kn->active, thus blocking process #1141, while it is itself blocked later when trying to acquire memory_hotplug_mutex, which is held by process #1141.
The backtraces of both processes are shown below:
[<c000000001b18600>] 0xc000000001b18600
[<c000000000015044>] .__switch_to+0x144/0x200
[<c000000000263ca4>] .online_pages+0x74/0x7b0
[<c00000000055b40c>] .memory_subsys_online+0x9c/0x150
[<c00000000053cbe8>] .device_online+0xb8/0x120
[<c00000000053cd04>] .online_store+0xb4/0xc0
[<c000000000538ce4>] .dev_attr_store+0x64/0xa0
[<c00000000030f4ec>] .sysfs_kf_write+0x7c/0xb0
[<c00000000030e574>] .kernfs_fop_write+0x154/0x1e0
[<c000000000268450>] .vfs_write+0xe0/0x260
[<c000000000269144>] .SyS_write+0x64/0x110
[<c000000000009ffc>] syscall_exit+0x0/0x7c
[<c000000001b18600>] 0xc000000001b18600
[<c000000000015044>] .__switch_to+0x144/0x200
[<c00000000030be14>] .__kernfs_remove+0x204/0x300
[<c00000000030d428>] .kernfs_remove_by_name_ns+0x68/0xf0
[<c00000000030fb38>] .sysfs_remove_file_ns+0x38/0x60
[<c000000000539354>] .device_remove_attrs+0x54/0xc0
[<c000000000539fd8>] .device_del+0x158/0x250
[<c00000000053a104>] .device_unregister+0x34/0xa0
[<c00000000055bc14>] .unregister_memory_section+0x164/0x170
[<c00000000024ee18>] .__remove_pages+0x108/0x4c0
[<c00000000004b590>] .arch_remove_memory+0x60/0xc0
[<c00000000026446c>] .remove_memory+0x8c/0xe0
[<c00000000007f9f4>] .pseries_remove_memblock+0xd4/0x160
[<c00000000007fcfc>] .pseries_memory_notifier+0x27c/0x290
[<c0000000008ae6cc>] .notifier_call_chain+0x8c/0x100
[<c0000000000d858c>] .__blocking_notifier_call_chain+0x6c/0xe0
[<c00000000071ddec>] .of_property_notify+0x7c/0xc0
[<c00000000071ed3c>] .of_update_property+0x3c/0x1b0
[<c0000000000756cc>] .ofdt_write+0x3dc/0x740
[<c0000000002f60fc>] .proc_reg_write+0xac/0x110
[<c000000000268450>] .vfs_write+0xe0/0x260
[<c000000000269144>] .SyS_write+0x64/0x110
[<c000000000009ffc>] syscall_exit+0x0/0x7c
This patch uses lock_device_hotplug() to protect remove_memory() called in pseries_remove_memblock(), as also stated in the comment before remove_memory():
 * NOTE: The caller must call lock_device_hotplug() to serialize hotplug
 * and online/offline operations before this call, as required by
 * try_offline_node().
 */
void __ref remove_memory(int nid, u64 start, u64 size)
With this lock held, the other process (#1120 above) trying to online the memory block will retry the system call when calling lock_device_hotplug_sysfs(), and finally fail with a "No such device" error.
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
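A hedged sketch of the resulting call pattern in the hot-remove path, assuming the remove_memory() prototype quoted above; the wrapper name is illustrative:

```c
#include <linux/device.h>
#include <linux/memory_hotplug.h>

static int pseries_remove_memblock_sketch(int nid, u64 base, u64 size)
{
	lock_device_hotplug();		/* serialize against online/offline via sysfs */
	remove_memory(nid, base, size);
	unlock_device_hotplug();
	return 0;
}
```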
-
Submitted by Anton Blanchard
module_init should return 0 or a negative errno.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
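A hedged illustration of the rule; the init function and the condition it checks are hypothetical:

```c
#include <linux/errno.h>
#include <linux/module.h>

static bool hardware_present;			/* placeholder for a real probe result */

static int __init example_init(void)		/* name hypothetical */
{
	if (!hardware_present)
		return -ENODEV;			/* negative errno on failure */
	return 0;				/* 0 on success, never a positive value */
}
module_init(example_init);
```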
-
Submitted by Anton Blanchard
Bump the boot wrapper BOOT_COMMAND_LINE_SIZE to match the kernel.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
I've had a report that the current limit is too small for an automated network-based installer. Bump it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
We have two definitions of COMMAND_LINE_SIZE, one for the kernel and one for the boot wrapper. I assume this is so the boot wrapper can be self-sufficient and not rely on kernel headers. Having two defines with the same name is confusing; I just updated the wrong one when trying to bump it. Make the boot wrapper define unique by calling it BOOT_COMMAND_LINE_SIZE.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Cody P Schafer
The catalog version number was changed from a be32 (with 32 preceding bits of padding) to a be64; update the code to treat it as a be64.
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
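A hedged sketch of the new layout and accessor; the struct and field names are illustrative, not taken from the patch:

```c
#include <asm/byteorder.h>
#include <linux/types.h>

struct hv_catalog_page_0 {			/* struct/field names illustrative */
	__be64 version;				/* was: 32 bits of padding + __be32 */
};

static u64 catalog_version(struct hv_catalog_page_0 *page0)
{
	return be64_to_cpu(page0->version);
}
```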
-
Submitted by Cody P Schafer
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Cody P Schafer
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Cody P Schafer
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Cody P Schafer
Fixup for "powerpc/perf: Add support for the hv gpci (get performance counter info) interface": makes the "not enabled" message less awful (and hidden unless debugging).
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Cody P Schafer
Fixup for "powerpc/perf: Add support for the hv 24x7 interface": makes the "not enabled" message less awful (and hides it in most cases).
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Aneesh Kumar K.V
The if-condition check was based on a draft ISA doc; remove it.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
We have two copies of code that create an OPAL sg list. Consolidate these into a common set of helpers and fix the endian issues. The flash interface embedded a version number in the num_entries field, whereas the dump interface did not. Since versioning wasn't added to the flash interface and it is impossible to add it in a backwards-compatible way, just remove it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Fix little endian issues with the OPAL error log code.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
The bitmap in opal_poll_events and opal_handle_interrupt is big endian, so we need to byteswap it on little endian builds.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
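A hedged sketch of the byteswap, assuming opal_poll_events() takes a pointer to a big-endian mask as in this series; the wrapper name is illustrative:

```c
#include <asm/opal.h>

static u64 poll_opal_events(void)
{
	__be64 mask = 0;

	opal_poll_events(&mask);		/* OPAL fills in a big-endian bitmap */
	return be64_to_cpu(mask);		/* usable on LE and BE kernels alike */
}
```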
-
Submitted by Anton Blanchard
We had some duplication of the internal OPAL functions.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Using size_t in our APIs is asking for trouble, especially when some OPAL calls use size_t pointers.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Wei Yang
On the PowerNV platform we are holding an unnecessary refcount on a pci_dev, which means the pci_dev is not destroyed when hotplugging a PCI device. This patch releases the unnecessary refcount.
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
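A hedged, generic illustration of the refcount rule being applied; the function is hypothetical and only shows the get/put pairing, not the platform code touched by this patch:

```c
#include <linux/pci.h>

static void example_use_pdev(struct pci_bus *bus, unsigned int devfn)
{
	struct pci_dev *pdev = pci_get_slot(bus, devfn);	/* takes a reference */

	if (!pdev)
		return;
	/* ... use pdev ... */
	pci_dev_put(pdev);	/* drop it, or hot-unplug can never free the device */
}
```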
-
Submitted by Wei Yang
During an EEH hotplug event, iommu_add_device() is invoked three times, and two of them trigger a warning or an error. The three call paths are:
pci_device_add
  ...
    set_iommu_table_base_and_group  <- 1st time, fails
device_add
  ...
    tce_iommu_bus_notifier          <- 2nd time, succeeds
pcibios_add_pci_devices
  ...
    pcibios_setup_bus_devices       <- 3rd time, re-attach
The first time fails, since dev->kobj->sd is not initialized; dev->kobj->sd is initialized in device_add(). The third time's warning is triggered by the re-attach of the iommu_group. After applying this patch, the error
iommu_tce: 0003:05:00.0 has not been added, ret=-14
and the warning
[ 204.123609] ------------[ cut here ]------------
[ 204.123645] WARNING: at arch/powerpc/kernel/iommu.c:1125
[ 204.123680] Modules linked in: xt_CHECKSUM nf_conntrack_netbios_ns nf_conntrack_broadcast ipt_MASQUERADE ip6t_REJECT bnep bluetooth 6lowpan_iphc rfkill xt_conntrack ebtable_nat ebtable_broute bridge stp llc mlx4_ib ib_sa ib_mad ib_core ib_addr ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw bnx2x tg3 mlx4_core nfsd ptp mdio ses libcrc32c nfs_acl enclosure be2net pps_core shpchp lockd kvm uinput sunrpc binfmt_misc lpfc scsi_transport_fc ipr scsi_tgt
[ 204.124356] CPU: 18 PID: 650 Comm: eehd Not tainted 3.14.0-rc5yw+ #102
[ 204.124400] task: c0000027ed485670 ti: c0000027ed50c000 task.ti: c0000027ed50c000
[ 204.124453] NIP: c00000000003cf80 LR: c00000000006c648 CTR: c00000000006c5c0
[ 204.124506] REGS: c0000027ed50f440 TRAP: 0700 Not tainted (3.14.0-rc5yw+)
[ 204.124558] MSR: 9000000000029032 <SF,HV,EE,ME,IR,DR,RI> CR: 88008084 XER: 20000000
[ 204.124682] CFAR: c00000000006c644 SOFTE: 1
GPR00: c00000000006c648 c0000027ed50f6c0 c000000001398380 c0000027ec260300
GPR04: c0000027ea92c000 c00000000006ad00 c0000000016e41b0 0000000000000110
GPR08: c0000000012cd4c0 0000000000000001 c0000027ec2602ff 0000000000000062
GPR12: 0000000028008084 c00000000fdca200 c0000000000d1d90 c0000027ec281a80
GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000001
GPR24: 000000005342697b 0000000000002906 c000001fe6ac9800 c000001fe6ac9800
GPR28: 0000000000000000 c0000000016e3a80 c0000027ea92c090 c0000027ea92c000
[ 204.125353] NIP [c00000000003cf80] .iommu_add_device+0x30/0x1f0
[ 204.125399] LR [c00000000006c648] .pnv_pci_ioda_dma_dev_setup+0x88/0xb0
[ 204.125443] Call Trace:
[ 204.125464] [c0000027ed50f6c0] [c0000027ed50f750] 0xc0000027ed50f750 (unreliable)
[ 204.125526] [c0000027ed50f750] [c00000000006c648] .pnv_pci_ioda_dma_dev_setup+0x88/0xb0
[ 204.125588] [c0000027ed50f7d0] [c000000000069cc8] .pnv_pci_dma_dev_setup+0x78/0x340
[ 204.125650] [c0000027ed50f870] [c000000000044408] .pcibios_setup_device+0x88/0x2f0
[ 204.125712] [c0000027ed50f940] [c000000000046040] .pcibios_setup_bus_devices+0x60/0xd0
[ 204.125774] [c0000027ed50f9c0] [c000000000043acc] .pcibios_add_pci_devices+0xdc/0x1c0
[ 204.125837] [c0000027ed50fa50] [c00000000086f970] .eeh_reset_device+0x36c/0x4f0
[ 204.125939] [c0000027ed50fb20] [c00000000003a2d8] .eeh_handle_normal_event+0x448/0x480
[ 204.126068] [c0000027ed50fbc0] [c00000000003a35c] .eeh_handle_event+0x4c/0x340
[ 204.126192] [c0000027ed50fc80] [c00000000003a74c] .eeh_event_handler+0xfc/0x1b0
[ 204.126319] [c0000027ed50fd30] [c0000000000d1ea0] .kthread+0x110/0x130
[ 204.126430] [c0000027ed50fe30] [c00000000000a460] .ret_from_kernel_thread+0x5c/0x7c
[ 204.126556] Instruction dump:
[ 204.126610] 7c0802a6 fba1ffe8 fbc1fff0 fbe1fff8 f8010010 f821ff71 7c7e1b78 60000000
[ 204.126787] 60000000 e87e0298 3143ffff 7d2a1910 <0b090000> 2fa90000 40de00c8 ebfe0218
[ 204.126966] ---[ end trace 6e7aefd80add2973 ]---
are cleared. This patch removes the iommu_add_device() call in pnv_pci_ioda_dma_dev_setup(), which reverts part of the change in commit d905c5df ("PPC: POWERNV: move iommu_add_device earlier").
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-