- 07 January 2020, 3 commits
-
-
Committed by Julia Lawall
Use resource_size rather than a verbose computation on the end and start fields. The semantic patch that makes these changes is as follows: (http://coccinelle.lip6.fr/)

<smpl>
@@
struct resource ptr;
@@
- (ptr.end - ptr.start + 1)
+ resource_size(&ptr)
</smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1577900990-8588-11-git-send-email-Julia.Lawall@inria.fr
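For context, a minimal sketch of what the conversion amounts to; the wrapper function is illustrative, resource_size() itself comes from include/linux/ioport.h:

```c
#include <linux/ioport.h>

/* illustrative helper: resource_size() expands to res->end - res->start + 1 */
static resource_size_t example_region_size(const struct resource *res)
{
	return resource_size(res);
}
```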
-
Committed by Julia Lawall
Use resource_size rather than a verbose computation on the end and start fields. The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

<smpl>
@@
struct resource ptr;
@@
- (ptr.end - ptr.start + 1)
+ resource_size(&ptr)
</smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Acked-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1577900990-8588-6-git-send-email-Julia.Lawall@inria.fr
-
Committed by Julia Lawall
The mpic_ipi_chip and mpic_irq_ht_chip structures are only copied into other structures, so make them const. The opportunity for this change was found using Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1577864614-5543-10-git-send-email-Julia.Lawall@inria.fr
-
- 06 January 2020, 13 commits
-
-
Committed by Sebastian Andrzej Siewior
With CONFIG_QUICC_ENGINE enabled and CONFIG_UCC_GETH + CONFIG_SERIAL_QE disabled we have an unused variable (np). The code won't compile with -Werror. Move the np variable to the block where it is actually used.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191219151602.1908411-1-bigeasy@linutronix.de
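A hedged sketch of the pattern (the function and node names are illustrative, not the actual board file): the device-tree pointer is declared inside the only block that uses it, so a configuration that compiles the block out no longer leaves an unused variable behind.

```c
static void __init example_qe_setup(void)
{
#if IS_ENABLED(CONFIG_UCC_GETH) || IS_ENABLED(CONFIG_SERIAL_QE)
	struct device_node *np;	/* scoped to its only user */

	np = of_find_node_by_name(NULL, "par_io");
	if (np) {
		/* ... configure the QE pins ... */
		of_node_put(np);
	}
#endif
}
```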
-
Committed by Alexey Kardashevskiy
H_PUT_TCE_INDIRECT uses a shared page to send up to 512 TCE to a hypervisor in a single hypercall. This does not work for secure VMs as the page needs to be shared or the VM should use H_PUT_TCE instead. This disables H_PUT_TCE_INDIRECT by clearing the FW_FEATURE_PUT_TCE_IND feature bit so SVMs will map TCEs using H_PUT_TCE. This is not a part of init_svm() as it is called too late after FW patching is done and may result in a warning like this:

[    3.727716] Firmware features changed after feature patching!
[    3.727965] WARNING: CPU: 0 PID: 1 at (...)arch/powerpc/lib/feature-fixups.c:466 check_features+0xa4/0xc0

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Tested-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191216041924.42318-5-aik@ozlabs.ru
-
Committed by Alexey Kardashevskiy
H_PUT_TCE_INDIRECT allows packing up to 512 TCE updates into a single hypercall; H_STUFF_TCE can clear lots in a single hypercall too. However, unlike H_STUFF_TCE (which writes the same TCE to all entries), H_PUT_TCE_INDIRECT uses a 4K page with new TCEs. In a secure VM environment this means sharing a secure VM page with a hypervisor which we would rather avoid. This splits the FW_FEATURE_MULTITCE feature into FW_FEATURE_PUT_TCE_IND and FW_FEATURE_STUFF_TCE. "hcall-multi-tce" in the "/rtas/ibm,hypertas-functions" device tree property sets both; the "multitce=off" kernel command line parameter disables both. This should not cause behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Tested-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191216041924.42318-4-aik@ozlabs.ru
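A sketch of how callers are expected to use the split feature bits; firmware_has_feature() is the usual powerpc helper, while the surrounding function and its arguments are illustrative:

```c
/* sketch only: helper name and arguments are illustrative */
static void example_map_tces(unsigned long liobn, long npages)
{
	if (firmware_has_feature(FW_FEATURE_PUT_TCE_IND)) {
		/* batch up to 512 TCEs per H_PUT_TCE_INDIRECT via a shared page */
	} else {
		/* secure VMs (or multitce=off): one H_PUT_TCE hcall per TCE */
	}
}
```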
-
Committed by Alexey Kardashevskiy
By default a pseries guest supports a H_PUT_TCE hypercall which maps a single IOMMU page in a DMA window. Additionally the hypervisor may support H_PUT_TCE_INDIRECT/H_STUFF_TCE which update multiple TCEs at once; this is advertised via the device tree /rtas/ibm,hypertas-functions property which Linux converts to FW_FEATURE_MULTITCE. FW_FEATURE_MULTITCE is checked when dma_iommu_ops is used; however the code managing the huge DMA window (DDW) ignores it and calls H_PUT_TCE_INDIRECT even if it is explicitly disabled via the "multitce=off" kernel command line parameter. This adds FW_FEATURE_MULTITCE checking to the DDW code path. This changes tce_build_pSeriesLP to take liobn and page size as the huge window does not have an iommu_table descriptor, which is usually the place to store these numbers.

Fixes: 4e8b0cf4 ("powerpc/pseries: Add support for dynamic dma windows")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Tested-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191216041924.42318-3-aik@ozlabs.ru
-
Committed by Ram Pai
This reverts commit edea902c. At the time the change allowed direct DMA ops for secure VMs; however since then we switched on using SWIOTLB backed with IOMMU (direct mapping) and to make this work, we need dma_iommu_ops which handles all cases including TCE mapping I/O pages in the presence of an IOMMU.

Fixes: edea902c ("powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests")
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[aik: added "revert" and "fixes:"]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Tested-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191216041924.42318-2-aik@ozlabs.ru
-
Committed by Michael Ellerman
Commit d4e58e59 ("powerpc/powernv: Enable POWER8 doorbell IPIs") added a select of PPC_DOORBELL to PPC_PSERIES, but it already had a select of PPC_DOORBELL. One is enough.

Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191219125840.32592-1-mpe@ellerman.id.au
-
Committed by Peter Ujfalusi
dma_request_slave_channel() is a wrapper on top of dma_request_chan() eating up the error code. By using dma_request_chan() directly the driver can support deferred probing against DMA.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191217073730.21249-1-peter.ujfalusi@ti.com
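A hedged sketch of the dma_request_chan() pattern the commit refers to; the channel name and error handling are illustrative:

```c
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan)) {
		/* unlike dma_request_slave_channel(), the error code survives,
		 * so probe deferral can be propagated to the driver core */
		if (PTR_ERR(chan) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		return PTR_ERR(chan);
	}
```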
-
Committed by Oliver O'Halloran
With the previous patch applied pcibios_setup_device() will always be run when pcibios_bus_add_device() is called. There are several code paths where pcibios_setup_bus_device() is still called (the PowerPC specific PCI hotplug support is one) so with just the previous patch applied the setup can be run multiple times on a device, once before the device is added to the bus and once after. There's no need to run the setup in the early case any more so just remove it entirely.

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191028085424.12006-3-oohall@gmail.com
-
Committed by Shawn Anastasio
Move PCI device setup from pcibios_add_device() and pcibios_fixup_bus() to pcibios_bus_add_device(). This ensures that platform-specific DMA and IOMMU setup occurs after the device has been registered in sysfs, which is a requirement for IOMMU group assignment to work. This fixes IOMMU group assignment for hotplugged devices on pseries, where the existing behavior results in IOMMU assignment before registration. Thanks to Lukas Wunner <lukas@wunner.de> for the suggestion.

Signed-off-by: Shawn Anastasio <shawn@anastas.io>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191028085424.12006-2-oohall@gmail.com
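Based on the description above and in the previous entry, the hook ends up shaped roughly like the sketch below; the exact powerpc implementation may do more than this:

```c
void pcibios_bus_add_device(struct pci_dev *pdev)
{
	/* runs after the device is registered in sysfs, so the DMA/IOMMU
	 * setup inside can rely on IOMMU group assignment working */
	pcibios_setup_device(pdev);
}
```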
-
Committed by Oliver O'Halloran
On pseries there is a bug with adding hotplugged devices to an IOMMU group. For a number of dumb reasons fixing that bug first requires re-working how VFs are configured on PowerNV. For background, on PowerNV we use the pcibios_sriov_enable() hook to do two things:

1. Create a pci_dn structure for each of the VFs, and
2. Configure the PHB's internal BARs so the MMIO range for each VF maps to a unique PE.

Roughly speaking a PE is the hardware counterpart to a Linux IOMMU group since all the devices in a PE share the same IOMMU table. A PE also defines the set of devices that should be isolated in response to a PCI error (i.e. bad DMA, UR/CA, AER events, etc). When isolated, all MMIO and DMA traffic to and from devices in the PE is blocked by the root complex until the PE is recovered by the OS.

The requirement to block MMIO causes a giant headache because the P8 PHB generally uses a fixed mapping between MMIO addresses and PEs. As a result we need to delay configuring the IOMMU groups for devices until after MMIO resources are assigned. For physical devices (i.e. non-VFs) the PE assignment is done in pcibios_setup_bridge() which is called immediately after the MMIO resources for downstream devices (and the bridge's windows) are assigned. For VFs the setup is more complicated because:

a) pcibios_setup_bridge() is not called again when VFs are activated, and
b) The pci_dev for VFs are created by generic code which runs after pcibios_sriov_enable() is called.

The work around for this is a two step process:

1. A fixup in pcibios_add_device() is used to initialise the cached pe_number in pci_dn, then
2. A bus notifier then adds the device to the IOMMU group for the PE specified in pci_dn->pe_number.

A side effect of fixing the pseries bug mentioned in the first paragraph is moving the fixup out of pcibios_add_device() and into pcibios_bus_add_device(), which is called much later. This results in step 2 failing because pci_dn->pe_number won't be initialised when the bus notifier is run.

We can fix this by removing the need for the fixup. The PE for a VF is known before the VF is even scanned so we can initialise pci_dn->pe_number in pcibios_sriov_enable() instead. Unfortunately, moving the initialisation causes two problems:

1. We trip the WARN_ON() in the current fixup code, and
2. The EEH core clears pdn->pe_number when recovering a VF and relies on the fixup to correctly re-set it.

The only justification for either of these is a comment in eeh_rmv_device() suggesting that pdn->pe_number *must* be set to IODA_INVALID_PE in order for the VF to be scanned. However, this comment appears to have no basis in reality. Both bugs can be fixed by just deleting the code.

Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191028085424.12006-1-oohall@gmail.com
-
Committed by Aneesh Kumar K.V
Resource struct p->res is assigned later. Avoid using %pR before the resource struct is assigned.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191202063855.154321-1-aneesh.kumar@linux.ibm.com
-
Committed by Nathan Chancellor
Clang warns:

../arch/powerpc/boot/4xx.c:231:3: warning: misleading indentation; statement is not part of the previous 'else' [-Wmisleading-indentation]
        val = SDRAM0_READ(DDR0_42);
        ^
../arch/powerpc/boot/4xx.c:227:2: note: previous statement is here
        else
        ^

This is because there is a space at the beginning of this line; remove it so that the indentation is consistent according to the Linux kernel coding style and clang no longer warns.

Fixes: d23f5099 ("[POWERPC] 4xx: Adds decoding of 440SPE memory size to boot wrapper library")
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://github.com/ClangBuiltLinux/linux/issues/780
Link: https://lore.kernel.org/r/20191209200338.12546-1-natechancellor@gmail.com
-
Committed by Jordan Niethe
In entry_64.S there are places that open code saving and restoring the non-volatile registers. There are already macros for doing this so use them.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191211023552.16480-1-jniethe5@gmail.com
-
- 05 January 2020, 1 commit
-
-
Committed by David Hildenbrand
We currently try to shrink a single zone when removing memory. We use the zone of the first page of the memory we are removing. If that memmap was never initialized (e.g., memory was never onlined), we will read garbage and can trigger kernel BUGs (due to a stale pointer):

BUG: unable to handle page fault for address: 000000000000353d
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP PTI
CPU: 1 PID: 7 Comm: kworker/u8:0 Not tainted 5.3.0-rc5-next-20190820+ #317
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.4
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
RIP: 0010:clear_zone_contiguous+0x5/0x10
Code: 48 89 c6 48 89 c3 e8 2a fe ff ff 48 85 c0 75 cf 5b 5d c3 c6 85 fd 05 00 00 01 5b 5d c3 0f 1f 840
RSP: 0018:ffffad2400043c98 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000200000000 RCX: 0000000000000000
RDX: 0000000000200000 RSI: 0000000000140000 RDI: 0000000000002f40
RBP: 0000000140000000 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000140000
R13: 0000000000140000 R14: 0000000000002f40 R15: ffff9e3e7aff3680
FS:  0000000000000000(0000) GS:ffff9e3e7bb00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000000353d CR3: 0000000058610000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __remove_pages+0x4b/0x640
 arch_remove_memory+0x63/0x8d
 try_remove_memory+0xdb/0x130
 __remove_memory+0xa/0x11
 acpi_memory_device_remove+0x70/0x100
 acpi_bus_trim+0x55/0x90
 acpi_device_hotplug+0x227/0x3a0
 acpi_hotplug_work_fn+0x1a/0x30
 process_one_work+0x221/0x550
 worker_thread+0x50/0x3b0
 kthread+0x105/0x140
 ret_from_fork+0x3a/0x50
Modules linked in:
CR2: 000000000000353d

Instead, shrink the zones when offlining memory or when onlining failed. Introduce and use remove_pfn_range_from_zone() for that. We now properly shrink the zones, even if we have DIMMs whereby

- Some memory blocks fall into no zone (never onlined)
- Some memory blocks fall into multiple zones (offlined+re-onlined)
- Multiple memory blocks that fall into different zones

Drop the zone parameter (with a potential dubious value) from __remove_pages() and __remove_section().

Link: http://lkml.kernel.org/r/20191006085646.5768-6-david@redhat.com
Fixes: f1dd2cd1 ("mm, memory_hotplug: do not associate hotadded memory to zones until online") [visible after d0dc12e8]
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: <stable@vger.kernel.org> [5.0+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 30 December 2019, 1 commit
-
-
Committed by Jason A. Donenfeld
Recently, the spinlock implementation grew a static key optimization, but the jump_label.h header include was left out, leading to build errors:

linux/arch/powerpc/include/asm/spinlock.h:44:7: error: implicit declaration of function ‘static_branch_unlikely’
   44 |  if (!static_branch_unlikely(&shared_processor))

This commit adds the missing header.

mpe: The build break is only seen with CONFIG_JUMP_LABEL=n.

Fixes: 656c21d6 ("powerpc/shared: Use static key to detect shared processor")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Srikar Dronamraju <srikar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191223133147.129983-1-Jason@zx2c4.com
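A minimal sketch of the fixed header, assuming the shared_processor static key from the earlier commit; the yield helper body is elided and its name is illustrative:

```c
/* arch/powerpc/include/asm/spinlock.h (sketch) */
#include <linux/jump_label.h>	/* the missing include: static_branch_unlikely() */

DECLARE_STATIC_KEY_FALSE(shared_processor);

static inline void spin_yield(arch_spinlock_t *lock)
{
	if (!static_branch_unlikely(&shared_processor))
		return;
	/* ... yield the lock to the hypervisor ... */
}
```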
-
- 23 December 2019, 1 commit
-
-
Committed by Michael Ellerman
These slice routines are called from the SLB miss handler, which can lead to warnings from the IRQ code, because we have not reconciled the IRQ state properly:

WARNING: CPU: 72 PID: 30150 at arch/powerpc/kernel/irq.c:258 arch_local_irq_restore.part.0+0xcc/0x100
Modules linked in:
CPU: 72 PID: 30150 Comm: ftracetest Not tainted 5.5.0-rc2-gcc9x-g7e0165b2 #1
NIP:  c00000000001d83c LR: c00000000029ab90 CTR: c00000000026cf90
REGS: c0000007eee3b960 TRAP: 0700   Not tainted  (5.5.0-rc2-gcc9x-g7e0165b2)
MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 22242844  XER: 20000000
CFAR: c00000000001d780 IRQMASK: 0
...
NIP arch_local_irq_restore.part.0+0xcc/0x100
LR  trace_graph_entry+0x270/0x340
Call Trace:
 trace_graph_entry+0x254/0x340 (unreliable)
 function_graph_enter+0xe4/0x1a0
 prepare_ftrace_return+0xa0/0x130
 ftrace_graph_caller+0x44/0x94	# (get_slice_psize())
 slb_allocate_user+0x7c/0x100
 do_slb_fault+0xf8/0x300
 instruction_access_slb_common+0x140/0x180

Fixes: 48e7b769 ("powerpc/64s/hash: Convert SLB miss handlers to C")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191221121337.4894-1-mpe@ellerman.id.au
-
- 18 December 2019, 1 commit
-
-
Committed by Paul Mackerras
Commit 22945688 ("KVM: PPC: Book3S HV: Support reset of secure guest") added a call to uv_svm_terminate, which is an ultravisor call, without any check that the guest is a secure guest or even that the system has an ultravisor. On a system without an ultravisor, the ultracall will degenerate to a hypercall, but since we are not in KVM guest context, the hypercall will get treated as a system call, which could have random effects depending on what happens to be in r0, and could also corrupt the current task's kernel stack. Hence this adds a test for the guest being a secure guest before doing uv_svm_terminate().

Fixes: 22945688 ("KVM: PPC: Book3S HV: Support reset of secure guest")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
- 17 December 2019, 1 commit
-
-
Committed by Marcus Comstedt
VCPU_CR is the offset of arch.regs.ccr in kvm_vcpu. arch/powerpc/include/asm/kvm_host.h defines arch.regs as a struct pt_regs, and arch/powerpc/include/asm/ptrace.h defines the ccr field of pt_regs as "unsigned long ccr". Since unsigned long is 64 bits, a 64-bit load needs to be used to load it, unless an endianness specific correction offset is added to access the desired subpart. In this case there is no reason to _not_ use a 64 bit load though.

Fixes: 6c85b7bc ("powerpc/kvm: Use UV_RETURN ucall to return to ultravisor")
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Marcus Comstedt <marcus@mc.pp.se>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191215094900.46740-1-marcus@mc.pp.se
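In assembly terms the fix boils down to the following; the register numbers are illustrative rather than copied from the patch:

```asm
	/* before: a 32-bit load only reads half of the 64-bit ccr field,
	 * and which half it reads depends on host endianness */
	lwz	r3, VCPU_CR(r9)

	/* after: a full 64-bit load matches "unsigned long ccr" in pt_regs */
	ld	r3, VCPU_CR(r9)
```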
-
- 16 December 2019, 3 commits
-
-
Committed by Andrew Donnellan
The KUAP implementation adds calls in clear_user() to enable and disable access to userspace memory. However, it doesn't add these to __clear_user(), which is used in the ptrace regset code. As there's only one direct user of __clear_user() (the regset code), and the time taken to set the AMR for KUAP purposes is going to dominate the cost of a quick access_ok(), there's not much point having a separate path. Rename __clear_user() to __arch_clear_user(), and make __clear_user() just call clear_user().

Reported-by: syzbot+f25ecf4b2982d8c7a640@syzkaller-ppc64.appspotmail.com
Reported-by: Daniel Axtens <dja@axtens.net>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Fixes: de78a9c4 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com>
[mpe: Use __arch_clear_user() for the asm version like arm64 & nds32]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191209132221.15328-1-ajd@linux.ibm.com
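A sketch of the resulting shape, using the powerpc KUAP helpers this framework provides; details may differ from the actual header:

```c
static inline unsigned long clear_user(void __user *addr, unsigned long size)
{
	unsigned long ret = size;

	might_fault();
	if (likely(access_ok(addr, size))) {
		/* open the user-access window only around the low-level clear */
		allow_write_to_user(addr, size);
		ret = __arch_clear_user(addr, size);
		prevent_write_to_user(addr, size);
	}
	return ret;
}
```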
-
Committed by David Hildenbrand
Commit 63341ab0 ("virtio-balloon: fix managed page counts when migrating pages between zones") fixed a long existing BUG in the virtio-balloon driver when pages would get migrated between zones. I did not try to reproduce on powerpc, but looking at the code, the same should apply to powerpc/cmm ever since it started using the balloon compaction infrastructure (luckily just recently). In case we have to migrate a balloon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. Fix it by properly adjusting the managed page count when migrating if the zone changed. We'll temporarily modify the totalram page count. If this ever becomes a problem, we can fine tune by providing helpers that don't touch the totalram pages (e.g., adjust_zone_managed_page_count()).

Fixes: fe030c9b ("powerpc/pseries/cmm: Implement balloon compaction")
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191216103058.4958-1-david@redhat.com
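The core of the fix is the same zone-accounting adjustment the virtio-balloon change made; a hedged sketch of the relevant fragment of the migration path:

```c
	/* inside the balloon migratepage callback (sketch) */
	if (page_zone(page) != page_zone(newpage)) {
		adjust_managed_page_count(page, 1);	/* old page leaves the balloon */
		adjust_managed_page_count(newpage, -1);	/* new page is now ballooned */
	}
```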
-
Committed by Christophe Leroy
Remove __init qualifier for mmu_mapin_ram_chunk() as it is called by mmu_mark_initmem_nx() and mmu_mark_rodata_ro() which are not __init functions. At the same time, mark it static as it is only used in this file.

Reported-by: kbuild test robot <lkp@intel.com>
Fixes: a2227a27 ("powerpc/32: Don't populate page tables for block mapped pages except on the 8xx")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/56648921986a6b3e7315b1fbbf4684f21bd2dea8.1576310997.git.christophe.leroy@c-s.fr
-
- 14 December 2019, 1 commit
-
-
Committed by Christophe Leroy
Before commit 0366a1c7 ("powerpc/irq: Run softirqs off the top of the irq stack"), check_stack_overflow() was called by do_IRQ(), before switching to the irq stack. In that commit, do_IRQ() was renamed __do_irq(), and is now executing on the irq stack, so check_stack_overflow() has just become almost useless. Move check_stack_overflow() call in do_IRQ() to do the check while still on the current stack.

Fixes: 0366a1c7 ("powerpc/irq: Run softirqs off the top of the irq stack")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e033aa8116ab12b7ca9a9c75189ad0741e3b9b5f.1575872340.git.christophe.leroy@c-s.fr
-
- 13 December 2019, 3 commits
-
-
Committed by Mike Rapoport
Some powerpc platforms (e.g. 85xx) limit DMA-able memory way below 4G. If a system has more physical memory than this limit, the swiotlb buffer is not addressable because it is allocated from memblock using top-down mode. Force memblock to bottom-up mode before calling swiotlb_init() to ensure that the swiotlb buffer is DMA-able.

Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191204123524.22919-1-rppt@kernel.org
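A sketch of the arrangement described above, assuming it sits in the powerpc memory setup path:

```c
	if (IS_ENABLED(CONFIG_SWIOTLB)) {
		/*
		 * Allocate bottom-up so the bounce buffer lands below the
		 * platform's DMA limit; as this is the last memblock
		 * allocation, switching back to top-down is not required.
		 */
		memblock_set_bottom_up(true);
		swiotlb_init(0);
	}
```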
-
Committed by Srikar Dronamraju
With the static key shared processor available, is_shared_processor() can return without having to query the lppaca structure.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Phil Auld <pauld@redhat.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191213035036.6913-2-mpe@ellerman.id.au
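After this change the helper reduces to a single patched branch; a sketch (previously it had to read the shared-processor bit out of the lppaca on every call):

```c
static inline bool is_shared_processor(void)
{
	return static_branch_unlikely(&shared_processor);
}
```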
-
Committed by Srikar Dronamraju
With commit 247f2f6f ("sched/core: Don't schedule threads on pre-empted vCPUs"), the scheduler avoids preempted vCPUs to schedule tasks on wakeup. This leads to wrong choice of CPU, which in-turn leads to larger wakeup latencies. Eventually, it leads to performance regression in latency sensitive benchmarks like soltp, schbench etc.

On Powerpc, vcpu_is_preempted() only looks at yield_count. If the yield_count is odd, the vCPU is assumed to be preempted. However yield_count is increased whenever the LPAR enters CEDE state (idle). So any CPU that has entered CEDE state is assumed to be preempted. Even if the vCPU of a dedicated LPAR is preempted/donated, it should have the right of first use since it is supposed to own the vCPU.

On a Power9 System with 32 cores:

# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           16
NUMA node(s):        2
Model:               2.2 (pvr 004e 0202)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   pHyp
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            10240K
NUMA node0 CPU(s):   0-63
NUMA node1 CPU(s):   64-127

# perf stat -a -r 5 ./schbench

            v5.4                        v5.4 + patch
Latency percentiles (usec)    Latency percentiles (usec)
  50.0000th: 45                 50.0th: 45
  75.0000th: 62                 75.0th: 63
  90.0000th: 71                 90.0th: 74
  95.0000th: 77                 95.0th: 78
  *99.0000th: 91                *99.0th: 82
  99.5000th: 707                99.5th: 83
  99.9000th: 6920               99.9th: 86
  min=0, max=10048              min=0, max=96

Latency percentiles (usec)    Latency percentiles (usec)
  50.0000th: 45                 50.0th: 46
  75.0000th: 61                 75.0th: 64
  90.0000th: 72                 90.0th: 75
  95.0000th: 79                 95.0th: 79
  *99.0000th: 691               *99.0th: 83
  99.5000th: 3972               99.5th: 85
  99.9000th: 8368               99.9th: 91
  min=0, max=16606              min=0, max=117

Latency percentiles (usec)    Latency percentiles (usec)
  50.0000th: 45                 50.0th: 46
  75.0000th: 61                 75.0th: 64
  90.0000th: 71                 90.0th: 75
  95.0000th: 77                 95.0th: 79
  *99.0000th: 106               *99.0th: 83
  99.5000th: 2364               99.5th: 84
  99.9000th: 7480               99.9th: 90
  min=0, max=10001              min=0, max=95

Latency percentiles (usec)    Latency percentiles (usec)
  50.0000th: 45                 50.0th: 47
  75.0000th: 62                 75.0th: 65
  90.0000th: 72                 90.0th: 75
  95.0000th: 78                 95.0th: 79
  *99.0000th: 93                *99.0th: 84
  99.5000th: 108                99.5th: 85
  99.9000th: 6792               99.9th: 90
  min=0, max=17681              min=0, max=117

Latency percentiles (usec)    Latency percentiles (usec)
  50.0000th: 46                 50.0th: 45
  75.0000th: 62                 75.0th: 64
  90.0000th: 73                 90.0th: 75
  95.0000th: 79                 95.0th: 79
  *99.0000th: 113               *99.0th: 82
  99.5000th: 2724               99.5th: 83
  99.9000th: 6184               99.9th: 93
  min=0, max=9887               min=0, max=111

Performance counter stats for 'system wide' (5 runs):

context-switches    43,373 ( +- 0.40% )    44,597 ( +- 0.55% )
cpu-migrations       1,211 ( +- 5.04% )       220 ( +- 6.23% )
page-faults         15,983 ( +- 5.21% )    15,360 ( +- 3.38% )

Waiman Long suggested using static_keys.

Fixes: 247f2f6f ("sched/core: Don't schedule threads on pre-empted vCPUs")
Cc: stable@vger.kernel.org # v4.18+
Reported-by: Parth Shah <parth@linux.ibm.com>
Reported-by: Ihor Pasichnyk <Ihor.Pasichnyk@ibm.com>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Acked-by: Waiman Long <longman@redhat.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.ibm.com>
Tested-by: Parth Shah <parth@linux.ibm.com>
[mpe: Move the key and setting of the key to pseries/setup.c]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191213035036.6913-1-mpe@ellerman.id.au
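Putting the pieces together, vcpu_is_preempted() ends up roughly as below; the yield_count test is the pre-existing behaviour and the early return is what this commit adds for dedicated LPARs (a sketch, not the verbatim header):

```c
static inline bool vcpu_is_preempted(int cpu)
{
	/* vCPUs of a dedicated LPAR own their cores: never report preempted */
	if (!is_shared_processor())
		return false;

	/* shared processor: an odd yield_count means the hypervisor has it */
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}
```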
-
- 10 December 2019, 1 commit
-
-
Committed by Pankaj Bharadiya
Replace all the occurrences of FIELD_SIZEOF() with sizeof_field() except at places where these are defined. Later patches will remove the unused definition of FIELD_SIZEOF(). This patch is generated using the following script:

EXCLUDE_FILES="include/linux/stddef.h|include/linux/kernel.h"
git grep -l -e "\bFIELD_SIZEOF\b" | while read file;
do
	if [[ "$file" =~ $EXCLUDE_FILES ]]; then
		continue
	fi
	sed -i -e 's/\bFIELD_SIZEOF\b/sizeof_field/g' $file;
done

Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Link: https://lore.kernel.org/r/20190924105839.110713-3-pankaj.laxminarayan.bharadiya@intel.com
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: David Miller <davem@davemloft.net> # for net
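For reference, sizeof_field() is the drop-in replacement defined in include/linux/stddef.h; the example struct is illustrative:

```c
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

struct example {
	u32  flags;
	char name[16];
};

/* was: FIELD_SIZEOF(struct example, name) */
size_t n = sizeof_field(struct example, name);	/* 16 */
```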
-
- 05 December 2019, 6 commits
-
-
Committed by Madhavan Srinivasan
When a root user or a user with CAP_SYS_ADMIN privilege uses any trace_imc performance monitoring unit events, to monitor application or KVM threads, it may result in a checkstop (System crash). The cause is frequent switching of the "trace/accumulation" mode of the In-Memory Collection hardware (LDBAR). This patch disables the trace_imc PMU unit entirely to avoid triggering the checkstop. A future patch will reenable it at a later stage once a workaround has been developed.

Fixes: 012ae244 ("powerpc/perf: Trace imc PMU functions")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Tested-by: Hariharan T.S. <hari@linux.ibm.com>
[mpe: Add pr_info_once() so dmesg shows the PMU has been disabled]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191118034452.9939-1-maddy@linux.vnet.ibm.com
-
Committed by Anju T Sudhakar
The export_imc_mode_and_cmd() function, which creates the debugfs interface for imc-mode and imc-command, is invoked when each nest PMU unit is registered. When the first nest PMU unit is registered, export_imc_mode_and_cmd() creates the 'imc' directory under `/debug/powerpc/`. In the subsequent invocations debugfs_create_dir() returns, since the directory already exists. The recent commit c33d4423 ("debugfs: make error message a bit more verbose") throws a warning if we try to invoke debugfs_create_dir() with an already existing directory name. Address this warning by performing the debugfs directory registration in the opal_imc_counters_probe() function, i.e. invoke the export_imc_mode_and_cmd() function from the probe function.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Tested-by: Nageswara R Sastry <nasastry@in.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191127072035.4283-1-anju@linux.vnet.ibm.com
-
Committed by Masahiro Yamada
Userspace cannot compile <asm/sembuf.h> due to some missing type definitions. For example, building it for x86 fails as follows:

  CC      usr/include/asm/sembuf.h.s
In file included from <command-line>:32:0:
usr/include/asm/sembuf.h:17:20: error: field `sem_perm' has incomplete type
  struct ipc64_perm sem_perm;	/* permissions .. see ipc.h */
                    ^~~~~~~~
usr/include/asm/sembuf.h:24:2: error: unknown type name `__kernel_time_t'
  __kernel_time_t sem_otime;	/* last semop time */
  ^~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:25:2: error: unknown type name `__kernel_ulong_t'
  __kernel_ulong_t __unused1;
  ^~~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:26:2: error: unknown type name `__kernel_time_t'
  __kernel_time_t sem_ctime;	/* last change time */
  ^~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:27:2: error: unknown type name `__kernel_ulong_t'
  __kernel_ulong_t __unused2;
  ^~~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:29:2: error: unknown type name `__kernel_ulong_t'
  __kernel_ulong_t sem_nsems;	/* no. of semaphores in array */
  ^~~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:30:2: error: unknown type name `__kernel_ulong_t'
  __kernel_ulong_t __unused3;
  ^~~~~~~~~~~~~~~~
usr/include/asm/sembuf.h:31:2: error: unknown type name `__kernel_ulong_t'
  __kernel_ulong_t __unused4;
  ^~~~~~~~~~~~~~~~

It is just a matter of a missing include directive. Include <asm/ipcbuf.h> to make it self-contained, and add it to the compile-test coverage.

Link: http://lkml.kernel.org/r/20191030063855.9989-3-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Masahiro Yamada
Userspace cannot compile <asm/msgbuf.h> due to some missing type definitions. For example, building it for x86 fails as follows:

  CC      usr/include/asm/msgbuf.h.s
In file included from usr/include/asm/msgbuf.h:6:0,
                 from <command-line>:32:
usr/include/asm-generic/msgbuf.h:25:20: error: field `msg_perm' has incomplete type
  struct ipc64_perm msg_perm;
                    ^~~~~~~~
usr/include/asm-generic/msgbuf.h:27:2: error: unknown type name `__kernel_time_t'
  __kernel_time_t msg_stime;	/* last msgsnd time */
  ^~~~~~~~~~~~~~~
usr/include/asm-generic/msgbuf.h:28:2: error: unknown type name `__kernel_time_t'
  __kernel_time_t msg_rtime;	/* last msgrcv time */
  ^~~~~~~~~~~~~~~
usr/include/asm-generic/msgbuf.h:29:2: error: unknown type name `__kernel_time_t'
  __kernel_time_t msg_ctime;	/* last change time */
  ^~~~~~~~~~~~~~~
usr/include/asm-generic/msgbuf.h:41:2: error: unknown type name `__kernel_pid_t'
  __kernel_pid_t msg_lspid;	/* pid of last msgsnd */
  ^~~~~~~~~~~~~~
usr/include/asm-generic/msgbuf.h:42:2: error: unknown type name `__kernel_pid_t'
  __kernel_pid_t msg_lrpid;	/* last receive pid */
  ^~~~~~~~~~~~~~

It is just a matter of a missing include directive. Include <asm/ipcbuf.h> to make it self-contained, and add it to the compile-test coverage.

Link: http://lkml.kernel.org/r/20191030063855.9989-2-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Aneesh Kumar K.V
All other architectures export this as a GPL symbol.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191202064018.155083-1-aneesh.kumar@linux.ibm.com
-
Committed by Ard Biesheuvel
Commit 01c9348c ("powerpc: Use hardware RNG for arch_get_random_seed_* not arch_get_random_*") updated arch_get_random_[int|long]() to be NOPs, and moved the hardware RNG backing to arch_get_random_seed_[int|long]() instead. However, it failed to take into account that arch_get_random_int() was implemented in terms of arch_get_random_long(), and so we ended up with a version of the former that is essentially a NOP as well. Fix this by calling arch_get_random_seed_long() from arch_get_random_seed_int() instead.

Fixes: 01c9348c ("powerpc: Use hardware RNG for arch_get_random_seed_* not arch_get_random_*")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191204115015.18015-1-ardb@kernel.org
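The fixed helper simply forwards to the long variant, along these lines (matching the description above; the exact return type follows the powerpc archrandom.h of that era):

```c
static inline int arch_get_random_seed_int(unsigned int *v)
{
	unsigned long val;
	int rc;

	/* pull entropy from the hardware RNG via the long variant */
	rc = arch_get_random_seed_long(&val);
	if (rc)
		*v = val;

	return rc;
}
```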
-
- 04 December 2019, 4 commits
-
-
Committed by Vincenzo Frascino
clock_getres in the vDSO library has to preserve the same behaviour of posix_get_hrtimer_res(). In particular, posix_get_hrtimer_res() does:

	sec = 0;
	ns = hrtimer_resolution;

and hrtimer_resolution depends on the enablement of the high resolution timers that can happen either at compile or at run time. Fix the powerpc vdso implementation of clock_getres keeping a copy of hrtimer_resolution in vdso data and using that directly.

Fixes: a7f290da ("[PATCH] powerpc: Merge vdso's and add vdso support to 32 bits kernel")
Cc: stable@vger.kernel.org
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Shuah Khan <skhan@linuxfoundation.org>
[chleroy: changed CLOCK_REALTIME_RES to CLOCK_HRTIMER_RES]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a55eca3a5e85233838c2349783bcb5164dae1d09.1575273217.git.christophe.leroy@c-s.fr
-
Committed by Aneesh Kumar K.V
This patch fixes the below kernel crash:

BUG: Unable to handle kernel data access on read at 0xc000000380000000
Faulting instruction address: 0xc00000000008b6f0
cpu 0x5: Vector: 300 (Data Access) at [c0000000d8587790]
    pc: c00000000008b6f0: arch_remove_memory+0x150/0x210
    lr: c00000000008b720: arch_remove_memory+0x180/0x210
    sp: c0000000d8587a20
   msr: 800000000280b033
   dar: c000000380000000
 dsisr: 40000000
  current = 0xc0000000d8558600
  paca    = 0xc00000000fff8f00   irqmask: 0x03   irq_happened: 0x01
    pid   = 1220, comm = ndctl
enter ? for help
 memunmap_pages+0x33c/0x410
 devm_action_release+0x30/0x50
 release_nodes+0x30c/0x3a0
 device_release_driver_internal+0x178/0x240
 unbind_store+0x74/0x190
 drv_attr_store+0x44/0x60
 sysfs_kf_write+0x74/0xa0
 kernfs_fop_write+0x1b0/0x260
 __vfs_write+0x3c/0x70
 vfs_write+0xe4/0x200
 ksys_write+0x7c/0x140
 system_call+0x5c/0x68

Fixes: 07626590 ("powerpc: Chunk calls to flush_dcache_range in arch_*_memory")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191204052909.59145-1-aneesh.kumar@linux.ibm.com
-
Committed by Cédric Le Goater
The PCI INTx interrupts and other LSI interrupts are handled differently under a sPAPR platform. When the interrupt source characteristics are queried, the hypervisor returns an H_INT_ESB flag to inform the OS that it should be using the H_INT_ESB hcall for interrupt management and not loads and stores on the interrupt ESB pages. A default -1 value is returned for the addresses of the ESB pages. The driver ignores this condition today and performs a bogus IO mapping. Recent changes and the DEBUG_VM configuration option make the bug visible with:

kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=1024 NUMA pSeries
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.4.0-0.rc6.git0.1.fc32.ppc64le #1
NIP:  c000000000f63294 LR: c000000000f62e44 CTR: 0000000000000000
REGS: c0000000fa45f0d0 TRAP: 0700   Not tainted  (5.4.0-0.rc6.git0.1.fc32.ppc64le)
...
NIP ioremap_page_range+0x4c4/0x6e0
LR  ioremap_page_range+0x74/0x6e0
Call Trace:
 ioremap_page_range+0x74/0x6e0 (unreliable)
 do_ioremap+0x8c/0x120
 __ioremap_caller+0x128/0x140
 ioremap+0x30/0x50
 xive_spapr_populate_irq_data+0x170/0x260
 xive_irq_domain_map+0x8c/0x170
 irq_domain_associate+0xb4/0x2d0
 irq_create_mapping+0x1e0/0x3b0
 irq_create_fwspec_mapping+0x27c/0x3e0
 irq_create_of_mapping+0x98/0xb0
 of_irq_parse_and_map_pci+0x168/0x230
 pcibios_setup_device+0x88/0x250
 pcibios_setup_bus_devices+0x54/0x100
 __of_scan_bus+0x160/0x310
 pcibios_scan_phb+0x330/0x390
 pcibios_init+0x8c/0x128
 do_one_initcall+0x60/0x2c0
 kernel_init_freeable+0x290/0x378
 kernel_init+0x2c/0x148
 ret_from_kernel_thread+0x5c/0x80

Fixes: bed81ee1 ("powerpc/xive: introduce H_INT_ESB hcall")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191203163642.2428-1-clg@kaod.org
-
Committed by Christophe Leroy
When enabling CONFIG_RELOCATABLE and CONFIG_KASAN on FSL_BOOKE, the kernel doesn't boot. relocate_init() requires the KASAN early shadow area to be set up because it needs access to the device tree through generic functions. Call kasan_early_init() before calling relocate_init().

Reported-by: Lexi Shao <shaolexi@huawei.com>
Fixes: 2edb16ef ("powerpc/32: Add KASAN support")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b58426f1664a4b344ff696d18cacf3b3e8962111.1575036985.git.christophe.leroy@c-s.fr
-
- 02 December 2019, 1 commit
-
-
Committed by Mike Kravetz
Patch series "hugetlbfs: convert macros to static inline, fix sparse warning". The definition for huge_pte_offset() in <linux/hugetlb.h> causes a sparse warning in the !CONFIG_HUGETLB_PAGE. Fix this as well as converting all macros in this block of definitions to static inlines for better type checking. When making the above changes, build errors were found in powerpc due to duplicate definitions. A separate powerpc specific patch is included as a requisite to remove the definitions and get them from <linux/hugetlb.h>. This patch (of 2): This removes the power specific stubs created by commit aad71e39 ("powerpc/mm: Fix build break with RADIX=y & HUGETLBFS=n") used when !CONFIG_HUGETLB_PAGE. Instead, it addresses the build break by getting the definitions from <linux/hugetlb.h>. This allows the macros in <linux/hugetlb.h> to be replaced with static inlines. Link: http://lkml.kernel.org/r/20191112194558.139389-2-mike.kravetz@oracle.comSigned-off-by: NMike Kravetz <mike.kravetz@oracle.com> Acked-by: NMichael Ellerman <mpe@ellerman.id.au> Cc: Ben Dooks <ben.dooks@codethink.co.uk> Cc: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-