- 23 December 2021, 2 commits
-
-
By Christophe Leroy
Code patching has been working for years now; time has come to remove the debugging messages. Keep the useful message at KERN_INFO and drop the other ones. Also add KERN_ERR to the check() macro and turn it into a do/while to make checkpatch happy.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3ff9823c0a812a8a145d979a9600a6d4591b80ee.1638446239.git.christophe.leroy@csgroup.eu
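For reference, a minimal sketch of the do/while form such a check() macro takes (illustrative, not the verbatim kernel code):

    #include <linux/printk.h>

    /* Sketch: do { } while (0) makes the macro expand to a single statement,
     * so it behaves correctly in un-braced if/else bodies, which is what
     * checkpatch complains about for bare-block macros. */
    #define check(x) do { \
            if (!(x)) \
                    pr_err("code-patching: test failed at line %d\n", __LINE__); \
    } while (0)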
-
By Nick Child
Some functions defined in 'arch/powerpc/lib' deserve an `__init` attribute. These functions are only called by other initialization functions and should therefore inherit the attribute. Also change the function declarations in header files to include `__init`.
Signed-off-by: Nick Child <nick.child@ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211216220035.605465-3-nick.child@ibm.com
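As a hypothetical illustration (the names below are invented, not from the patch), a helper only reachable from an initcall can carry __init so its text is discarded once boot completes:

    #include <linux/init.h>

    static void __init warm_up_tests(void)          /* hypothetical helper */
    {
    }

    static int __init run_lib_selftests(void)       /* hypothetical initcall */
    {
            warm_up_tests();
            return 0;
    }
    late_initcall(run_lib_selftests);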
-
- 09 December 2021, 1 commit
-
-
By Christophe Leroy
In order to stop using 'struct ppc_inst' on PPC32, define a ppc_inst_t typedef.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fe5baa2c66fea9db05a8b300b3e8d2880a42596c.1638208156.git.christophe.leroy@csgroup.eu
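A hedged sketch of the direction: the typedef can start as a plain alias so call sites migrate to the new name before the underlying PPC32 representation is changed.

    /* Sketch: alias first, switch the PPC32 definition to a bare u32 later. */
    typedef struct ppc_inst ppc_inst_t;

    /* Prototypes then use the typedef rather than the struct directly, e.g.: */
    int patch_instruction(u32 *addr, ppc_inst_t instr);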
-
- 29 November 2021, 1 commit
-
-
By Michael Ellerman
This reverts commit 8b8a8f0a. As reported[1] by Sachin it causes problems with ftrace, and it also causes the code patching selftests to fail as reported[2] by Stephen. So revert it for now.
1: https://lore.kernel.org/linuxppc-dev/3668743C-09DF-4673-B15C-2FFE2A57F7D7@linux.vnet.ibm.com/
2: https://lore.kernel.org/linuxppc-dev/20211126161747.1f7795b0@canb.auug.org.au/
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 25 November 2021, 1 commit
-
-
By Christophe Leroy
Today, patch_instruction() assumes that it is called exclusively on valid addresses, and only checks that it is not called on an init address after the init section has been freed. Improve the verification by calling kernel_text_address() instead; kernel_text_address() already includes a check for released initmem.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bc683d499a411730504b132a924de0ccc2ef1f79.1636971137.git.christophe.leroy@csgroup.eu
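A minimal sketch of the idea (assuming a do_patch_instruction() worker behind the public entry point; not the verbatim kernel code):

    #include <linux/kallsyms.h>     /* kernel_text_address() */

    int patch_instruction(u32 *addr, struct ppc_inst instr)
    {
            /* Refuse anything that is not (or is no longer) kernel text;
             * this also covers init text once initmem has been released. */
            if (!kernel_text_address((unsigned long)addr))
                    return -EINVAL;

            return do_patch_instruction(addr, instr);
    }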
-
- 07 October 2021, 1 commit
-
-
By Naveen N. Rao
Add a helper to check whether a given offset is within the branch range of a powerpc conditional branch instruction, and update some call sites to use the new helper.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/442b69a34ced32ca346a0d9a855f3f6cfdbbbd41.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
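Conditional (B-form) branches on powerpc encode a signed 16-bit, word-aligned displacement, so the helper amounts to a range-and-alignment check along these lines (a sketch, hedged against the exact name and placement):

    #include <linux/types.h>

    /* Sketch: offset must fit in a signed 16-bit field and be a multiple of 4. */
    static inline bool is_offset_in_cond_branch_range(long offset)
    {
            return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
    }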
-
- 21 June 2021, 1 commit
-
-
By Jordan Niethe
setup_text_poke_area() is a late init call, so it runs after the init calls and before mark_rodata_ro(). This lets all the init code patching simply write to its target locations. In the future, kprobes is going to allocate its instruction pages RO, which means they will need setup_text_poke_area() to have already run before their code patching. However, init_kprobes() (which allocates and patches some instruction pages) is an early init call, so it happens before setup_text_poke_area(). start_kernel() calls poking_init() before any of the init calls; on powerpc, poking_init() is currently a nop. setup_text_poke_area() relies on kernel virtual memory, CPU hotplug and the per-CPU areas being set up, and setup_per_cpu_areas(), boot_cpu_hotplug_init() and mm_init() are all called before poking_init(). So turn setup_text_poke_area() into poking_init().
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Russell Currey <ruscur@russell.cc>
[mpe: Fold in missing prototype for poking_init() from lkp]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210609013431.9805-3-jniethe5@gmail.com
-
- 16 June 2021, 4 commits
-
-
By Christophe Leroy
'struct ppc_inst' is an internal representation of an instruction, but in-memory instructions are, and will remain, a table of 'u32' forever. Replace all uses of 'struct ppc_inst *' for locating an instruction in memory with 'u32 *'. This removes a lot of undue casts to 'struct ppc_inst *' and also helps locate misuse of 'struct ppc_inst' dereferences.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix ppc_inst_next(), use u32 instead of unsigned int]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7062722b087228e42cbd896e39bfdf526d6a340a.1621516826.git.christophe.leroy@csgroup.eu
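The visible effect on prototypes is roughly the following (illustrative, not an exhaustive list):

    /* Before: locations of in-memory code were typed as the internal struct:
     *      int patch_instruction(struct ppc_inst *addr, struct ppc_inst instr);
     *
     * After: in-memory code is addressed as plain 32-bit words, while the
     * instruction value keeps its internal type:
     */
    int patch_instruction(u32 *addr, struct ppc_inst instr);
    int patch_branch(u32 *addr, unsigned long target, int flags);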
-
By Christophe Leroy
'struct ppc_inst' is meant to represent an instruction internally; it is not meant to dereference code in memory. For testing code patching, use patch_instruction() to properly write the code under test into memory.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8425fb42a4adebc35b7509f121817eeb02fac31.1621516826.git.christophe.leroy@csgroup.eu
-
By Christophe Leroy
instr_is_branch_to_addr() is only used in code-patching.c, so make it static.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5f6b9c8c83170ed310953eac2f5b14539bfc964a.1621516826.git.christophe.leroy@csgroup.eu
-
By Christophe Leroy
'struct ppc_inst' is an internal structure used to represent an instruction; it is not directly the representation of that instruction in the text, and it is not meant for mapping and dereferencing code. Dereferencing code directly through 'struct ppc_inst' has two main issues:
- On powerpc, structs are expected to be 8-byte aligned, while code is laid out every 4 bytes.
- Should a non-prefixed instruction lie at the end of a page with the following page not mapped, it would generate a page fault.
In-memory code must be accessed with ppc_inst_read().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c9a1201dd0a66b4a0f91f0fb46d9385cbf030feb.1621516826.git.christophe.leroy@csgroup.eu
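In practice this means going through the accessor instead of laying a struct over the text; a hedged sketch (the wrapper name below is invented for illustration):

    /* Sketch: inspect an instruction in kernel text without dereferencing a
     * 'struct ppc_inst *' over it. */
    static bool insn_at_is_prefixed(u32 *p)
    {
            struct ppc_inst instr = ppc_inst_read(p);

            return ppc_inst_prefixed(instr);
    }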
-
- 21 April 2021, 1 commit
-
-
By Christophe Leroy
In order to simplify use on PPC32, change ppc_inst_as_u64() into ppc_inst_as_ulong(), which returns the 32-bit instruction on PPC32. This will be used when porting OPTPROBES to PPC32.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/22cadf29620664b600b82026d2a72b8b23351777.1618927318.git.christophe.leroy@csgroup.eu
-
- 26 March 2021, 1 commit
-
-
By Christophe Leroy
__put_user_asm_goto() is internal to uaccess.h, so use __put_kernel_nofault() instead. The generated code is identical.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3e32c4f0361933909368b68f5ee569e5de661c1b.1615398498.git.christophe.leroy@csgroup.eu
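The store itself then looks roughly like this (a sketch with an invented function name; the real routine also handles prefixed instructions and does the icache maintenance):

    #include <linux/uaccess.h>

    static int patch_word_sketch(u32 *patch_addr, u32 val)
    {
            /* __put_kernel_nofault(dst, src, type, error_label) jumps to the
             * label instead of oopsing if the store cannot be performed. */
            __put_kernel_nofault(patch_addr, &val, u32, failed);

            return 0;

    failed:
            return -EPERM;
    }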
-
- 15 September 2020, 1 commit
-
-
By Christophe Leroy
__patch_instruction() is the only user of __put_user_asm() outside of asm/uaccess.h. Switch it to the new __put_user_asm_goto() to enable the retirement of __put_user_asm() in a later patch.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b9745b122f4a9ae72cef445c61320022ab8b77b7.1599216721.git.christophe.leroy@csgroup.eu
-
- 26 July 2020, 1 commit
-
-
By Christophe Leroy
Use is_vmalloc_or_module_addr() instead of is_vmalloc_addr().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7d884db0e5a6f521331639d8c0f13e520d5a4fef.1593428200.git.christophe.leroy@csgroup.eu
-
- 10 June 2020, 1 commit
-
-
By Mike Rapoport
Patch series "mm: consolidate definitions of page table accessors", v2.
The low-level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures, sometimes more than once. For instance, we have 31 definitions of pgd_offset() for 25 supported architectures. Most of these definitions are actually identical and typically boil down to, e.g.:
    static inline unsigned long pmd_index(unsigned long address)
    {
            return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
    }

    static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
    {
            return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
    }
These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined. For architectures that really need a custom version there is always the possibility to override the generic version with the usual ifdefs magic. These patches introduce include/linux/pgtable.h, which replaces include/asm-generic/pgtable.h, and add the definitions of the page table accessors to the new header.
This patch (of 12): The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So there is no point in explicitly including <asm/pgtable.h> in the files that include <linux/mm.h>. The include statements in such cases are removed with a simple loop:
    for f in $(git grep -l "include <linux/mm.h>") ; do
            sed -i -e '/include <asm\/pgtable.h>/ d' $f
    done
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Rapoport <rppt@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 June 2020, 1 commit
-
-
By Mike Rapoport
Implement the primitives necessary for the 4th level folding, add walks of the p4d level where appropriate, and replace 5level-fixup.h with pgtable-nop4d.h.
[rppt@linux.ibm.com: powerpc/xmon: drop unused pgdir variable in show_pte() function]
Link: http://lkml.kernel.org/r/20200519181454.GI1059226@linux.ibm.com
[rppt@linux.ibm.com: build fix]
Link: http://lkml.kernel.org/r/20200423141845.GI13521@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> # 8xx and 83xx
Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert+renesas@glider.be> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: James Morse <james.morse@arm.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200414153455.21744-9-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 May 2020, 1 commit
-
-
By Michael Ellerman
The code patching code wants to get the value of a struct ppc_inst as a u64 when the instruction is prefixed, so that the u64 can be passed down to __put_user_asm() and written with a single store. The optprobes code wants to load a struct ppc_inst as an immediate into a register, so it is useful to have it as a u64 to reuse the existing helper function. Currently this is a bit awkward because the value differs based on the CPU endianness, so add a helper to do the conversion. This fixes the usage in arch_prepare_optimized_kprobe(), which was previously incorrect on big endian.
Fixes: 650b55b7 ("powerpc: Add prefixed instructions to instruction data type")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Jordan Niethe <jniethe5@gmail.com>
Link: https://lore.kernel.org/r/20200526072630.2487363-1-mpe@ellerman.id.au
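A sketch of the endian handling such a helper needs (assuming the ppc_inst_val()/ppc_inst_suffix() accessors; close to, but not guaranteed identical to, the merged code):

    static inline u64 ppc_inst_as_u64(struct ppc_inst x)
    {
            /* The prefix word must end up at the lower address, so it sits in
             * the low half on little endian and the high half on big endian. */
            if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
                    return (u64)ppc_inst_suffix(x) << 32 | ppc_inst_val(x);
            else
                    return (u64)ppc_inst_val(x) << 32 | ppc_inst_suffix(x);
    }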
-
- 18 May 2020, 10 commits
-
-
By Jordan Niethe
Expand the code-patching self-tests to include tests for patching prefixed instructions.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
[mpe: Use CONFIG_PPC64 not __powerpc64__]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-25-jniethe5@gmail.com
-
By Jordan Niethe
For powerpc64, redefine the ppc_inst type so that both word and prefixed instructions can be represented. On powerpc32 the type will remain the same. Update places which had assumed instructions to be 4 bytes long.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
[mpe: Rework the get_user_inst() macros to be parameterised, and don't assign to the dest if an error occurred. Use CONFIG_PPC64 not __powerpc64__ in a few places. Address other comments from Christophe. Fix some sparse complaints.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-24-jniethe5@gmail.com
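Conceptually the type becomes something like the following (a sketch; field names inferred from the accessors used elsewhere in this log):

    struct ppc_inst {
            u32 val;                /* the word, or the prefix word */
    #ifdef CONFIG_PPC64
            u32 suffix;             /* suffix word of a prefixed instruction */
    #endif
    } __packed;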
-
By Jordan Niethe
test_translate_branch() uses two pointers to instructions within a buffer, p and q, to test patch_branch(). The pointer arithmetic done on them assumes a size of 4, which will not work if the instruction length changes. Instead, do the arithmetic relative to the void * to the buffer.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-21-jniethe5@gmail.com
-
By Jordan Niethe
Prefixed instructions mean there will be instructions of different lengths. As a result, dereferencing a pointer to an instruction will not necessarily give the desired result. Introduce a function for reading instructions from memory into the instruction data type.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-13-jniethe5@gmail.com
-
By Jordan Niethe
Currently, unsigned ints are used to represent instructions on powerpc. This has worked well as instructions have always been 4-byte words. However, ISA v3.1 introduces prefixed instructions, which change this scheme: a prefixed instruction is made up of a word prefix followed by a word suffix, forming an 8-byte double-word instruction. No matter the endianness of the system, the prefix always comes first. Prefixed instructions are only planned for powerpc64. Introduce a ppc_inst type to represent both prefixed and word instructions on powerpc64, while keeping it possible to have exclusively word instructions on powerpc32.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
[mpe: Fix compile error in emulate_spe()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-12-jniethe5@gmail.com
-
By Jordan Niethe
In preparation for an instruction data type that cannot be directly used with the '==' operator, use functions for checking equality.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Balamuruhan S <bala24@linux.ibm.com>
Link: https://lore.kernel.org/r/20200506034050.24806-11-jniethe5@gmail.com
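A sketch of what the helper looks like once the instruction type is a struct (at this point in the series it is still essentially a wrapper around '=='):

    static inline bool ppc_inst_equal(struct ppc_inst x, struct ppc_inst y)
    {
            return ppc_inst_val(x) == ppc_inst_val(y);
    }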
-
By Jordan Niethe
In preparation for using a data type for instructions that cannot be directly used with the '>>' operator, use a function for getting the opcode of an instruction.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-9-jniethe5@gmail.com
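The primary opcode occupies the top 6 bits of a powerpc word instruction, so the accessor is essentially a shift; a sketch (shown with the later struct-based type):

    static inline int ppc_inst_primary_opcode(struct ppc_inst x)
    {
            return ppc_inst_val(x) >> 26;
    }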
-
By Jordan Niethe
In preparation for introducing a more complicated instruction type to accommodate prefixed instructions, use an accessor for getting an instruction as a u32.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-8-jniethe5@gmail.com
-
By Jordan Niethe
In preparation for instructions having a more complex data type, start using a macro, ppc_inst(), for making an instruction out of a u32. A macro is used so that instructions can be used as initializer elements. Currently this does nothing, but it will allow creating a data type that can represent prefixed instructions.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
[mpe: Change include guard to _ASM_POWERPC_INST_H]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-7-jniethe5@gmail.com
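At this stage the macro is an identity wrapper; being a macro rather than an inline function keeps it usable in static initializers. A sketch (the array name is invented for illustration):

    #define ppc_inst(x) (x)

    /* Usable as an initializer element, which an inline function would not be: */
    static const unsigned int insns[] = { ppc_inst(0x60000000) /* nop */ };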
-
By Jordan Niethe
create_branch(), create_cond_branch() and translate_branch() return the instruction that they create, or return 0 to signal an error. Separate these concerns in preparation for an instruction type that is not just an unsigned int: write the created instruction to a pointer passed as the first parameter of the function, and use a non-zero return value to signify an error.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Link: https://lore.kernel.org/r/20200506034050.24806-6-jniethe5@gmail.com
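The change in calling convention looks roughly like this (a sketch; instructions are still plain unsigned ints at this point in the series):

    /* Before: the created instruction was the return value, 0 meant failure:
     *      unsigned int create_branch(const unsigned int *addr,
     *                                 unsigned long target, int flags);
     *
     * After: the instruction is written through the first parameter and a
     * non-zero return value signals an error:
     */
    int create_branch(unsigned int *instr, const unsigned int *addr,
                      unsigned long target, int flags);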
-
- 31 May 2019, 1 commit
-
-
By Thomas Gleixner
Based on 1 normalized pattern(s):
    this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version
extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 21 April 2019, 1 commit
-
-
By Russell Currey
__patch_instruction() is called in early boot, and uses __put_user_size(), which includes the allow/prevent calls that enforce KUAP. These could either be called too early, or, in the Radix case, be forced to use "early_" versions of functions just to safely handle this one case. __put_user_asm() does not do this, and is thus safe to use both in early boot and later on, since in this case it should only ever be touching kernel memory. __patch_instruction() was previously refactored to use __put_user_size() in order to be able to return -EFAULT, which would allow the kernel to patch instructions in userspace; that should never happen. This has the functional change of causing faults on userspace addresses if KUAP is turned on, which should never happen in practice. A future enhancement could be to double-check that the patch address is definitely allowed to be tampered with by the kernel.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 20 April 2019, 1 commit
-
-
By Jagadeesh Pagadala
Remove duplicate header inclusions.
Signed-off-by: Jagadeesh Pagadala <jagdsh.linux@gmail.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 19 December 2018, 1 commit
-
-
By Christophe Leroy
Using the patch_site_addr() helper, patch_instruction_site() and patch_branch_site() can be simplified and inlined.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
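A sketch of the resulting inlines (with the signatures of that era, when instructions were plain unsigned ints):

    static inline unsigned int *patch_site_addr(s32 *site)
    {
            /* a patch site stores a self-relative offset to the location to patch */
            return (unsigned int *)((unsigned long)site + *site);
    }

    static inline int patch_instruction_site(s32 *site, unsigned int instr)
    {
            return patch_instruction(patch_site_addr(site), instr);
    }

    static inline int patch_branch_site(s32 *site, unsigned long target, int flags)
    {
            return patch_branch(patch_site_addr(site), target, flags);
    }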
-
- 14 October 2018, 1 commit
-
-
By Christophe Leroy
In order to avoid multiple conversions, hand a pgprot_t directly to map_kernel_page(), as is already done for radix. Do the same for __ioremap_caller() and __ioremap_at().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 October 2018, 1 commit
-
-
By Christophe Leroy
Commit 51c3c62b ("powerpc: Avoid code patching freed init sections") accesses the 'init_mem_is_free' flag too early, before the kernel is relocated, which provokes an early boot failure (before the console is active). As it is not necessary to do this verification that early, move the test into patch_instruction() instead of __patch_instruction(). This modification also has the advantage of avoiding unnecessary remappings.
Fixes: 51c3c62b ("powerpc: Avoid code patching freed init sections")
Cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 18 September 2018, 1 commit
-
-
By Michael Neuling
This stops us from doing code patching in init sections after they've been freed. In this chain:
    kvm_guest_init() -> kvm_use_magic_page() -> fault_in_pages_readable() -> __get_user() -> __get_user_nocheck() -> barrier_nospec();
we have a code patching location at barrier_nospec(), and kvm_guest_init() is an init function. This whole chain gets inlined, so when we free the init section (hence kvm_guest_init()), this code goes away and hence should no longer be patched. We saw this as userspace memory corruption when using a memory checker while doing partition migration testing on powervm (this starts the code patching post migration via /sys/kernel/mobility/migration). In theory, it could also happen when using /sys/kernel/debug/powerpc/barrier_nospec.
Cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 07 August 2018, 1 commit
-
-
By Michael Ellerman
Add a macro and some helper C functions for patching single asm instructions. The gas macro means we can do something like:
    1:  nop
        patch_site  1b, patch__foo
which is less visually distracting than defining a GLOBAL symbol at 1, and also doesn't pollute the symbol table, which can confuse e.g. perf. These are obviously similar to our existing feature sections, but are not automatically patched based on CPU/MMU features; rather, they are designed to be manually patched by C code at some arbitrary point.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 21 January 2018, 2 commits
-
-
By Christophe Leroy
Feature fixups need to use patch_instruction() early in boot, even before the code is relocated to its final address, which requires patch_instruction() to use PTRRELOC() in order to address data. But feature fixups are applied to code before it is set to read-only, even for modules. Therefore, feature fixups can use raw_patch_instruction() instead.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
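The split is small; a sketch of what raw_patch_instruction() amounts to (instructions are plain unsigned ints in this era):

    /* Write straight through the given address: no text_poke_area remapping,
     * which is fine for feature fixups applied before the text goes read-only. */
    int raw_patch_instruction(unsigned int *addr, unsigned int instr)
    {
            return __patch_instruction(addr, instr, addr);
    }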
-
By Christophe Leroy
patch_instruction() uses almost the same sequence as __patch_instruction(). Refactor it so that patch_instruction() uses __patch_instruction() instead of duplicating code.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 11 December 2017, 1 commit
-
-
By Josh Poimboeuf
When attempting to load a livepatch module, I got the following error:
    module_64: patch_module: Expect noop after relocate, got 3c820000
The error was triggered by the following code in unregister_netdevice_queue():
    14c:    00 00 00 48     b       14c <unregister_netdevice_queue+0x14c>
                            14c: R_PPC64_REL24      net_set_todo
    150:    00 00 82 3c     addis   r4,r2,0
GCC didn't insert a nop after the branch to net_set_todo() because it's a sibling call, so it never returns. The nop isn't needed after the branch in that case.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-and-tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 22 November 2017, 1 commit
-
-
By Christophe Leroy
On powerpc32, patch_instruction() is called by apply_feature_fixups(), which is called from early_init(). There is the following note in front of early_init():
    * Note that the kernel may be running at an address which is different
    * from the address that it was linked at, so we must use RELOC/PTRRELOC
    * to access static data (including strings). -- paulus
Therefore, slab_is_available() cannot be called yet, and text_poke_area must be addressed with PTRRELOC().
Fixes: 95902e6c ("powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32")
Cc: stable@vger.kernel.org # v4.14+
Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-