- 05 April 2022 (2 commits)
-
-
Committed by Peter Zijlstra
Since not all compilers have a function attribute to disable KCOV instrumentation, objtool can rewrite KCOV instrumentation in noinstr functions as per commit:

  f56dae88 ("objtool: Handle __sanitize_cov*() tail calls")

However, this has a subtle interaction with the SLS validation from commit:

  1cc1e4c8 ("objtool: Add straight-line-speculation validation")

in that when a tail-call instruction is replaced with a RET, an additional INT3 instruction is also written, but it is not represented in the decoded instruction stream. This then leads to false-positive missing-INT3 objtool warnings in noinstr code.

Instead of adding additional struct instruction objects, mark the RET instruction with retpoline_safe to suppress the warning (since we know there really is an INT3).

Fixes: 1cc1e4c8 ("objtool: Add straight-line-speculation validation")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220323230712.GA8939@worktop.programming.kicks-ass.net
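
To make the rewrite concrete, here is a small self-contained sketch (invented buffer handling and struct fields, not objtool's actual code) of replacing a 5-byte tail call with a one-byte RET, padding the remainder with INT3 bytes, and flagging the decoded RET as retpoline_safe so the SLS check stays quiet:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal stand-in for objtool's decoded-instruction record. */
    struct insn_stub {
        unsigned long offset;
        unsigned int  len;
        bool          retpoline_safe;   /* suppresses the missing-INT3 warning */
    };

    /*
     * Overwrite a 5-byte tail call (e.g. jmp __sanitizer_cov_trace_pc) in a
     * flat code buffer with RET and pad the rest with INT3.
     */
    static void rewrite_tail_call(unsigned char *code, struct insn_stub *insn)
    {
        memset(code + insn->offset, 0xcc, insn->len);   /* int3 padding */
        code[insn->offset] = 0xc3;                      /* ret */
        insn->len = 1;
        insn->retpoline_safe = true;   /* the INT3 is really there, just not decoded */
    }

    int main(void)
    {
        unsigned char code[5] = { 0xe9, 0x00, 0x00, 0x00, 0x00 };  /* jmp rel32 */
        struct insn_stub insn = { .offset = 0, .len = 5 };

        rewrite_tail_call(code, &insn);
        for (int i = 0; i < 5; i++)
            printf("%02x ", code[i]);
        printf("\n");   /* c3 cc cc cc cc */
        return 0;
    }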
-
Committed by Peter Zijlstra
Objtool reports:

  arch/x86/crypto/poly1305-x86_64.o: warning: objtool: poly1305_blocks_avx() falls through to next function poly1305_blocks_x86_64()
  arch/x86/crypto/poly1305-x86_64.o: warning: objtool: poly1305_emit_avx() falls through to next function poly1305_emit_x86_64()
  arch/x86/crypto/poly1305-x86_64.o: warning: objtool: poly1305_blocks_avx2() falls through to next function poly1305_blocks_x86_64()

Which reads like:

  0000000000000040 <poly1305_blocks_x86_64>:
    40:  f3 0f 1e fa             endbr64
    ...
  0000000000000400 <poly1305_blocks_avx>:
   400:  f3 0f 1e fa             endbr64
   404:  44 8b 47 14             mov    0x14(%rdi),%r8d
   408:  48 81 fa 80 00 00 00    cmp    $0x80,%rdx
   40f:  73 09                   jae    41a <poly1305_blocks_avx+0x1a>
   411:  45 85 c0                test   %r8d,%r8d
   414:  0f 84 2a fc ff ff       je     44 <poly1305_blocks_x86_64+0x4>
    ...

These are simple conditional tail-calls and *should* be recognised as such by objtool, however due to a mistake in commit 08f87a93 ("objtool: Validate IBT assumptions") this is failing.

Specifically, the jump_dest is +4, this means the instruction pointed at will not be ENDBR and as such it will fail the second clause of is_first_func_insn() that was supposed to capture this exact case.

Instead, have is_first_func_insn() look at the previous instruction.

Fixes: 08f87a93 ("objtool: Validate IBT assumptions")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220322115125.811582125@infradead.org
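
A loose sketch of the corrected predicate (simplified stand-in types; the real helper works on objtool's struct objtool_file and struct instruction): a branch target counts as the start of a function either when it sits at the function's first byte, or when the instruction immediately before it is the ENDBR occupying that first byte:

    #include <stdbool.h>
    #include <stdio.h>

    enum insn_type { INSN_OTHER, INSN_ENDBR };

    struct sym_stub  { unsigned long offset; };
    struct insn_stub {
        unsigned long     offset;
        enum insn_type    type;
        struct sym_stub  *func;
        struct insn_stub *prev;   /* previous instruction in the section */
    };

    /*
     * Does 'insn' behave as the first instruction of its function?  An ENDBR
     * at func->offset is transparent: a conditional tail-call may legitimately
     * jump to func->offset + 4, straight past it.
     */
    static bool is_first_func_insn(const struct insn_stub *insn)
    {
        if (insn->offset == insn->func->offset)
            return true;

        /* Look at the previous instruction rather than hard-coding "+4". */
        return insn->prev &&
               insn->prev->type == INSN_ENDBR &&
               insn->prev->offset == insn->func->offset;
    }

    int main(void)
    {
        struct sym_stub  func  = { 0x400 };
        struct insn_stub endbr = { 0x400, INSN_ENDBR, &func, NULL };
        struct insn_stub mov   = { 0x404, INSN_OTHER, &func, &endbr };

        printf("jump to +0x4 is a valid tail-call target: %d\n",
               is_first_func_insn(&mov));   /* 1 */
        return 0;
    }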
-
- 15 March 2022 (10 commits)
-
-
Committed by Peter Zijlstra
Find all ENDBR instructions which are never referenced and stick them in a section such that the kernel can poison them, sealing the functions from ever being an indirect call target.

This removes about 1-in-4 ENDBR instructions.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.763643193@infradead.org
-
Committed by Peter Zijlstra
Intel IBT requires that every indirect JMP/CALL target an ENDBR instruction; failing that, a #CP exception is raised and we die. Similarly, all exception entries should be ENDBR.

Find all code relocations and ensure they're either an ENDBR instruction or ANNOTATE_NOENDBR. For the exceptions, look for UNWIND_HINT_IRET_REGS at sym+0 not being ENDBR.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.705110141@infradead.org
-
Committed by Peter Zijlstra
Read the new NOENDBR annotation. While there, attempt to not bloat struct instruction.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.586815435@infradead.org
-
Committed by Peter Zijlstra
Currently ASM_REACHABLE only works for UD2 instructions; reorder things to also allow overriding dead_end_function(). To that end:

 - Mark INSN_BUG instructions in decode_instructions(), which saves having to iterate all instructions yet again.
 - Have add_call_destinations() set insn->dead_end for dead_end_function() calls.
 - Move add_dead_ends() *after* add_call_destinations() such that ASM_REACHABLE can clear the ->dead_end mark.
 - Have validate_branch() only check ->dead_end.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.410010807@infradead.org
-
Committed by Peter Zijlstra
vmlinux.o: warning: objtool: ksys_unshare()+0x36c: unreachable instruction

  0000 0000000000067040 <ksys_unshare>:
  ...
  0364    673a4:  4c 89 ef              mov    %r13,%rdi
  0367    673a7:  e8 00 00 00 00        call   673ac <ksys_unshare+0x36c>
                  673a8: R_X86_64_PLT32  __invalid_creds-0x4
  036c    673ac:  e9 28 ff ff ff        jmp    672d9 <ksys_unshare+0x299>
  0371    673b1:  41 bc f4 ff ff ff     mov    $0xfffffff4,%r12d
  0377    673b7:  e9 80 fd ff ff        jmp    6713c <ksys_unshare+0xfc>

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/Yi9gOW9f1GGwwUD6@hirez.programming.kicks-ass.net
-
Committed by Peter Zijlstra
vmlinux.o: warning: objtool: get_signal()+0x108: unreachable instruction

  0000 000000000007f930 <get_signal>:
  ...
  0103    7fa33:  e8 00 00 00 00        call   7fa38 <get_signal+0x108>
                  7fa34: R_X86_64_PLT32  do_group_exit-0x4
  0108    7fa38:  41 8b 45 74           mov    0x74(%r13),%eax

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.351270711@infradead.org
-
Committed by Peter Zijlstra
vmlinux.o: warning: objtool: smp_stop_nmi_callback()+0x2b: unreachable instruction

  0000 0000000000047cf0 <smp_stop_nmi_callback>:
  ...
  0026    47d16:  e8 00 00 00 00        call   47d1b <smp_stop_nmi_callback+0x2b>
                  47d17: R_X86_64_PLT32  stop_this_cpu-0x4
  002b    47d1b:  b8 01 00 00 00        mov    $0x1,%eax

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.290905453@infradead.org
-
Committed by Peter Zijlstra
There's a fun implementation detail on linking STB_WEAK symbols. When the linker combines two translation units, where one contains a weak function and the other an override for it, it simply strips the STB_WEAK symbol from the symbol table but doesn't actually remove the code.

The result is that when objtool is run in a whole-archive kind of way, it will encounter *heaps* of unused (and unreferenced) code, all rudiments of weak functions. Additionally, when a weak implementation is split into a .cold subfunction, that .cold symbol is left in place, even though completely unused.

Teach objtool to ignore such rudiments by searching for symbol holes; that is, code ranges that fall outside the given symbol bounds. Specifically, ignore a sequence of unreachable instructions iff they occupy a single hole, and additionally ignore any .cold subfunctions referenced.

Both ld.bfd and ld.lld behave like this. LTO builds, on the other hand, can (and do) properly DCE weak functions.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.232019347@infradead.org
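
The hole search can be pictured with a small standalone sketch (hypothetical types, not objtool's elf.c): an address is in a symbol hole when no symbol's [offset, offset + size) range covers it:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct sym_stub { const char *name; unsigned long offset, size; };

    /*
     * True if 'addr' is not covered by any symbol: leftover bytes from a
     * stripped STB_WEAK implementation end up in exactly such a hole.
     */
    static bool in_symbol_hole(const struct sym_stub *syms, size_t n, unsigned long addr)
    {
        for (size_t i = 0; i < n; i++)
            if (addr >= syms[i].offset && addr < syms[i].offset + syms[i].size)
                return false;
        return true;
    }

    int main(void)
    {
        /*
         * foo() occupies [0x00,0x40), bar() [0x80,0xc0); an overridden weak
         * foo() left dead bytes behind in the gap [0x40,0x80).
         */
        struct sym_stub syms[] = {
            { "foo", 0x00, 0x40 },
            { "bar", 0x80, 0x40 },
        };

        printf("0x20 in hole: %d\n", in_symbol_hole(syms, 2, 0x20));  /* 0 */
        printf("0x50 in hole: %d\n", in_symbol_hole(syms, 2, 0x50));  /* 1 */
        return 0;
    }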
-
Committed by Peter Zijlstra
In order to prepare for LTO-like objtool runs for modules, rename the duplicate argument to lto.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.172584233@infradead.org
-
Committed by Peter Zijlstra
Ignore all INT3 instructions for unreachable code warnings, similar to NOP. This allows using INT3 for various paddings instead of NOPs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.343312938@infradead.org
-
- 25 January 2022 (1 commit)
-
-
Committed by Sergei Trofimovich
On GCC 12, the build fails due to a possibly truncated string:

  check.c: In function 'validate_call':
  check.c:2865:58: error: '%d' directive output may be truncated writing between 1 and 10 bytes into a region of size 9 [-Werror=format-truncation=]
   2865 |         snprintf(pvname, sizeof(pvname), "pv_ops[%d]", idx);
        |                                                  ^~

In theory it's a valid bug:

  static char pvname[16];
  int idx;
  ...
  idx = (rel->addend / sizeof(void *));
  snprintf(pvname, sizeof(pvname), "pv_ops[%d]", idx);

There are only 7 chars for %d while it could take up to 9, so the printed "pv_ops[%d]" string could get truncated.

In reality the bug should never happen, because pv_ops only has ~80 entries, so 7 chars for the integer is more than enough. Still, it's worth fixing. Bump the buffer size by 2 bytes to silence the warning.

[ jpoimboe: changed size to 19; massaged changelog ]

Fixes: db2b0c5d ("objtool: Support pv_ops[] indirect calls for noinstr")
Reported-by: Adam Borowski <kilobyte@angband.pl>
Reported-by: Martin Liška <mliska@suse.cz>
Signed-off-by: Sergei Trofimovich <slyich@gmail.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220120233748.2062559-1-slyich@gmail.com
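
For illustration only (a standalone toy, not the objtool source), the size arithmetic behind the warning and the fix looks roughly like this:

    #include <stdio.h>

    int main(void)
    {
        /*
         * "pv_ops[" (7) + "]" (1) + NUL (1) leaves a 16-byte buffer with only
         * 7 characters for %d, while the index can need up to 10 digits.
         * 7 + 10 + 1 + 1 = 19 bytes cover the worst case.
         */
        char small[16], big[19];
        int idx = 2147483647;

        int need = snprintf(small, sizeof(small), "pv_ops[%d]", idx);
        printf("needed %d chars, got \"%s\" (truncated)\n", need, small);

        snprintf(big, sizeof(big), "pv_ops[%d]", idx);
        printf("with 19 bytes: \"%s\"\n", big);
        return 0;
    }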
-
- 16 December 2021 (1 commit)
-
-
Committed by Eric W. Biederman
Recently the kbuild robot reported two new errors:

  >> lib/kunit/kunit-example-test.o: warning: objtool: .text.unlikely: unexpected end of section
  >> arch/x86/kernel/dumpstack.o: warning: objtool: oops_end() falls through to next function show_opcodes()

I don't know why they did not occur in my test setup, but after digging into it I realized I had accidentally dropped a comma in tools/objtool/check.c when I renamed rewind_stack_do_exit to rewind_stack_and_make_dead.

Add that comma back to fix the objtool errors.

Link: https://lkml.kernel.org/r/202112140949.Uq5sFKR1-lkp@intel.com
Fixes: 0e25498f ("exit: Add and use make_task_dead.")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
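
This class of bug is easy to reproduce in a standalone snippet (the table below is illustrative; the real list lives in objtool's check.c): dropping a comma between adjacent string literals silently concatenates them, so the array ends up one entry short and lookups that rely on it miss:

    #include <stdio.h>

    /*
     * One comma missing: "rewind_stack_and_make_dead" and "kthread_exit"
     * merge into a single bogus entry, and the array shrinks by one.
     */
    static const char * const noreturns_buggy[] = {
        "do_exit",
        "rewind_stack_and_make_dead"   /* <-- missing comma */
        "kthread_exit",
    };

    static const char * const noreturns_fixed[] = {
        "do_exit",
        "rewind_stack_and_make_dead",
        "kthread_exit",
    };

    int main(void)
    {
        printf("buggy table has %zu entries\n",
               sizeof(noreturns_buggy) / sizeof(noreturns_buggy[0]));  /* 2 */
        printf("fixed table has %zu entries\n",
               sizeof(noreturns_fixed) / sizeof(noreturns_fixed[0]));  /* 3 */
        printf("buggy[1] = \"%s\"\n", noreturns_buggy[1]);
        return 0;
    }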
-
- 14 December 2021 (4 commits)
-
-
Committed by Eric W. Biederman
Update complete_and_exit to call kthread_exit instead of do_exit. Change the name to reflect this change in functionality. All of the users of complete_and_exit are causing the current kthread to exit, so this change makes it clear what is happening.

Move the implementation of kthread_complete_and_exit from kernel/exit.c to kernel/kthread.c. As this function is kthread specific, it makes the most sense to live with the kthread functions.

There is no functional change.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
Committed by Eric W. Biederman
Update module_put_and_exit to call kthread_exit instead of do_exit. Change the name to reflect this change in functionality. All of the users of module_put_and_exit are causing the current kthread to exit, so this change makes it clear what is happening.

There is no functional change.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
Committed by Eric W. Biederman
The way the per-task_struct exit_code is used by kernel threads is not quite compatible with how it is used by userspace applications. The low byte of the userspace exit_code value encodes the exit signal, while kthreads just use the value as an int holding an ordinary kernel function exit status like -EPERM.

Add kthread_exit to clearly separate the two kinds of uses.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
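
A quick userspace demonstration of the low-byte encoding mentioned above (plain POSIX, not kernel code): the wait status packs the terminating signal into the low bits and the exit status into the next byte, which is exactly the convention a kthread return value such as -EPERM does not follow:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0)
            exit(3);                       /* child: ordinary exit(2) */

        int status;
        waitpid(pid, &status, 0);

        /* Exit status lives in bits 8..15, the signal (none here) in bits 0..6. */
        printf("raw status 0x%04x, exit code %d, signal %d\n",
               status, WEXITSTATUS(status),
               WIFSIGNALED(status) ? WTERMSIG(status) : 0);
        return 0;
    }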
-
Committed by Eric W. Biederman
There are two big uses of do_exit. The first is its designed use: to be the guts of the exit(2) system call. The second use is to terminate a task after something catastrophic has happened, like a NULL pointer dereference in kernel code.

Add a function make_task_dead that is initially exactly the same as do_exit to cover the cases where do_exit is called to handle catastrophic failure. In time this can probably be reduced to just a light wrapper around do_task_dead. For now keep it exactly the same so that there will be no behavioral differences when introducing this new concept.

Replace all of the uses of do_exit that use it for catastrophic task cleanup with make_task_dead to make it clear what the code is doing.

As part of this, rename rewind_stack_do_exit to rewind_stack_and_make_dead.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
- 11 December 2021 (1 commit)
-
-
Committed by Peter Zijlstra
The .fixup section has gone the way of the Dodo, so that test will always be false.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20211110101326.261496792@infradead.org
-
- 10 December 2021 (2 commits)
-
-
Committed by Marco Elver
Teach objtool to turn instrumentation required for memory barrier modeling into nops in noinstr text. The __tsan_func_entry/exit calls are still emitted by compilers even with the __no_sanitize_thread attribute. The memory barrier instrumentation will be inserted explicitly (without compiler help), and thus needs to also explicitly be removed.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
Committed by Marco Elver
Adds KCSAN's memory barrier instrumentation to objtool's uaccess whitelist.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
- 09 December 2021 (1 commit)
-
-
Committed by Peter Zijlstra
Teach objtool to validate the straight-line-speculation constraints:

 - speculation trap after indirect calls
 - speculation trap after RET

Notable: when an instruction is annotated RETPOLINE_SAFE, indicating speculation isn't a problem, also don't care about SLS for that instruction.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20211204134908.023037659@infradead.org
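
A toy model of the check (invented types, far simpler than objtool's validate_branch()): after a RET or an indirect call that is not marked retpoline-safe, the very next decoded instruction must be an INT3 speculation trap:

    #include <stdbool.h>
    #include <stdio.h>

    enum insn_type { INSN_OTHER, INSN_RETURN, INSN_CALL_DYNAMIC, INSN_TRAP };

    struct insn_stub {
        enum insn_type type;
        bool retpoline_safe;   /* annotated: speculation already handled */
    };

    /* Return the number of SLS warnings for a straight-line instruction stream. */
    static int validate_sls(const struct insn_stub *insns, int n)
    {
        int warnings = 0;

        for (int i = 0; i < n; i++) {
            if (insns[i].retpoline_safe)
                continue;
            if (insns[i].type != INSN_RETURN &&
                insns[i].type != INSN_CALL_DYNAMIC)
                continue;
            if (i + 1 >= n || insns[i + 1].type != INSN_TRAP) {
                printf("insn %d: missing int3 after ret/indirect call\n", i);
                warnings++;
            }
        }
        return warnings;
    }

    int main(void)
    {
        struct insn_stub good[] = {
            { INSN_CALL_DYNAMIC, false }, { INSN_TRAP, false },
            { INSN_RETURN,       false }, { INSN_TRAP, false },
        };
        struct insn_stub bad[] = {
            { INSN_RETURN, false }, { INSN_OTHER, false },
        };

        printf("good: %d warnings\n", validate_sls(good, 4));  /* 0 */
        printf("bad:  %d warnings\n", validate_sls(bad, 2));   /* 1 */
        return 0;
    }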
-
- 11 November 2021 (1 commit)
-
-
Committed by Peter Zijlstra
Add a few signature bytes after the static call trampoline and verify those bytes match before patching the trampoline. This avoids patching random other JMPs (such as CFI jump-table entries) instead.

These bytes decode as:

  d:  53       push   %rbx
  e:  43 54    rex.XB push %r12

And happen to spell "SCT".

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20211030074758.GT174703@worktop.programming.kicks-ass.net
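
A minimal sketch of the verification step (the 5-byte JMP size and buffer layout are assumptions made for the sketch, not the kernel's actual static_call code): refuse to patch unless the three bytes after the trampoline's JMP are the 0x53 0x43 0x54 ("SCT") signature:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define TRAMP_INSN_SIZE 5   /* size of the JMP being patched (assumed) */

    /* Only patch when the "SCT" signature follows the trampoline's JMP. */
    static bool tramp_signature_ok(const unsigned char *tramp)
    {
        static const unsigned char sig[3] = { 0x53, 0x43, 0x54 };

        return memcmp(tramp + TRAMP_INSN_SIZE, sig, sizeof(sig)) == 0;
    }

    int main(void)
    {
        unsigned char real_tramp[8] = { 0xe9, 0, 0, 0, 0, 0x53, 0x43, 0x54 };
        unsigned char random_jmp[8] = { 0xe9, 0, 0, 0, 0, 0x90, 0x90, 0x90 };

        printf("trampoline: %s\n", tramp_signature_ok(real_tramp) ? "patch" : "skip");
        printf("other jmp:  %s\n", tramp_signature_ok(random_jmp) ? "patch" : "skip");
        return 0;
    }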
-
- 29 October 2021 (4 commits)
-
-
Committed by Peter Zijlstra
Instead of writing complete alternatives, simply provide a list of all the retpoline thunk calls. Then the kernel is free to do with them as it pleases. Simpler code all-round.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20211026120309.850007165@infradead.org
-
Committed by Peter Zijlstra
Any one instruction can only ever call a single function, therefore insn->mcount_loc_node is superfluous and can use insn->call_node. This shrinks struct instruction, which is by far the most numerous structure objtool creates.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20211026120309.785456706@infradead.org
-
Committed by Peter Zijlstra
Assume ALTERNATIVE()s know what they're doing and do not change, or cause to change, instructions in .altinstr_replacement sections.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20211026120309.722511775@infradead.org
-
Committed by Peter Zijlstra
In order to avoid calling str*cmp() on symbol names over and over, do them all once upfront and store the result.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20211026120309.658539311@infradead.org
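
A minimal sketch of the idea (made-up field names; the prefixes are only examples): classify each symbol once when it is read and consult cached flags afterwards, instead of re-running strcmp()/strncmp() in every loop that inspects a symbol:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct sym_stub {
        const char *name;
        bool static_call_tramp;   /* name starts with "__SCT__" */
        bool retpoline_thunk;     /* name starts with "__x86_indirect_thunk_" */
    };

    /* Run the string comparisons exactly once per symbol. */
    static void classify_symbol(struct sym_stub *sym)
    {
        sym->static_call_tramp = !strncmp(sym->name, "__SCT__", 7);
        sym->retpoline_thunk   = !strncmp(sym->name, "__x86_indirect_thunk_", 21);
    }

    int main(void)
    {
        struct sym_stub syms[] = {
            { "__SCT__preempt_schedule" },
            { "__x86_indirect_thunk_rax" },
            { "ksys_unshare" },
        };

        for (int i = 0; i < 3; i++) {
            classify_symbol(&syms[i]);
            printf("%-26s tramp=%d thunk=%d\n", syms[i].name,
                   syms[i].static_call_tramp, syms[i].retpoline_thunk);
        }
        return 0;
    }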
-
- 06 October 2021 (1 commit)
-
-
Committed by Joe Lawrence
The section structure already contains sh_size, so just remove the extra 'len' member that requires extra mirroring and potential confusion.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20210822225037.54620-3-joe.lawrence@redhat.com
Cc: Andy Lavr <andy.lavr@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
-
- 01 October 2021 (1 commit)
-
-
Committed by Josh Poimboeuf
If a function is ignored, also ignore its hints. This is useful for the case where the function ignore is conditional on frame pointers, e.g. STACK_FRAME_NON_STANDARD_FP().

Link: https://lkml.kernel.org/r/163163048317.489837.10988954983369863209.stgit@devnote2
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 17 September 2021 (1 commit)
-
-
Committed by Peter Zijlstra
Normally objtool will not follow indirect calls; there is no need.

However, this becomes a problem with noinstr validation; if there's an indirect call from noinstr code, we very much need to know it is to another noinstr function. Luckily there aren't many indirect calls in entry code, with the obvious exception of paravirt. As such, noinstr validation didn't work with paravirt kernels.

In order to track pv_ops[] call targets, objtool reads the static pv_ops[] tables as well as direct assignments to the pv_ops[] array, provided the compiler makes them a single instruction like:

  bf87:  48 c7 05 00 00 00 00 00 00 00 00    movq $0x0,0x0(%rip)    bf92 <xen_init_spinlocks+0x5f>
         bf8a: R_X86_64_PC32  pv_ops+0x268

There are, as of yet, no warnings for when this goes wrong :/

Using the functions found with the above means, all pv_ops[] calls are now subject to noinstr validation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210624095149.118815755@infradead.org
-
- 15 September 2021 (3 commits)
-
-
Committed by Peter Zijlstra
Turns out the compilers also generate tail calls to __sanitize_cov*(); make sure to also patch those out in noinstr code.

Fixes: 0f1441b4 ("objtool: Fix noinstr vs KCOV")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20210624095147.818783799@infradead.org
-
Committed by Peter Zijlstra
Andi reported that objtool on vmlinux.o consumes more memory than his system has, leading to horrific performance.

This is in part because we keep a struct instruction for every instruction in the file in-memory. Shrink struct instruction by removing the CFI state (which includes full register state) from it and demand allocating it. Given most instructions don't actually change CFI state, there's lots of repetition there, so add a hash table to find previous CFI instances.

Reduces memory consumption (and runtime) for processing an x86_64-allyesconfig:

  pre:  4:40.84 real,  143.99 user,  44.18 sys,  30624988 mem
  post: 2:14.61 real,  108.58 user,  25.04 sys,  16396184 mem

Suggested-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210624095147.756759107@infradead.org
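
The sharing scheme can be sketched as a tiny hash-consing table (illustrative only; objtool's real cfi_state is much larger and uses its own hashtable helpers): identical CFI states are stored once and every instruction keeps only a pointer:

    #include <stdio.h>
    #include <string.h>

    /* Toy CFI state: just a frame-pointer offset and a stack size. */
    struct cfi_stub { int cfa_offset, stack_size; };

    #define CFI_SLOTS 64
    static struct cfi_stub pool[CFI_SLOTS];
    static int used[CFI_SLOTS];

    static unsigned hash_cfi(const struct cfi_stub *c)
    {
        return (unsigned)(c->cfa_offset * 31 + c->stack_size) % CFI_SLOTS;
    }

    /*
     * Return a shared instance, inserting it on first sight (open addressing;
     * the toy table is never full here, so no overflow handling).
     */
    static struct cfi_stub *intern_cfi(const struct cfi_stub *c)
    {
        unsigned i = hash_cfi(c);

        while (used[i]) {
            if (!memcmp(&pool[i], c, sizeof(*c)))
                return &pool[i];
            i = (i + 1) % CFI_SLOTS;
        }
        pool[i] = *c;
        used[i] = 1;
        return &pool[i];
    }

    int main(void)
    {
        struct cfi_stub a = { 16, 32 }, b = { 16, 32 }, c = { 8, 0 };

        printf("a and b share storage: %d\n", intern_cfi(&a) == intern_cfi(&b)); /* 1 */
        printf("a and c share storage: %d\n", intern_cfi(&a) == intern_cfi(&c)); /* 0 */
        return 0;
    }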
-
Committed by Peter Zijlstra
The asm_cpu_bringup_and_idle() function is required to push the return value on the stack in order to make ORC happy, but the only reason objtool doesn't complain is because of a happy accident.

The thing is that asm_cpu_bringup_and_idle() doesn't return, so validate_branch() never terminates and falls through to the next function, which in the normal case is the hypercall_page. And that, as it happens, is 4095 NOPs and a RET.

Make asm_cpu_bringup_and_idle() terminate on its own, by marking the function it calls as a dead-end. This way we no longer rely on whatever code happens to come after.

Fixes: c3881eb5 ("x86/xen: Make the secondary CPU idle tasks reliable")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20210624095147.693801717@infradead.org
-
- 14 May 2021 (1 commit)
-
-
Committed by Peter Zijlstra
Miroslav figured the code flow in handle_jump_alt() was sub-optimal with that goto. Reflow the code to make it clearer.

Reported-by: Miroslav Benes <mbenes@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/YJ00lgslY+IpA/rL@hirez.programming.kicks-ass.net
-
- 12 May 2021 (2 commits)
-
-
Committed by Peter Zijlstra
Add objtool --stats to count the jump_label sites it encounters.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210506194158.153101906@infradead.org
-
Committed by Peter Zijlstra
When a jump_entry::key has bit1 set, rewrite the instruction to be a NOP. This allows the compiler/assembler to emit JMP (and thus decide on which encoding to use).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210506194158.091028792@infradead.org
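
Roughly, the low bits of the key pointer double as flags, which works because the key is at least 4-byte aligned; the stand-in below is a simplification for illustration, not the kernel's real struct jump_entry layout:

    #include <stdio.h>

    /* Simplified jump_entry: the key field's two low bits carry flags. */
    struct jump_entry_stub {
        unsigned long code;   /* address of the (JMP or NOP) instruction */
        unsigned long key;    /* key address | flag bits */
    };

    #define JUMP_TYPE_TRUE  1UL   /* branch defaults to true */
    #define JUMP_TYPE_NOP   2UL   /* bit1: rewrite the emitted JMP into a NOP */

    static unsigned long jump_key_addr(const struct jump_entry_stub *e)
    {
        return e->key & ~3UL;
    }

    int main(void)
    {
        struct jump_entry_stub e = { 0x1000, 0xffff0000UL | JUMP_TYPE_NOP };

        printf("key at %#lx, rewrite to NOP: %s\n", jump_key_addr(&e),
               (e.key & JUMP_TYPE_NOP) ? "yes" : "no");
        return 0;
    }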
-
- 20 April 2021 (1 commit)
-
-
Committed by Josh Poimboeuf
Objtool detection of asm jump tables would normally just work, except for the fact that asm retpolines use alternatives. Objtool thinks the alternative code path (a jump to the retpoline) is a sibling call.

Don't treat alternative indirect branches as sibling calls when the original instruction has a jump table.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/460cf4dc675d64e1124146562cabd2c05aa322e8.1614182415.git.jpoimboe@redhat.com
-
- 02 April 2021 (3 commits)
-
-
Committed by Peter Zijlstra
Track the reloc of instructions in the new instruction->reloc field to avoid having to look them up again later.

(Technically x86 instructions can have two relocations, but not jumps and calls, for which we're using this.)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151300.195441549@infradead.org
-
Committed by Peter Zijlstra
Provide infrastructure for architectures to rewrite/augment compiler-generated retpoline calls. Similar to what we do for static_call()s, keep track of the instructions that are retpoline calls. Use the same list_head, since a retpoline call cannot also be a static_call.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151300.130805730@infradead.org
-
Committed by Peter Zijlstra
Have elf_add_reloc() create the relocation section implicitly.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.880174448@infradead.org
-