- 20 Sep 2022, 12 commits
-
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit 33092b486686c31432c5354dbb18651e44200668
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=33092b486686c31432c5354dbb18651e44200668

--------------------------------

commit 43d5430a upstream.

Provide infrastructure for architectures to rewrite/augment compiler generated retpoline calls. Similar to what we do for static_call()s, keep track of the instructions that are retpoline calls.

Use the same list_head, since a retpoline call cannot also be a static_call.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151300.130805730@infradead.org
[bwh: Backported to 5.10: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit b37c439250118f6fecfd6436d8b218a452ab6fa8
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=b37c439250118f6fecfd6436d8b218a452ab6fa8

--------------------------------

commit d0c5c4cc upstream.

Have elf_add_reloc() create the relocation section implicitly.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.880174448@infradead.org
[bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit fcdb7926d399910ee847856b28d7bde5437f77f0
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=fcdb7926d399910ee847856b28d7bde5437f77f0

--------------------------------

commit ef47cc01 upstream.

We have 4 instances of adding a relocation. Create a common helper to avoid growing even more.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.817438847@infradead.org
[bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
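Editor's note: as a rough illustration of the consolidation idea (not the actual objtool patch; every name and field below is an assumption), a single "add relocation" helper replacing several open-coded allocate/fill/append sequences could look like this:

    #include <stdlib.h>

    /* Illustrative only: one helper instead of four open-coded copies. */
    struct reloc {
            struct reloc *next;
            unsigned long offset;   /* offset within the section */
            unsigned int type;      /* e.g. R_X86_64_PC32 */
            const char *sym;        /* target symbol name */
            long addend;
    };

    static int add_reloc(struct reloc **head, unsigned long offset,
                         unsigned int type, const char *sym, long addend)
    {
            struct reloc *r = calloc(1, sizeof(*r));

            if (!r)
                    return -1;

            r->offset = offset;
            r->type   = type;
            r->sym    = sym;
            r->addend = addend;
            r->next   = *head;      /* prepend to the pending-reloc list */
            *head     = r;
            return 0;
    }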
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit c9049cf4804ab6f2b73d4cc244c3e2f6e0a9f10e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=c9049cf4804ab6f2b73d4cc244c3e2f6e0a9f10e

--------------------------------

commit 3a647607 upstream.

Instead of manually calling elf_rebuild_reloc_section() on sections we've called elf_add_reloc() on, have elf_write() DTRT.

This makes it easier to add random relocations in places without carefully tracking when we're done and need to flush what section.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.754213408@infradead.org
[bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit d42fa5bf19fc04833f3c27e9555051c428422248
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=d42fa5bf19fc04833f3c27e9555051c428422248

--------------------------------

commit 530b4ddd upstream.

The __x86_indirect_ naming is obviously not generic. Shorten to allow matching some additional magic names later.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.630296706@infradead.org
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit 6e95f8caffb3f10e48b100c47e753ca83042fe6f
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=6e95f8caffb3f10e48b100c47e753ca83042fe6f

--------------------------------

commit bcb1b6ff upstream.

Just like JMP handling, convert a direct CALL to a retpoline thunk into a retpoline safe indirect CALL.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.567568238@infradead.org
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-v5.10.133
commit 28ca351296742a9e7506a548acaf7ea3bc9feef0
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=28ca351296742a9e7506a548acaf7ea3bc9feef0

--------------------------------

commit 11925185 upstream.

Due to:

  c9c324dc ("objtool: Support stack layout changes in alternatives")

it is now possible to simplify the retpolines.

Currently our retpolines consist of 2 symbols:

 - __x86_indirect_thunk_\reg: the compiler target
 - __x86_retpoline_\reg: the actual retpoline.

Both are consecutive in code and aligned such that for any one register they both live in the same cacheline:

  0000000000000000 <__x86_indirect_thunk_rax>:
     0:   ff e0                   jmpq   *%rax
     2:   90                      nop
     3:   90                      nop
     4:   90                      nop

  0000000000000005 <__x86_retpoline_rax>:
     5:   e8 07 00 00 00          callq  11 <__x86_retpoline_rax+0xc>
     a:   f3 90                   pause
     c:   0f ae e8                lfence
     f:   eb f9                   jmp    a <__x86_retpoline_rax+0x5>
    11:   48 89 04 24             mov    %rax,(%rsp)
    15:   c3                      retq
    16:   66 2e 0f 1f 84 00 00 00 00 00   nopw   %cs:0x0(%rax,%rax,1)

The thunk is an alternative_2, where one option is a JMP to the retpoline. This was done so that objtool didn't need to deal with alternatives with stack ops. But that problem has been solved, so now it is possible to fold the entire retpoline into the alternative to simplify and consolidate unused bytes:

  0000000000000000 <__x86_indirect_thunk_rax>:
     0:   ff e0                   jmpq   *%rax
     2:   90                      nop
     3:   90                      nop
     4:   90                      nop
     5:   90                      nop
     6:   90                      nop
     7:   90                      nop
     8:   90                      nop
     9:   90                      nop
     a:   90                      nop
     b:   90                      nop
     c:   90                      nop
     d:   90                      nop
     e:   90                      nop
     f:   90                      nop
    10:   90                      nop
    11:   66 66 2e 0f 1f 84 00 00 00 00 00   data16 nopw %cs:0x0(%rax,%rax,1)
    1c:   0f 1f 40 00             nopl   0x0(%rax)

Notice that since the longest alternative sequence is now:

     0:   e8 07 00 00 00          callq  c <.altinstr_replacement+0xc>
     5:   f3 90                   pause
     7:   0f ae e8                lfence
     a:   eb f9                   jmp    5 <.altinstr_replacement+0x5>
     c:   48 89 04 24             mov    %rax,(%rsp)
    10:   c3                      retq

17 bytes, we have 15 bytes NOP at the end of our 32 byte slot. (IOW, if we can shrink the retpoline by 1 byte we can pack it more densely).

[ bp: Massage commit message. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210326151259.506071949@infradead.org
[bwh: Backported to 5.10:
 - Use X86_FEATURE_RETPOLINE_LFENCE flag instead of X86_FEATURE_RETPOLINE_AMD,
   since the later renaming of this flag has already been applied
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-v5.10.133
commit 3116dee2704bfb3713efa3637a9e65369d019cc4
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=3116dee2704bfb3713efa3637a9e65369d019cc4

--------------------------------

commit b735bd3e upstream.

The ORC metadata generated for UNWIND_HINT_FUNC isn't actually very func-like. With certain usages it can cause stack state mismatches because it doesn't set the return address (CFI_RA).

Also, users of UNWIND_HINT_RET_OFFSET no longer need to set a custom return stack offset. Instead they just need to specify a func-like situation, so the current ret_offset code is hacky for no good reason.

Solve both problems by simplifying the RET_OFFSET handling and converting it into a more useful UNWIND_HINT_FUNC.

If we end up needing the old 'ret_offset' functionality again in the future, we should be able to support it pretty easily with the addition of a custom 'sp_offset' in UNWIND_HINT_FUNC.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/db9d1f5d79dddfbb3725ef6d8ec3477ad199948d.1611263462.git.jpoimboe@redhat.com
[bwh: Backported to 5.10:
 - Don't use bswap_if_needed() since we don't have any of the other fixes
   for mixed-endian cross-compilation
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-v5.10.133
commit 53e89bc78e4351924a1a1474683d47a00c2633f2
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=53e89bc78e4351924a1a1474683d47a00c2633f2

--------------------------------

commit ecf11ba4 upstream.

There's an inconsistency in how sibling calls are detected in non-function asm code, depending on the scope of the object. If the target code is external to the object, objtool considers it a sibling call. If the target code is internal but not a function, objtool *doesn't* consider it a sibling call.

This can cause some inconsistencies between per-object and vmlinux.o validation.

Instead, assume only ELF functions can do sibling calls. This generally matches existing reality, and makes sibling call validation consistent between vmlinux.o and per-object.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/0e9ab6f3628cc7bf3bde7aa6762d54d7df19ad78.1611263461.git.jpoimboe@redhat.com
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-v5.10.133
commit 3e674f26528931c6a0f1bc7aa29445b45fdfd62d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=3e674f26528931c6a0f1bc7aa29445b45fdfd62d

--------------------------------

commit 31a7424b upstream.

Objtool converts direct retpoline jumps to type INSN_JUMP_DYNAMIC, since that's what they are semantically.

That conversion doesn't work in vmlinux.o validation because the indirect thunk function is present in the object, so the intra-object jump check succeeds before the retpoline jump check gets a chance.

Rearrange the checks: check for a retpoline jump before checking for an intra-object jump.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/4302893513770dde68ddc22a9d6a2a04aca491dd.1611263461.git.jpoimboe@redhat.com
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-v5.10.133
commit 917a4f6348d94d9a3c20d78c800dd4715825362d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=917a4f6348d94d9a3c20d78c800dd4715825362d

--------------------------------

commit c9c324dc upstream.

The ORC unwinder showed a warning [1] which revealed the stack layout didn't match what was expected. The problem was that paravirt patching had replaced "CALL *pv_ops.irq.save_fl" with "PUSHF;POP". That changed the stack layout between the PUSHF and the POP, so unwinding from an interrupt which occurred between those two instructions would fail.

Part of the agreed upon solution was to rework the custom paravirt patching code to use alternatives instead, since objtool already knows how to read alternatives (and converging runtime patching infrastructure is always a good thing anyway). But the main problem still remains, which is that runtime patching can change the stack layout.

Making stack layout changes in alternatives was disallowed with commit 7117f16b ("objtool: Fix ORC vs alternatives"), but now that paravirt is going to be doing it, it needs to be supported.

One way to do so would be to modify the ORC table when the code gets patched. But ORC is simple -- a good thing! -- and it's best to leave it alone.

Instead, support stack layout changes by "flattening" all possible stack states (CFI) from parallel alternative code streams into a single set of linear states. The only necessary limitation is that CFI conflicts are disallowed at all possible instruction boundaries.

For example, this scenario is allowed:

          Alt1                    Alt2                    Alt3
   0x00   CALL *pv_ops.save_fl    CALL xen_save_fl        PUSHF
   0x01                                                   POP %RAX
   0x02                                                   NOP
   ...
   0x05                                                   NOP
   ...
   0x07   <insn>

The unwind information for offset-0x00 is identical for all 3 alternatives. Similarly offset-0x05 and higher also are identical (and the same as 0x00). However offset-0x01 has deviating CFI, but that is only relevant for Alt3, neither of the other alternative instruction streams will ever hit that offset.

This scenario is NOT allowed:

          Alt1                    Alt2
   0x00   CALL *pv_ops.save_fl    PUSHF
   0x01                           NOP6
   ...
   0x07   NOP                     POP %RAX

The problem here is that offset-0x7, which is an instruction boundary in both possible instruction patch streams, has two conflicting stack layouts.

[ The above examples were stolen from Peter Zijlstra. ]

The new flattened CFI array is used both for the detection of conflicts (like the second example above) and the generation of linear ORC entries.

BTW, another benefit of these changes is that, thanks to some related cleanups (new fake nops and alt_group struct) objtool can finally be rid of fake jumps, which were a constant source of headaches.

[1] https://lkml.kernel.org/r/20201111170536.arx2zbn4ngvjoov7@treble

Cc: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-v5.10.133
commit e9197d768f976199a2356842400df947b4007377
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5PTAS
CVE: CVE-2022-29900, CVE-2022-23816, CVE-2022-29901
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=e9197d768f976199a2356842400df947b4007377

--------------------------------

commit b23cc71c upstream.

Create a new struct associated with each group of alternatives instructions. This will help with the removal of fake jumps, and more importantly with adding support for stack layout changes in alternatives.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lin Yujun <linyujun809@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 06 Dec 2021, 3 commits
-
-
Committed by Peter Zijlstra

stable inclusion from stable-5.10.80
commit f0bc12b8482684fe4d0e48b367ca4fcb8580e81a
bugzilla: 185821 https://gitee.com/openeuler/kernel/issues/I4L7CG
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=f0bc12b8482684fe4d0e48b367ca4fcb8580e81a

--------------------------------

[ Upstream commit a958c4fe ]

Currently, objtool generates tail call entries in add_jump_destination() but waits until validate_branch() to generate the regular call entries. Move these to add_call_destination() for consistency.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151259.691529901@infradead.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

stable inclusion from stable-5.10.80
commit b36ab509e1817e60f73ce297494f0de1b7ed2c08
bugzilla: 185821 https://gitee.com/openeuler/kernel/issues/I4L7CG
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=b36ab509e1817e60f73ce297494f0de1b7ed2c08

--------------------------------

[ Upstream commit 9af9dcf1 ]

The asm_cpu_bringup_and_idle() function is required to push the return value on the stack in order to make ORC happy, but the only reason objtool doesn't complain is because of a happy accident.

The thing is that asm_cpu_bringup_and_idle() doesn't return, so validate_branch() never terminates and falls through to the next function, which in the normal case is the hypercall_page. And that, as it happens, is 4095 NOPs and a RET.

Make asm_cpu_bringup_and_idle() terminate on its own, by making the function it calls a dead-end. This way we no longer rely on what code happens to come after.

Fixes: c3881eb5 ("x86/xen: Make the secondary CPU idle tasks reliable")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20210624095147.693801717@infradead.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.80
commit abf37e855e53fe5b9bd4aeda047884e6fff9e32e
bugzilla: 185821 https://gitee.com/openeuler/kernel/issues/I4L7CG
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=abf37e855e53fe5b9bd4aeda047884e6fff9e32e

--------------------------------

[ Upstream commit c26acfbb ]

xen_start_kernel() doesn't return. Annotate it as such so objtool can follow the code flow.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/930deafa89256c60b180442df59a1bbae48f30ab.1611263462.git.jpoimboe@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
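Editor's note: for readers unfamiliar with how objtool learns about dead-end functions, it keeps a hard-coded list of known noreturn symbols in tools/objtool/check.c. The fragment below is an abbreviated, assumed rendering of that list, not the exact hunk from this patch:

    /* Sketch of objtool's dead-end list (tools/objtool/check.c); entries
     * shown here are illustrative and abbreviated. */
    static const char * const global_noreturns[] = {
            "__stack_chk_fail",
            "panic",
            "do_exit",
            "xen_start_kernel",   /* objtool stops following callers past this */
    };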
-
- 19 Apr 2021, 1 commit
-
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.27
commit a63068e93917927d443e32609dde9298bcd14833
bugzilla: 51493

--------------------------------

[ Upstream commit 73f44fe1 ]

When exporting static_call_key; with EXPORT_STATIC_CALL*(), the module can use static_call_update() to change the function called. This is not desirable in general.

Not exporting static_call_key however also disallows usage of static_call(), since objtool needs the key to construct the static_call_site.

Solve this by allowing objtool to create the static_call_site using the trampoline address when it builds a module and cannot find the static_call_key symbol. The module loader will then try and map the trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
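Editor's note: the usage pattern this enables looks roughly like the sketch below. It is only an illustration, assuming the EXPORT_STATIC_CALL_TRAMP() variant introduced by this change; the hook names are made up:

    #include <linux/static_call.h>

    static int default_hook(int x)
    {
            return x;
    }

    DEFINE_STATIC_CALL(my_hook, default_hook);
    /* Modules may call the hook, but cannot static_call_update() it,
     * because only the trampoline (not the key) is exported. */
    EXPORT_STATIC_CALL_TRAMP(my_hook);

    /* In a module: objtool records this call site against the trampoline;
     * the module loader maps the trampoline back to the key at load time. */
    int module_caller(int x)
    {
            return static_call(my_hook)(x);
    }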
-
- 09 Apr 2021, 3 commits
-
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.20
commit c41fc75addf12239b9baa7ad3b4e5ce9ff49d809
bugzilla: 50608

--------------------------------

[ Upstream commit 34ca59e1 ]

With my version of GCC 9.3.1 the ".cold" subfunctions no longer have a numbered suffix, so the trailing period is no longer there.

Presumably this doesn't yet trigger a user-visible bug since most of the subfunction detection logic is duplicated. I only found it when testing vmlinux.o validation.

Fixes: 54262aa2 ("objtool: Fix sibling call detection")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/ca0b5a57f08a2fbb48538dd915cc253b5edabb40.1611263461.git.jpoimboe@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.20
commit 7631376b2d8e72e8df55d541b7f35bd27ffed4a1
bugzilla: 50608

--------------------------------

[ Upstream commit 1f9a1b74 ]

The JMP_NOSPEC macro branches to __x86_retpoline_*() rather than the __x86_indirect_thunk_*() wrappers used by C code. Detect jumps to __x86_retpoline_*() as retpoline dynamic jumps.

Presumably this doesn't trigger a user-visible bug. I only found it when testing vmlinux.o validation.

Fixes: 39b73533 ("objtool: Detect jumps to retpoline thunks")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/31f5833e2e4f01e3d755889ac77e3661e906c09f.1611263461.git.jpoimboe@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.20
commit 9e06f36658dfc50b8500aa3b2a680f4a8f11bbc7
bugzilla: 50608

--------------------------------

[ Upstream commit 6f567c93 ]

Actually return an error (and display a backtrace, if requested) for directional bit warnings.

Fixes: 2f0f9e9a ("objtool: Add Direction Flag validation")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/dc70f2adbc72f09526f7cab5b6feb8bf7f6c5ad4.1611263461.git.jpoimboe@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 09 Mar 2021, 1 commit
-
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.17
commit 2b02985bf83e6da9d9165c5f2165af1b97d76edf
bugzilla: 48169

--------------------------------

commit 44f6a7c0 upstream.

The Clang assembler likes to strip section symbols, which means objtool can't reference some text code by its section. This confuses objtool greatly, causing it to seg fault.

The fix is similar to what was done before, for ORC reloc generation:

  e81e0724 ("objtool: Support Clang non-section symbols in ORC generation")

Factor out that code into a common helper and use it for static call reloc generation as well.

Reported-by: Arnd Bergmann <arnd@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://github.com/ClangBuiltLinux/linux/issues/1207
Link: https://lkml.kernel.org/r/ba6b6c0f0dd5acbba66e403955a967d9fdd1726a.1607983452.git.jpoimboe@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
-
- 19 Feb 2021, 1 commit
-
-
Committed by Josh Poimboeuf

stable inclusion from stable-5.10.14
commit 9c8bb3eac07de8834d012e12ff0185a9dcc01331
bugzilla: 48051

--------------------------------

[ Upstream commit 655cf865 ]

This is basically a revert of commit 644592d3 ("objtool: Fail the kernel build on fatal errors").

That change turned out to be more trouble than it's worth. Failing the build is an extreme measure which sometimes gets too much attention and blocks CI build testing.

These fatal-type warnings aren't yet as rare as we'd hope, due to the ever-increasing matrix of supported toolchains/plugins and their fast-changing nature as of late.

Also, there are more people (and bots) looking for objtool warnings than ever before, so even non-fatal warnings aren't likely to be ignored for long.

Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
-
- 06 Oct 2020, 2 commits
-
-
Committed by Dan Williams

The motivations to go rework memcpy_mcsafe() are that the benefit of doing slow and careful copies is obviated on newer CPUs, and that the current opt-in list of CPUs to instrument recovery is broken relative to those CPUs. There is no need to keep an opt-in list up to date on an ongoing basis if pmem/dax operations are instrumented for recovery by default. With recovery enabled by default the old "mcsafe_key" opt-in to careful copying can be made a "fragile" opt-out. Where the "fragile" list takes steps to not consume poison across cachelines.

The discussion with Linus made clear that the current "_mcsafe" suffix was imprecise to a fault. The operations that are needed by pmem/dax are to copy from a source address that might throw #MC to a destination that may write-fault, if it is a user page. So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate the separate precautions taken on source and destination. copy_mc_to_kernel() is introduced as a non-SMAP version that does not expect write-faults on the destination, but is still prepared to abort with an error code upon taking #MC.

The original copy_mc_fragile() implementation had negative performance implications since it did not use the fast-string instruction sequence to perform copies. For this reason copy_mc_to_kernel() fell back to plain memcpy() to preserve performance on platforms that did not indicate the capability to recover from machine check exceptions. However, that capability detection was not architectural and now that some platforms can recover from fast-string consumption of memory errors the memcpy() fallback now causes these more capable platforms to fail.

Introduce copy_mc_enhanced_fast_string() as the fast default implementation of copy_mc_to_kernel() and finalize the transition of copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'. With this in place, copy_mc_to_kernel() is fast and recovery-ready by default regardless of hardware capability.

Thanks to Vivek for identifying that copy_user_generic() is not suitable as the copy_mc_to_user() backend since the #MC handler explicitly checks ex_has_fault_handler(). Thanks to the 0day robot for catching a performance bug in the x86/copy_mc_to_user implementation.

[ bp: Add the "why" for this change from the 0/2th message, massage. ]

Fixes: 92b0729c ("x86/mm, x86/mce: Add memcpy_mcsafe()")
Reported-by: Erwin Tsaur <erwin.tsaur@intel.com>
Reported-by: 0day robot <lkp@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Erwin Tsaur <erwin.tsaur@intel.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com
-
Committed by Dan Williams

In reaction to a proposal to introduce a memcpy_mcsafe_fast() implementation Linus points out that memcpy_mcsafe() is poorly named relative to communicating the scope of the interface. Specifically what addresses are valid to pass as source, destination, and what faults / exceptions are handled.

Of particular concern is that even though x86 might be able to handle the semantics of copy_mc_to_user() with its common copy_user_generic() implementation other archs likely need / want an explicit path for this case:

  On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
  >
  > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
  > >
  > > However now I see that copy_user_generic() works for the wrong reason.
  > > It works because the exception on the source address due to poison
  > > looks no different than a write fault on the user address to the
  > > caller, it's still just a short copy. So it makes copy_to_user() work
  > > for the wrong reason relative to the name.
  >
  > Right.
  >
  > And it won't work that way on other architectures. On x86, we have a
  > generic function that can take faults on either side, and we use it
  > for both cases (and for the "in_user" case too), but that's an
  > artifact of the architecture oddity.
  >
  > In fact, it's probably wrong even on x86 - because it can hide bugs -
  > but writing those things is painful enough that everybody prefers
  > having just one function.

Replace a single top-level memcpy_mcsafe() with either copy_mc_to_user(), or copy_mc_to_kernel().

Introduce an x86 copy_mc_fragile() name as the rename for the low-level x86 implementation formerly named memcpy_mcsafe(). It is used as the slow / careful backend that is supplanted by a fast copy_mc_generic() in a follow-on patch.

One side-effect of this reorganization is that separating copy_mc_64.S to its own file means that perf no longer needs to track dependencies for its memcpy_64.S benchmarks.

[ bp: Massage a bit. ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: <stable@vger.kernel.org>
Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
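Editor's note: a minimal sketch of how the two renamed interfaces are typically used in a pmem/dax-style read path. The surrounding function and variable names are made up, and the exact prototypes (argument types, headers) have shifted between kernel versions; both helpers return the number of bytes left uncopied:

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    /* Kernel destination: no write-fault handling needed on the destination,
     * but the copy still aborts cleanly if the source takes a #MC. */
    static int read_to_kernel_buf(void *dst, const void *pmem_src, size_t len)
    {
            unsigned long rem = copy_mc_to_kernel(dst, pmem_src, len);

            return rem ? -EIO : 0;
    }

    /* User destination: handles #MC on the source and write faults on the
     * destination (a user page). */
    static ssize_t read_to_user_buf(void *ubuf /* user pointer */,
                                    const void *pmem_src, size_t len)
    {
            unsigned long rem = copy_mc_to_user(ubuf, pmem_src, len);

            return len - rem;       /* bytes actually copied */
    }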
-
- 02 Oct 2020, 1 commit
-
-
Committed by Jann Horn

Building linux-next with JUMP_LABEL=n and KASAN=y, I got this objtool warning:

  arch/x86/lib/copy_mc.o: warning: objtool: copy_mc_to_user()+0x22: call to __kasan_check_read() with UACCESS enabled

What happens here is that copy_mc_to_user() branches on a static key in a UACCESS region:

        __uaccess_begin();
        if (static_branch_unlikely(&copy_mc_fragile_key))
                ret = copy_mc_fragile(to, from, len);
        ret = copy_mc_generic(to, from, len);
        __uaccess_end();

and the !CONFIG_JUMP_LABEL version of static_branch_unlikely() uses static_key_enabled(), which uses static_key_count(), which uses atomic_read(), which calls instrument_atomic_read(), which uses kasan_check_read(), which is __kasan_check_read().

Let's permit these KASAN helpers in UACCESS regions - static keys should probably work under UACCESS, I think.

PeterZ adds:

  It's not a matter of permitting, it's a matter of being safe and correct. In this case it is, because it's a thin wrapper around check_memory_region() which was already marked safe.

  check_memory_region() is correct because the only thing it ends up calling is kasan_report() and that is also marked safe because that is annotated with user_access_save/restore() before it does anything else.

  On top of that, all of KASAN is noinstr, so nothing in here will end up in tracing and/or call schedule() before the user_access_save().

Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
- 21 Sep 2020, 2 commits
-
-
Committed by Ilie Halip

With CONFIG_UBSAN_TRAP enabled, the compiler may insert a trap instruction after a call to a noreturn function. In this case, objtool warns that the UD2 instruction is unreachable.

This is a behavior seen with Clang, from the oldest version capable of building the mainline x86_64 kernel (9.0), to the latest experimental version (12.0).

Objtool silences similar warnings (trap after dead end instructions), so expand that check to include dead end functions.

Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Rong Chen <rong.a.chen@intel.com>
Cc: Marco Elver <elver@google.com>
Cc: Philip Li <philip.li@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: kasan-dev@googlegroups.com
Cc: x86@kernel.org
Cc: clang-built-linux@googlegroups.com
BugLink: https://github.com/ClangBuiltLinux/linux/issues/1148
Link: https://lore.kernel.org/lkml/CAKwvOdmptEpi8fiOyWUo=AiZJiX+Z+VHJOM2buLPrWsMTwLnyw@mail.gmail.com
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Ilie Halip <ilie.halip@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
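Editor's note: the pattern that trips the warning can be illustrated with a small, made-up C example. With CONFIG_UBSAN_TRAP=y, Clang may plant a ud2 trap right after the call to the noreturn helper, and that trap can never be reached; my_abort() below is hypothetical:

    #include <linux/compiler.h>

    void __noreturn my_abort(void);     /* made-up noreturn helper */

    static int bounded_index(int idx, int max)
    {
            if (idx >= max)
                    my_abort();         /* never returns ... */
            /* ... so a compiler-inserted ud2 here would be unreachable,
             * which is what objtool used to warn about. */
            return idx;
    }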
-
Committed by Julien Thierry

Relocation for a call destination could point to a symbol that has type STT_NOTYPE. Lookup such a symbol when no function is available.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
- 19 Sep 2020, 3 commits
-
-
Committed by Josh Poimboeuf

When a function is annotated with STACK_FRAME_NON_STANDARD, objtool doesn't validate its code paths. It also skips sibling call detection within the function.

But sibling call detection is actually needed for the case where the ignored function doesn't have any return instructions. Otherwise objtool naively marks the function as implicit static noreturn, which affects the reachability of its callers, resulting in "unreachable instruction" warnings.

Fix it by just enabling sibling call detection for ignored functions. The 'insn->ignore' check in add_jump_destinations() is no longer needed after e6da9567 ("objtool: Don't use ignore flag for fake jumps").

Fixes the following warning:

  arch/x86/kvm/vmx/vmx.o: warning: objtool: vmx_handle_exit_irqoff()+0x142: unreachable instruction

which triggers on an allmodconfig with CONFIG_GCOV_KERNEL unset.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/5b1e2536cdbaa5246b60d7791b76130a74082c62.1599751464.git.jpoimboe@redhat.com
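Editor's note: for context, an "ignored" function is one opted out of objtool validation like the sketch below. The function itself is made up; STACK_FRAME_NON_STANDARD() is the real annotation (provided by linux/objtool.h on 5.10, linux/frame.h on older kernels):

    #include <linux/objtool.h>

    /* objtool skips validating this body, but after this fix it still
     * performs sibling-call detection inside it. */
    static int weird_asm_heavy_func(void *ctx)
    {
            /* ... hand-written asm or other non-standard stack usage ... */
            return 0;
    }
    STACK_FRAME_NON_STANDARD(weird_asm_heavy_func);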
-
Committed by Julien Thierry

It is possible for alternative code to unconditionally jump out of the alternative region. In such a case, if a fake jump is added at the end of the alternative instructions, the fake jump will never be reached. Since the fake jump is just a means to make sure code validation does not go beyond the set of alternatives, reaching it is not a requirement.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
Committed by Julien Thierry

save_reg already checks that the register being saved does not already have a saved state. Remove redundant checks before processing a register storing operation.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
- 10 Sep 2020, 4 commits
-
-
Committed by Julien Thierry

The set of registers that can be included in an unwind hint and their encoding will depend on the architecture. Have arch specific code to decode that register.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
Committed by Julien Thierry

Unwind hints are useful to provide objtool with information about stack states in non-standard functions/code.

While the type of information being provided might be very arch specific, the mechanism to provide the information can be useful for other architectures. Move the relevant unwind hint definitions where all architectures can see them.

[ jpoimboe: REGS_IRET -> REGS_PARTIAL ]

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
Committed by Raphael Gault

The way to identify jump tables and retrieve all the data necessary to handle the different execution branches is not the same on all architectures. In order to be able to add other architecture support, define an arch-dependent function to process jump-tables.

Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Raphael Gault <raphael.gault@arm.com>
[J.T.: Move arm64 bits out of this patch,
 Have only one function to find the start of the jump table,
 for now assume that the jump table format will be the same as x86]
Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
Committed by Julien Thierry

As pointed out by the comment in handle_group_alt(), support of relocation for instructions in an alternative group depends on whether arch specific kernel code handles it.

So, let objtool arch specific code decide whether a relocation for the alternative section should be accepted.

Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
- 02 Sep 2020, 2 commits
-
-
Committed by Julien Thierry

Now that the objtool_file can be obtained outside of the check function, the orc generation builtin no longer requires check to explicitly call its orc related functions.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
Committed by Julien Thierry

Structure objtool_file can be used by different subcommands. In fact it already is, by check and orc.

Provide a function that initializes objtool_file, which builtins can call instead of relying on check to do the correct setup for them and explicitly hand the objtool_file over to them.

Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
- 01 Sep 2020, 2 commits
-
-
Committed by Peter Zijlstra

GCC can turn our static_call(name)(args...) into a tail call, in which case we get a JMP.d32 into the trampoline (which then does a further tail-call).

Teach objtool to recognise and mark these in .static_call_sites and adjust the code patching to deal with this.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20200818135805.101186767@infradead.org
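Editor's note: the source pattern that can produce such a tail call looks roughly like the sketch below. It is illustrative only; the hook names are made up, while DEFINE_STATIC_CALL() and static_call() are the real kernel API:

    #include <linux/static_call.h>

    static long default_handler(long arg)
    {
            return arg;
    }

    DEFINE_STATIC_CALL(evt_handler, default_handler);

    long handle_event(long arg)
    {
            /* In tail position, GCC may emit this as a direct jmp (JMP.d32)
             * to the evt_handler trampoline instead of call+ret, which is
             * the case objtool now records in .static_call_sites. */
            return static_call(evt_handler)(arg);
    }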
-
Committed by Josh Poimboeuf

Add the inline static call implementation for x86-64. The generated code is identical to the out-of-line case, except we move the trampoline into its own section.

Objtool uses the trampoline naming convention to detect all the call sites. It then annotates those call sites in the .static_call_sites section.

During boot (and module init), the call sites are patched to call directly into the destination function. The temporary trampoline is then no longer used.

[peterz: merged trampolines, put trampoline in section]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20200818135804.864271425@infradead.org
-
- 25 Aug 2020, 2 commits
-
-
Committed by Marco Elver

Adds the new __tsan_read_write compound instrumentation to objtool's uaccess whitelist.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
Committed by Marco Elver

Adds the new TSAN functions that may be emitted for atomic builtins to objtool's uaccess whitelist.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
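Editor's note: the whitelist in question is the uaccess_safe_builtin table in tools/objtool/check.c. The fragment below is an abbreviated, assumed rendering of what such entries look like, not the exact hunk from this patch:

    /* Sketch of objtool's uaccess-safe whitelist (tools/objtool/check.c);
     * the real table is much longer and the exact entries differ. */
    static const char *uaccess_safe_builtin[] = {
            /* KASAN */
            "__kasan_check_read",
            "__kasan_check_write",
            /* KCSAN/TSAN runtime calls emitted for atomic builtins */
            "__tsan_atomic32_load",
            "__tsan_atomic32_store",
            "__tsan_atomic32_fetch_add",
            "__tsan_atomic_thread_fence",
            NULL
    };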
-
- 25 Jun 2020, 1 commit
-
-
Committed by Peter Zijlstra

Avoids issuing C-file warnings for vmlinux.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200618144801.701257527@infradead.org
-