- 12 June 2020, 1 commit
-
-
Committed by Marco Elver
Some compilers incorrectly inline small __no_kcsan functions, which then results in instrumenting the accesses. For this reason, the 'noinline' attribute was added to __no_kcsan_or_inline. All known versions of GCC are affected by this. Supported versions of Clang are unaffected, and never inline a no_sanitize function.

However, the attribute 'noinline' in __no_kcsan_or_inline causes unexpected code generation in functions that are __no_kcsan and call a __no_kcsan_or_inline function. In certain situations it is expected that the __no_kcsan_or_inline function is actually inlined by the __no_kcsan function, and *no* calls are emitted. By removing the 'noinline' attribute, give the compiler the ability to inline and generate the expected code in __no_kcsan functions.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/CANpmjNNOpJk0tprXKB_deiNAv_UmmORf1-2uajLhnLWQQ1hvoA@mail.gmail.com
Link: https://lkml.kernel.org/r/20200521142047.169334-6-elver@google.com
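For orientation, a minimal sketch of how such attribute macros fit together; the names mirror the kernel's, but the exact definitions here are illustrative rather than the verbatim header:

  /* Illustrative sketch, not the verbatim kernel header. */
  #ifdef __SANITIZE_THREAD__
  # define __no_kcsan __attribute__((no_sanitize("thread")))
  /* Note: no 'noinline' here anymore, so a __no_kcsan caller may
   * legitimately inline this helper and emit no call at all. */
  # define __no_kcsan_or_inline __no_kcsan __attribute__((unused))
  #else
  # define __no_kcsan
  # define __no_kcsan_or_inline inline
  #endif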
-
- 05 June 2020, 2 commits
-
-
Committed by Will Deacon
READ_ONCE_NOCHECK() unconditionally performs a sizeof(long)-sized access, so enforce that the size of the pointed-to object that we are loading from is the same size as 'long'.

Reported-by: Marco Elver <elver@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
READ_ONCE() permits 64-bit accesses on 32-bit architectures, since this crops up in a few places and is generally harmless because either the upper bits are always zero (e.g. for a virtual address or 32-bit time_t) or the architecture provides 64-bit atomicity anyway. Update the corresponding comment above compiletime_assert_rwonce_type(), which incorrectly states that 32-bit x86 provides 64-bit atomicity, and instead reference 32-bit Armv7 with LPAE.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
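The resulting check looks roughly like this (a sketch based on the description above; the assert allows native-word scalars plus 64-bit accesses even on 32-bit builds):

  #define compiletime_assert_rwonce_type(t)                                     \
          compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
                             "Unsupported access size for {READ,WRITE}_ONCE().")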
-
- 19 May 2020, 1 commit
-
-
Committed by Thomas Gleixner
Some code paths, especially the low level entry code, must be protected against instrumentation for various reasons:

 - Low level entry code can be a fragile beast, especially on x86.

 - With NO_HZ_FULL, RCU state needs to be established before using it.

Having a dedicated section for such code allows validating with tooling that no unsafe functions are invoked.

Add the .noinstr.text section and the noinstr attribute to mark functions. noinstr implies notrace. Kprobes will gain a section check later.

Also provide a set of markers, instrumentation_begin()/end(), as sketched below. These are used to mark code inside a noinstr function which calls into a regular instrumentable text section as safe. The instrumentation markers are only active when CONFIG_DEBUG_ENTRY is enabled, as the end marker emits a NOP to prevent the compiler from merging the annotation points. This means the objtool verification requires a kernel compiled with this option.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200505134100.075416272@linutronix.de
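Schematically, the annotations take roughly this shape. This is a simplified sketch: the real markers also emit objtool annotations, and the attribute spelling differs per compiler.

  /* Simplified sketch of the annotations described above. */
  #define noinstr                                                         \
          noinline notrace __attribute__((__section__(".noinstr.text")))

  #ifdef CONFIG_DEBUG_ENTRY
  /* The end marker emits a NOP so the compiler cannot merge the
   * begin/end annotation points into one location. */
  # define instrumentation_begin() asm volatile("" : : : "memory")
  # define instrumentation_end()   asm volatile("nop" : : : "memory")
  #else
  # define instrumentation_begin() do { } while (0)
  # define instrumentation_end()   do { } while (0)
  #endif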
-
- 15 May 2020, 1 commit
-
-
Committed by Borislav Petkov
... or the odyssey of trying to disable the stack protector for the function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a kernel built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b3 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
    dump_stack
    panic
    ? start_secondary
    __stack_chk_fail
    start_secondary
    secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call in start_secondary() - cpu_startup_entry() - and thus emits a stack canary check which fails because the canary value changes after the boot_init_stack_canary() call.

To fix that, the initial attempt was to mark the one function which generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively, as the attribute does not add to but rather replaces previously supplied optimization options - roughly all -fxxx options. The key one among them being -fno-omit-frame-pointer, thus leading to a missing frame pointer, which the kernel needs.

The next attempt to prevent compilers from tail-call optimizing the last function call cpu_startup_entry(), shy of carving out start_secondary() into a separate compilation unit and building it with -fno-stack-protector, was to add an empty asm(""). This solution was short and sweet, and reportedly supported by both compilers, but we didn't get very far this time: future (LTO?) optimization passes could potentially eliminate it, which leads us to the third attempt: having an actual memory barrier there which the compiler cannot ignore or move around, etc. That should hold for a long time, but hey, we said that about the other two solutions too, so...

Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kalle Valo <kvalo@codeaurora.org>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
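The third attempt boils down to a one-line macro placed after the call that must not be tail-call optimized. Sketched below with an illustrative caller; the macro name matches the description above, the caller body is abbreviated:

  /* Sketch of the final approach: a full memory barrier the compiler can
   * neither elide nor move, so cpu_startup_entry() cannot be turned into
   * a tail jump past the canary setup. */
  #define prevent_tail_call_optimization()        mb()

  static void start_secondary(void *unused)
  {
          /* ... secondary CPU bring-up ... */
          boot_init_stack_canary();
          cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
          prevent_tail_call_optimization();
  }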
-
- 16 April 2020, 3 commits
-
-
Committed by Will Deacon
Passing a volatile-qualified pointer to READ_ONCE() is an absolute trainwreck for code generation: the use of 'typeof()' to define a temporary variable inside the macro means that the final evaluation in macro scope ends up forcing a read back from the stack. When stack protector is enabled (the default for arm64, at least), this causes the compiler to vomit up all sorts of junk.

Unfortunately, dropping pointer qualifiers inside the macro poses quite a challenge, especially since the pointed-to type is permitted to be an aggregate, and this is relied upon by mm/ code accessing things like 'pmd_t'. Based on numerous hacks and discussions on the mailing list, this is the best I've managed to come up with.

Introduce '__unqual_scalar_typeof()' which takes an expression and, if the expression is an optionally qualified 8, 16, 32 or 64-bit scalar type, evaluates to the unqualified type. Other input types, including aggregates, remain unchanged. Hopefully READ_ONCE() on volatile aggregate pointers isn't something we do on a fast-path.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Will Deacon <will@kernel.org>
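One way to write the trick, shown here as a sketch with C11 _Generic (the patch also needs a __builtin_choose_expr fallback for compilers and tools without _Generic support):

  /* Sketch: map an optionally qualified scalar to its unqualified type;
   * non-scalar inputs fall through unchanged via 'default'. */
  #define __scalar_type_to_expr_cases(type)                               \
          unsigned type: (unsigned type)0, signed type: (signed type)0

  #define __unqual_scalar_typeof(x) typeof(                               \
          _Generic((x),                                                   \
                   char: (char)0,                                         \
                   __scalar_type_to_expr_cases(char),                     \
                   __scalar_type_to_expr_cases(short),                    \
                   __scalar_type_to_expr_cases(int),                      \
                   __scalar_type_to_expr_cases(long),                     \
                   __scalar_type_to_expr_cases(long long),                \
                   default: (x)))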
-
Committed by Will Deacon
{READ,WRITE}_ONCE() cannot guarantee atomicity for arbitrary data sizes. This can be surprising to callers that might incorrectly be expecting atomicity for accesses to aggregate structures, although there are other callers where tearing is actually permissible (e.g. if they are using something akin to sequence locking to protect the access).

Linus sayeth:

 | We could also look at being stricter for the normal READ/WRITE_ONCE(),
 | and require that they are
 |
 |  (a) regular integer types
 |
 |  (b) fit in an atomic word
 |
 | We actually did (b) for a while, until we noticed that we do it on
 | loff_t's etc and relaxed the rules. But maybe we could have a
 | "non-atomic" version of READ/WRITE_ONCE() that is used for the
 | questionable cases?

The slight snag is that we also have to support 64-bit accesses on 32-bit architectures, as these appear to be widespread and tend to work out ok if either the architecture supports atomic 64-bit accesses (x86, armv7) or if the variable being accessed represents a virtual address and therefore only requires 32-bit atomicity in practice.

Take a step in that direction by introducing a variant of 'compiletime_assert_atomic_type()' and using it to check the pointer argument to {READ,WRITE}_ONCE(). Expose __{READ,WRITE}_ONCE() variants which are allowed to tear, and convert the one broken caller over to the new macros.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon
The implementations of {READ,WRITE}_ONCE() suffer from a significant amount of indirection and complexity due to a historic GCC bug:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

which was originally worked around by 230fa253 ("kernel: Provide READ_ONCE and ASSIGN_ONCE").

Since GCC 4.8 is fairly vintage at this point and we emit a warning if we detect it during the build, return {READ,WRITE}_ONCE() to their former glory with an implementation that is easier to understand and, crucially, more amenable to optimisation. A side effect of this simplification is that WRITE_ONCE() no longer returns a value, but nobody seems to be relying on that and the new behaviour is aligned with smp_store_release().

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Will Deacon <will@kernel.org>
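The simplified core then reduces to plain volatile accesses, roughly as follows (a sketch, reusing compiletime_assert_rwonce_type() from the sibling patch above):

  /* Sketch of the simplified core: a volatile scalar access, with the
   * size/type check applied first. */
  #define __READ_ONCE(x)  (*(const volatile typeof(x) *)&(x))

  #define READ_ONCE(x)                                                    \
  ({                                                                      \
          compiletime_assert_rwonce_type(x);                              \
          __READ_ONCE(x);                                                 \
  })

  #define WRITE_ONCE(x, val)                                              \
  do {                                                                    \
          compiletime_assert_rwonce_type(x);                              \
          *(volatile typeof(x) *)&(x) = (val); /* no value "returned" */  \
  } while (0)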
-
- 14 April 2020, 1 commit
-
-
Committed by Marco Elver
Thus far, accesses marked with data_race() would still require the racing access to be marked in some way (be it with READ_ONCE(), WRITE_ONCE(), or data_race() itself), as otherwise KCSAN would still report a data race. This requirement, however, seems to be unintuitive, and some valid use-cases demand *not* marking other accesses, as marking them might hide more serious bugs (e.g. diagnostic reads). Therefore, this commit changes data_race() to no longer require marking racing accesses (although it's still recommended if possible).

The alternative would have been to introduce another variant of data_race(); however, since usage of data_race() already needs to be carefully reasoned about, distinguishing between these cases likely adds more complexity in the wrong place.

Link: https://lkml.kernel.org/r/20200331131002.GA30975@willie-the-truck
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Qian Cai <cai@lca.pw>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
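Usage after this change might look like the following hypothetical snippet, where only the diagnostic reader is annotated; the struct and counter names are invented for illustration:

  /* Hypothetical example: 'stats->hits' is updated elsewhere with plain,
   * unmarked writes; the diagnostic read below is annotated as a known,
   * benign race, so KCSAN stays quiet without the writers being hidden. */
  static void report_stats(struct cache_stats *stats)
  {
          pr_info("approximate hits: %lu\n", data_race(stats->hits));
  }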
-
- 08 April 2020, 1 commit
-
-
Committed by Vegard Nossum
compiletime_assert() uses __LINE__ to create a unique function name. This means that if you have more than one BUILD_BUG_ON() in the same source line (which can happen if they appear e.g. in a macro), then the error message from the compiler might output the wrong condition.

For this source file:

  #include <linux/build_bug.h>

  #define macro() \
          BUILD_BUG_ON(1); \
          BUILD_BUG_ON(0);

  void foo()
  {
          macro();
  }

gcc would output:

  ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_9' declared with attribute error: BUILD_BUG_ON failed: 0
    _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

However, it was not the BUILD_BUG_ON(0) that failed, so it should say 1 instead of 0. With this patch, we use __COUNTER__ instead of __LINE__, so each BUILD_BUG_ON() gets a different function name and the correct condition is printed:

  ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_0' declared with attribute error: BUILD_BUG_ON failed: 1
    _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Daniel Santos <daniel.santos@pobox.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Ian Abbott <abbotti@mev.co.uk>
Cc: Joe Perches <joe@perches.com>
Link: http://lkml.kernel.org/r/20200331112637.25047-1-vegard.nossum@oracle.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
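The change itself is confined to the token pasted into the generated helper name, roughly:

  /* Sketch of the macro plumbing after the change: __COUNTER__
   * increments on every expansion, so two BUILD_BUG_ON()s on one source
   * line still get distinct __compiletime_assert_<n> names. */
  #define _compiletime_assert(condition, msg, prefix, suffix)             \
          __compiletime_assert(condition, msg, prefix, suffix)

  #define compiletime_assert(condition, msg)                              \
          _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)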
-
- 21 March 2020, 2 commits
-
-
Committed by Marco Elver
We no longer have to include kcsan.h, since the required KCSAN interface for both compiler.h and seqlock.h is now provided by kcsan-checks.h.

Acked-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Paul E. McKenney
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
-
- 07 January 2020, 1 commit
-
-
Committed by Marco Elver
Since the use of -fsanitize=thread is an implementation detail of KCSAN, the name __no_sanitize_thread could be misleading if used widely. Instead, we introduce the __no_kcsan attribute, which is shorter and more accurate in the context of KCSAN. This matches the attribute name __no_kcsan_or_inline. The use of __no_kcsan_or_inline itself is still required for __always_inline functions, to retain compatibility with older compilers.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
- 20 November 2019, 1 commit
-
-
Committed by Ingo Molnar
Tidy up a few bits:

 - Fix typos and grammar, improve wording.

 - Remove spurious newlines that are col80 warning artifacts where the resulting line-break is worse than the disease it's curing.

 - Use core kernel coding style to improve readability and reduce spurious code pattern variations.

 - Use better vertical alignment for structure definitions and initialization sequences.

 - Misc other small details.

No change in functionality intended.

Cc: linux-kernel@vger.kernel.org
Cc: Marco Elver <elver@google.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 16 November 2019, 2 commits
-
-
Committed by Marco Elver
This introduces the data_race(expr) macro, which can be used to annotate expressions for the purposes of (1) documenting them, and (2) giving tooling such as KCSAN information about which data races are deemed "safe".

More context:

  http://lkml.kernel.org/r/CAHk-=wg5CkOEF8DTez1Qu0XTEFw_oHhxN98bDnFqbY7HL5AB2g@mail.gmail.com

Signed-off-by: Marco Elver <elver@google.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
Committed by Marco Elver
Kernel Concurrency Sanitizer (KCSAN) is a dynamic data-race detector for kernel space. KCSAN is a sampling, watchpoint-based data-race detector. See the included Documentation/dev-tools/kcsan.rst for more details.

This patch adds the basic infrastructure, but does not yet enable KCSAN for any architecture.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
- 08 September 2019, 1 commit
-
-
Committed by Nick Desaulniers
GCC unescapes escaped string section names while Clang does not. Because __section uses the `#` stringification operator for the section name, it doesn't need to be escaped (see the sketch of the macro below). This fixes an Oops observed in distros that use systemd and do not set net.core.bpf_jit_enable=1, when their kernels are compiled with Clang.

Link: https://github.com/ClangBuiltLinux/linux/issues/619
Link: https://bugs.llvm.org/show_bug.cgi?id=42950
Link: https://marc.info/?l=linux-netdev&m=156412960619946&w=2
Link: https://lore.kernel.org/lkml/20190904181740.GA19688@gmail.com/
Acked-by: Will Deacon <will@kernel.org>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
[Cherry-picked from the __section cleanup series for 5.3]
[Adjusted commit message]
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
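The macro in question, sketched with an illustrative caller:

  /* Sketch: the '#' operator stringifies the bare name, so neither GCC
   * nor Clang ever sees an escaped string to (mis)handle. */
  #define __section(S) __attribute__((__section__(#S)))

  /* Illustrative use: callers pass an unquoted section name. */
  static unsigned long once_flag __section(.data.once);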
-
- 09 July 2019, 1 commit
-
-
Committed by Josh Poimboeuf
Objtool doesn't know how to read C jump tables, so it has to whitelist functions which use them, causing missing ORC unwinder data for such functions, e.g. ___bpf_prog_run().

C jump tables are very similar to GCC switch jump tables, which objtool already knows how to read. So adding support for C jump tables is easy. It just needs to be able to find the tables and distinguish them from other data.

To allow the jump tables to be found, create an __annotate_jump_table macro which can be used to annotate them. The annotation is done by placing the jump table in an .rodata..c_jump_table section. The '.rodata' prefix ensures that the data will be placed in the rodata section by the vmlinux linker script. The double periods are part of an existing convention which distinguishes kernel sections from GCC sections.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: Kairui Song <kasong@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lkml.kernel.org/r/0ba2ca30442b16b97165992381ce643dc27b3d1a.1561685471.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
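The annotation reduces to a section attribute; a hypothetical computed-goto interpreter shows the intended use (the opcode encoding here is invented for illustration):

  #define __annotate_jump_table __section(.rodata..c_jump_table)

  /* Hypothetical bytecode loop: opcode 0 halts, opcode 1 is a no-op. */
  static int run(const unsigned char *insn)
  {
          static const void * const jumptable[] __annotate_jump_table = {
                  [0] = &&op_halt,
                  [1] = &&op_nop,
          };

          goto *jumptable[*insn];
  op_nop:
          insn++;
          goto *jumptable[*insn];
  op_halt:
          return 0;
  }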
-
- 10 May 2019, 1 commit
-
-
Committed by Linus Torvalds
Peter Zijlstra noticed that with CONFIG_PROFILE_ALL_BRANCHES, the "if" macro converts the conditional to an array index. This can cause GCC to create horrible code. When there are nested ifs, the generated code uses register values to encode branching decisions.

Josh Poimboeuf found that changing the "if" macro to increment the branch statistics with an if statement itself, instead of using the condition as an array index, reduced the asm complexity and shrank the generated code quite a bit. But this can be simplified even further by replacing the internal if statement with a ternary operator.

Link: https://lkml.kernel.org/r/20190307174802.46fmpysxyo35hh43@treble
Link: http://lkml.kernel.org/r/CAHk-=wiALN3jRuzARpwThN62iKd476Xj-uom+YnLZ4=eqcz7xQ@mail.gmail.com
Reported-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reported-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
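Schematically, the profiling "if" then looks something like this simplified sketch (the real tracing code also records file/line, places the data in a dedicated section, and handles comma-containing conditions):

  /* Simplified sketch: keep the condition a condition; count the hit or
   * miss with a ternary instead of an array index or a nested if. */
  #define if(cond) if (__trace_if_var(!!(cond)))

  #define __trace_if_var(cond)                                            \
          (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))

  #define __trace_if_value(cond) ({                                       \
          static struct ftrace_branch_data __branch_data;                 \
          (cond) ?                                                        \
                  (__branch_data.miss_hit[1]++, 1) :                      \
                  (__branch_data.miss_hit[0]++, 0);                       \
  })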
-
- 03 April 2019, 1 commit
-
-
Committed by Josh Poimboeuf
With CONFIG_PROFILE_ALL_BRANCHES=y, the "if" macro converts the conditional to an array index. This can cause GCC to create horrible code. When there are nested ifs, the generated code uses register values to encode branching decisions.

Make it easier for GCC to optimize by keeping the conditional as a conditional rather than converting it to an integer. This shrinks the generated code quite a bit, and also makes the code sane enough for objtool to understand.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: brgerst@gmail.com
Cc: catalin.marinas@arm.com
Cc: dvlasenk@redhat.com
Cc: dvyukov@google.com
Cc: hpa@zytor.com
Cc: james.morse@arm.com
Cc: julien.thierry@arm.com
Cc: luto@amacapital.net
Cc: luto@kernel.org
Cc: rostedt@goodmis.org
Cc: valentin.schneider@arm.com
Cc: will.deacon@arm.com
Link: https://lkml.kernel.org/r/20190307174802.46fmpysxyo35hh43@treble
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 09 January 2019, 1 commit
-
-
Committed by Michael S. Tsirkin
Since commit 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive"), clang no longer reuses the OPTIMIZER_HIDE_VAR macro from compiler-gcc - instead it gets the version in include/linux/compiler.h. Unfortunately, that version doesn't actually prevent the compiler from optimizing out the variable.

Fix up by moving the macro out of compiler-gcc.h and into compiler.h. Compilers without inline asm support will keep working, since it's protected by an ifdef. Also fix up the comments to match reality, since we are no longer overriding any macros. Build-tested with gcc and clang.

Fixes: 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive")
Cc: Eli Friedman <efriedma@codeaurora.org>
Cc: Joe Perches <joe@perches.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
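The working definition is essentially an empty asm statement with an in/out operand, roughly:

  /* Sketch: tying 'var' to an asm operand forces the compiler to
   * materialize the value and keeps the assignment from being
   * optimized away; the fallback applies on compilers without
   * GNU inline asm. */
  #ifdef __GNUC__
  # define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
  #else
  # define OPTIMIZER_HIDE_VAR(var) barrier()
  #endif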
-
- 19 December 2018, 1 commit
-
-
Committed by Ingo Molnar
This reverts commit c06c4d80. See this commit for details about the revert:

  e769742d ("Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs"")

Reported-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Richard Biener <rguenther@suse.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 November 2018, 1 commit
-
-
Committed by Martin Schwidefsky
The __no_sanitize_address_or_inline and __no_kasan_or_inline defines are almost identical. The only difference is that __no_kasan_or_inline does not have the 'notrace' attribute.

To be able to replace __no_sanitize_address_or_inline with the older definition, add 'notrace' to __no_kasan_or_inline and change the two users of __no_sanitize_address_or_inline in the s390 code.

The 'notrace' option is necessary for e.g. the __load_psw_mask function in arch/s390/include/asm/processor.h. Without the option it is possible to trace __load_psw_mask, which leads to kernel stack overflow.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Pointed-out-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 October 2018, 1 commit
-
-
Committed by ndesaulniers@google.com
Fixes the objtool warning seen with Clang:

  arch/x86/mm/fault.o: warning: objtool: no_context()+0x220: unreachable instruction

Fixes commit 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive").

Josh noted that the fallback definition was meant to work around a pre-gcc-4.6 bug. GCC still needs to work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82365, so compiler-gcc.h defines its own version of unreachable(). Clang and ICC can use this shared definition.

Link: https://github.com/ClangBuiltLinux/linux/issues/204
Suggested-by: Andy Lutomirski <luto@amacapital.net>
Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- 11 October 2018, 1 commit
-
-
Committed by Masahiro Yamada
__compiletime_assert_fallback() is supposed to stop the build earlier, by using the negative-array-size method in case the compiler does not support the "error" attribute, but it has never worked like that.

You can simply try:

  BUILD_BUG_ON(1);

GCC immediately terminates the build, but Clang does not report anything because Clang does not support the "error" attribute now. It will later fail at link time, but __compiletime_assert_fallback() is not working, at least.

The root cause is commit 1d6a0d19 ("bug.h: prevent double evaluation of `condition' in BUILD_BUG_ON"). Prior to that commit, BUILD_BUG_ON() was checked by the negative-array-size method *and* the link-time trick. Since that commit, the negative-array-size is not effective because '__cond' is no longer constant. As the comment in <linux/build_bug.h> says, GCC (and Clang as well) only emits the error for obvious cases. When '__cond' is a variable:

  ((void)sizeof(char[1 - 2 * __cond]))

... it is not obvious to the compiler that the array size is negative.

Reverting that commit would break BUILD_BUG(), because the negative-size array is evaluated before the code is optimized out. Let's give up on __compiletime_assert_fallback(). This commit does not change the current behavior, since it just rips out the useless code.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 04 October 2018, 1 commit
-
-
Committed by Nadav Amit
As described in 77b0bf55 ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs"), GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining.

In the case of objtool the resulting borkage can be significant, since all the annotations of objtool are discarded during linkage and never inlined, yet GCC bogusly considers most functions affected by objtool annotations as 'too large'.

The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.)

This increases the kernel size slightly:

       text     data     bss      dec     hex filename
   18140829 10224724 2957312 31322865 1ddf2f1 ./vmlinux before
   18140970 10225412 2957312 31323694 1ddf62e ./vmlinux after (+829)

The number of static text symbols (i.e. non-inlined functions) is reduced:

  Before: 40321
  After:  40302 (-19)

[ mingo: Rewrote the changelog. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Christopher Li <sparse@chrisli.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sparse@vger.kernel.org
Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
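The shape of the workaround, illustrated with a hypothetical annotation (the macro name and section are invented; the real patches apply this pattern to objtool's annotations):

  /* Hypothetical illustration of the technique: the body is defined once
   * as an assembler macro, and each inline asm site expands to a single
   * macro invocation, which GCC's size heuristics count as one
   * instruction. */
  asm(".macro ANNOTATE_EXAMPLE\n"
      "\t.pushsection .discard.example\n"
      "\t.long 0\n"
      "\t.popsection\n"
      ".endm\n");

  static inline void annotate_example(void)
  {
          asm volatile("ANNOTATE_EXAMPLE");
  }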
-
- 01 October 2018, 6 commits
-
-
Committed by Miguel Ojeda
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Committed by Miguel Ojeda
Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Committed by Miguel Ojeda
Sparse knows about a few more attributes now, so we can remove the __CHECKER__ conditions from them (which, in turn, allows us to move some of them later on to compiler_attributes.h):

 * assume_aligned: since sparse's commit ffc860b ("sparse: ignore __assume_aligned__ attribute"), included in 0.5.1

 * error: since sparse's commit 0a04210 ("sparse: Add 'error' to ignored attributes"), included in 0.5.0

 * hotpatch: since sparse's commit 6043210 ("sparse/parse.c: ignore hotpatch attribute"), included in 0.5.1

 * warning: since sparse's commit 977365d ("Avoid "attribute 'warning': unknown attribute" warning"), included in 0.4.2

On top of that, __must_be_array does not need it either because:

 * Even ancient versions of sparse do not have a problem

 * BUILD_BUG_ON_ZERO() is currently disabled for __CHECKER__

Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Committed by Miguel Ojeda
There are different definitions of __must_be_array:

 * gcc: disabled for __CHECKER__

 * clang: same definition as gcc's, but without __CHECKER__

 * intel: the comment claims __builtin_types_compatible_p() is unsupported; but icc seems to support it since 13.0.1 (released in 2012). See https://godbolt.org/z/S0l6QQ

Therefore, we can remove all of them and have a single definition in compiler.h.

Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Committed by Miguel Ojeda
The attribute syntax optionally allows attribute names to be surrounded with "__" in order to avoid collisions with macros of the same name (see https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html). This homogenizes all attributes to use the syntax with underscores.

While there are currently only a handful of cases of some TUs defining macros like "error" which may collide with the attributes, this should prevent future surprises. This has been done only for "standard" attributes supported by the major compilers. In other words, those of third-party tools (e.g. sparse, plugins...) have not been changed for the moment.

Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Committed by Miguel Ojeda
__optimize and __deprecate_for_modules are unused in the whole kernel tree. Simply drop them.

Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # on top of v4.19-rc5, clang 7
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- 23 August 2018, 2 commits
-
-
Committed by Ard Biesheuvel
An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab entries, each consisting of two 64-bit fields containing absolute references, to the symbol itself and to a char array containing its name, respectively.

When we build the same configuration with KASLR enabled, we end up with an additional ~192 KB of relocations in the .init section, i.e., one 24-byte entry for each absolute reference, which all need to be processed at boot time.

Given how the struct kernel_symbol that describes each entry is completely local to module.c (except for the references emitted by EXPORT_SYMBOL() itself), we can easily modify it to contain two 32-bit relative references instead. This reduces the size of the __ksymtab section by 50% for all 64-bit architectures, and gets rid of the runtime relocations entirely for architectures implementing KASLR, either via standard PIE linking (arm64) or using custom host tools (x86).

Note that the binary search involving __ksymtab contents relies on each section being sorted by symbol name. This is implemented based on the input section names, not the names in the ksymtab entries, so this patch does not interfere with that.

Given that the use of place-relative relocations requires support both in the toolchain and in the module loader, we cannot enable this feature for all architectures. So make it dependent on whether CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Link: http://lkml.kernel.org/r/20180704083651.24360-4-ard.biesheuvel@linaro.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morris <james.morris@microsoft.com>
Cc: James Morris <jmorris@namei.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
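A sketch of the relative layout and how a pointer is recovered from it, assuming CONFIG_HAVE_ARCH_PREL32_RELOCATIONS:

  /* Sketch: each field is a 32-bit offset from its own address rather
   * than a 64-bit absolute pointer, so no runtime relocation is needed. */
  struct kernel_symbol {
          int value_offset;
          int name_offset;
  };

  static inline void *offset_to_ptr(const int *off)
  {
          return (void *)((unsigned long)off + *off);
  }

  static inline unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
  {
          return (unsigned long)offset_to_ptr(&sym->value_offset);
  }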
-
Committed by Rasmus Villemoes
Apparently, it's possible for a non-trivial TU to include a few headers, including linux/build_bug.h, without ending up with linux/types.h. So the 0day bot sent me:

  config: um-x86_64_defconfig (attached as .config)

  >> include/linux/compiler.h:316:3: error: unknown type name 'bool'; did you mean '_Bool'?
       bool __cond = !(condition); \

for something I'm working on. Rather than contributing to the #include madness and including linux/types.h from compiler.h, just use int.

Link: http://lkml.kernel.org/r/20180817101036.20969-1-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Christopher Li <sparse@chrisli.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 June 2018, 1 commit
-
-
Committed by Mikulas Patocka
The function __builtin_expect returns long type (see the gcc documentation), and so do the macros likely and unlikely. Unfortunately, when CONFIG_PROFILE_ANNOTATED_BRANCHES is selected, the macros likely and unlikely expand to __branch_check__, and __branch_check__ truncates the long type to int. This unintended truncation may cause bugs in various kernel code (we found a bug in dm-writecache because of it), so it's better to fix __branch_check__ to return long.

Link: http://lkml.kernel.org/r/alpine.LRH.2.02.1805300818140.24812@file01.intranet.prod.int.rdu2.redhat.com
Cc: Ingo Molnar <mingo@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 1f0d69a9 ("tracing: profile likely and unlikely annotations")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 22 February 2018, 1 commit
-
-
Committed by Arnd Bergmann
Looking at functions with large stack frames across all architectures led me to discover that BUG() suffers from the same problem as fortify_panic(), which I've added a workaround for already.

In short, variables that go out of scope by calling a noreturn function or __builtin_unreachable() keep using stack space in functions afterwards. A workaround that was identified is to insert an empty assembler statement just before calling the function that doesn't return. I'm adding a macro "barrier_before_unreachable()" to document this (sketched below), and inserting calls to it in all instances of BUG() that currently suffer from this problem.

The files that saw the largest change from this had these frame sizes before, and much less with my patch:

  fs/ext4/inode.c:82:1: warning: the frame size of 1672 bytes is larger than 800 bytes [-Wframe-larger-than=]
  fs/ext4/namei.c:434:1: warning: the frame size of 904 bytes is larger than 800 bytes [-Wframe-larger-than=]
  fs/ext4/super.c:2279:1: warning: the frame size of 1160 bytes is larger than 800 bytes [-Wframe-larger-than=]
  fs/ext4/xattr.c:146:1: warning: the frame size of 1168 bytes is larger than 800 bytes [-Wframe-larger-than=]
  fs/f2fs/inode.c:152:1: warning: the frame size of 1424 bytes is larger than 800 bytes [-Wframe-larger-than=]
  net/netfilter/ipvs/ip_vs_core.c:1195:1: warning: the frame size of 1068 bytes is larger than 800 bytes [-Wframe-larger-than=]
  net/netfilter/ipvs/ip_vs_core.c:395:1: warning: the frame size of 1084 bytes is larger than 800 bytes [-Wframe-larger-than=]
  net/netfilter/ipvs/ip_vs_ftp.c:298:1: warning: the frame size of 928 bytes is larger than 800 bytes [-Wframe-larger-than=]
  net/netfilter/ipvs/ip_vs_ftp.c:418:1: warning: the frame size of 908 bytes is larger than 800 bytes [-Wframe-larger-than=]
  net/netfilter/ipvs/ip_vs_lblcr.c:718:1: warning: the frame size of 960 bytes is larger than 800 bytes [-Wframe-larger-than=]
  drivers/net/xen-netback/netback.c:1500:1: warning: the frame size of 1088 bytes is larger than 800 bytes [-Wframe-larger-than=]

In the case of ARC and CRIS, it turns out that the BUG() implementation actually does return (or at least the compiler thinks it does), resulting in lots of warnings about uninitialized variable use and leaving noreturn functions, such as:

  block/cfq-iosched.c: In function 'cfq_async_queue_prio':
  block/cfq-iosched.c:3804:1: error: control reaches end of non-void function [-Werror=return-type]
  include/linux/dmaengine.h: In function 'dma_maxpq':
  include/linux/dmaengine.h:1123:1: error: control reaches end of non-void function [-Werror=return-type]

This makes them call __builtin_trap() instead, which should normally dump the stack and kill the current process, like some of the other architectures already do.

I tried adding barrier_before_unreachable() to panic() and fortify_panic() as well, but that had very little effect, so I'm not submitting that patch.

Vineet said:

: For ARC, it is a double win.
:
: 1. Fixes 3 -Wreturn-type warnings:
:
: | ../net/core/ethtool.c:311:1: warning: control reaches end of non-void function [-Wreturn-type]
: | ../kernel/sched/core.c:3246:1: warning: control reaches end of non-void function [-Wreturn-type]
: | ../include/linux/sunrpc/svc_xprt.h:180:1: warning: control reaches end of non-void function [-Wreturn-type]
:
: 2. bloat-o-meter reports code size improvements as gcc elides the
:    generated code for stack return.

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82365
Link: http://lkml.kernel.org/r/20171219114112.939391-1-arnd@arndb.de
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Tested-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christopher Li <sparse@chrisli.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
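The workaround, sketched against a simplified BUG() (the trap-based body matches the ARC/CRIS direction described above; real architectures typically panic or emit a trapping instruction of their own):

  /* Sketch: an empty asm statement stops the compiler from letting
   * out-of-scope locals keep occupying stack slots past a call that
   * never returns. */
  #define barrier_before_unreachable() asm volatile("")

  #define BUG()                                                           \
  do {                                                                    \
          barrier_before_unreachable();                                   \
          __builtin_trap();                                               \
  } while (0)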
-
- 08 February 2018, 1 commit
-
-
Committed by Geert Uytterhoeven
Create a new function attribute, __optimize, which allows specifying an optimization level on a per-function basis.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 02 February 2018, 2 commits
-
-
Committed by Andrey Ryabinin
Sometimes we know that it's safe to do a potentially out-of-bounds access because we know it won't cross a page boundary. Still, KASAN will report this as a bug.

Add a read_word_at_a_time() function which is supposed to be used in such cases. In read_word_at_a_time(), KASAN performs a relaxed check - only the first byte of the access is validated.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
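The helper is small; roughly:

  /* Sketch of the relaxed helper: KASAN validates only the first byte,
   * so a word-sized read that stays within the page does not trip the
   * checker. */
  static __no_kasan_or_inline
  unsigned long read_word_at_a_time(const void *addr)
  {
          kasan_check_read(addr, 1);
          return *(unsigned long *)addr;
  }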
-
Committed by Andrey Ryabinin
Instead of having two identical __read_once_size_nocheck() functions with different attributes, consolidate all the difference in a new macro, __no_kasan_or_inline, and use it. No functional changes.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 December 2017, 1 commit
-
-
Committed by Will Deacon
[ Note, this is a Git cherry-pick of the following commit:

    76ebbe78 ("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()")

  ... for easier x86 PTI code testing and back-porting. ]

In preparation for the removal of lockless_dereference(), which is the same as READ_ONCE() on all architectures other than Alpha, add an implicit smp_read_barrier_depends() to READ_ONCE() so that it can be used to head dependency chains on all architectures.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
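After the change, READ_ONCE() ends with the barrier, roughly (a simplified sketch of the era's union-based implementation):

  /* Sketch of READ_ONCE() as of this commit: the trailing barrier is a
   * no-op everywhere except Alpha, where it orders dependent loads. */
  #define READ_ONCE(x)                                                    \
  ({                                                                      \
          union { typeof(x) __val; char __c[1]; } __u;                    \
          __read_once_size(&(x), __u.__c, sizeof(x));                     \
          smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
          __u.__val;                                                      \
  })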
-