- 03 June 2021, 1 commit

Committed by Nathan Chancellor

With x86_64_defconfig and the following configs, there is an orphan section warning:

  CONFIG_SMP=n
  CONFIG_AMD_MEM_ENCRYPT=y
  CONFIG_HYPERVISOR_GUEST=y
  CONFIG_KVM=y
  CONFIG_PARAVIRT=y

  ld: warning: orphan section `.data..decrypted' from `arch/x86/kernel/cpu/vmware.o' being placed in section `.data..decrypted'
  ld: warning: orphan section `.data..decrypted' from `arch/x86/kernel/kvm.o' being placed in section `.data..decrypted'

These sections are created with DEFINE_PER_CPU_DECRYPTED, which ultimately turns into __PCPU_ATTRS, which in turn has a section attribute with a value of PER_CPU_BASE_SECTION + the section name. When CONFIG_SMP is not set, the base section is .data, and that is not currently handled in any linker script.

Add .data..decrypted to PERCPU_DECRYPTED_SECTION, which is included in PERCPU_INPUT -> PERCPU_SECTION, which is included in the x86 linker script when either CONFIG_X86_64 or CONFIG_SMP is unset, taking care of the warning.

Fixes: ac26963a ("percpu: Introduce DEFINE_PER_CPU_DECRYPTED")
Link: https://github.com/ClangBuiltLinux/linux/issues/1360
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com> # build
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210506001410.1026691-1-nathan@kernel.org
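
For reference, a minimal sketch of how the section name gets built, paraphrasing the generic per-cpu definitions described above (the real upstream macros carry extra attributes and differ in detail):

  /* Simplified; the real __PCPU_ATTRS also adds __percpu and
   * PER_CPU_ATTRIBUTES. */
  #ifdef CONFIG_SMP
  #define PER_CPU_BASE_SECTION ".data..percpu"
  #else
  #define PER_CPU_BASE_SECTION ".data"
  #endif

  #define __PCPU_ATTRS(sec) \
          __attribute__((section(PER_CPU_BASE_SECTION sec)))

  /* DEFINE_PER_CPU_DECRYPTED(type, name) uses the "..decrypted" suffix,
   * so with CONFIG_SMP=n the variable lands in ".data..decrypted",
   * which no generic linker script input pattern matched before this fix. */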

- 09 April 2021, 1 commit

Committed by Sami Tolvanen

This change adds support for Clang's forward-edge Control Flow Integrity (CFI) checking. With CONFIG_CFI_CLANG, the compiler injects a runtime check before each indirect function call to ensure the target is a valid function with the correct static type. This restricts possible call targets and makes it more difficult for an attacker to exploit bugs that allow the modification of stored function pointers. For more details, see:

  https://clang.llvm.org/docs/ControlFlowIntegrity.html

Clang requires CONFIG_LTO_CLANG to be enabled with CFI to gain visibility into possible call targets. Kernel modules are supported with Clang's cross-DSO CFI mode, which allows checking between independently compiled components.

With CFI enabled, the compiler injects a __cfi_check() function into the kernel and each module for validating local call targets. For cross-module calls that cannot be validated locally, the compiler calls the global __cfi_slowpath_diag() function, which determines the target module and calls the correct __cfi_check() function. This patch includes a slowpath implementation that uses __module_address() to resolve call targets, and with CONFIG_CFI_CLANG_SHADOW enabled, a shadow map that speeds up module look-ups by ~3x.

Clang implements indirect call checking using jump tables and offers two methods of generating them. With canonical jump tables, the compiler renames each address-taken function to <function>.cfi and points the original symbol to a jump table entry, which passes __cfi_check() validation. This isn't compatible with stand-alone assembly code, which the compiler doesn't instrument, and would cause indirect calls to assembly code to fail. Therefore, we default to using non-canonical jump tables instead, where the compiler generates a local jump table entry <function>.cfi_jt for each address-taken function, and replaces all references to the function with the address of the jump table entry.

Note that because non-canonical jump table addresses are local to each component, they break cross-module function address equality. Specifically, the address of a global function will be different in each module, as it's replaced with the address of a local jump table entry. If this address is passed to a different module, it won't match the address of the same function taken there. This may break code that relies on comparing addresses passed from other components.

CFI checking can be disabled in a function with the __nocfi attribute. Additionally, CFI can be disabled for an entire compilation unit by filtering out CC_FLAGS_CFI.

By default, CFI failures result in a kernel panic to stop a potential exploit. CONFIG_CFI_PERMISSIVE enables a permissive mode, where the kernel prints out a rate-limited warning instead and allows execution to continue. This option is helpful for locating type mismatches, but should only be enabled during development.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-2-samitolvanen@google.com
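
To make the address-equality caveat concrete, here is a small hypothetical C example (all names are made up for illustration; this is not code from the patch):

  #include <linux/types.h>

  /* A function whose address is taken both here and in another module. */
  extern void do_work(void);

  static void (*registered_cb)(void);

  void register_cb(void (*cb)(void))
  {
          registered_cb = cb;
  }

  bool cb_is_do_work(void)
  {
          /* With non-canonical CFI jump tables, &do_work taken in another
           * module resolves to that module's local jump-table entry, so
           * this comparison can be false even for the "same" function. */
          return registered_cb == &do_work;
  }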

- 26 February 2021, 1 commit

Committed by Nathan Chancellor

clang produces .eh_frame sections when CONFIG_GCOV_KERNEL is enabled, even when -fno-asynchronous-unwind-tables is in KBUILD_CFLAGS:

  $ make CC=clang vmlinux
  ...
  ld: warning: orphan section `.eh_frame' from `init/main.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/version.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/do_mounts.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/do_mounts_initrd.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/initramfs.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/calibrate.o' being placed in section `.eh_frame'
  ld: warning: orphan section `.eh_frame' from `init/init_task.o' being placed in section `.eh_frame'
  ...

  $ rg "GCOV_KERNEL|GCOV_PROFILE_ALL" .config
  CONFIG_GCOV_KERNEL=y
  CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
  CONFIG_GCOV_PROFILE_ALL=y

This was already handled for a couple of other options in commit d812db78 ("vmlinux.lds.h: Avoid KASAN and KCSAN's unwanted sections") and there is an open LLVM bug for this issue. Take advantage of that section handling for this config as well so that there are no more orphan warnings.

Link: https://bugs.llvm.org/show_bug.cgi?id=46478
Link: https://github.com/ClangBuiltLinux/linux/issues/1069
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Fangrui Song <maskray@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Fixes: d812db78 ("vmlinux.lds.h: Avoid KASAN and KCSAN's unwanted sections")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210130004650.2682422-1-nathan@kernel.org
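
The resulting discard logic looks roughly like the sketch below (the SANITIZER_DISCARDS name and the exact condition are my reading of the upstream header and may differ by kernel version):

  /* Discard .eh_frame whenever an instrumentation option that emits it
   * is enabled, instead of letting it become an orphan section. */
  #if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || \
      defined(CONFIG_KCSAN)
  # define SANITIZER_DISCARDS     *(.eh_frame)
  #else
  # define SANITIZER_DISCARDS
  #endif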

- 23 February 2021, 1 commit

Committed by Alexander Lobakin

LKP caught another bunch of orphaned instrumentation symbols [0]:

  mipsel-linux-ld: warning: orphan section `.data.$LPBX1' from `init/main.o' being placed in section `.data.$LPBX1'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX0' from `init/main.o' being placed in section `.data.$LPBX0'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX1' from `init/do_mounts.o' being placed in section `.data.$LPBX1'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX0' from `init/do_mounts.o' being placed in section `.data.$LPBX0'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX1' from `init/do_mounts_initrd.o' being placed in section `.data.$LPBX1'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX0' from `init/do_mounts_initrd.o' being placed in section `.data.$LPBX0'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX1' from `init/initramfs.o' being placed in section `.data.$LPBX1'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX0' from `init/initramfs.o' being placed in section `.data.$LPBX0'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX1' from `init/calibrate.o' being placed in section `.data.$LPBX1'
  mipsel-linux-ld: warning: orphan section `.data.$LPBX0' from `init/calibrate.o' being placed in section `.data.$LPBX0'
  [...]

Soften the wildcard to .data.$L* to grab these ones into .data too.

[0] https://lore.kernel.org/lkml/202102231519.lWPLPveV-lkp@intel.com

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
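
Taken together with the 15 January 2021 entries further down, the .data input-section wildcards end up looking roughly like this sketch of DATA_MAIN with CONFIG_LD_DEAD_CODE_DATA_ELIMINATION enabled (approximate shape; the exact pattern list varies by kernel version):

  /* Catch compiler-generated data subsections (numbered sections, local
   * labels, compound literals, UBSAN/GCOV "unnamed data") so they are
   * folded back into .data instead of becoming orphans. */
  #define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..L* \
                    .data..compoundliteral* .data.$__unnamed_* .data.$L*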

- 17 February 2021, 2 commits

Committed by Josh Poimboeuf

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module can use static_call_update() to change the function called. This is not desirable in general.

Not exporting static_call_key, however, also disallows usage of static_call(), since objtool needs the key to construct the static_call_site.

Solve this by allowing objtool to create the static_call_site using the trampoline address when it builds a module and cannot find the static_call_key symbol. The module loader will then try to map the trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble

Committed by Alexander Lobakin

LKP triggered lots of LD orphan warnings [0]:

  mipsel-linux-ld: warning: orphan section `.data.$Lubsan_data299' from `init/do_mounts_rd.o' being placed in section `.data.$Lubsan_data299'
  mipsel-linux-ld: warning: orphan section `.data.$Lubsan_data183' from `init/do_mounts_rd.o' being placed in section `.data.$Lubsan_data183'
  mipsel-linux-ld: warning: orphan section `.data.$Lubsan_type3' from `init/do_mounts_rd.o' being placed in section `.data.$Lubsan_type3'
  mipsel-linux-ld: warning: orphan section `.data.$Lubsan_type2' from `init/do_mounts_rd.o' being placed in section `.data.$Lubsan_type2'
  mipsel-linux-ld: warning: orphan section `.data.$Lubsan_type0' from `init/do_mounts_rd.o' being placed in section `.data.$Lubsan_type0'
  [...]

It seems "unnamed data" isn't the only type of symbol that UBSAN instrumentation can emit. Catch these into .data with the wildcard as well.

[0] https://lore.kernel.org/linux-mm/202102160741.k57GCNSR-lkp@intel.com

Fixes: f41b233d ("vmlinux.lds.h: catch UBSAN's "unnamed data" into data")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

- 16 February 2021, 1 commit

Committed by Nick Desaulniers

We expect toolchains to produce these new debug info sections as part of DWARF v5. Add explicit placements to prevent the linker warnings from --orphan-section=warn.

Compilers may produce such sections with explicit -gdwarf-5, or based on the implicit default version of DWARF when -g is used via DEBUG_INFO. This implicit default changes over time, and has changed to DWARF v5 with GCC 11.

.debug_sup was mentioned in review, but without compilers producing it today, let's wait to add it until it becomes necessary.

Cc: stable@vger.kernel.org
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1922707
Reported-by: Chris Murphy <lists@colorremedies.com>
Suggested-by: Fangrui Song <maskray@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Mark Wielaard <mark@klomp.org>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

- 10 February 2021, 1 commit

Committed by Fangrui Song

arm64 references the start address of .builtin_fw (__start_builtin_fw) with a pair of R_AARCH64_ADR_PREL_PG_HI21/R_AARCH64_LDST64_ABS_LO12_NC relocations. The compiler is allowed to emit the R_AARCH64_LDST64_ABS_LO12_NC relocation because struct builtin_fw in include/linux/firmware.h is 8-byte aligned. The R_AARCH64_LDST64_ABS_LO12_NC relocation requires the address to be a multiple of 8, which may not be the case if .builtin_fw is empty.

Unconditionally align .builtin_fw to fix the linker error. 32-bit architectures could use ALIGN(4), but that would add unnecessary complexity, so just use ALIGN(8).

Link: https://lkml.kernel.org/r/20201208054646.2913063-1-maskray@google.com
Link: https://github.com/ClangBuiltLinux/linux/issues/1204
Fixes: 5658c769 ("firmware: allow firmware files to be built into kernel image")
Signed-off-by: Fangrui Song <maskray@google.com>
Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
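
The fixed output-section definition is roughly the following (sketch based on the description above; the AT()/LOAD_OFFSET address arithmetic used by the real macro is elided):

  /* Built-in firmware blobs: force 8-byte alignment so the arm64
   * LDST64 relocation against __start_builtin_fw is valid even when
   * the section is empty. */
  .builtin_fw : ALIGN(8) {
          __start_builtin_fw = .;
          KEEP(*(.builtin_fw))
          __end_builtin_fw = .;
  }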

- 08 February 2021, 2 commits

Committed by Christoph Hellwig

EXPORT_UNUSED_SYMBOL* is not actually used anywhere. Remove the unused functionality as we generally just remove unused code anyway.

Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>

Committed by Christoph Hellwig

As far as I can tell this has never been used at all, and certainly not any time recently.

Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>

- 15 January 2021, 3 commits

Committed by Alexander Lobakin

When building the kernel with both LD_DEAD_CODE_DATA_ELIMINATION and UBSAN, the LLVM stack generates lots of "unnamed data" sections:

  ld.lld: warning: net/built-in.a(netfilter/utils.o): (.data.$__unnamed_2) is being placed in '.data.$__unnamed_2'
  ld.lld: warning: net/built-in.a(netfilter/utils.o): (.data.$__unnamed_3) is being placed in '.data.$__unnamed_3'
  ld.lld: warning: net/built-in.a(netfilter/utils.o): (.data.$__unnamed_4) is being placed in '.data.$__unnamed_4'
  ld.lld: warning: net/built-in.a(netfilter/utils.o): (.data.$__unnamed_5) is being placed in '.data.$__unnamed_5'
  [...]

Also handle this by adding the related sections to the generic definitions.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

Committed by Alexander Lobakin

When building the kernel with LD_DEAD_CODE_DATA_ELIMINATION, the LLVM stack generates separate sections for compound literals, just like in the case of enabled LTO [0]:

  ld.lld: warning: drivers/built-in.a(mtd/nand/spi/gigadevice.o): (.data..compoundliteral.14) is being placed in '.data..compoundliteral.14'
  ld.lld: warning: drivers/built-in.a(mtd/nand/spi/gigadevice.o): (.data..compoundliteral.15) is being placed in '.data..compoundliteral.15'
  ld.lld: warning: drivers/built-in.a(mtd/nand/spi/gigadevice.o): (.data..compoundliteral.16) is being placed in '.data..compoundliteral.16'
  ld.lld: warning: drivers/built-in.a(mtd/nand/spi/gigadevice.o): (.data..compoundliteral.17) is being placed in '.data..compoundliteral.17'
  [...]

Handle this by adding the related sections to the generic definitions, as suggested by Sami [0].

[0] https://lore.kernel.org/lkml/20201211184633.3213045-3-samitolvanen@google.com

Suggested-by: Sami Tolvanen <samitolvanen@google.com>
Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

Committed by Sami Tolvanen

This change adds build system support for Clang's Link Time Optimization (LTO). With -flto, instead of ELF object files, Clang produces LLVM bitcode, which is compiled into native code at link time, allowing the final binary to be optimized globally. For more details, see:

  https://llvm.org/docs/LinkTimeOptimization.html

The Kconfig option CONFIG_LTO_CLANG is implemented as a choice, which defaults to LTO being disabled. To use LTO, the architecture must select ARCH_SUPPORTS_LTO_CLANG and support:

  - compiling with Clang,
  - compiling all assembly code with Clang's integrated assembler,
  - and linking with LLD.

While using CONFIG_LTO_CLANG_FULL results in the best runtime performance, the compilation is not scalable in time or memory. CONFIG_LTO_CLANG_THIN enables ThinLTO, which allows parallel optimization and faster incremental builds. ThinLTO is used by default if the architecture also selects ARCH_SUPPORTS_LTO_CLANG_THIN:

  https://clang.llvm.org/docs/ThinLTO.html

To enable LTO, LLVM tools must be used to handle bitcode files, by passing the LLVM=1 and LLVM_IAS=1 options to make:

  $ make LLVM=1 LLVM_IAS=1 defconfig
  $ scripts/config -e LTO_CLANG_THIN
  $ make LLVM=1 LLVM_IAS=1

To prepare for LTO support with other compilers, common parts are gated behind the CONFIG_LTO option, and LTO can be disabled for specific files by filtering out CC_FLAGS_LTO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20201211184633.3213045-3-samitolvanen@google.com

- 23 December 2020, 1 commit

Committed by Daniel Lezcano

In the embedded world, the complexity of the SoC leads to an increasing number of hotspots which need to be monitored and mitigated as a whole in order to prevent the temperature from going above the normative and legally stated 'skin temperature'.

Another aspect is to sustain the performance for a given power budget; for example, in virtual reality the user can feel dizziness if the GPU performance is capped while a big CPU is processing something else. Or the battery charging rate may be reduced because the dissipated power is too high compared with the power consumed by other devices.

Userspace is the most adequate place to dynamically act on the different devices by limiting their power given an application profile: it has the knowledge of the platform. These userspace daemons are in charge of the Dynamic Thermal Power Management (DTPM).

Nowadays, the DTPM daemons are abusing the thermal framework, as they act on the cooling device state to force a specific and arbitrary state without taking the governor decisions into account. Given the closed loop of some governors, that can confuse the logic or lead directly to a decision conflict. As cooling device support is limited today to the CPU and the GPU, the DTPM daemons have little control over the power dissipation of the system. The out-of-tree solutions are hacking around here and there in the drivers and the frameworks to gain control of the devices. The common approach is to declare them as cooling devices. There is no unification of the power limitation unit; opaque states are used.

This patch provides a way to create a hierarchy of constraints using the powercap framework. The devices which are registered as power limit-able devices are represented in this hierarchy as a tree. They are linked together with intermediate nodes which are just there to propagate the constraint to the children. The leaves of the tree are the real devices; the intermediate nodes are virtual, aggregating the children's constraints and power characteristics.

Each node has a weight on a 2^10 basis, in order to reflect the percentage of power distribution across the children nodes. This percentage is used to dispatch the power limit to the children. The weight is computed against the max power of the siblings. This simple approach allows a fair distribution of the power limit.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Tested-by: Lukasz Luba <lukasz.luba@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

- 28 October 2020, 1 commit

Committed by Kees Cook

Under some circumstances, the compiler generates .ctors.* sections. This is seen doing a cross compile of x86_64 from a powerpc64el host:

  x86_64-linux-gnu-ld: warning: orphan section `.ctors.65435' from `kernel/trace/trace_clock.o' being placed in section `.ctors.65435'
  x86_64-linux-gnu-ld: warning: orphan section `.ctors.65435' from `kernel/trace/ftrace.o' being placed in section `.ctors.65435'
  x86_64-linux-gnu-ld: warning: orphan section `.ctors.65435' from `kernel/trace/ring_buffer.o' being placed in section `.ctors.65435'

Include these orphans along with the regular .ctors section.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 83109d5d ("x86/build: Warn on orphan section placement")
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20201005025720.2599682-1-keescook@chromium.org
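
The constructor collection then looks roughly like the following sketch (assuming the KERNEL_CTORS() helper used in vmlinux.lds.h; exact contents differ by version). The SORT(.ctors.*) wildcard is what now absorbs the numbered .ctors.65435-style sections:

  #ifdef CONFIG_CONSTRUCTORS
  #define KERNEL_CTORS()  . = ALIGN(8);                   \
                          __ctors_start = .;              \
                          KEEP(*(SORT(.ctors.*)))         \
                          KEEP(*(.ctors))                 \
                          KEEP(*(SORT(.init_array.*)))    \
                          KEEP(*(.init_array))            \
                          __ctors_end = .;
  #else
  #define KERNEL_CTORS()
  #endif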

- 10 October 2020, 1 commit

Committed by Brendan Higgins

Add a linker section where KUnit can put references to its test suites. This patch is the first step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.

Co-developed-by: Iurii Zaikin <yzaikin@google.com>
Signed-off-by: Iurii Zaikin <yzaikin@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
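
A sketch of what such a table section typically looks like in vmlinux.lds.h (the KUNIT_TABLE() name and the boundary symbol names are my assumption of the upstream shape, not quoted from the patch):

  #define KUNIT_TABLE()                                   \
                  . = ALIGN(8);                           \
                  __kunit_suites_start = .;               \
                  KEEP(*(.kunit_test_suites))             \
                  __kunit_suites_end = .;

  /* The centralized executor then walks the entries between
   * __kunit_suites_start and __kunit_suites_end at boot. */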

- 22 September 2020, 1 commit

Committed by Tony Ambardar

Systems with memory or disk constraints often reduce the kernel footprint by configuring LD_DEAD_CODE_DATA_ELIMINATION. However, this can result in removal of any BTF information.

Use the KEEP() macro to preserve the BTF data as done with other important sections, while still allowing for smaller kernels.

Fixes: 90ceddcb ("bpf: Support llvm-objcopy for vmlinux BTF")
Signed-off-by: Tony Ambardar <Tony.Ambardar@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/a635b5d3e2da044e7b51ec1315e8910fbce0083f.1600417359.git.Tony.Ambardar@gmail.com
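
For orientation, the .BTF output section ends up looking roughly like this (sketch; the AT()/LOAD_OFFSET arithmetic and exact conditionals of the real macro are elided):

  #ifdef CONFIG_DEBUG_INFO_BTF
  #define BTF                                             \
          .BTF : {                                        \
                  __start_BTF = .;                        \
                  KEEP(*(.BTF))   /* survives --gc-sections */ \
                  __stop_BTF = .;                         \
          }
  #else
  #define BTF
  #endif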

- 01 September 2020, 8 commits

Committed by Josh Poimboeuf

Add the inline static call implementation for x86-64. The generated code is identical to the out-of-line case, except we move the trampoline into its own section.

Objtool uses the trampoline naming convention to detect all the call sites. It then annotates those call sites in the .static_call_sites section.

During boot (and module init), the call sites are patched to call directly into the destination function. The temporary trampoline is then no longer used.

[peterz: merged trampolines, put trampoline in section]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20200818135804.864271425@infradead.org

Committed by Josh Poimboeuf

Add infrastructure for an arch-specific CONFIG_HAVE_STATIC_CALL_INLINE option, which is a faster version of CONFIG_HAVE_STATIC_CALL. At runtime, the static call sites are patched directly, rather than using the out-of-line trampolines.

Compared to out-of-line static calls, the performance benefits are more modest, but still measurable. Steven Rostedt did some tracepoint measurements:

  https://lkml.kernel.org/r/20181126155405.72b4f718@gandalf.local.home

This code is heavily inspired by the jump label code (aka "static jumps"), as some of the concepts are very similar. For more details, see the comments in include/linux/static_call.h.

[peterz: simplified interface; merged trampolines]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20200818135804.684334440@infradead.org
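
For context, a minimal usage sketch of the static call API these two entries build on (illustrative function names, not code from the patches):

  #include <linux/static_call.h>

  static int default_handler(int x)
  {
          return x;
  }

  /* Defines the key and the out-of-line trampoline for "my_handler". */
  DEFINE_STATIC_CALL(my_handler, default_handler);

  int run_handler(int x)
  {
          /* With HAVE_STATIC_CALL_INLINE this call site itself is patched
           * into a direct call; otherwise it goes through the trampoline. */
          return static_call(my_handler)(x);
  }

  static int fast_handler(int x)
  {
          return x + 1;
  }

  void switch_handler(void)
  {
          /* Re-patches the trampoline and/or every recorded call site. */
          static_call_update(my_handler, &fast_handler);
  }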

Committed by Nick Desaulniers

Basically, consider .text.{hot|unlikely|unknown}.* part of .text, too.

When compiling with profiling information (collected via PGO instrumentations or AutoFDO sampling), Clang will separate code into .text.hot, .text.unlikely, or .text.unknown sections based on profiling information. After D79600 (clang-11), these sections will have a trailing `.` suffix, i.e. .text.hot., .text.unlikely., .text.unknown..

When using -ffunction-sections together with profiling information, either explicitly (FGKASLR) or implicitly (LTO), code may be placed in sections following the convention .text.hot.<foo>, .text.unlikely.<bar>, .text.unknown.<baz>, where <foo>, <bar>, and <baz> are functions. (This produces one section per function; we generally try to merge these all back via linker script so that we don't have 50k sections.)

For the above cases, we need to teach our linker scripts that such sections might exist and that we'd explicitly like them grouped together, otherwise we can wind up with code outside of the _stext/_etext boundaries that might not be mapped properly for some architectures, resulting in boot failures.

If the linker script is not told about possible input sections, then where the section is placed as output is a heuristic-laden mess that's non-portable between linkers (i.e. BFD and LLD), and has resulted in many hard-to-debug bugs. Kees Cook is working on cleaning this up by adding the --orphan-handling=warn linker flag used in ARCH=powerpc to additional architectures. In the case of linker scripts, borrowing from the Zen of Python: explicit is better than implicit.

Also, ld.bfd's internal linker script considers .text.hot AND .text.hot.* to be part of .text, as well as .text.unlikely and .text.unlikely.*. I didn't see support for .text.unknown.*, and didn't see Clang producing such code in our kernel builds, but I see code in LLVM that can produce such section names if profiling information is missing. That may point to a larger issue with generating or collecting profiles, but I would much rather be safe and explicit than have to debug yet another issue related to orphan section placement.

Reported-by: Jian Cai <jiancai@google.com>
Suggested-by: Fāng-ruì Sòng <maskray@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Luis Lozano <llozano@google.com>
Tested-by: Manoj Gupta <manojgupta@google.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: linux-arch@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=add44f8d5c5c05e08b11e033127a744d61c26aee
Link: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=1de778ed23ce7492c523d5850c6c6dbb34152655
Link: https://reviews.llvm.org/D79600
Link: https://bugs.chromium.org/p/chromium/issues/detail?id=1084760
Link: https://lore.kernel.org/r/20200821194310.3089815-7-keescook@chromium.org
Debugged-by: Luis Lozano <llozano@google.com>

Committed by Kees Cook

When linking vmlinux with LLD, the synthetic sections .symtab, .strtab, and .shstrtab are listed as orphaned. Add them to the ELF_DETAILS section so there will be no warnings when --orphan-handling=warn is used more widely. (They are added above .comment as it is the more common order[1].)

  ld.lld: warning: <internal>:(.symtab) is being placed in '.symtab'
  ld.lld: warning: <internal>:(.shstrtab) is being placed in '.shstrtab'
  ld.lld: warning: <internal>:(.strtab) is being placed in '.strtab'

[1] https://lore.kernel.org/lkml/20200622224928.o2a7jkq33guxfci4@google.com/

Reported-by: Fangrui Song <maskray@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20200821194310.3089815-6-keescook@chromium.org

Committed by Kees Cook

The .comment section doesn't belong in STABS_DEBUG. Split it out into a new macro named ELF_DETAILS. This will gain other non-debug sections that need to be accounted for when linking with --orphan-handling=warn.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20200821194310.3089815-5-keescook@chromium.org
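
In its initial form the new macro is essentially a one-liner, roughly as sketched below (before the previous entry adds the synthetic symbol/string table sections to it):

  /* Non-debug ELF metadata, kept at address 0 like the debug sections. */
  #define ELF_DETAILS                                     \
                  .comment 0 : { *(.comment) }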

Committed by Kees Cook

KASAN (-fsanitize=kernel-address) and KCSAN (-fsanitize=thread) produce unwanted[1] .eh_frame and .init_array.* sections. Add them to COMMON_DISCARDS, except with CONFIG_CONSTRUCTORS, which wants to keep the .init_array.* sections.

[1] https://bugs.llvm.org/show_bug.cgi?id=46478

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Marco Elver <elver@google.com>
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20200821194310.3089815-4-keescook@chromium.org

Committed by Kees Cook

For vmlinux linking, no architecture uses the .gnu.version* sections, so remove them via the COMMON_DISCARDS macro in preparation for adding --orphan-handling=warn more widely. This is a work-around for what appears to be a bug[1] in ld.bfd, which warns about this synthetic section even when none is found in the input objects, and even when no section is emitted for the output object[2].

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=26153
[2] https://lore.kernel.org/lkml/202006221524.CEB86E036B@keescook/

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Fangrui Song <maskray@google.com>
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20200821194310.3089815-3-keescook@chromium.org

Committed by Kees Cook

Collect the common DISCARD sections for architectures that need more specialized discard control than what the standard DISCARDS section provides.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20200821194310.3089815-2-keescook@chromium.org

- 15 August 2020, 1 commit

Committed by Romain Naour

Since the binutils change [1], building the kernel using a toolchain built with binutils 2.33.1 prevents booting a sh4 system under QEMU. Apply the patch provided by Alan Modra [2] that fixes the alignment of rodata.

[1] https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=ebd2263ba9a9124d93bbc0ece63d7e0fae89b40e
[2] https://www.sourceware.org/ml/binutils/2019-12/msg00112.html

Signed-off-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alan Modra <amodra@gmail.com>
Cc: Bin Meng <bin.meng@windriver.com>
Cc: Chen Zhou <chenzhou10@huawei.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: <stable@vger.kernel.org>
Link: https://marc.info/?l=linux-sh&m=158429470221261
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 24 July 2020, 1 commit

Committed by Jim Cromie

dyndbg populates its callsite info into the __verbose section; change that to a more specific and descriptive name, __dyndbg.

Also, per checkpatch, simplify the __attribute__(..) usage to a __section(__dyndbg) declaration, and include one spelling fix ("decriptor").

Acked-by: <jbaron@akamai.com>
Signed-off-by: Jim Cromie <jim.cromie@gmail.com>
Link: https://lore.kernel.org/r/20200719231058.1586423-6-jim.cromie@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 22 July 2020, 1 commit

Committed by Joerg Roedel

On x86-32 the idt_table with 256 entries needs only 2048 bytes. It is page-aligned, but the end of the .bss..page_aligned section is not guaranteed to be page-aligned. As a result, objects from other .bss sections may end up on the same 4k page as the idt_table, and will accidentally get mapped read-only during boot, causing unexpected page-faults when the kernel writes to them.

This could be worked around by making the objects in the page-aligned sections page sized, but that's wrong. Explicit sections which store only page-aligned objects have an implicit guarantee that the object is alone in the page in which it is placed. That works for all objects except the last one. That's inconsistent.

Enforcing page-sized objects for these sections would also wreck memory sanitizers, because the object becomes artificially larger than it should be and out-of-bounds access becomes legitimate.

Align the end of the .bss..page_aligned and .data..page_aligned sections on page-size so all objects placed in these sections are guaranteed to have their own page.

[ tglx: Amended changelog ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200721093448.10417-1-joro@8bytes.org
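
A sketch of the resulting .bss layout rule (approximate; the AT()/LOAD_OFFSET arithmetic and arch-specific leading sections of the real BSS() macro are elided):

  #define BSS(bss_align)                                  \
          . = ALIGN(bss_align);                           \
          .bss : {                                        \
                  . = ALIGN(PAGE_SIZE);                   \
                  *(.bss..page_aligned)                   \
                  /* pad to a page boundary so the last page-aligned  \
                   * object does not share its page with regular .bss */ \
                  . = ALIGN(PAGE_SIZE);                   \
                  *(.dynbss)                              \
                  *(BSS_MAIN)                             \
                  *(COMMON)                               \
          }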

- 14 July 2020, 1 commit

Committed by Jiri Olsa

Add support to generate a .BTF_ids section that will hold BTF ID lists for the verifier.

Add macros that help to define lists of BTF ID values placed in the .BTF_ids section. They are initially filled with zeros (during compilation) and resolved later during the linking phase by the resolve_btfids tool.

The following defines a list of one BTF ID value:

  BTF_ID_LIST(bpf_skb_output_btf_ids)
  BTF_ID(struct, sk_buff)

It also defines the following variable to access the list:

  extern u32 bpf_skb_output_btf_ids[];

The BTF_ID_UNUSED macro defines 4 zero bytes. It's used when we want to define an 'unused' entry in a BTF_ID_LIST, like:

  BTF_ID_LIST(bpf_skb_output_btf_ids)
  BTF_ID(struct, sk_buff)
  BTF_ID_UNUSED
  BTF_ID(struct, task_struct)

Suggested-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200711215329.41165-4-jolsa@kernel.org

- 08 July 2020, 1 commit

Committed by Peter Zijlstra

For some mysterious reason GCC 4.9 has a 64-byte section alignment for structures, while all other GCC versions (and Clang) tested (including 4.8 and 5.0) are fine with the 32-byte alignment.

Getting this right is important for the new SCHED_DATA macro that creates an explicitly ordered array of 'struct sched_class' in the linker script and expects pointer arithmetic to work.

Fixes: c3a340f7 ("sched: Have sched_class_highest define by vmlinux.lds.h")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200630144905.GX4817@hirez.programming.kicks-ass.net

- 25 June 2020, 2 commits

Committed by Steven Rostedt (VMware)

The sched_class descriptors are now defined by the linker script, which needs to be aware of the existence of stop_sched_class depending on whether SMP is enabled or not, since it is used as the "highest" priority when defined. Move the declaration of sched_class_highest to the same location in the linker script that inserts stop_sched_class; this also makes it easier to see what should be defined as the highest class, as this linker script location defines the priorities as well.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20191219214558.682913590@goodmis.org

Committed by Steven Rostedt (VMware)

In order to make a micro-optimization in pick_next_task(), the sched class descriptor addresses must be ordered the same way as their priorities relative to each other. That is:

  &idle_sched_class < &fair_sched_class < &rt_sched_class <
  &dl_sched_class < &stop_sched_class

In order to guarantee this order of the sched class descriptors, add each one into its own data section and force the order in the linker script.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/157675913272.349305.8936736338884044103.stgit@localhost.localdomain
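
A sketch of the idea (the SCHED_DATA name and the per-class section names are my reading of the upstream layout and may differ in detail): each descriptor is placed in its own named input section, and the linker script emits them in ascending priority order so address comparisons between descriptors are meaningful.

  #define SCHED_DATA                              \
          __begin_sched_classes = .;              \
          *(__idle_sched_class)                   \
          *(__fair_sched_class)                   \
          *(__rt_sched_class)                     \
          *(__dl_sched_class)                     \
          *(__stop_sched_class)                   \
          __end_sched_classes = .;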

- 19 May 2020, 1 commit

Committed by Thomas Gleixner

Some code paths, especially the low level entry code, must be protected against instrumentation for various reasons:

  - Low level entry code can be a fragile beast, especially on x86.
  - With NO_HZ_FULL, RCU state needs to be established before using it.

Having a dedicated section for such code allows tooling to validate that no unsafe functions are invoked.

Add the .noinstr.text section and the noinstr attribute to mark functions. noinstr implies notrace. Kprobes will gain a section check later.

Also provide a set of markers: instrumentation_begin()/end(). These are used to mark code inside a noinstr function which calls into regular instrumentable text sections as safe.

The instrumentation markers are only active when CONFIG_DEBUG_ENTRY is enabled, as the end marker emits a NOP to prevent the compiler from merging the annotation points. This means the objtool verification requires a kernel compiled with this option.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200505134100.075416272@linutronix.de
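
A minimal usage sketch of the annotations described above (the function name is illustrative, and the header carrying the instrumentation markers has moved between kernel versions):

  #include <linux/compiler.h>     /* noinstr; the markers may also live in
                                   * <linux/instrumentation.h> in newer trees */

  noinstr void example_enter_from_user_mode(void)
  {
          /* Fragile entry work: no tracing, kprobes or sanitizers here. */

          instrumentation_begin();
          /* Calls into ordinary, instrumentable kernel text are OK here. */
          instrumentation_end();
  }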

- 27 March 2020, 1 commit

Committed by H.J. Lu

In the x86 kernel, the .exit.text and .exit.data sections are discarded at runtime, not by the linker. Add RUNTIME_DISCARD_EXIT to the generic DISCARDS and define it in the x86 kernel linker script to keep them.

The sections are added before the DISCARD directive, so this change only documents the situation explicitly and doesn't have any effect on the generated kernel. Also, other architectures like ARM64 will use it too, so generalize the approach with the RUNTIME_DISCARD_EXIT define.

[ bp: Massage and extend commit message. ]

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20200326193021.255002-1-hjl.tools@gmail.com
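
A sketch of how the knob is consumed in the generic DISCARDS definition (approximate shape; later entries above add further common discards to this block):

  /* Architectures that free .exit.text/.exit.data at runtime define
   * RUNTIME_DISCARD_EXIT before including vmlinux.lds.h, so the linker
   * keeps those sections in the image. */
  #ifdef RUNTIME_DISCARD_EXIT
  #define EXIT_DISCARDS
  #else
  #define EXIT_DISCARDS   EXIT_TEXT EXIT_DATA
  #endif

  #define DISCARDS                                        \
          /DISCARD/ : {                                   \
                  EXIT_DISCARDS                           \
                  EXIT_CALL                               \
                  *(.discard)                             \
                  *(.discard.*)                           \
          }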

- 19 March 2020, 1 commit

Committed by Fangrui Song

Simplify the gen_btf logic to make it work with llvm-objcopy. The existing 'file format' and 'architecture' parsing logic is brittle and does not work with llvm-objcopy/llvm-objdump. The 'file format' output of llvm-objdump >= 11 will match GNU objdump, but 'architecture' (bfdarch) may not.

.BTF in .tmp_vmlinux.btf is non-SHF_ALLOC. Add the SHF_ALLOC flag because it is part of the vmlinux image used for introspection. C code can reference the section via the linker-script-defined __start_BTF and __stop_BTF. This fixes a small problem that the previous .BTF had the SHF_WRITE flag (objcopy -I binary -O elf* synthesized .data).

Additionally, the `objcopy -I binary` synthesized symbols _binary__btf_vmlinux_bin_start and _binary__btf_vmlinux_bin_stop (not used elsewhere) are replaced with the more commonplace __start_BTF and __stop_BTF.

Add 2>/dev/null because GNU objcopy (but not llvm-objcopy) warns "empty loadable segment detected at vaddr=0xffffffff81000000, is this intentional?"

We use a dd command to change the e_type field in the ELF header from ET_EXEC to ET_REL so that lld will accept .btf.vmlinux.bin.o. Accepting ET_EXEC as an input file is an extremely rare GNU ld feature that lld does not intend to support, because this is error-prone.

The output section description .BTF in include/asm-generic/vmlinux.lds.h avoids potential subtle orphan section placement issues and suppresses --orphan-handling=warn warnings.

Fixes: df786c9b ("bpf: Force .BTF section start to zero when dumping from vmlinux")
Fixes: cb0cc635 ("powerpc: Include .BTF section")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Fangrui Song <maskray@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Stanislav Fomichev <sdf@google.com>
Tested-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Link: https://github.com/ClangBuiltLinux/linux/issues/871
Link: https://lore.kernel.org/bpf/20200318222746.173648-1-maskray@google.com

- 19 November 2019, 1 commit

Committed by Steven Rostedt (VMware)

The ftrace_graph_stub was created to point to ftrace_stub as a way to assign the function graph tracer function pointer to a stub function with a different prototype than ftrace_stub, without triggering the C verifier. The ftrace_graph_stub was created via the linker script vmlinux.lds.h. Unfortunately, powerpc already uses the name ftrace_graph_stub for its internal implementation of the function graph tracer, and even though powerpc would still build, the change via the linker script broke the function tracer on powerpc.

By using the name ftrace_stub_graph, which does not exist anywhere else in the kernel, this should not be a problem.

Link: https://lore.kernel.org/r/1573849732.5937.136.camel@lca.pw
Fixes: b83b43ff ("fgraph: Fix function type mismatches of ftrace_graph_return using ftrace_stub")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

- 15 November 2019, 1 commit

Committed by Steven Rostedt (VMware)

The C compiler is allowing more checks to make sure that function pointers are assigned to a function of the correct prototype. Unfortunately, the function graph tracer uses a special name with its assigned ftrace_graph_return function pointer that maps to a stub function used by the function tracer (ftrace_stub). The ftrace_graph_return variable is compared to the ftrace_stub in some archs to know if the function graph tracer is enabled or not. This means we can not just simply create a new function stub that compares it without modifying all the archs.

Instead, have the linker script create a function_graph_stub that maps to ftrace_stub; this way we can define the prototype for it to match the prototype of ftrace_graph_return, and make the compiler checks all happy!

Link: http://lkml.kernel.org/r/20191015090055.789a0aed@gandalf.local.home
Cc: linux-sh@vger.kernel.org
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

- 06 November 2019, 1 commit

Committed by Mark Rutland

When using patchable-function-entry, the compiler will record the callsites into a section named "__patchable_function_entries" rather than "__mcount_loc". Let's abstract this difference behind a new FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this explicitly (e.g. with custom module linker scripts).

As parisc currently handles this explicitly, it is fixed up accordingly, with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is only defined when DYNAMIC_FTRACE is selected, the parisc module loading code is updated to only use the definition in that case. When DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so this removes some redundant work in that case.

To make sure that this is kept up to date for modules and the main kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery simplified for legibility.

I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled, and verified that the section made it into the .ko files for modules.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: linux-parisc@vger.kernel.org
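
The abstraction boils down to something like the following sketch (my paraphrase of the definition described above, which sits under the DYNAMIC_FTRACE ifdeffery; the exact compiler-detection macro may differ):

  #ifdef CONFIG_DYNAMIC_FTRACE
  # ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
  #  define FTRACE_CALLSITE_SECTION  "__patchable_function_entries"
  # else
  #  define FTRACE_CALLSITE_SECTION  "__mcount_loc"
  # endif
  #endif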

- 04 November 2019, 2 commits

Committed by Kees Cook

Many architectures have an EXCEPTION_TABLE that needs to be only readable. As such, it should live in RO_DATA. Create a macro to identify this case for the architectures that can move EXCEPTION_TABLE into RO_DATA.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Will Deacon <will@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Cc: x86-ml <x86@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20191029211351.13243-15-keescook@chromium.org
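
The opt-in mechanism is roughly the following sketch (macro names as I understand them from this series; details may differ): an architecture defines RO_EXCEPTION_TABLE_ALIGN before including vmlinux.lds.h, and RO_DATA then emits the exception table inside the read-only region.

  #ifdef RO_EXCEPTION_TABLE_ALIGN
  #define RO_EXCEPTION_TABLE      EXCEPTION_TABLE(RO_EXCEPTION_TABLE_ALIGN)
  #else
  #define RO_EXCEPTION_TABLE
  #endif

  /* RO_DATA(...) expands RO_EXCEPTION_TABLE near its end, so opting in
   * moves the table out of the writable data sections. */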

Committed by Kees Cook

Rename RW_DATA_SECTION to RW_DATA. (Calling this a "section" is a lie, since it's multiple sections and section flags cannot be applied to the macro.)

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Cc: Will Deacon <will@kernel.org>
Cc: x86-ml <x86@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20191029211351.13243-14-keescook@chromium.org