- 02 Jul, 2021 (1 commit)
-
-
Committed by Trent Piepho
Adds a number of test cases that cover a range of possible code paths. [akpm@linux-foundation.org: remove non-ascii characters, fix whitespace] [colin.king@canonical.com: fix spelling mistake "demominator" -> "denominator"] Link: https://lkml.kernel.org/r/20210526085049.6393-1-colin.king@canonical.com Link: https://lkml.kernel.org/r/20210525144250.214670-2-tpiepho@gmail.com Signed-off-by: Trent Piepho <tpiepho@gmail.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Daniel Latypov <dlatypov@google.com> Cc: Oskar Schirmer <oskar@scara.com> Cc: Yiyuan Guo <yguoaz@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
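The entry does not name the file, but the denominator spelling fix and the Cc list point at the lib/math rational KUnit tests. As a hedged illustration only, a minimal KUnit case exercising rational_best_approximation() could look like the sketch below; the input values, case name, and suite name are invented here and are not taken from the actual test.

  #include <kunit/test.h>
  #include <linux/module.h>
  #include <linux/rational.h>

  /* 33/99 reduces exactly to 1/3 and fits within the 10/10 limits. */
  static void rational_reduces_exactly(struct kunit *test)
  {
      unsigned long num, den;

      rational_best_approximation(33, 99, 10, 10, &num, &den);
      KUNIT_EXPECT_EQ(test, num, 1UL);
      KUNIT_EXPECT_EQ(test, den, 3UL);
  }

  static struct kunit_case rational_sketch_cases[] = {
      KUNIT_CASE(rational_reduces_exactly),
      {}
  };

  static struct kunit_suite rational_sketch_suite = {
      .name = "rational-sketch",
      .test_cases = rational_sketch_cases,
  };

  kunit_test_suites(&rational_sketch_suite);
  MODULE_LICENSE("GPL");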
-
- 30 Jun, 2021 (2 commits)
-
-
Committed by Mel Gorman
There is a lack of clarity of what exactly local_irq_save/local_irq_restore protects in page_alloc.c . It conflates the protection of per-cpu page allocation structures with per-cpu vmstat deltas. This patch protects the PCP structure using local_lock which for most configurations is identical to IRQ enabling/disabling. The scope of the lock is still wider than it should be but this is decreased later. It is possible for the local_lock to be embedded safely within struct per_cpu_pages but it adds complexity to free_unref_page_list. [akpm@linux-foundation.org: coding style fixes] [mgorman@techsingularity.net: work around a pahole limitation with zero-sized struct pagesets] Link: https://lkml.kernel.org/r/20210526080741.GW30378@techsingularity.net [lkp@intel.com: Make pagesets static] Link: https://lkml.kernel.org/r/20210512095458.30632-3-mgorman@techsingularity.netSigned-off-by: NMel Gorman <mgorman@techsingularity.net> Acked-by: NVlastimil Babka <vbabka@suse.cz> Acked-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
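As a rough sketch of the locking pattern this changelog describes (not the actual page_alloc.c code), a local_lock embedded in a per-CPU structure is taken around updates to that CPU's state; on most configurations it compiles down to plain IRQ disable/enable. The structure and function names below are invented for illustration.

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct pcp_sketch {
      local_lock_t lock;
      int count;                  /* stand-in for the per-CPU page lists */
  };

  static DEFINE_PER_CPU(struct pcp_sketch, pcp_sketch) = {
      .lock = INIT_LOCAL_LOCK(lock),
  };

  static void pcp_sketch_add_one(void)
  {
      unsigned long flags;

      local_lock_irqsave(&pcp_sketch.lock, flags);
      this_cpu_inc(pcp_sketch.count);
      local_unlock_irqrestore(&pcp_sketch.lock, flags);
  }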
-
Committed by Oliver Glitta
SLUB has resiliency_test() function which is hidden behind #ifdef SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody runs it. KUnit should be a proper replacement for it. Try changing byte in redzone after allocation and changing pointer to next free node, first byte, 50th byte and redzone byte. Check if validation finds errors. There are several differences from the original resiliency test: Tests create own caches with known state instead of corrupting shared kmalloc caches. The corruption of freepointer uses correct offset, the original resiliency test got broken with freepointer changes. Scratch changing random byte test, because it does not have meaning in this form where we need deterministic results. Add new option CONFIG_SLUB_KUNIT_TEST in Kconfig. Tests next_pointer, first_word and clobber_50th_byte do not run with KASAN option on. Because the test deliberately modifies non-allocated objects. Use kunit_resource to count errors in cache and silence bug reports. Count error whenever slab_bug() or slab_fix() is called or when the count of pages is wrong. [glittao@gmail.com: remove unused function test_exit(), from SLUB KUnit test] Link: https://lkml.kernel.org/r/20210512140656.12083-1-glittao@gmail.com [akpm@linux-foundation.org: export kasan_enable/disable_current to modules] Link: https://lkml.kernel.org/r/20210511150734.3492-2-glittao@gmail.comSigned-off-by: NOliver Glitta <glittao@gmail.com> Reviewed-by: NVlastimil Babka <vbabka@suse.cz> Acked-by: NDaniel Latypov <dlatypov@google.com> Acked-by: NMarco Elver <elver@google.com> Cc: Brendan Higgins <brendanhiggins@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Pekka Enberg <penberg@kernel.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 01 May, 2021 (1 commit)
-
-
Committed by Anshuman Khandual
early_memtest() does not get called from all architectures. Hence enabling CONFIG_MEMTEST and providing a valid memtest=[1..N] kernel command line option might not trigger the memory pattern tests as would be expected in normal circumstances. This situation is misleading. The change here prevents the above mentioned problem after introducing a new config option ARCH_USE_MEMTEST that should be subscribed on platforms that call early_memtest(), in order to enable the config CONFIG_MEMTEST. Conversely CONFIG_MEMTEST cannot be enabled on platforms where it would not be tested anyway. Link: https://lkml.kernel.org/r/1617269193-22294-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> (arm64) Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Will Deacon <will@kernel.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
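For context, the arch-side contract being described is simply a call into the generic tester from the platform's early memory initialisation. The sketch below is illustrative only: the function name is made up, and the early_memtest() declaration is assumed to be reachable via memblock.h.

  #include <linux/init.h>
  #include <linux/memblock.h>

  /* An architecture selecting ARCH_USE_MEMTEST is expected to do this
   * somewhere in its early memory initialisation.
   */
  static void __init sketch_arch_mem_init(phys_addr_t start, phys_addr_t end)
  {
      /* Runs the memtest= pattern tests over free memory in [start, end). */
      early_memtest(start, end);
  }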
-
- 25 Apr, 2021 (2 commits)
-
-
Committed by Masahiro Yamada
The test code in scripts/test_dwarf5_support.sh is somewhat difficult to understand, but after all, we want to check binutils >= 2.35.2. From the former discussion, the requirement for generating DWARF v5 from C code is as follows:

  - gcc + gnu as        -> requires gcc 5.0+ (but 7.0+ for full support)
  - clang + gnu as      -> requires binutils 2.35.2+
  - clang + integrated as -> OK

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
-
Committed by Rasmus Villemoes
It can be quite useful to have ld emit a link map file, in order to debug or verify that special sections end up where they are supposed to, and to see what LD_DEAD_CODE_DATA_ELIMINATION manages to get rid of. The only reason I'm not just adding this unconditionally is that the .map file can be rather large (several MB), and that's a waste of space when one isn't interested in these things. Also make it depend on CONFIG_EXPERT. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 21 Apr, 2021 (1 commit)
-
-
Committed by Maciej W. Rozycki
Implement a module for correctness and performance evaluation for the `do_div' function, often handled in an optimised manner by platform code. Use a somewhat randomly generated set of inputs that is supposed to be representative, using the same set of divisors twice, expressed as a constant and as a variable each, so as to verify the implementation for both cases should they be handled by different code execution paths. Reference results were produced with GNU bc. At the conclusion output the total execution time elapsed. Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
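For readers unfamiliar with the interface under test: do_div() divides a 64-bit value in place and returns the 32-bit remainder. The fragment below is only a hedged illustration of that calling convention with a trivially checkable input; it is not an excerpt from the test module, whose reference values come from GNU bc.

  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <asm/div64.h>

  static int __init div64_sketch_init(void)
  {
      u64 n = 1000000000000ULL;
      u32 rem;

      rem = do_div(n, 7);                 /* n now holds the quotient */
      WARN_ON(n != 142857142857ULL || rem != 1);
      return 0;
  }
  module_init(div64_sketch_init);
  MODULE_LICENSE("GPL");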
-
- 16 Apr, 2021 (1 commit)
-
-
Committed by Peter Zijlstra
SCHED_DEBUG is not in fact required for LATENCYTOP; don't select it. Suggested-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Valentin Schneider <valentin.schneider@arm.com> Link: https://lkml.kernel.org/r/20210412102001.224578981@infradead.org
-
- 10 Apr, 2021 (1 commit)
-
-
Committed by Julian Braha
When LATENCYTOP, LOCKDEP, or FAULT_INJECTION_STACKTRACE_FILTER is enabled and ARCH_WANT_FRAME_POINTERS is disabled, Kbuild gives a warning such as: WARNING: unmet direct dependencies detected for FRAME_POINTER Depends on [n]: DEBUG_KERNEL [=y] && (M68K || UML || SUPERH) || ARCH_WANT_FRAME_POINTERS [=n] || MCOUNT [=n] Selected by [y]: - LATENCYTOP [=y] && DEBUG_KERNEL [=y] && STACKTRACE_SUPPORT [=y] && PROC_FS [=y] && !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86 Depending on ARCH_WANT_FRAME_POINTERS causes a recursive dependency error. ARCH_WANT_FRAME_POINTERS is to be selected by the architecture, and is not supposed to be overridden by other config options. Link: https://lkml.kernel.org/r/20210329165329.27994-1-julianbraha@gmail.comSigned-off-by: NJulian Braha <julianbraha@gmail.com> Cc: Andreas Schwab <schwab@linux-m68k.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Necip Fazil Yildiran <fazilyildiran@gmail.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 05 Apr, 2021 (1 commit)
-
-
Committed by Tetsuo Handa
Since syzkaller continues various test cases until the kernel crashes, syzkaller tends to examine more locking dependencies than normal systems. As a result, syzbot is reporting that the fuzz testing was terminated due to hitting upper limits lockdep can track [1] [2] [3]. Since analysis via /proc/lockdep* did not show any obvious culprit [4] [5], we have no choice but allow tuning tracing capacity constants. [1] https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b [2] https://syzkaller.appspot.com/bug?id=381cb436fe60dc03d7fd2a092b46d7f09542a72a [3] https://syzkaller.appspot.com/bug?id=a588183ac34c1437fc0785e8f220e88282e5a29f [4] https://lkml.kernel.org/r/4b8f7a57-fa20-47bd-48a0-ae35d860f233@i-love.sakura.ne.jp [5] https://lkml.kernel.org/r/1c351187-253b-2d49-acaf-4563c63ae7d2@i-love.sakura.ne.jp References: https://lkml.kernel.org/r/1595640639-9310-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jpSigned-off-by: NTetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Acked-by: NDmitry Vyukov <dvyukov@google.com>
-
- 27 Feb, 2021 (1 commit)
-
-
Committed by Alexander Potapenko
Patch series "KFENCE: A low-overhead sampling-based memory safety error detector", v7. This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a low-overhead sampling-based memory safety error detector of heap use-after-free, invalid-free, and out-of-bounds access errors. This series enables KFENCE for the x86 and arm64 architectures, and adds KFENCE hooks to the SLAB and SLUB allocators. KFENCE is designed to be enabled in production kernels, and has near zero performance overhead. Compared to KASAN, KFENCE trades performance for precision. The main motivation behind KFENCE's design, is that with enough total uptime KFENCE will detect bugs in code paths not typically exercised by non-production test workloads. One way to quickly achieve a large enough total uptime is when the tool is deployed across a large fleet of machines. KFENCE objects each reside on a dedicated page, at either the left or right page boundaries. The pages to the left and right of the object page are "guard pages", whose attributes are changed to a protected state, and cause page faults on any attempted access to them. Such page faults are then intercepted by KFENCE, which handles the fault gracefully by reporting a memory access error. Guarded allocations are set up based on a sample interval (can be set via kfence.sample_interval). After expiration of the sample interval, the next allocation through the main allocator (SLAB or SLUB) returns a guarded allocation from the KFENCE object pool. At this point, the timer is reset, and the next allocation is set up after the expiration of the interval. To enable/disable a KFENCE allocation through the main allocator's fast-path without overhead, KFENCE relies on static branches via the static keys infrastructure. The static branch is toggled to redirect the allocation to KFENCE. The KFENCE memory pool is of fixed size, and if the pool is exhausted no further KFENCE allocations occur. The default config is conservative with only 255 objects, resulting in a pool size of 2 MiB (with 4 KiB pages). We have verified by running synthetic benchmarks (sysbench I/O, hackbench) and production server-workload benchmarks that a kernel with KFENCE (using sample intervals 100-500ms) is performance-neutral compared to a non-KFENCE baseline kernel. KFENCE is inspired by GWP-ASan [1], a userspace tool with similar properties. The name "KFENCE" is a homage to the Electric Fence Malloc Debugger [2]. For more details, see Documentation/dev-tools/kfence.rst added in the series -- also viewable here: https://raw.githubusercontent.com/google/kasan/kfence/Documentation/dev-tools/kfence.rst [1] http://llvm.org/docs/GwpAsan.html [2] https://linux.die.net/man/3/efence This patch (of 9): This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a low-overhead sampling-based memory safety error detector of heap use-after-free, invalid-free, and out-of-bounds access errors. KFENCE is designed to be enabled in production kernels, and has near zero performance overhead. Compared to KASAN, KFENCE trades performance for precision. The main motivation behind KFENCE's design, is that with enough total uptime KFENCE will detect bugs in code paths not typically exercised by non-production test workloads. One way to quickly achieve a large enough total uptime is when the tool is deployed across a large fleet of machines. KFENCE objects each reside on a dedicated page, at either the left or right page boundaries. 
The pages to the left and right of the object page are "guard pages", whose attributes are changed to a protected state, and cause page faults on any attempted access to them. Such page faults are then intercepted by KFENCE, which handles the fault gracefully by reporting a memory access error. To detect out-of-bounds writes to memory within the object's page itself, KFENCE also uses pattern-based redzones. The following figure illustrates the page layout:

  ---+-----------+-----------+-----------+-----------+-----------+---
     | xxxxxxxxx | O :       | xxxxxxxxx |       : O | xxxxxxxxx |
     | xxxxxxxxx | B :       | xxxxxxxxx |       : B | xxxxxxxxx |
     | x GUARD x | J : RED-  | x GUARD x |  RED- : J | x GUARD x |
     | xxxxxxxxx | E :  ZONE | xxxxxxxxx | ZONE  : E | xxxxxxxxx |
     | xxxxxxxxx | C :       | xxxxxxxxx |       : C | xxxxxxxxx |
     | xxxxxxxxx | T :       | xxxxxxxxx |       : T | xxxxxxxxx |
  ---+-----------+-----------+-----------+-----------+-----------+---

Guarded allocations are set up based on a sample interval (can be set via kfence.sample_interval). After expiration of the sample interval, a guarded allocation from the KFENCE object pool is returned to the main allocator (SLAB or SLUB). At this point, the timer is reset, and the next allocation is set up after the expiration of the interval. To enable/disable a KFENCE allocation through the main allocator's fast-path without overhead, KFENCE relies on static branches via the static keys infrastructure. The static branch is toggled to redirect the allocation to KFENCE. To date, we have verified by running synthetic benchmarks (sysbench I/O, hackbench) that a kernel compiled with KFENCE is performance-neutral compared to the non-KFENCE baseline. For more details, see Documentation/dev-tools/kfence.rst (added later in the series). [elver@google.com: fix parameter description for kfence_object_start()] Link: https://lkml.kernel.org/r/20201106092149.GA2851373@elver.google.com [elver@google.com: avoid stalling work queue task without allocations] Link: https://lkml.kernel.org/r/CADYN=9J0DQhizAGB0-jz4HOBBh+05kMBXb4c0cXMS7Qi5NAJiw@mail.gmail.com Link: https://lkml.kernel.org/r/20201110135320.3309507-1-elver@google.com [elver@google.com: fix potential deadlock due to wake_up()] Link: https://lkml.kernel.org/r/000000000000c0645805b7f982e4@google.com Link: https://lkml.kernel.org/r/20210104130749.1768991-1-elver@google.com [elver@google.com: add option to use KFENCE without static keys] Link: https://lkml.kernel.org/r/20210111091544.3287013-1-elver@google.com [elver@google.com: add missing copyright and description headers] Link: https://lkml.kernel.org/r/20210118092159.145934-1-elver@google.com Link: https://lkml.kernel.org/r/20201103175841.3495947-2-elver@google.com Signed-off-by: Marco Elver <elver@google.com> Signed-off-by: Alexander Potapenko <glider@google.com> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Reviewed-by: SeongJae Park <sjpark@amazon.de> Co-developed-by: Marco Elver <elver@google.com> Reviewed-by: Jann Horn <jannh@google.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christopher Lameter <cl@linux.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hillf Danton <hdanton@sina.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Joern Engel <joern@purestorage.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
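To make the static-key-gated fast path concrete, here is a hedged sketch of how an allocator entry point hands a sampled allocation to KFENCE. The wrapper name is invented and this is not the actual SLAB/SLUB hook, only its shape: kfence_alloc() returns NULL almost always and only yields a guarded object once the sample interval has elapsed.

  #include <linux/kfence.h>
  #include <linux/slab.h>

  static void *sketch_cache_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
  {
      /* Static-branch gated: near-zero cost while KFENCE is idle. */
      void *obj = kfence_alloc(s, size, flags);

      if (unlikely(obj))
          return obj;                     /* guarded KFENCE object */

      return kmem_cache_alloc(s, flags);  /* normal fast path */
  }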
-
- 16 Feb, 2021 (2 commits)
-
-
Committed by Nick Desaulniers
DWARF v5 is the latest standard of the DWARF debug info format. GCC 11 will change the implicit default DWARF version, if left unspecified, to DWARF v5. Allow users of Clang and older versions of GCC that have not changed the implicit default DWARF version to DWARF v5 to opt in. This can help testing consumers of DWARF debug info in preparation of v5 becoming more widespread, as well as result in significant binary size savings of the pre-stripped vmlinux image. DWARF5 wins significantly in terms of size when mixed with compression (CONFIG_DEBUG_INFO_COMPRESSED).

  363M vmlinux.clang12.dwarf5.compressed
  434M vmlinux.clang12.dwarf4.compressed
  439M vmlinux.clang12.dwarf2.compressed
  457M vmlinux.clang12.dwarf5
  536M vmlinux.clang12.dwarf4
  548M vmlinux.clang12.dwarf2
  515M vmlinux.gcc10.2.dwarf5.compressed
  599M vmlinux.gcc10.2.dwarf4.compressed
  624M vmlinux.gcc10.2.dwarf2.compressed
  630M vmlinux.gcc10.2.dwarf5
  765M vmlinux.gcc10.2.dwarf4
  809M vmlinux.gcc10.2.dwarf2

Though the quality of debug info is harder to quantify; size is not a proxy for quality. Jakub notes:

  One thing is GCC DWARF-5 support, that is whether the compiler will support -gdwarf-5 flag, and that support should be there from GCC 7 onwards. All [GCC] 5.1 - 6.x did was start accepting -gdwarf-5 as experimental option that enabled some small DWARF subset (initially only a few DW_LANG_* codes newly added to DWARF5 drafts). Only GCC 7 (released after DWARF 5 has been finalized) started emitting DWARF5 section headers and got most of the DWARF5 changes in... Another separate thing is whether the assembler does support the -gdwarf-5 option (i.e. if you can compile assembler files with -Wa,-gdwarf-5) ... That option is about whether the assembler will emit DWARF5 or DWARF2 .debug_line. It is fine to compile C sources with -gdwarf-5 and use DWARF2 .debug_line for assembler files if as doesn't support it.

Version check GCC so that we don't need to worry about the difference in command line args between GNU readelf and llvm-readelf/llvm-dwarfdump to validate the DWARF Version in the assembler feature detection script. Most issues with clang produced assembler were fixed in binutils 2.35.1, but 2.35.2 fixed issues related to requiring the flag -Wa,-gdwarf-5 explicitly. The added shell script test checks for the latter, and is only required when using clang without its integrated assembler, though we use it for clang regardless as we do not yet have a way to query the assembler from Kconfig. Disabled for now if CONFIG_DEBUG_INFO_BTF is set; pahole doesn't yet recognize the new additions to the DWARF debug info. This only modifies the DWARF version emitted by the compiler, not the assembler. The DWARF version of a binary can be validated with:

  $ llvm-dwarfdump <object file> | head -n 4 | grep version

or

  $ readelf --debug-dump=info <object file> 2>/dev/null | grep Version

Parts of the tree don't reuse DEBUG_CFLAGS as they should; such cleanup is left as a follow up. 
Link: http://www.dwarfstd.org/doc/DWARF5.pdf Link: https://bugzilla.redhat.com/show_bug.cgi?id=1922707 Reported-by: Sedat Dilek <sedat.dilek@gmail.com> Suggested-by: Arvind Sankar <nivedita@alum.mit.edu> Suggested-by: Caroline Tice <cmtice@google.com> Suggested-by: Fangrui Song <maskray@google.com> Suggested-by: Jakub Jelinek <jakub@redhat.com> Suggested-by: Masahiro Yamada <masahiroy@kernel.org> Suggested-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v12.0.0-rc1 x86-64 Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
Committed by Nick Desaulniers
Adds a default CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT which allows the implicit default version of DWARF emitted by the toolchain to progress over time. Modifies CONFIG_DEBUG_INFO_DWARF4 to be a member of a choice, making it mutually exclusive with CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT. Users may want to select this if they are using a newer toolchain, but have consumers of the DWARF debug info that aren't yet ready for newer DWARF versions' debug info. Does so in a way that's forward compatible with existing configs, and makes adding future versions more straightforward. This patch does not change the current behavior or selection of DWARF version for users upgrading to kernels with this patch. GCC since ~4.8 has defaulted to DWARF v4 implicitly, and GCC 11 has bumped this to v5. Remove the Kconfig help text about DWARF v4 being larger. It's empirically false for the latest toolchains for x86_64 defconfig, has no point of reference (I suspect it was DWARF v2 but that's still empirically false), and debug info size is not a qualitative measure. Suggested-by: Arvind Sankar <nivedita@alum.mit.edu> Suggested-by: Fangrui Song <maskray@google.com> Suggested-by: Jakub Jelinek <jakub@redhat.com> Suggested-by: Mark Wielaard <mark@klomp.org> Suggested-by: Masahiro Yamada <masahiroy@kernel.org> Suggested-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 12 Feb, 2021 (1 commit)
-
-
Committed by Masahiro Yamada
The -gdwarf-4 flag is supported by GCC 4.5+, and also by Clang. You can see it at https://godbolt.org/z/6ed1oW

  For gcc 4.5.3 pane, line 37:     .value 0x4
  For clang 10.0.1 pane, line 117: .short 4

Given Documentation/process/changes.rst stating GCC 4.9 is the minimal version, this cc-option is unneeded.

Note
----
CONFIG_DEBUG_INFO_DWARF4 controls the DWARF version only for C files. As you can see in the top Makefile, -gdwarf-4 is only passed to CFLAGS.

  ifdef CONFIG_DEBUG_INFO_DWARF4
  DEBUG_CFLAGS += -gdwarf-4
  endif

This flag is used when compiling *.c files. On the other hand, the assembler is always given -gdwarf-2.

  KBUILD_AFLAGS += -Wa,-gdwarf-2

Hence, the debug info that comes from *.S files is always DWARF v2. This is simply because GAS supported only -gdwarf-2 for a long time. Recently, GAS gained the support for --gdwarf-[345] options. [1] And, also we have Clang integrated assembler. So, the debug info for *.S files might be improved in the future. In my understanding, the current code is intentional, not a bug.

[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=31bf18645d98b4d3d7357353be840e320649a67d
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
-
- 22 Jan, 2021 (1 commit)
-
-
Committed by Mark Rutland
We generally expect local_irq_save() and local_irq_restore() to be paired and sanely nested, and so local_irq_restore() expects to be called with irqs disabled. Thus, within local_irq_restore() we only trace irq flag changes when unmasking irqs. This means that a sequence such as:

  | local_irq_disable();
  | local_irq_save(flags);
  | local_irq_enable();
  | local_irq_restore(flags);

... is liable to break things, as the local_irq_restore() would mask irqs without tracing this change. Similar problems may exist for architectures whose arch_irq_restore() function depends on being called with irqs disabled. We don't consider such sequences to be a good idea, so let's define those as forbidden, and add tooling to detect such broken cases. This patch adds debug code to WARN() when raw_local_irq_restore() is called with irqs enabled. As raw_local_irq_restore() is expected to pair with raw_local_irq_save(), it should never be called with irqs enabled. To avoid the possibility of circular header dependencies between irqflags.h and bug.h, the warning is handled in a separate C file. The new code is all conditional on a new CONFIG_DEBUG_IRQFLAGS symbol which is independent of CONFIG_TRACE_IRQFLAGS. As noted above such cases will confuse lockdep, so CONFIG_DEBUG_LOCKDEP now selects CONFIG_DEBUG_IRQFLAGS. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210111153707.10071-1-mark.rutland@arm.com
-
- 16 Dec, 2020 (1 commit)
-
-
Committed by Andy Shevchenko
Test get_option() for a starter which is provided by cmdline.c. [akpm@linux-foundation.org: fix warning by constifying cmdline_test_values] [andriy.shevchenko@linux.intel.com: type of expected returned values should be int] Link: https://lkml.kernel.org/r/20201116104244.15472-1-andriy.shevchenko@linux.intel.com [andriy.shevchenko@linux.intel.com: provide meaningful MODULE_LICENSE()] Link: https://lkml.kernel.org/r/20201116104257.15527-1-andriy.shevchenko@linux.intel.com Link: https://lkml.kernel.org/r/20201112180732.75589-6-andriy.shevchenko@linux.intel.comSigned-off-by: NAndy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Shuah Khan <skhan@linuxfoundation.org> Cc: Vitor Massaru Iha <vitor@massaru.org> Cc: Mark Brown <broonie@kernel.org> Cc: Brendan Higgins <brendanhiggins@google.com> Cc: David Gow <davidgow@google.com> Cc: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 12 Dec, 2020 (1 commit)
-
-
Committed by Palmer Dabbelt
As part of adding support for STRICT_DEVMEM to the RISC-V port, Zong provided a devmem_is_allowed() implementation that's exactly the same as all the others I checked. Instead I'm adding a generic version, which will soon be used. Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 02 Dec, 2020 (2 commits)
-
-
Committed by Masahiro Yamada
Revert commit cebc04ba ("add CONFIG_ENABLE_MUST_CHECK"). A lot of warn_unused_result warnings existed in 2006, but until now they have been fixed thanks to people doing allmodconfig tests. Our goal is to always enable __must_check where appropriate, so this CONFIG option is no longer needed. I see a lot of defconfig (arch/*/configs/*_defconfig) files having: # CONFIG_ENABLE_MUST_CHECK is not set I did not touch them for now since it would be a big churn. If arch maintainers want to clean them up, please go ahead. While I was here, I also moved __must_check to compiler_attributes.h from compiler_types.h Signed-off-by: NMasahiro Yamada <masahiroy@kernel.org> Acked-by: NJason A. Donenfeld <Jason@zx2c4.com> Acked-by: NNathan Chancellor <natechancellor@gmail.com> Reviewed-by: NNick Desaulniers <ndesaulniers@google.com> [Moved addition in compiler_attributes.h to keep it sorted] Signed-off-by: NMiguel Ojeda <ojeda@kernel.org>
-
Committed by Marco Elver
It turns out that usage of skb extensions can cause memory leaks. Ido Schimmel reported: "[...] there are instances that blindly overwrite 'skb->extensions' by invoking skb_copy_header() after __alloc_skb()." Therefore, give up on using skb extensions for KCOV handle, and instead directly store kcov_handle in sk_buff. Fixes: 6370cc3b ("net: add kcov handle to skb extensions") Fixes: 85ce50d3 ("net: kcov: don't select SKB_EXTENSIONS when there is no NET") Fixes: 97f53a08 ("net: linux/skbuff.h: combine SKB_EXTENSIONS + KCOV handling") Link: https://lore.kernel.org/linux-wireless/20201121160941.GA485907@shredder.lan/Reported-by: NIdo Schimmel <idosch@idosch.org> Signed-off-by: NMarco Elver <elver@google.com> Link: https://lore.kernel.org/r/20201125224840.2014773-1-elver@google.comSigned-off-by: NJakub Kicinski <kuba@kernel.org>
-
- 24 Nov, 2020 (2 commits)
-
-
Committed by Thomas Gleixner
CONFIG_DEBUG_KMAP_LOCAL, which is selected by CONFIG_DEBUG_HIGHMEM, is only providing guard pages, but does not provide a mechanism to enforce the usage of the kmap_local() infrastructure. Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP which forces the temporary mapping even for lowmem pages. This needs to be a separate config switch because this only works on architectures which do not have cache aliasing problems. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20201118204007.028261233@linutronix.de
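For orientation, the kmap_local() usage pattern these debug options exercise looks roughly like the hedged sketch below (the helper name is invented); CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP makes even lowmem pages go through this temporary-mapping path.

  #include <linux/highmem.h>
  #include <linux/string.h>

  static void sketch_copy_from_page(struct page *page, void *dst, size_t len)
  {
      /* Strictly nested, CPU-local mapping; valid until kunmap_local(). */
      void *src = kmap_local_page(page);

      memcpy(dst, src, len);
      kunmap_local(src);
  }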
-
Committed by Thomas Gleixner
CONFIG_KMAP_LOCAL can be enabled by x86/32bit even if CONFIG_HIGHMEM is not enabled for temporary MMIO space mappings. Provide it as a separate config option which depends on CONFIG_KMAP_LOCAL and let CONFIG_DEBUG_HIGHMEM select it. This won't increase the debug coverage of this significantly but it paves the way to do so. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20201118204006.869487226@linutronix.de
-
- 18 Nov, 2020 (1 commit)
-
-
Committed by Andy Shevchenko
Add test cases for newly added resource APIs. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 13 Nov, 2020 (1 commit)
-
-
Committed by Randy Dunlap
Fix kconfig warning when CONFIG_NET is not set/enabled: WARNING: unmet direct dependencies detected for SKB_EXTENSIONS Depends on [n]: NET [=n] Selected by [y]: - KCOV [=y] && ARCH_HAS_KCOV [=y] && (CC_HAS_SANCOV_TRACE_PC [=y] || GCC_PLUGINS [=n]) Fixes: 6370cc3b ("net: add kcov handle to skb extensions") Signed-off-by: NRandy Dunlap <rdunlap@infradead.org> Cc: Aleksandr Nogikh <nogikh@google.com> Cc: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20201110175746.11437-1-rdunlap@infradead.orgSigned-off-by: NJakub Kicinski <kuba@kernel.org>
-
- 11 Nov, 2020 (1 commit)
-
-
Committed by Andrii Nakryiko
Detect if pahole supports split BTF generation, and generate BTF for each selected kernel module, if it does. This is exposed to Makefiles and C code as CONFIG_DEBUG_INFO_BTF_MODULES flag. Kernel module BTF has to be re-generated if either vmlinux's BTF changes or module's .ko changes. To achieve that, I needed a helper similar to if_changed, but that would allow filtering out vmlinux from the list of updated dependencies for .ko building. I've put it next to the only place that uses and needs it, but it might be a better idea to just add it along the other if_changed variants into scripts/Kbuild.include. Each kernel module's BTF deduplication is pretty fast, as it does only incremental BTF deduplication on top of already deduplicated vmlinux BTF. To show the added build time, I first ran make on an already-built kernel (to establish the baseline) and then forced only BTF re-generation, without regenerating .ko files. The build was performed with -j60 parallelization on a 56-core machine. The final time also includes bzImage building, so it's not a pure BTF overhead.

  $ time make -j60
  ...
  make -j60  27.65s user 10.96s system 782% cpu 4.933 total

  $ touch ~/linux-build/default/vmlinux && time make -j60
  ...
  make -j60  123.69s user 27.85s system 1566% cpu 9.675 total

So 4.6 seconds real time, with noticeable part spent in compressed vmlinux and bzImage building. To show size savings, I've built my kernel configuration with about 700 kernel modules with full BTF per each kernel module (without deduplicating against vmlinux) and with split BTF against deduplicated vmlinux (approach in this patch). Below are top 10 modules with biggest BTF sizes. And total size of BTF data across all kernel modules. It shows that split BTF "compresses" 115MB down to 5MB total. And the biggest kernel modules get a downsize from 500-570KB down to 200-300KB.

FULL BTF
========

  $ for f in $(find . -name '*.ko'); do size -A -d $f | grep BTF | awk '{print $2}'; done | awk '{ s += $1 } END { print s }'
  115710691

  $ for f in $(find . -name '*.ko'); do printf "%s %d\n" $f $(size -A -d $f | grep BTF | awk '{print $2}'); done | sort -nr -k2 | head -n10
  ./drivers/gpu/drm/i915/i915.ko 570570
  ./drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko 520240
  ./drivers/gpu/drm/radeon/radeon.ko 503849
  ./drivers/infiniband/hw/mlx5/mlx5_ib.ko 491777
  ./fs/xfs/xfs.ko 411544
  ./drivers/net/ethernet/intel/i40e/i40e.ko 403904
  ./drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko 398754
  ./drivers/infiniband/core/ib_core.ko 397224
  ./fs/cifs/cifs.ko 386249
  ./fs/nfsd/nfsd.ko 379738

SPLIT BTF
=========

  $ for f in $(find . -name '*.ko'); do size -A -d $f | grep BTF | awk '{print $2}'; done | awk '{ s += $1 } END { print s }'
  5194047

  $ for f in $(find . -name '*.ko'); do printf "%s %d\n" $f $(size -A -d $f | grep BTF | awk '{print $2}'); done | sort -nr -k2 | head -n10
  ./drivers/gpu/drm/i915/i915.ko 293206
  ./drivers/gpu/drm/radeon/radeon.ko 282103
  ./fs/xfs/xfs.ko 222150
  ./drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko 198503
  ./drivers/infiniband/hw/mlx5/mlx5_ib.ko 198356
  ./drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko 113444
  ./fs/cifs/cifs.ko 109379
  ./arch/x86/kvm/kvm.ko 100225
  ./drivers/gpu/drm/drm.ko 94827
  ./drivers/infiniband/core/ib_core.ko 91188

Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20201110011932.3201430-4-andrii@kernel.org
-
- 03 Nov, 2020 (1 commit)
-
-
Committed by Aleksandr Nogikh
Remote KCOV coverage collection enables coverage-guided fuzzing of the code that is not reachable during normal system call execution. It is especially helpful for fuzzing networking subsystems, where it is common to perform packet handling in separate work queues even for the packets that originated directly from the user space. Enable coverage-guided frame injection by adding kcov remote handle to skb extensions. Default initialization in __alloc_skb and __build_skb_around ensures that no socket buffer that was generated during a system call will be missed. Code that is of interest and that performs packet processing should be annotated with kcov_remote_start()/kcov_remote_stop(). An alternative approach is to determine kcov_handle solely on the basis of the device/interface that received the specific socket buffer. However, in this case it would be impossible to distinguish between packets that originated during normal background network processes or were intentionally injected from the user space. Signed-off-by: NAleksandr Nogikh <nogikh@google.com> Acked-by: NWillem de Bruijn <willemb@google.com> Signed-off-by: NJakub Kicinski <kuba@kernel.org>
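As a hedged illustration of the annotation this enables (the surrounding function name is invented; the real call sites live in the networking code), a packet-processing path tags its work with the handle carried by the skb so that remote coverage is attributed to the fuzzing process that injected the frame:

  #include <linux/kcov.h>
  #include <linux/skbuff.h>

  static void sketch_process_frame(struct sk_buff *skb)
  {
      /* Attribute coverage of this work to the injecting process. */
      kcov_remote_start_common(skb_get_kcov_handle(skb));
      /* ... actual packet handling would go here ... */
      kcov_remote_stop();
  }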
-
- 30 Oct, 2020 (1 commit)
-
-
Committed by Mauro Carvalho Chehab
The files under Documentation/ABI should follow the syntax as defined at Documentation/ABI/README. Allow checking if they're following the syntax by running the ABI parser script on COMPILE_TEST. With that, when there's a problem with a file under Documentation/ABI, it would produce a warning like: Warning: file ./Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats#14: What '/sys/bus/pci/devices/<dev>/aer_stats/aer_rootport_total_err_cor' doesn't have a description Warning: file ./Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats#21: What '/sys/bus/pci/devices/<dev>/aer_stats/aer_rootport_total_err_fatal' doesn't have a description Acked-by: NJonathan Corbet <corbet@lwn.net> Signed-off-by: NMauro Carvalho Chehab <mchehab+huawei@kernel.org> Link: https://lore.kernel.org/r/57a38de85cb4b548857207cf1fc1bf1ee08613c9.1604042072.git.mchehab+huawei@kernel.orgSigned-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 Oct, 2020 (1 commit)
-
-
Committed by Albert van der Linde
Patch series "add fault injection to user memory access", v3. The goal of this series is to improve testing of fault-tolerance in usages of user memory access functions, by adding support for fault injection. syzkaller/syzbot are using the existing fault injection modes and will use this particular feature also. The first patch adds failure injection capability for usercopy functions. The second changes usercopy functions to use this new failure capability (copy_from_user, ...). The third patch adds get/put/clear_user failures to x86. This patch (of 3): Add a failure injection capability to improve testing of fault-tolerance in usages of user memory access functions. Add CONFIG_FAULT_INJECTION_USERCOPY to enable faults in usercopy functions. The should_fail_usercopy function is to be called by these functions (copy_from_user, get_user, ...) in order to fail or not. Signed-off-by: NAlbert van der Linde <alinde@google.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Reviewed-by: NAkinobu Mita <akinobu.mita@gmail.com> Reviewed-by: NAlexander Potapenko <glider@google.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Marco Elver <elver@google.com> Cc: Christoph Hellwig <hch@lst.de> Link: http://lkml.kernel.org/r/20200831171733.955393-1-alinde@google.com Link: http://lkml.kernel.org/r/20200831171733.955393-2-alinde@google.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 14 Oct, 2020 (2 commits)
-
-
Committed by Matthew Wilcox (Oracle)
Here is a very rare race which leaks memory: Page P0 is allocated to the page cache. Page P1 is free. Thread A Thread B Thread C find_get_entry(): xas_load() returns P0 Removes P0 from page cache P0 finds its buddy P1 alloc_pages(GFP_KERNEL, 1) returns P0 P0 has refcount 1 page_cache_get_speculative(P0) P0 has refcount 2 __free_pages(P0) P0 has refcount 1 put_page(P0) P1 is not freed Fix this by freeing all the pages in __free_pages() that won't be freed by the call to put_page(). It's usually not a good idea to split a page, but this is a very unlikely scenario. Fixes: e286781d ("mm: speculative page references") Signed-off-by: NMatthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Acked-by: NMike Rapoport <rppt@linux.ibm.com> Cc: Nick Piggin <npiggin@gmail.com> Cc: Hugh Dickins <hughd@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200926213919.26642-1-willy@infradead.orgSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
Committed by Vitor Massaru Iha
This converts the test_bitfield runtime tests in `lib/test_bitfield.c` to KUnit tests. Code Style Documentation: [0] Signed-off-by: Vitor Massaru Iha <vitor@massaru.org> Link: [0] https://lore.kernel.org/linux-kselftest/20200620054944.167330-1-davidgow@google.com/T/#u Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
-
- 20 Sep, 2020 (1 commit)
-
-
Committed by Changbin Du
This moves the KCSAN kconfig items under menu 'Generic Kernel Debugging Instruments' where UBSAN resides. Signed-off-by: Changbin Du <changbin.du@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200904152224.5570-1-changbin.du@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 Sep, 2020 (1 commit)
-
-
Committed by Paul E. McKenney
This commit causes csd_lock_wait() to emit diagnostics when a CPU fails to respond quickly enough to one of the smp_call_function() family of function calls. These diagnostics are enabled by a new CSD_LOCK_WAIT_DEBUG Kconfig option that depends on DEBUG_KERNEL. This commit was inspired by an earlier patch by Josef Bacik. [ paulmck: Fix for syzbot+0f719294463916a3fc0e@syzkaller.appspotmail.com ] [ paulmck: Fix KASAN use-after-free issue reported by Qian Cai. ] [ paulmck: Fix botched nr_cpu_ids comparison per Dan Carpenter. ] [ paulmck: Apply Peter Zijlstra feedback. ] Link: https://lore.kernel.org/lkml/00000000000042f21905a991ecea@google.com Link: https://lore.kernel.org/lkml/0000000000002ef21705a9933cf3@google.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: NPaul E. McKenney <paulmck@kernel.org>
-
- 26 Aug, 2020 (1 commit)
-
-
Committed by Sedat Dilek
While playing with [1] I saw that the handling of CONFIG_DEBUG_INFO can be simplified. [1] https://patchwork.kernel.org/patch/11716107/ Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 25 Aug, 2020 (1 commit)
-
-
Committed by Paul E. McKenney
This commit adds an smp_call_function() torture test that repeatedly invokes this function and complains if things go badly awry. Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
- 13 Aug, 2020 (5 commits)
-
-
Committed by Tiezhu Yang
The help text of CONFIG_PANIC_TIMEOUT contains a duplicated "the"; remove it. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lkml.kernel.org/r/1591103358-32087-2-git-send-email-yangtiezhu@loongson.cn Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Rikard Falkeborn
Add tests of GENMASK and GENMASK_ULL. A few test cases that should fail compilation are provided under #ifdef TEST_GENMASK_FAILURES [rd.dunlap@gmail.com: add MODULE_LICENSE()] Link: http://lkml.kernel.org/r/dfc74524-0789-2827-4eff-476ddab65699@gmail.com [weiyongjun1@huawei.com: make some functions static] Link: http://lkml.kernel.org/r/20200702150336.4756-1-weiyongjun1@huawei.comSuggested-by: NAndy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: NRikard Falkeborn <rikard.falkeborn@gmail.com> Signed-off-by: NRandy Dunlap <rd.dunlap@gmail.com> Signed-off-by: NWei Yongjun <weiyongjun1@huawei.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Reviewed-by: NAndy Shevchenko <andy.shevchenko@gmail.com> Acked-by: NWilliam Breathitt Gray <vilhelm.gray@gmail.com> Cc: Emil Velikov <emil.l.velikov@gmail.com> Cc: Syed Nayyar Waris <syednwaris@gmail.com> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Link: http://lkml.kernel.org/r/20200621054210.14804-2-rikard.falkeborn@gmail.com Link: http://lkml.kernel.org/r/20200608221823.35799-2-rikard.falkeborn@gmail.comSigned-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
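For readers unfamiliar with the macros under test, GENMASK(h, l) builds an unsigned long with bits l through h set, and GENMASK_ULL() is the 64-bit variant. Below is a hedged, minimal sketch of such a KUnit check; the values are chosen here for illustration only, and the suite boilerplate (kunit_case array, suite definition, kunit_test_suites()) is omitted.

  #include <kunit/test.h>
  #include <linux/bits.h>

  static void genmask_sketch(struct kunit *test)
  {
      /* bits 4..7 set */
      KUNIT_EXPECT_EQ(test, 0xf0UL, GENMASK(7, 4));
      /* bits 5..23 set, 64-bit variant */
      KUNIT_EXPECT_EQ(test, 0xffffe0ULL, GENMASK_ULL(23, 5));
  }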
-
Committed by Alexander A. Klimov
Rationale: Reduces attack surface on kernel devs opening the links for MITM as HTTPS traffic is much harder to manipulate. Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Coly Li <colyli@suse.de> [crc64.c] Link: http://lkml.kernel.org/r/20200726112154.16510-1-grandmaster@al2klimov.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Tiezhu Yang
Since test_lockup is a test module to generate lockups, it is better to limit TEST_LOCKUP to module (=m) or disabled (=n) because we can not use the module parameters when CONFIG_TEST_LOCKUP=y. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Kees Cook <keescook@chromium.org> Link: http://lkml.kernel.org/r/1595555407-29875-1-git-send-email-yangtiezhu@loongson.cn Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Feng Tang
Recently 0day reported many strange performance changes (regression or improvement), in which there was no obvious relation between the culprit commit and the benchmark at the first look, and it causes people to doubt the test itself is wrong. Upon further check, many of these cases are caused by the change to the alignment of kernel text or data, as whole text/data of kernel are linked together, change in one domain may affect alignments of other domains. gcc has an option '-falign-functions=n' to force text aligned, and with that option enabled, some of those performance changes will be gone, like [1][2][3]. Add this option so that developers and 0day can easily find performance bump caused by text alignment change, as tracking these strange bump is quite time consuming. Though it can't help in other cases like data alignment changes like [4]. Following is some size data for v5.7 kernel built with a RHEL config used in 0day:

      text       data      bss       dec  filename
  19738771   13292906  5554236  38585913  vmlinux.noalign
  19758591   13297002  5529660  38585253  vmlinux.align32

Raw vmlinux size in bytes:

        v5.7  v5.7+align32
   253950832     254018000  +0.02%

Some benchmark data, most of them have no big change:

  * hackbench:     [ -1.8%, +0.5%]
  * fsmark:        [ -3.2%, +3.4%]  # ext4/xfs/btrfs
  * kbuild:        [ -2.0%, +0.9%]
  * will-it-scale: [ -0.5%, +1.8%]  # mmap1/pagefault3
  * netperf:
    - TCP_CRR      [+16.6%, +97.4%]
    - TCP_RR       [-18.5%,  -1.8%]
    - TCP_STREAM   [ -1.1%,  +1.9%]

[1] https://lore.kernel.org/lkml/20200114085637.GA29297@shao2-debian/
[2] https://lore.kernel.org/lkml/20200330011254.GA14393@feng-iot/
[3] https://lore.kernel.org/lkml/1d98d1f0-fe84-6df7-f5bd-f4cb2cdb7f45@intel.com/
[4] https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
Signed-off-by: Feng Tang <feng.tang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Andi Kleen <andi.kleen@intel.com> Cc: Huang Ying <ying.huang@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@intel.com> Link: http://lkml.kernel.org/r/1595475001-90945-1-git-send-email-feng.tang@intel.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 Jul, 2020 (1 commit)
-
-
Committed by Ahmed S. Darwish
Asserting that preemption is enabled or disabled is a critical sanity check. Developers are usually reluctant to add such a check in a fastpath as reading the preemption count can be costly. Extend the lockdep API with macros asserting that preemption is disabled or enabled. If lockdep is disabled, or if the underlying architecture does not support kernel preemption, this assert has no runtime overhead. References: f54bb2ec ("locking/lockdep: Add IRQs disabled/enabled assertion APIs: ...") Signed-off-by: NAhmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200720155530.1173732-8-a.darwish@linutronix.de
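As a usage sketch only (the per-CPU variable and the function are invented for illustration), the new assert documents and checks a preemption-off requirement at essentially no cost when lockdep is disabled:

  #include <linux/lockdep.h>
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(int, sketch_stat);

  static void sketch_update_stat(void)
  {
      /* Caller must have preemption disabled; free unless lockdep is on. */
      lockdep_assert_preemption_disabled();
      __this_cpu_inc(sketch_stat);
  }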
-
- 27 Jul, 2020 (1 commit)
-
-
Committed by peterz@infradead.org
Prior to commit: 859d069e ("lockdep: Prepare for NMI IRQ state tracking") IRQ state tracking was disabled in NMIs due to nmi_enter() doing lockdep_off() -- with the obvious requirement that NMI entry call nmi_enter() before trace_hardirqs_off(). [ AFAICT, PowerPC and SH violate this order on their NMI entry ] However, that commit explicitly changed lockdep_hardirqs_*() to ignore lockdep_off() and breaks every architecture that has irq-tracing in it's NMI entry that hasn't been fixed up (x86 being the only fixed one at this point). The reason for this change is that by ignoring lockdep_off() we can: - get rid of 'current->lockdep_recursion' in lockdep_assert_irqs*() which was going to to give header-recursion issues with the seqlock rework. - allow these lockdep_assert_*() macros to function in NMI context. Restore the previous state of things and allow an architecture to opt-in to the NMI IRQ tracking support, however instead of relying on lockdep_off(), rely on in_nmi(), both are part of nmi_enter() and so over-all entry ordering doesn't need to change. Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: NIngo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200727124852.GK119549@hirez.programming.kicks-ass.net
-