- 14 Aug 2019, 1 commit
Submitted by Mark Rutland
Since architectures can implement ftrace using a variety of mechanisms, generic code should always use CC_FLAGS_FTRACE rather than assuming that ftrace is built using -pg. Since commit: 2464a609 ("ftrace: do not trace library functions") ... lib/Makefile has removed CC_FLAGS_FTRACE from KBUILD_CFLAGS, so ftrace is disabled for all files under lib/. Given that, we shouldn't explicitly remove -pg when building lib/string.o, as this is redundant and bad form. Clean things up accordingly. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Coly Li <colyli@suse.de> Cc: Gary R Hook <gary.hook@amd.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Matthew Wilcox <willy@infradead.org> Link: https://lkml.kernel.org/r/20190806162539.51918-1-mark.rutland@arm.com
-
- 03 Aug 2019, 1 commit
Submitted by Arnd Bergmann
objtool points out several conditions that it does not like, depending on the combination with other configuration options and compiler variants: stack protector: lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0xbf: call to __stack_chk_fail() with UACCESS enabled lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0xbe: call to __stack_chk_fail() with UACCESS enabled stackleak plugin: lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0x4a: call to stackleak_track_stack() with UACCESS enabled lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0x4a: call to stackleak_track_stack() with UACCESS enabled kasan: lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0x25: call to memcpy() with UACCESS enabled lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0x25: call to memcpy() with UACCESS enabled The stackleak and kasan options just need to be disabled for this file as we do for other files already. For the stack protector, we already attempt to disable it, but this fails on clang because the check is mixed with the gcc specific -fno-conserve-stack option. According to Andrey Ryabinin, that option is not even needed, dropping it here fixes the stackprotector issue. Link: http://lkml.kernel.org/r/20190722125139.1335385-1-arnd@arndb.de Link: https://lore.kernel.org/lkml/20190617123109.667090-1-arnd@arndb.de/t/ Link: https://lore.kernel.org/lkml/20190722091050.2188664-1-arnd@arndb.de/t/ Fixes: d08965a2 ("x86/uaccess, ubsan: Fix UBSAN vs. SMAP") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Jul 2019, 1 commit
Submitted by Alexander Potapenko
Add tests for heap and pagealloc initialization. These can be used to check init_on_alloc and init_on_free implementations as well as other approaches to initialization. Expected test output in the case the kernel provides heap initialization (e.g. when running with either init_on_alloc=1 or init_on_free=1): test_meminit: all 10 tests in test_pages passed test_meminit: all 40 tests in test_kvmalloc passed test_meminit: all 60 tests in test_kmemcache passed test_meminit: all 10 tests in test_rcu_persistent passed test_meminit: all 120 tests passed! Link: http://lkml.kernel.org/r/20190529123812.43089-4-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Christoph Lameter <cl@linux.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Sandeep Patil <sspatil@android.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Marco Elver <elver@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 Jul 2019, 1 commit
Submitted by Mahesh Bandewar
Since this is not really a device with all capabilities, this test ensures that it has *enough* to make it through the data path without causing unwanted side-effects (read: crash!). Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Jun 2019, 1 commit
Submitted by Tal Gilboa
Moved all logic from dim.h and net_dim.h to dim.c and net_dim.c. This is both more structurally appealing and would allow exposing only the externally used functions. Signed-off-by: Tal Gilboa <talgi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 20 Jun 2019, 1 commit
Submitted by Ard Biesheuvel
Refactor the core rc4 handling so we can move most users to a library interface, permitting us to drop the cipher interface entirely in a future patch. This is part of an effort to simplify the crypto API and improve its robustness against incorrect use. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 17 Jun 2019, 1 commit
Submitted by Masahiro Yamada
jedec_ddr_data.c exports 3 symbols, and all of them are only referenced from drivers/memory/{emif.c,of_memory.c}. drivers/memory/ is a better location than lib/. I removed the Kconfig prompt "JEDEC DDR data" because it is only select'ed by TI_EMIF, and there is no other user. There is no good reason to make it a user-configurable CONFIG option. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 15 May 2019, 1 commit
Submitted by Andy Shevchenko
For better maintenance and expansion, move the mathematical helpers to a separate folder. No functional change intended. Note: int_sqrt() is not used as a part of lib, so it is moved to a regular obj. Link: http://lkml.kernel.org/r/20190323172531.80025-1-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Lee Jones <lee.jones@linaro.org> Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: Ray Jui <rjui@broadcom.com> [mchehab+samsung@kernel.org: fix broken doc references for div64.c and gcd.c] Link: http://lkml.kernel.org/r/734f49bae5d4052b3c25691dfefad59bea2e5843.1555580999.git.mchehab+samsung@kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May 2019, 1 commit
Submitted by Vladimir Oltean
This provides a unified API for accessing register bit fields regardless of memory layout. The basic unit of data for these API functions is the u64. The process of transforming a u64 from native CPU encoding into the peripheral's encoding is called 'pack', and transforming it from the peripheral's encoding back to native CPU encoding is 'unpack'. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
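As a rough illustration of the pack/unpack API described above, here is a minimal C sketch; the field positions, quirk choice and wrapper names are invented for this example, while the packing() call itself follows the declaration in include/linux/packing.h:

    #include <linux/packing.h>

    /* Pack a value into bits 47:32 of a register image, then read it back.
     * Bit positions are logical positions within the u64, independent of
     * the buffer's memory layout; quirks describe the peripheral's layout.
     */
    static int example_pack(void *regbuf, size_t buflen, u64 val)
    {
            return packing(regbuf, &val, 47, 32, buflen, PACK, QUIRK_LITTLE_ENDIAN);
    }

    static int example_unpack(void *regbuf, size_t buflen, u64 *val)
    {
            return packing(regbuf, val, 47, 32, buflen, UNPACK, QUIRK_LITTLE_ENDIAN);
    }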
-
- 30 Apr 2019, 1 commit
Submitted by Gary Hook
Enablement of AMD's Secure Memory Encryption feature is determined very early after start_kernel() is entered. Part of this procedure involves scanning the command line for the parameter 'mem_encrypt'. To determine intended state, the function sme_enable() uses library functions cmdline_find_option() and strncmp(). Their use occurs early enough such that it cannot be assumed that any instrumentation subsystem is initialized. For example, making calls to a KASAN-instrumented function before KASAN is set up will result in the use of uninitialized memory and a boot failure. When AMD's SME support is enabled, conditionally disable instrumentation of these dependent functions in lib/string.c and arch/x86/lib/cmdline.c. [ bp: Get rid of intermediary nostackp var and cleanup whitespace. ] Fixes: aca20d54 ("x86/mm: Add support to make use of Secure Memory Encryption") Reported-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: Coly Li <colyli@suse.de> Cc: "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: "luto@kernel.org" <luto@kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: "mingo@redhat.com" <mingo@redhat.com> Cc: "peterz@infradead.org" <peterz@infradead.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/155657657552.7116.18363762932464011367.stgit@sosrh3.amd.com
-
- 09 Apr 2019, 1 commit
Submitted by Tobin C. Harding
Add a test module for the new strscpy_pad() function. Tie it into the kselftest infrastructure for lib/ tests. Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Tobin C. Harding <tobin@kernel.org> Signed-off-by: Shuah Khan <shuah@kernel.org>
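For context, a minimal sketch of the behaviour the new test exercises; the buffer size and string here are arbitrary example values:

    #include <linux/string.h>

    static void strscpy_pad_example(void)
    {
            char buf[8];
            ssize_t len;

            /* Copies the string, NUL-terminates it, and zero-fills the rest
             * of the destination, unlike strscpy() which leaves the tail
             * untouched.
             */
            len = strscpy_pad(buf, "abc", sizeof(buf));
            /* len == 3; buf now holds "abc" followed by five NUL bytes */
            (void)len;
    }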
-
- 03 Apr 2019, 1 commit
Submitted by Peter Zijlstra
UBSAN can insert extra code in random locations, including AC=1 sections. Typically this code is not safe and needs wrapping. So far, only __ubsan_handle_type_mismatch* have been observed in AC=1 sections and therefore only those are annotated. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 14 Mar 2019, 1 commit
Submitted by Masahiro Yamada
Currently, the Kbuild core manipulates header search paths in a crazy way [1]. To fix this mess, I want all Makefiles to add explicit $(srctree)/ to the search paths in the srctree. Some Makefiles are already written in that way, but not all. The goal of this work is to make the notation consistent, and finally get rid of the gross hacks. Having whitespace after -I does not matter since commit 48f6e3cf ("kbuild: do not drop -I without parameter"). [1]: https://patchwork.kernel.org/patch/9632347/ Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 13 Mar 2019, 2 commits
Submitted by Kent Overstreet
All existing users have been converted to generic radix trees. Link: http://lkml.kernel.org/r/20181217131929.11727-8-kent.overstreet@gmail.com Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Eric Paris <eparis@parisplace.org> Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Neil Horman <nhorman@tuxdriver.com> Cc: Paul Moore <paul@paul-moore.com> Cc: Pravin B Shelar <pshelar@ovn.org> Cc: Shaohua Li <shli@kernel.org> Cc: Stephen Smalley <sds@tycho.nsa.gov> Cc: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Kent Overstreet
Very simple radix tree implementation that supports storing arbitrary size entries, up to PAGE_SIZE - upcoming patches will convert existing flex_array users to genradixes. The new genradix code has a much simpler API and implementation, and doesn't have a hard limit on the number of elements like flex_array does. Link: http://lkml.kernel.org/r/20181217131929.11727-5-kent.overstreet@gmail.com Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Eric Paris <eparis@parisplace.org> Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Neil Horman <nhorman@tuxdriver.com> Cc: Paul Moore <paul@paul-moore.com> Cc: Pravin B Shelar <pshelar@ovn.org> Cc: Shaohua Li <shli@kernel.org> Cc: Stephen Smalley <sds@tycho.nsa.gov> Cc: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Mar 2019, 1 commit
Submitted by Uladzislau Rezki (Sony)
This adds a new kernel module for analysis of the vmalloc allocator. It is only enabled as a module. There are two main reasons to use this module: performance evaluation and stressing of the vmalloc subsystem. It consists of several test cases; as of now there are 8. The module has five parameters we can specify to change its behaviour. 1) run_test_mask - set of tests to be run id: 1, name: fix_size_alloc_test id: 2, name: full_fit_alloc_test id: 4, name: long_busy_list_alloc_test id: 8, name: random_size_alloc_test id: 16, name: fix_align_alloc_test id: 32, name: random_size_align_alloc_test id: 64, name: align_shift_alloc_test id: 128, name: pcpu_alloc_test By default all tests are in the run test mask. If you want to select some specific tests it is possible to pass the mask. For example, for the first, second and fourth tests we pass the value 11. 2) test_repeat_count - how many times each test should be repeated. By default it is one time per test. It is possible to pass any number; the higher the value, the longer the test duration. 3) test_loop_count - internal test loop counter. By default it is set to 1000000. 4) single_cpu_test - use one CPU to run the tests. By default this parameter is set to false, which means that all online CPUs execute tests. By setting it to 1, the tests are executed by the first online CPU only. 5) sequential_test_order - run tests in sequential order. By default this parameter is set to false, which means that before running the tests the order is shuffled. It is possible to make it sequential by setting it to 1.

Performance analysis: In order to evaluate the performance of vmalloc allocations, it usually makes sense to use only one CPU to run the tests, to use sequential order, and to vary the number of test repetitions as well as the test mask. For example, to run all tests on one CPU, repeating each test 3 times, insert the module passing the following parameters: single_cpu_test=1 sequential_test_order=1 test_repeat_count=3 with the following output: <snip> Summary: fix_size_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 901177 usec Summary: full_fit_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 1039341 usec Summary: long_busy_list_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 11775763 usec Summary: random_size_alloc_test passed 3: failed: 0 repeat: 3 loops: 1000000 avg: 6081992 usec Summary: fix_align_alloc_test passed: 3 failed: 0 repeat: 3, loops: 1000000 avg: 2003712 usec Summary: random_size_align_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 2895689 usec Summary: align_shift_alloc_test passed: 0 failed: 3 repeat: 3 loops: 1000000 avg: 573 usec Summary: pcpu_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 95802 usec All test took CPU0=192945605995 cycles <snip> The align_shift_alloc_test is expected to fail.

Stressing: In order to stress the vmalloc subsystem, we run all available test cases on all available CPUs simultaneously. To prevent a constant behaviour pattern, the test case array is shuffled by default to randomize the order of test execution. For example, to run all tests (default), on all online CPUs (default), in shuffled order (default), repeating each test 30 times, the command would be: modprobe vmalloc_test test_repeat_count=30 The expected result is that the system stays alive, there are no BUG_ONs or kernel panics, the tests complete, and there are no memory leaks.

[urezki@gmail.com: fix 32-bit builds] Link: http://lkml.kernel.org/r/20190106214839.ffvjvmrn52uqog7k@pc636 [urezki@gmail.com: make CONFIG_TEST_VMALLOC depend on CONFIG_MMU] Link: http://lkml.kernel.org/r/20190219085441.s6bg2gpy4esny5vw@pc636 Link: http://lkml.kernel.org/r/20190103142108.20744-3-urezki@gmail.com Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Kees Cook <keescook@chromium.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 Mar 2019, 1 commit
Submitted by Kees Cook
Adds a test for stack initialization coverage. We have several build options that control the level of stack variable initialization. This test lets us visualize which options cover which cases, and provides tests for some of the pathological padding conditions the compiler will sometimes fail to initialize. All options pass the explicit initialization cases and the partial initializers (even with padding): test_stackinit: u8_zero ok test_stackinit: u16_zero ok test_stackinit: u32_zero ok test_stackinit: u64_zero ok test_stackinit: char_array_zero ok test_stackinit: small_hole_zero ok test_stackinit: big_hole_zero ok test_stackinit: trailing_hole_zero ok test_stackinit: packed_zero ok test_stackinit: small_hole_dynamic_partial ok test_stackinit: big_hole_dynamic_partial ok test_stackinit: trailing_hole_dynamic_partial ok test_stackinit: packed_dynamic_partial ok test_stackinit: small_hole_static_partial ok test_stackinit: big_hole_static_partial ok test_stackinit: trailing_hole_static_partial ok test_stackinit: packed_static_partial ok test_stackinit: packed_static_all ok test_stackinit: packed_dynamic_all ok test_stackinit: packed_runtime_all ok The results of the other tests (which contain no explicit initialization) change based on the build's configured compiler instrumentation.

No options: test_stackinit: small_hole_static_all FAIL (uninit bytes: 3) test_stackinit: big_hole_static_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_static_all FAIL (uninit bytes: 7) test_stackinit: small_hole_dynamic_all FAIL (uninit bytes: 3) test_stackinit: big_hole_dynamic_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_dynamic_all FAIL (uninit bytes: 7) test_stackinit: small_hole_runtime_partial FAIL (uninit bytes: 23) test_stackinit: big_hole_runtime_partial FAIL (uninit bytes: 127) test_stackinit: trailing_hole_runtime_partial FAIL (uninit bytes: 24) test_stackinit: packed_runtime_partial FAIL (uninit bytes: 24) test_stackinit: small_hole_runtime_all FAIL (uninit bytes: 3) test_stackinit: big_hole_runtime_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_runtime_all FAIL (uninit bytes: 7) test_stackinit: u8_none FAIL (uninit bytes: 1) test_stackinit: u16_none FAIL (uninit bytes: 2) test_stackinit: u32_none FAIL (uninit bytes: 4) test_stackinit: u64_none FAIL (uninit bytes: 8) test_stackinit: char_array_none FAIL (uninit bytes: 16) test_stackinit: switch_1_none FAIL (uninit bytes: 8) test_stackinit: switch_2_none FAIL (uninit bytes: 8) test_stackinit: small_hole_none FAIL (uninit bytes: 24) test_stackinit: big_hole_none FAIL (uninit bytes: 128) test_stackinit: trailing_hole_none FAIL (uninit bytes: 32) test_stackinit: packed_none FAIL (uninit bytes: 32) test_stackinit: user FAIL (uninit bytes: 32) test_stackinit: failures: 25

CONFIG_GCC_PLUGIN_STRUCTLEAK_USER=y This only tries to initialize structs with __user markings, so the only difference from above is that the "user" test now passes: test_stackinit: small_hole_static_all FAIL (uninit bytes: 3) test_stackinit: big_hole_static_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_static_all FAIL (uninit bytes: 7) test_stackinit: small_hole_dynamic_all FAIL (uninit bytes: 3) test_stackinit: big_hole_dynamic_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_dynamic_all FAIL (uninit bytes: 7) test_stackinit: small_hole_runtime_partial FAIL (uninit bytes: 23) test_stackinit: big_hole_runtime_partial FAIL (uninit bytes: 127) test_stackinit: trailing_hole_runtime_partial FAIL (uninit bytes: 24) test_stackinit: packed_runtime_partial FAIL (uninit bytes: 24) test_stackinit: small_hole_runtime_all FAIL (uninit bytes: 3) test_stackinit: big_hole_runtime_all FAIL (uninit bytes: 61) test_stackinit: trailing_hole_runtime_all FAIL (uninit bytes: 7) test_stackinit: u8_none FAIL (uninit bytes: 1) test_stackinit: u16_none FAIL (uninit bytes: 2) test_stackinit: u32_none FAIL (uninit bytes: 4) test_stackinit: u64_none FAIL (uninit bytes: 8) test_stackinit: char_array_none FAIL (uninit bytes: 16) test_stackinit: switch_1_none FAIL (uninit bytes: 8) test_stackinit: switch_2_none FAIL (uninit bytes: 8) test_stackinit: small_hole_none FAIL (uninit bytes: 24) test_stackinit: big_hole_none FAIL (uninit bytes: 128) test_stackinit: trailing_hole_none FAIL (uninit bytes: 32) test_stackinit: packed_none FAIL (uninit bytes: 32) test_stackinit: user ok test_stackinit: failures: 24

CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF=y This initializes all structures passed by reference (scalars and strings remain uninitialized): test_stackinit: small_hole_static_all ok test_stackinit: big_hole_static_all ok test_stackinit: trailing_hole_static_all ok test_stackinit: small_hole_dynamic_all ok test_stackinit: big_hole_dynamic_all ok test_stackinit: trailing_hole_dynamic_all ok test_stackinit: small_hole_runtime_partial ok test_stackinit: big_hole_runtime_partial ok test_stackinit: trailing_hole_runtime_partial ok test_stackinit: packed_runtime_partial ok test_stackinit: small_hole_runtime_all ok test_stackinit: big_hole_runtime_all ok test_stackinit: trailing_hole_runtime_all ok test_stackinit: u8_none FAIL (uninit bytes: 1) test_stackinit: u16_none FAIL (uninit bytes: 2) test_stackinit: u32_none FAIL (uninit bytes: 4) test_stackinit: u64_none FAIL (uninit bytes: 8) test_stackinit: char_array_none FAIL (uninit bytes: 16) test_stackinit: switch_1_none FAIL (uninit bytes: 8) test_stackinit: switch_2_none FAIL (uninit bytes: 8) test_stackinit: small_hole_none ok test_stackinit: big_hole_none ok test_stackinit: trailing_hole_none ok test_stackinit: packed_none ok test_stackinit: user ok test_stackinit: failures: 7

CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL=y This initializes all variables, so it matches above with the scalars and arrays included: test_stackinit: small_hole_static_all ok test_stackinit: big_hole_static_all ok test_stackinit: trailing_hole_static_all ok test_stackinit: small_hole_dynamic_all ok test_stackinit: big_hole_dynamic_all ok test_stackinit: trailing_hole_dynamic_all ok test_stackinit: small_hole_runtime_partial ok test_stackinit: big_hole_runtime_partial ok test_stackinit: trailing_hole_runtime_partial ok test_stackinit: packed_runtime_partial ok test_stackinit: small_hole_runtime_all ok test_stackinit: big_hole_runtime_all ok test_stackinit: trailing_hole_runtime_all ok test_stackinit: u8_none ok test_stackinit: u16_none ok test_stackinit: u32_none ok test_stackinit: u64_none ok test_stackinit: char_array_none ok test_stackinit: switch_1_none ok test_stackinit: switch_2_none ok test_stackinit: small_hole_none ok test_stackinit: big_hole_none ok test_stackinit: trailing_hole_none ok test_stackinit: packed_none ok test_stackinit: user ok test_stackinit: all tests passed!

Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
- 12 Jan 2019, 1 commit
Submitted by Joe Lawrence
Add a few livepatch modules and simple target modules that the included regression suite can run tests against: - basic livepatching (multiple patches, atomic replace) - pre/post (un)patch callbacks - shadow variable API Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Tested-by: Miroslav Benes <mbenes@suse.cz> Tested-by: Alice Ferrazzi <alice.ferrazzi@gmail.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 20 Nov 2018, 1 commit
Submitted by Eric Biggers
In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support". The only difference between ChaCha{8,12,20} is the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions. Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
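A hedged sketch of the shape of that refactoring; the field and function names here follow include/crypto/chacha.h as I recall them and should be treated as illustrative rather than exact:

    struct chacha_ctx {
            u32 key[8];
            int nrounds;    /* 20 for ChaCha20, 12 for (X)ChaCha12 */
    };

    /* The shared block function takes the round count explicitly, so the
     * ChaCha variants differ only in what ->setkey() stored in the ctx.
     */
    static void chacha_gen_keystream_block(struct chacha_ctx *ctx,
                                           u32 *state, u8 *stream)
    {
            chacha_block(state, stream, ctx->nrounds);
    }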
-
- 16 Nov 2018, 1 commit
Submitted by Jiri Pirko
This lib tracks objects which could be of two types: 1) root object 2) nested object - with a "delta" which differentiates it from the associated root object. The objects are tracked by a hashtable and reference-counted. The user is responsible for implementing callbacks to create/destroy the root entity related to each root object and a callback to create/destroy the nested object delta. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Nov 2018, 1 commit
Submitted by Palmer Dabbelt
We don't want 64-bit divide in the kernel. This reverts commit 6315730e. Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
-
- 23 Oct 2018, 1 commit
Submitted by Zong Li
Add umoddi3 and udivmoddi4 support for 32-bit. RV32 needs umoddi3 to do modulo when the operands are of long long type, like other library implementations such as ucmpdi2, lshrdi3 and so on. I encountered the undefined reference to 'umoddi3' when I used the in-house dma driver; although it is an in-house driver, I think that umoddi3 is a common function for RV32. The udivmoddi4 and umoddi3 are copies from libgcc in gcc. There are other functions that use udivmoddi4 in libgcc, so I separated umoddi3 and udivmoddi4 for flexible extension in the future. Signed-off-by: Zong Li <zong@andestech.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
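To make the trigger concrete, here is the kind of 64-bit modulo on a 32-bit target that the compiler lowers into a __umoddi3 call; the helper below is a hypothetical caller, not code from the patch:

    /* On RV32, '%' on unsigned long long operands becomes a call to the
     * libgcc-style __umoddi3, which the kernel must provide itself since
     * it does not link against libgcc.
     */
    static unsigned long long ns_remainder(unsigned long long ns,
                                           unsigned long long period)
    {
            return ns % period;
    }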
-
- 21 Oct 2018, 2 commits
Submitted by Matthew Wilcox
The xa_load function brings with it a lot of infrastructure; xa_empty(), xa_is_err(), and large chunks of the XArray advanced API that are used to implement xa_load. As the test-suite demonstrates, it is possible to use the XArray functions on a radix tree. The radix tree functions depend on the GFP flags being stored in the root of the tree, so it's not possible to use the radix tree functions on an XArray. Signed-off-by: Matthew Wilcox <willy@infradead.org>
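A small usage sketch of the entry points mentioned above; the index, GFP flags and wrapper function are arbitrary example choices:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/xarray.h>

    static DEFINE_XARRAY(my_objects);

    static int xarray_example(void *obj)
    {
            void *old;

            /* Store at index 42; errors come back encoded in the return value. */
            old = xa_store(&my_objects, 42, obj, GFP_KERNEL);
            if (xa_is_err(old))
                    return xa_err(old);

            /* Look it up again; xa_load() returns NULL for an empty slot. */
            return xa_load(&my_objects, 42) == obj ? 0 : -ENOENT;
    }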
-
Submitted by Matthew Wilcox
This is a direct replacement for struct radix_tree_root. Some of the struct members have changed name; convert those, and use a #define so that radix_tree users continue to work without change. Signed-off-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Josef Bacik <jbacik@fb.com>
-
- 16 Oct 2018, 1 commit
Submitted by Alexander Shishkin
kbuild robot reports that since commit ce76d938 ("lib: Add memcat_p(): paste 2 pointer arrays together") the ia64/hp/sim/boot fails to link: > LD arch/ia64/hp/sim/boot/bootloader > lib/string.o: In function `__memcat_p': > string.c:(.text+0x1f22): undefined reference to `__kmalloc' > string.c:(.text+0x1ff2): undefined reference to `__kmalloc' > make[1]: *** [arch/ia64/hp/sim/boot/Makefile:37: arch/ia64/hp/sim/boot/bootloader] Error 1 The reason is, the above commit, via __memcat_p(), adds a call to __kmalloc to string.o, which happens to be used in the bootloader, but there's no kmalloc or slab or anything. Since the linker would only pull in objects that contain referenced symbols, moving __memcat_p() to a different compilation unit solves the problem. Fixes: ce76d938 ("lib: Add memcat_p(): paste 2 pointer arrays together") Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Reported-by: kbuild test robot <lkp@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 12 Oct 2018, 1 commit
Submitted by Arnd Bergmann
The previous patch introduced very large kernel stack usage and a Makefile change to hide the warning about it. From what I can tell, a number of things went wrong here: - The BCH_MAX_T constant was set to the maximum value for 'n', not the maximum for 't', which is much smaller. - The stack usage is actually larger than the entire kernel stack on some architectures that can use 4KB stacks (m68k, sh, c6x), which leads to an immediate overrun. - The justification in the patch description claimed that nothing changed, however that is not the case even without the two points above: the configuration is machine specific, and most boards never use the maximum BCH_ECC_WORDS() length but instead have something much smaller. That maximum would only apply to machines that use both the maximum block size and the maximum ECC strength. The largest value for 't' that I could find is '32', which in turn leads to a 60 byte array instead of 2048 bytes. Making it '64' for future extension seems also worthwhile, with 120 bytes for the array. Anything larger won't fit into the OOB area on NAND flash. With that changed, the warning can be enabled again. Only linux-4.19+ contains the breakage, so this is only needed as a stable backport if it does not make it into the release. Fixes: 02361bc7 ("lib/bch: Remove VLA usage") Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
-
- 11 Oct 2018, 2 commits
Submitted by Kees Cook
Now that Variable Length Arrays (VLAs) have been entirely removed[1] from the kernel, enable the VLA warning globally. The only exceptions to this are the KASan and UBSan tests, which are explicitly checking that VLAs trigger their respective tests. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: David Airlie <airlied@linux.ie> Cc: linux-kbuild@vger.kernel.org Cc: intel-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Kees Cook <keescook@chromium.org>
-
Submitted by Alexander Shishkin
This adds a helper to paste 2 pointer arrays together, useful for merging various types of attribute arrays. There are a few places in the kernel tree where this is open coded, and I just added one more in the STM class. The naming is inspired by memset_p() and memcat(), and partial credit for it goes to Andy Shevchenko. This patch adds the function wrapped in a type-enforcing macro and a test module. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
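Roughly how the new helper is meant to be used; the attribute-array merge below is a hypothetical caller, assuming memcat_p() takes two NULL-terminated pointer arrays and returns a freshly allocated, NULL-terminated concatenation that the caller must kfree():

    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/sysfs.h>

    static struct attribute **merge_attrs(struct attribute **base,
                                          struct attribute **extra)
    {
            /* Type-enforcing macro around __memcat_p(); both inputs must be
             * NULL-terminated, and the result is kmalloc'ed.
             */
            return memcat_p(base, extra);
    }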
-
- 23 Aug 2018, 1 commit
Submitted by Coly Li
Patch series "add crc64 calculation as kernel library", v5. This patchset adds basic implementation of crc64 calculation as a Linux kernel library. Since bcache already does crc64 by itself, this patchset also modifies bcache code to use the new crc64 library routine. Currently bcache is the only user of crc64 calculation, another potential user is bcachefs which is on the way to be in mainline kernel. Therefore it makes sense to make crc64 calculation to be a public library. bcache uses crc64 as storage checksum, if a change of crc lib routines results an inconsistent result, the unmatched checksum may make bcache 'think' the on-disk is corrupted, such a change should be avoided or detected as early as possible. Therefore a patch is being prepared which adds a crc test framework, to check consistency of different calculations. This patch (of 2): Add the re-write crc64 calculation routines for Linux kernel. The CRC64 polynomical arithmetic follows ECMA-182 specification, inspired by CRC paper of Dr. Ross N. Williams (see http://www.ross.net/crc/download/crc_v3.txt) and other public domain implementations. All the changes work in this way, - When Linux kernel is built, host program lib/gen_crc64table.c will be compiled to lib/gen_crc64table and executed. - The output of gen_crc64table execution is an array called as lookup table (a.k.a POLY 0x42f0e1eba9ea369) which contain 256 64-bit long numbers, this table is dumped into header file lib/crc64table.h. - Then the header file is included by lib/crc64.c for normal 64bit crc calculation. - Function declaration of the crc64 calculation routines is placed in include/linux/crc64.h Currently bcache is the only user of crc64_be(), another potential user is bcachefs which is on the way to be in mainline kernel. Therefore it makes sense to move crc64 calculation into lib/crc64.c as public code. [colyli@suse.de: fix review comments from v4] Link: http://lkml.kernel.org/r/20180726053352.2781-2-colyli@suse.de Link: http://lkml.kernel.org/r/20180718165545.1622-2-colyli@suse.deSigned-off-by: NColy Li <colyli@suse.de> Co-developed-by: NAndy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: NAndy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: NHannes Reinecke <hare@suse.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Michael Lyle <mlyle@lyle.org> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Noah Massey <noah.massey@gmail.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 22 Aug 2018, 1 commit
Submitted by Matthew Wilcox
Start transitioning the IDA tests into kernel space. Framework heavily cribbed from test_xarray.c. Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
- 27 Jun 2018, 1 commit
Submitted by Johannes Berg
Add tests for the bitfield helpers. The constant ones will all be folded to nothing by the compiler (if everything is correct in the header file), and the variable ones do some tests against open-coding the necessary shifts. A few test cases that should fail/warn compilation are provided under ifdef. Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
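For reference, the kind of open-coded shifting these helpers replace versus the macro form the tests exercise; the register field below is invented for illustration:

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    #define EXAMPLE_SPEED_MASK      GENMASK(7, 4)   /* hypothetical field */

    static u32 set_speed(u32 reg, u32 speed)
    {
            /* Instead of: (reg & ~0xf0) | ((speed << 4) & 0xf0) */
            reg &= ~EXAMPLE_SPEED_MASK;
            return reg | FIELD_PREP(EXAMPLE_SPEED_MASK, speed);
    }

    static u32 get_speed(u32 reg)
    {
            /* Instead of: (reg & 0xf0) >> 4 */
            return FIELD_GET(EXAMPLE_SPEED_MASK, reg);
    }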
-
- 22 Jun 2018, 1 commit
Submitted by Kees Cook
In the quest to remove all stack VLA usage from the kernel[1], this allocates a fixed size stack array to cover the range needed for bch. This was done instead of a preallocation on the SLAB due to performance reasons, shown by Ivan Djelic:

little-endian, type sizes: int=4 long=8 longlong=8
cpu: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
calibration: iter=4.9143µs niter=2034 nsamples=200 m=13 t=4

  Buffer allocation | Encoding throughput (Mbit/s)
  --------------------------------------------------
  on-stack, VLA     | 3988
  on-stack, fixed   | 4494
  kmalloc           | 1967

So this change actually improves performance too, it seems. The resulting stack allocation can get rather large; without CONFIG_BCH_CONST_PARAMS, it will allocate 4096 bytes, which trips the stack size checking: lib/bch.c: In function ‘encode_bch’: lib/bch.c:261:1: warning: the frame size of 4432 bytes is larger than 2048 bytes [-Wframe-larger-than=] Even the default case for "allmodconfig" (with CONFIG_BCH_CONST_M=14 and CONFIG_BCH_CONST_T=4) would have started throwing a warning: lib/bch.c: In function ‘encode_bch’: lib/bch.c:261:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=] But this is how large it's always been; it was just hidden from the checker because it was a VLA. So the Makefile has been adjusted to silence this warning for anything smaller than 4500 bytes, which should provide room for normal cases, but still low enough to catch any future pathological situations. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Ivan Djelic <ivan.djelic@parrot.com> Tested-by: Ivan Djelic <ivan.djelic@parrot.com> Acked-by: Boris Brezillon <boris.brezillon@bootlin.com> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
-
- 20 Jun 2018, 1 commit
Submitted by Matthew Wilcox
With its one user gone, remove the library code. Signed-off-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 14 Jun 2018, 2 commits
Submitted by Christoph Hellwig
Currently the code is split over various files with dma- prefixes in the lib/ and drivers/base directories, and the number of files keeps growing. Move them into a single directory to keep the code together and remove the file name prefixes. To match the irq infrastructure this directory is placed under the kernel/ directory. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
We already have exact config symbols to select the direct, non-coherent, or virt dma ops. So use the normal obj- scheme to select them. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 13 Jun 2018, 1 commit
Submitted by Sebastian Andrzej Siewior
Alpha provides a custom implementation of dec_and_lock(). The function is split into two parts: - atomic_add_unless() + return 0 (fast path in assembly) - remaining part including locking (slow path in C) Comparing the result of the alpha implementation with the generic implementation compiled by gcc, it looks like the fast path is optimized by avoiding a stack frame (and reloading the GP), register store and all this. This is only done in the slowpath. After marking the slowpath (atomic_dec_and_lock_1()) as "noinline" and doing the slowpath in C (the atomic_add_unless(atomic, -1, 1) part) I noticed differences in the resulting assembly: - the GP is still reloaded - atomic_add_unless() adds more memory barriers compared to the custom assembly - the custom assembly here does "load, sub, beq" while atomic_add_unless() does "load, cmpeq, add, bne". This is okay because it compares against zero after subtraction while the generic code compares against 1 before. I'm not sure if avoiding the stack frame (and GP reloading) brings a lot in terms of performance. Regarding the different barriers, Peter Zijlstra says:

|refcount decrement needs to be a RELEASE operation, such that all the |load/stores to the object happen before we decrement the refcount. | |Otherwise things like: | | obj->foo = 5; | refcnt_dec(&obj->ref); | |can be re-ordered, which then allows fun scenarios like: | | CPU0 CPU1 | | refcnt_dec(&obj->ref); | if (dec_and_test(&obj->ref)) | free(obj); | obj->foo = 5; // oops UaF | | |This means (for alpha) that there should be a memory barrier _before_ |the decrement, however the dec_and_lock asm thing only has one _after_, |which, per the above, is too late. | |The generic version using add_unless will result in memory barrier |before and after (because that is the rule for atomic ops with a return |value) which is strictly too many barriers for the refcount story, but |who knows what other ordering requirements code has.

Remove the custom alpha implementation of dec_and_lock() and if it is an issue (performance wise) then the fast path could still be inlined. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: linux-alpha@vger.kernel.org Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net Link: https://lkml.kernel.org/r/20180612161621.22645-2-bigeasy@linutronix.de
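For reference, the generic fast/slow-path split in lib/dec_and_lock.c that alpha now falls back to looks essentially like this:

    #include <linux/atomic.h>
    #include <linux/spinlock.h>

    int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
    {
            /* Fast path: subtract 1 unless that would drop the counter to 0. */
            if (atomic_add_unless(atomic, -1, 1))
                    return 0;

            /* Slow path: take the lock, then do the final decrement. */
            spin_lock(lock);
            if (atomic_dec_and_test(atomic))
                    return 1;
            spin_unlock(lock);
            return 0;
    }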
-
- 06 Jun 2018, 1 commit
Submitted by Rasmus Villemoes
This adds a small module for testing that the check_*_overflow functions work as expected, whether implemented in C or using gcc builtins. Example output: test_overflow: u8 : 18 tests test_overflow: s8 : 19 tests test_overflow: u16: 17 tests test_overflow: s16: 17 tests test_overflow: u32: 17 tests test_overflow: s32: 17 tests test_overflow: u64: 17 tests test_overflow: s64: 21 tests Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> [kees: add output to commit log, drop u64 tests on 32-bit] Signed-off-by: Kees Cook <keescook@chromium.org>
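The helpers under test are used like this; the wrapper function and error choice are example code, not part of the patch:

    #include <linux/errno.h>
    #include <linux/overflow.h>

    static int calc_buf_len(u32 count, u32 elem_size, u32 *out_len)
    {
            u32 bytes;

            /* check_mul_overflow() returns true if the product wrapped;
             * on success the result is written through the third argument.
             */
            if (check_mul_overflow(count, elem_size, &bytes))
                    return -EOVERFLOW;

            *out_len = bytes;
            return 0;
    }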
-
- 19 May 2018, 1 commit
Submitted by Christoph Hellwig
Add a new dma_map_ops implementation that uses dma-direct for the address mapping of streaming mappings, and which requires arch-specific implementations of coherent allocate/free. Architectures have to provide flushing helpers for ownership transfers to the device and/or CPU, and can provide optional implementations of the coherent mmap functionality, and the cache_flush routines for non-coherent long term allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Alexey Brodkin <abrodkin@synopsys.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
-
- 09 May 2018, 1 commit
Submitted by Christoph Hellwig
This code is only used by sparc, and all new iommu drivers should use the drivers/iommu/ framework. Also remove the unused exports. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
- 23 Apr 2018, 1 commit
Submitted by Matt Redfearn
When these are included into arch Kconfig files, maintaining alphabetical ordering of the selects means these get split up. To allow for keeping things tidier and alphabetical, rename the selects to GENERIC_LIB_*. Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Acked-by: Palmer Dabbelt <palmer@sifive.com> Cc: Antony Pavlov <antonynpavlov@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-riscv@lists.infradead.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19049/ Signed-off-by: James Hogan <jhogan@kernel.org>
-