1. 01 Feb 2023, 2 commits
  2. 16 Dec 2022, 2 commits
  3. 04 Dec 2022, 1 commit
    • rust: add `build_error` crate · ecaa6ddf
      Gary Guo authored
      The `build_error` crate provides a function `build_error` which
      will panic at compile-time if executed in const context and,
      by default, will cause a build error if not executed at compile
      time and the optimizer does not optimise away the call.
      
      The `CONFIG_RUST_BUILD_ASSERT_ALLOW` kernel option allows relaxing
      the default build failure and converting it into a runtime
      check. If the runtime check fails, `panic!` will be called.
      
      Its functionality will be exposed to users as a couple of macros in
      the `kernel` crate in the following patch, thus some documentation
      here refers to them for simplicity.
      Signed-off-by: Gary Guo <gary@garyguo.net>
      Reviewed-by: Wei Liu <wei.liu@kernel.org>
      [Reworded, adapted for upstream and applied latest changes]
      Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
  4. 02 Dec 2022, 1 commit
    • error-injection: Add prompt for function error injection · a4412fdd
      Steven Rostedt (Google) authored
      The config to be able to inject error codes into any function annotated
      with ALLOW_ERROR_INJECTION() is enabled when FUNCTION_ERROR_INJECTION is
      enabled.  But unfortunately, this is always enabled on x86 when KPROBES
      is enabled, and there's no way to turn it off.
      
      As kprobes is useful for observability of the kernel, it is
      often enabled in production environments.  But error injection should
      be avoided there.  Add a prompt to the config to allow it to be
      disabled even when kprobes is enabled, and get rid of the "def_bool y".
      
      This is a kernel debug feature (it's in Kconfig.debug), and should
      never have been enabled by default.
      
      Cc: stable@vger.kernel.org
      Fixes: 540adea3 ("error-injection: Separate error-injection from kprobe")
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 01 Dec 2022, 2 commits
  6. 23 Nov 2022, 2 commits
  7. 21 Nov 2022, 1 commit
    • Makefile.debug: support for -gz=zstd · 9f8fe647
      Nick Desaulniers authored
      Make DEBUG_INFO_COMPRESSED a choice; DEBUG_INFO_COMPRESSED_NONE is the
      default, DEBUG_INFO_COMPRESSED_ZLIB uses zlib,
      DEBUG_INFO_COMPRESSED_ZSTD uses zstd.
      
      This renames the existing Kconfig option DEBUG_INFO_COMPRESSED to
      DEBUG_INFO_COMPRESSED_ZLIB, so users upgrading may need to reset the
      new Kconfigs.
      
      Some quick N=1 measurements with du, /usr/bin/time -v, and bloaty:
      
      clang-16, x86_64 defconfig plus
      CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO_COMPRESSED_NONE=y:
      Elapsed (wall clock) time (h:mm:ss or m:ss): 0:55.43
      488M vmlinux
      27.6%   136Mi   0.0%       0    .debug_info
       6.1%  30.2Mi   0.0%       0    .debug_str_offsets
       3.5%  17.2Mi   0.0%       0    .debug_line
       3.3%  16.3Mi   0.0%       0    .debug_loclists
       0.9%  4.62Mi   0.0%       0    .debug_str
      
      clang-16, x86_64 defconfig plus
      CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO_COMPRESSED_ZLIB=y:
      Elapsed (wall clock) time (h:mm:ss or m:ss): 1:00.35
      385M vmlinux
      21.8%  85.4Mi   0.0%       0    .debug_info
       2.1%  8.26Mi   0.0%       0    .debug_str_offsets
       2.1%  8.24Mi   0.0%       0    .debug_loclists
       1.9%  7.48Mi   0.0%       0    .debug_line
       0.5%  1.94Mi   0.0%       0    .debug_str
      
      clang-16, x86_64 defconfig plus
      CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO_COMPRESSED_ZSTD=y:
      Elapsed (wall clock) time (h:mm:ss or m:ss): 0:59.69
      373M vmlinux
      21.4%  81.4Mi   0.0%       0    .debug_info
       2.3%  8.85Mi   0.0%       0    .debug_loclists
       1.5%  5.71Mi   0.0%       0    .debug_line
       0.5%  1.95Mi   0.0%       0    .debug_str_offsets
       0.4%  1.62Mi   0.0%       0    .debug_str
      
      That's only a 3.11% overall binary size savings over zlib, but with
      no performance regression.
      
      Link: https://maskray.me/blog/2022-09-09-zstd-compressed-debug-sections
      Link: https://maskray.me/blog/2022-01-23-compressed-debug-sections
      Suggested-by: Sedat Dilek (DHL Supply Chain) <sedat.dilek@dhl.com>
      Reviewed-by: Nathan Chancellor <nathan@kernel.org>
      Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  8. 09 Nov 2022, 2 commits
  9. 02 Nov 2022, 2 commits
  10. 29 Oct 2022, 2 commits
  11. 17 Oct 2022, 2 commits
    • arch: Introduce CONFIG_FUNCTION_ALIGNMENT · d49a0626
      Peter Zijlstra authored
      Generic function-alignment infrastructure.
      
      Architectures can select FUNCTION_ALIGNMENT_xxB symbols; the
      FUNCTION_ALIGNMENT symbol is then set to the largest such selected
      size, 0 otherwise.
      
      From this the -falign-functions compiler argument and __ALIGN macro
      are set.
      
      This incorporates the DEBUG_FORCE_FUNCTION_ALIGN_64B knob and future
      alignment requirements for x86_64 (later in this series) into a single
      place.
      
      NOTE: also removes the 0x90 filler byte from the generic __ALIGN
            primitive, that value makes no sense outside of x86.
      
      NOTE: .balign 0 reverts to a no-op.
      Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20220915111143.719248727@infradead.org
    • lib/Kconfig.debug: Add check for non-constant .{s,u}leb128 support to DWARF5 · 0a6de78c
      Nathan Chancellor authored
      When building with a RISC-V kernel with DWARF5 debug info using clang
      and the GNU assembler, several instances of the following error appear:
      
        /tmp/vgettimeofday-48aa35.s:2963: Error: non-constant .uleb128 is not supported
      
      Dumping the .s file reveals these .uleb128 directives come from
      .debug_loc and .debug_ranges:
      
        .Ldebug_loc0:
                .byte   4                               # DW_LLE_offset_pair
                .uleb128 .Lfunc_begin0-.Lfunc_begin0    #   starting offset
                .uleb128 .Ltmp1-.Lfunc_begin0           #   ending offset
                .byte   1                               # Loc expr size
                .byte   90                              # DW_OP_reg10
                .byte   0                               # DW_LLE_end_of_list
      
        .Ldebug_ranges0:
                .byte   4                               # DW_RLE_offset_pair
                .uleb128 .Ltmp6-.Lfunc_begin0           #   starting offset
                .uleb128 .Ltmp27-.Lfunc_begin0          #   ending offset
                .byte   4                               # DW_RLE_offset_pair
                .uleb128 .Ltmp28-.Lfunc_begin0          #   starting offset
                .uleb128 .Ltmp30-.Lfunc_begin0          #   ending offset
                .byte   0                               # DW_RLE_end_of_list
      
      There is an outstanding binutils issue to support a non-constant operand
      to .sleb128 and .uleb128 in GAS for RISC-V but there does not appear to
      be any movement on it, due to concerns over how it would work with
      linker relaxation.
      
      To avoid these build errors, prevent DWARF5 from being selected when
      using clang and an assembler that does not have support for these symbol
      deltas, which can be easily checked in Kconfig with as-instr plus the
      small test program from the dwz test suite from the binutils issue.
      
      Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27215
      Link: https://github.com/ClangBuiltLinux/linux/issues/1719
      Signed-off-by: Nathan Chancellor <nathan@kernel.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  12. 13 Oct 2022, 2 commits
  13. 04 Oct 2022, 1 commit
    • kmsan: add KMSAN runtime core · f80be457
      Alexander Potapenko authored
      For each memory location KernelMemorySanitizer maintains two types of
      metadata:
      
      1. The so-called shadow of that location - a byte:byte mapping describing
         whether individual bits of memory are initialized (shadow is 0)
         or not (shadow is 1).
      2. The origins of that location - a 4-byte:4-byte mapping containing
         4-byte IDs of the stack traces where uninitialized values were
         created.
      
      Each struct page now contains pointers to two struct pages holding KMSAN
      metadata (shadow and origins) for the original struct page.  Utility
      routines in mm/kmsan/core.c and mm/kmsan/shadow.c handle the metadata
      creation, addressing, copying and checking.  mm/kmsan/report.c performs
      error reporting in cases where an uninitialized value is used in a way
      that leads to undefined behavior.
      
      KMSAN compiler instrumentation is responsible for tracking the metadata
      along with the kernel memory.  mm/kmsan/instrumentation.c provides the
      implementation for instrumentation hooks that are called from files
      compiled with -fsanitize=kernel-memory.
      
      To aid parameter passing (also done at instrumentation level), each
      task_struct now contains a struct kmsan_task_state used to track the
      metadata of function parameters and return values for that task.
      
      Finally, this patch provides CONFIG_KMSAN that enables KMSAN, and declares
      CFLAGS_KMSAN, which are applied to files compiled with KMSAN.  The
      KMSAN_SANITIZE:=n Makefile directive can be used to completely disable
      KMSAN instrumentation for certain files.
      
      Similarly, KMSAN_ENABLE_CHECKS:=n disables KMSAN checks and makes newly
      created stack memory initialized.
      
      Users can also use functions from include/linux/kmsan-checks.h to mark
      certain memory regions as uninitialized or initialized (this is called
      "poisoning" and "unpoisoning") or check that a particular region is
      initialized.
      
      Link: https://lkml.kernel.org/r/20220915150417.722975-12-glider@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Marco Elver <elver@google.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Eric Biggers <ebiggers@google.com>
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Ilya Leoshkevich <iii@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  14. 28 Sep 2022, 1 commit
  15. 27 Sep 2022, 2 commits
    • mm: remove vmacache · 7964cf8c
      Liam R. Howlett authored
      By using the maple tree and the maple tree state, the vmacache is no
      longer beneficial and is complicating the VMA code.  Remove the vmacache
      to reduce the work in keeping it up to date and code complexity.
      
      Link: https://lkml.kernel.org/r/20220906194824.2110408-26-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Tested-by: Yu Zhao <yuzhao@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: SeongJae Park <sj@kernel.org>
      Cc: Sven Schnelle <svens@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • Maple Tree: add new data structure · 54a611b6
      Liam R. Howlett authored
      Patch series "Introducing the Maple Tree"
      
      The maple tree is an RCU-safe range based B-tree designed to use modern
      processor cache efficiently.  There are a number of places in the kernel
      that a non-overlapping range-based tree would be beneficial, especially
      one with a simple interface.  If you use an rbtree with other data
      structures to improve performance or an interval tree to track
      non-overlapping ranges, then this is for you.
      
      The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf
      nodes.  With the increased branching factor, it is significantly shorter
      than the rbtree so it has fewer cache misses.  The removal of the linked
      list between subsequent entries also reduces the cache misses and the need
      to pull in the previous and next VMA during many tree alterations.
      
      The first user that is covered in this patch set is the vm_area_struct,
      where three data structures are replaced by the maple tree: the augmented
      rbtree, the vma cache, and the linked list of VMAs in the mm_struct.  The
      long term goal is to reduce or remove the mmap_lock contention.
      
      The plan is to get to the point where we use the maple tree in RCU mode.
      Readers will not block for writers.  A single write operation will be
      allowed at a time.  A reader re-walks if stale data is encountered.  VMAs
      would be RCU enabled and this mode would be entered once multiple tasks
      are using the mm_struct.
      
      Davidlohr said
      
      : Yes I like the maple tree, and at this stage I don't think we can ask for
      : more from this series wrt the MM - albeit there seems to still be some
      : folks reporting breakage.  Fundamentally I see Liam's work to (re)move
      : complexity out of the MM (not to say that the actual maple tree is not
      : complex) by consolidating the three complementary data structures very
      : much worth it considering performance does not take a hit.  This was very
      : much a turn off with the range locking approach, which worst case scenario
      : incurred prohibitive overhead.  Also as Liam and Matthew have
      : mentioned, RCU opens up a lot of nice performance opportunities, and in
      : addition academia[1] has shown outstanding scalability of address spaces
      : with the foundation of replacing the locked rbtree with RCU aware trees.
      
      Similar work has been found in the academic press
      
      	https://pdos.csail.mit.edu/papers/rcuvm:asplos12.pdf
      
      Sheer coincidence.  We designed our tree with the intention of solving the
      hardest problem first.  Upon settling on a b-tree variant and a rough
      outline, we researched range-based b-trees and RCU b-trees and did find
      that article.  So it was nice to find reassurances that we were on the
      right path, but our design choice of using ranges made that paper unusable
      for us.
      
      This patch (of 70):
      
      The maple tree is an RCU-safe range based B-tree designed to use modern
      processor cache efficiently.  There are a number of places in the kernel
      that a non-overlapping range-based tree would be beneficial, especially
      one with a simple interface.  If you use an rbtree with other data
      structures to improve performance or an interval tree to track
      non-overlapping ranges, then this is for you.
      
      The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf
      nodes.  With the increased branching factor, it is significantly shorter
      than the rbtree so it has fewer cache misses.  The removal of the linked
      list between subsequent entries also reduces the cache misses and the need
      to pull in the previous and next VMA during many tree alterations.
      
      The first user that is covered in this patch set is the vm_area_struct,
      where three data structures are replaced by the maple tree: the augmented
      rbtree, the vma cache, and the linked list of VMAs in the mm_struct.  The
      long term goal is to reduce or remove the mmap_lock contention.
      
      The plan is to get to the point where we use the maple tree in RCU mode.
      Readers will not block for writers.  A single write operation will be
      allowed at a time.  A reader re-walks if stale data is encountered.  VMAs
      would be RCU enabled and this mode would be entered once multiple tasks
      are using the mm_struct.
      
      There are additional BUG_ON() calls added within the tree, most of which
      are in debug code.  These will be replaced with WARN_ON() calls in the
      future.  There are also additional BUG_ON() calls within the code which
      will be reduced in number at a later date.  These exist to catch
      things such as out-of-range accesses which would crash anyway.
      
      Link: https://lkml.kernel.org/r/20220906194824.2110408-1-Liam.Howlett@oracle.com
      Link: https://lkml.kernel.org/r/20220906194824.2110408-2-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Tested-by: David Howells <dhowells@redhat.com>
      Tested-by: Sven Schnelle <svens@linux.ibm.com>
      Tested-by: Yu Zhao <yuzhao@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: SeongJae Park <sj@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  16. 24 Sep 2022, 1 commit
  17. 19 Sep 2022, 1 commit
  18. 08 Sep 2022, 1 commit
    • fortify: Add KUnit test for FORTIFY_SOURCE internals · 875bfd52
      Kees Cook authored
      Add lib/fortify_kunit.c KUnit test for checking the expected behavioral
      characteristics of FORTIFY_SOURCE internals.
      
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Tom Rix <trix@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: "Steven Rostedt (Google)" <rostedt@goodmis.org>
      Cc: Yury Norov <yury.norov@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Sander Vanheule <sander@svanheule.net>
      Cc: linux-hardening@vger.kernel.org
      Cc: llvm@lists.linux.dev
      Reviewed-by: David Gow <davidgow@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
  19. 07 Sep 2022, 1 commit
  20. 01 Sep 2022, 1 commit
  21. 30 Aug 2022, 1 commit
  22. 24 Aug 2022, 1 commit
  23. 28 Jul 2022, 1 commit
  24. 18 Jul 2022, 1 commit
  25. 08 Jul 2022, 1 commit
  26. 04 Jul 2022, 1 commit
    • mm: shrinkers: introduce debugfs interface for memory shrinkers · 5035ebc6
      Roman Gushchin authored
      This commit introduces the /sys/kernel/debug/shrinker debugfs interface
      which provides an ability to observe the state of individual kernel memory
      shrinkers.
      
      Because the feature adds some memory overhead (which shouldn't be large
      unless there is a huge amount of registered shrinkers), it's guarded by a
      config option (enabled by default).
      
      This commit introduces the "count" interface for each shrinker registered
      in the system.
      
      The output is in the following format:
      <cgroup inode id> <nr of objects on node 0> <nr of objects on node 1>...
      <cgroup inode id> <nr of objects on node 0> <nr of objects on node 1>...
      ...
      
      To reduce the size of output on machines with many thousands of
      cgroups, if the total number of objects on all nodes is 0, the line is
      omitted.
      
      If the shrinker is not memcg-aware or CONFIG_MEMCG is off, 0 is printed as
      cgroup inode id.  If the shrinker is not numa-aware, 0's are printed for
      all nodes except the first one.
      
      This commit gives debugfs entries simple numeric names, which are not very
      convenient.  The following commit in the series will provide shrinkers
      with more meaningful names.
      
      [akpm@linux-foundation.org: remove WARN_ON_ONCE(), per Roman]
        Reported-by: syzbot+300d27c79fe6d4cbcc39@syzkaller.appspotmail.com
      Link: https://lkml.kernel.org/r/20220601032227.4076670-3-roman.gushchin@linux.dev
      Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
      Reviewed-by: Kent Overstreet <kent.overstreet@gmail.com>
      Acked-by: Muchun Song <songmuchun@bytedance.com>
      Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Hillf Danton <hdanton@sina.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  27. 28 May 2022, 1 commit
  28. 18 May 2022, 1 commit
    • random: remove ratelimiting for in-kernel unseeded randomness · cc1e127b
      Jason A. Donenfeld authored
      The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the
      kernel warns about all unseeded randomness or just the first instance.
      There's some complicated rate limiting and comparison to the previous
      caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled,
      developers still don't see all the messages or even an accurate count of
      how many were missed. This is the result of basically parallel
      mechanisms aimed at accomplishing more or less the same thing, added at
      different points in random.c history, which sort of compete with the
      first-instance-only limiting we have now.
      
      It turns out, however, that nobody cares about the first unseeded
      randomness instance of in-kernel users. The same first user has been
      there for ages now, and nobody is doing anything about it. It isn't even
      clear that anybody _can_ do anything about it. Most places that can do
      something about it have switched over to using get_random_bytes_wait()
      or wait_for_random_bytes(), which is the right thing to do, but there is
      still much code that needs randomness sometimes during init, and as a
      general rule, if you're not using one of the _wait functions or the
      readiness notifier callback, you're bound to be doing it wrong just
      based on that fact alone.
      
      So warning about this same first user that can't easily change is simply
      not an effective mechanism for anything at all. Users can't do anything
      about it, as the Kconfig text points out -- the problem isn't in
      userspace code -- and kernel developers don't or more often can't react
      to it.
      
      Instead, show the warning for all instances when CONFIG_WARN_ALL_UNSEEDED_RANDOM
      is set, so that developers can debug things as need be, or if it
      isn't set, don't show a warning at all.
      
      At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting
      random.ratelimit_disable=1 by default, since if you care about one
      you probably care about the other too. And we can clean up usage around
      the related urandom_warning ratelimiter as well (whose behavior isn't
      changing), so that it properly counts missed messages after the 10
      message threshold is reached.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  29. 13 May 2022, 1 commit
  30. 30 Apr 2022, 1 commit