1. 13 March 2022, 3 commits
    • random: don't let 644 read-only sysctls be written to · 77553cf8
      Authored by Jason A. Donenfeld
      We leave around these old sysctls for compatibility, and we keep them
      "writable" for compatibility, but even after writing, we should keep
      reporting the same value. This is consistent with how userspaces tend to
      use sysctl_random_write_wakeup_bits, writing to it, and then later
      reading from it and using the value.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: give sysctl_random_min_urandom_seed a more sensible value · d0efdf35
      Authored by Jason A. Donenfeld
      This isn't used by anything or anywhere, but we can't delete it due to
      compatibility. So at least give it the correct value of what it's
      supposed to be instead of a garbage one.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: block in /dev/urandom · 6f98a4bf
      Authored by Jason A. Donenfeld
      This topic has come up countless times, and usually doesn't go anywhere.
      This time I thought I'd bring it up with a slightly narrower focus,
      updated for some developments over the last three years: we finally can
      make /dev/urandom always secure, in light of the fact that our RNG is
      now always seeded.
      
      Ever since Linus' 50ee7529 ("random: try to actively add entropy
      rather than passively wait for it"), the RNG does a haveged-style jitter
      dance around the scheduler, in order to produce entropy (and credit it)
      for the case when we're stuck in wait_for_random_bytes(). However you
      feel about the Linus Jitter Dance is beside the point: it's been there
      for three years and usually gets the RNG initialized in a second or so.
      
      As a matter of fact, this is what happens currently when people use
      getrandom(). It's already there and working, and most people have been
      using it for years without realizing.
      
      So, given that the kernel has grown this mechanism for seeding itself
      from nothing, and that this procedure happens pretty fast, maybe there's
      no point any longer in having /dev/urandom give insecure bytes. In the
      past we didn't want the boot process to deadlock, which was
      understandable. But now, in the worst case, a second goes by, and the
      problem is resolved. It seems like maybe we're finally at a point when
      we can get rid of the infamous "urandom read hole".
      
      The one slight drawback is that the Linus Jitter Dance relies on
      random_get_entropy() being implemented. The first lines of
      try_to_generate_entropy() are:
      
      	stack.now = random_get_entropy();
      	if (stack.now == random_get_entropy())
      		return;
      
      On most platforms, random_get_entropy() is simply aliased to get_cycles().
      The number of machines in 2022 that lack a cycle counter or any other
      implementation of random_get_entropy(), that can also run a mainline
      kernel, and that at the same time have a userspace both broken and out
      of date enough to rely on /dev/urandom never blocking at boot, is
      thought to be exceedingly low. And to be clear: those museum pieces without
      cycle counters will continue to run Linux just fine, and even
      /dev/urandom will be operable just like before; the RNG just needs to be
      seeded first through the usual means, which should already be the case
      now.
      
      On systems that really do want unseeded randomness, we already offer
      getrandom(GRND_INSECURE), which is in use by, e.g., systemd for seeding
      their hash tables at boot. Nothing in this commit would affect
      GRND_INSECURE, and it remains the means of getting those types of random
      numbers.
      
      This patch goes a long way toward eliminating a long overdue userspace
      crypto footgun. After several decades of endless user confusion, we will
      finally be able to say, "use any single one of our random interfaces and
      you'll be fine. They're all the same. It doesn't matter." And that, I
      think, is really something. Finally all of those blog posts and
      disagreeing forums and contradictory articles will all become correct
      about whatever they happened to recommend, and along with it, a whole
      class of vulnerabilities eliminated.
      
      With very minimal downside, we're finally in a position where we can
      make this change.
      
      Cc: Dinh Nguyen <dinguyen@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Joshua Kinard <kumba@gentoo.org>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Eric Biggers <ebiggers@google.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Lennart Poettering <mzxreary@0pointer.de>
      Cc: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  2. 28 February 2022, 3 commits
    • random: do crng pre-init loading in worker rather than irq · c2a7de4f
      Authored by Jason A. Donenfeld
      Taking spinlocks from IRQ context is generally problematic for
      PREEMPT_RT. That is, in part, why we take trylocks instead. However, a
      spin_try_lock() is also problematic since another spin_lock() invocation
      can potentially PI-boost the wrong task, as the spin_try_lock() is
      invoked from an IRQ-context, so the task on CPU (random task or idle) is
      not the actual owner.
      
      Additionally, by deferring the crng pre-init loading to the worker, we
      can use the cryptographic hash function rather than xor, which is
      perhaps a meaningful difference when considering this data has only been
      through the relatively weak fast_mix() function.
      
      The biggest downside of this approach is that the pre-init loading is
      now deferred until later, which means things that need random numbers
      after interrupts are enabled, but before workqueues are running -- or
      before this particular worker manages to run -- are going to get into
      trouble. Hopefully in the real world, this window is rather small,
      especially since this code won't run until 64 interrupts have occurred.
      
      Cc: Sultan Alsawaf <sultan@kerneltoast.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: unify cycles_t and jiffies usage and types · abded93e
      Authored by Jason A. Donenfeld
      random_get_entropy() returns a cycles_t, not an unsigned long; cycles_t
      is sometimes 64 bits on various 32-bit platforms, including x86.
      Conversely, jiffies is always unsigned long. This commit fixes things to
      use cycles_t for fields that use random_get_entropy(), named "cycles",
      and unsigned long for fields that use jiffies, named "now". It's also
      good to mix in a cycles_t and a jiffies in the same way for both
      add_device_randomness and add_timer_randomness, rather than using xor in
      one case. Finally, we unify the order of these volatile reads, always
      reading the more precise cycles counter, and then jiffies, so that the
      cycle counter is as close to the event as possible.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: cleanup UUID handling · 64276a99
      Authored by Jason A. Donenfeld
      Rather than hard coding various lengths, we can use the right constants.
      Strings should be `char *` while buffers should be `u8 *`. Rather than
      have a nonsensical and unused maxlength, just remove it. Finally, use
      snprintf instead of sprintf, just out of good hygiene.
      
      As well, remove the old comment about returning a binary UUID via the
      binary sysctl syscall. That syscall was removed from the kernel in 5.5,
      and actually, the "uuid_strategy" function and related infrastructure
      for even serving it via the binary sysctl syscall was removed with
      894d2491 ("sysctl drivers: Remove dead binary sysctl support") back
      in 2.6.33.
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  3. 24 February 2022, 2 commits
  4. 22 February 2022, 26 commits
  5. 21 February 2022, 6 commits
    • random: do not xor RDRAND when writing into /dev/random · 91c2afca
      Authored by Jason A. Donenfeld
      Continuing the reasoning of "random: ensure early RDSEED goes through
      mixer on init", we don't want RDRAND interacting with anything without
      going through the mixer function, as a backdoored CPU could presumably
      cancel out data during an xor, which it'd have a harder time doing when
      being forced through a cryptographic hash function. There's actually no
      need at all to be calling RDRAND in write_pool(), because before we
      extract from the pool, we always do so with 32 bytes of RDSEED hashed in
      at that stage. Xoring at this stage is needless and introduces a minor
      liability.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: ensure early RDSEED goes through mixer on init · a02cf3d0
      Authored by Jason A. Donenfeld
      Continuing the reasoning of "random: use RDSEED instead of RDRAND in
      entropy extraction" from this series, at init time we also don't want to
      be xoring RDSEED directly into the crng. Instead it's safer to put it
      into our entropy collector and then re-extract it, so that it goes
      through a hash function with preimage resistance. As a matter of hygiene,
      we also order these now so that the RDSEED bytes are hashed in first,
      followed by the bytes that are likely more predictable (e.g. utsname()).
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: inline leaves of rand_initialize() · 85664172
      Authored by Jason A. Donenfeld
      This is a preparatory commit for the following one. We simply inline the
      various functions that rand_initialize() calls that have no other
      callers. The compiler was inlining them anyway. Doing this will allow
      us to reorganize the code afterward. We can then move the trust_cpu and
      parse_trust_cpu definitions a bit closer to where they're actually used,
      which makes the code easier to read.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: get rid of secondary crngs · a9412d51
      Authored by Jason A. Donenfeld
      As the comment said, this is indeed a "hack". Since it was introduced,
      it's been a constant state machine nightmare, with lots of subtle early
      boot issues and a wildly complex set of machinery to keep everything in
      sync. Rather than continuing to play whack-a-mole with this approach,
      this commit simply removes it entirely. This commit is preparation for
      "random: use simpler fast key erasure flow on per-cpu keys" in this
      series, which introduces a simpler (and faster) mechanism to accomplish
      the same thing.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: use RDSEED instead of RDRAND in entropy extraction · 28f425e5
      Authored by Jason A. Donenfeld
      When /dev/random was directly connected with entropy extraction, without
      any expansion stage, extract_buf() was called for every 10 bytes of data
      read from /dev/random. For that reason, RDRAND was used rather than
      RDSEED. At the same time, crng_reseed() was still only called every 5
      minutes, so there RDSEED made sense.
      
      Those olden days were also a time when the entropy collector did not use
      a cryptographic hash function, which meant most bets were off in terms
      of real preimage resistance. For that reason too it didn't matter
      _that_ much whether RDSEED was mixed in before or after entropy
      extraction; both choices were sort of bad.
      
      But now we have a cryptographic hash function at work, and with that we
      get real preimage resistance. We also now only call extract_entropy()
      every 5 minutes, rather than every 10 bytes. This allows us to do two
      important things.
      
      First, we can switch to using RDSEED in extract_entropy(), as Dominik
      suggested. Second, we can ensure that RDSEED input always goes into the
      cryptographic hash function with other things before being used
      directly. This eliminates a category of attacks in which the CPU knows
      the current state of the crng and knows that we're going to xor RDSEED
      into it, and so it computes a malicious RDSEED. By going through our
      hash function, it would require the CPU to compute a preimage on the
      fly, which isn't going to happen.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Suggested-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: fix locking in crng_fast_load() · 7c2fe2b3
      Authored by Dominik Brodowski
      crng_init is protected by primary_crng->lock, so keep holding that lock
      when incrementing crng_init from 0 to 1 in crng_fast_load(). The call to
      pr_notice() can wait until the lock is released; this code path cannot
      be reached twice, as crng_fast_load() aborts early if crng_init > 0.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>