1. 28 Feb 2022, 1 commit
    • random: cleanup UUID handling · 64276a99
      Authored by Jason A. Donenfeld
      Rather than hard coding various lengths, we can use the right constants.
      Strings should be `char *` while buffers should be `u8 *`. Rather than
      have a nonsensical and unused maxlength, just remove it. Finally, use
      snprintf instead of sprintf, just out of good hygiene.
      
      As well, remove the old comment about returning a binary UUID via the
      binary sysctl syscall. That syscall was removed from the kernel in 5.5,
      and actually, the "uuid_strategy" function and related infrastructure
      for even serving it via the binary sysctl syscall was removed with
      894d2491 ("sysctl drivers: Remove dead binary sysctl support") back
      in 2.6.33.
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
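The constants-over-magic-numbers point can be illustrated outside the kernel. Below is a minimal userspace sketch; the macro names are illustrative stand-ins, not the kernel's actual definitions. It formats a 16-byte binary UUID (`u8 *`-style buffer in, `char *`-style string out) with a bounded snprintf:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical constants, named instead of hard-coded lengths. */
#define UUID_BYTES      16
#define UUID_STRING_LEN 36  /* 32 hex digits + 4 dashes */

/* Format a binary UUID into buf as a string. buf must hold
 * UUID_STRING_LEN + 1 bytes. Returns the number of characters
 * written, excluding the terminating NUL. */
static int uuid_to_string(const uint8_t *u, char *buf, size_t buflen)
{
    /* snprintf rather than sprintf: the write is bounded even if
     * the caller passes a short buffer. */
    return snprintf(buf, buflen,
                    "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                    "%02x%02x%02x%02x%02x%02x",
                    u[0], u[1], u[2], u[3], u[4], u[5], u[6], u[7],
                    u[8], u[9], u[10], u[11], u[12], u[13], u[14], u[15]);
}
```

The return value lets a caller detect truncation, which a raw sprintf cannot report.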
  2. 24 Feb 2022, 2 commits
  3. 22 Feb 2022, 26 commits
  4. 21 Feb 2022, 11 commits
    • random: do not xor RDRAND when writing into /dev/random · 91c2afca
      Authored by Jason A. Donenfeld
      Continuing the reasoning of "random: ensure early RDSEED goes through
      mixer on init", we don't want RDRAND interacting with anything without
      going through the mixer function, as a backdoored CPU could presumably
      cancel out data during an xor, which it'd have a harder time doing when
      being forced through a cryptographic hash function. There's actually no
      need at all to be calling RDRAND in write_pool(), because before we
      extract from the pool, we always do so with 32 bytes of RDSEED hashed in
      at that stage. Xoring at this stage is needless and introduces a minor
      liability.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: ensure early RDSEED goes through mixer on init · a02cf3d0
      Authored by Jason A. Donenfeld
      Continuing the reasoning of "random: use RDSEED instead of RDRAND in
      entropy extraction" from this series, at init time we also don't want to
      be xoring RDSEED directly into the crng. Instead it's safer to put it
      into our entropy collector and then re-extract it, so that it goes
      through a hash function with preimage resistance. As a matter of hygiene,
      we also order these now so that the RDSEED bytes are hashed in first,
      followed by the bytes that are likely more predictable (e.g. utsname()).
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: inline leaves of rand_initialize() · 85664172
      Authored by Jason A. Donenfeld
      This is a preparatory commit for the following one. We simply inline the
      various functions that rand_initialize() calls that have no other
      callers. The compiler was inlining them anyway. Doing it explicitly
      will let us reorganize the code afterward: we can then move the trust_cpu and
      parse_trust_cpu definitions a bit closer to where they're actually used,
      which makes the code easier to read.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: get rid of secondary crngs · a9412d51
      Authored by Jason A. Donenfeld
      As the comment said, this is indeed a "hack". Since it was introduced,
      it's been a constant state machine nightmare, with lots of subtle early
      boot issues and a wildly complex set of machinery to keep everything in
      sync. Rather than continuing to play whack-a-mole with this approach,
      this commit simply removes it entirely. This commit is preparation for
      "random: use simpler fast key erasure flow on per-cpu keys" in this
      series, which introduces a simpler (and faster) mechanism to accomplish
      the same thing.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: use RDSEED instead of RDRAND in entropy extraction · 28f425e5
      Authored by Jason A. Donenfeld
      When /dev/random was directly connected with entropy extraction, without
      any expansion stage, extract_buf() was called for every 10 bytes of data
      read from /dev/random. For that reason, RDRAND was used rather than
      RDSEED. At the same time, crng_reseed() was still only called every 5
      minutes, so RDSEED made sense there.
      
      Those olden days were also a time when the entropy collector did not use
      a cryptographic hash function, which meant most bets were off in terms
      of real preimage resistance. For that reason too it didn't matter
      _that_ much whether RDSEED was mixed in before or after entropy
      extraction; both choices were sort of bad.
      
      But now we have a cryptographic hash function at work, and with that we
      get real preimage resistance. We also now only call extract_entropy()
      every 5 minutes, rather than every 10 bytes. This allows us to do two
      important things.
      
      First, we can switch to using RDSEED in extract_entropy(), as Dominik
      suggested. Second, we can ensure that RDSEED input always goes into the
      cryptographic hash function with other things before being used
      directly. This eliminates a category of attacks in which the CPU knows
      the current state of the crng and knows that we're going to xor RDSEED
      into it, and so it computes a malicious RDSEED. By going through our
      hash function, it would require the CPU to compute a preimage on the
      fly, which isn't going to happen.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Suggested-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
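The xor-vs-hash distinction this commit relies on can be sketched in userspace. The mixer below is a toy FNV-style stand-in for the kernel's BLAKE2s, with no real preimage resistance; it only demonstrates the data flow, where the CPU's output is fed through the hash alongside other inputs rather than xored in directly:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in for a cryptographic hash state (the kernel
 * uses BLAKE2s); this 64-bit FNV-1a mixer is NOT preimage resistant
 * and only shows the structure of the safer pattern. */
struct mix_state { uint64_t h; };

static void mix_init(struct mix_state *st) { st->h = 0xcbf29ce484222325ULL; }

static void mix_update(struct mix_state *st, const void *data, size_t len)
{
    const uint8_t *p = data;
    while (len--) {
        st->h ^= *p++;
        st->h *= 0x100000001b3ULL;  /* FNV-1a step */
    }
}

/* Unsafe pattern: xor the CPU's output straight into the pool.
 * A backdoored RDSEED that knows `pool` could return pool ^ target
 * and choose the result outright. */
static uint64_t mix_by_xor(uint64_t pool, uint64_t rdseed)
{
    return pool ^ rdseed;
}

/* Safer pattern: feed the RDSEED output through the hash together
 * with the other inputs, so controlling the output would require
 * computing a preimage on the fly. */
static uint64_t mix_by_hash(uint64_t pool, uint64_t rdseed,
                            const void *other, size_t other_len)
{
    struct mix_state st;
    mix_init(&st);
    mix_update(&st, &rdseed, sizeof(rdseed)); /* CPU input first */
    mix_update(&st, other, other_len);        /* then likelier-predictable data */
    mix_update(&st, &pool, sizeof(pool));
    return st.h;
}
```

With the xor pattern, equal inputs cancel to zero; with the hash pattern, the adversary must invert the mixer to steer the result.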
    • random: fix locking in crng_fast_load() · 7c2fe2b3
      Authored by Dominik Brodowski
      crng_init is protected by primary_crng->lock, so keep holding that lock
      when incrementing crng_init from 0 to 1 in crng_fast_load(). The call to
      pr_notice() can wait until the lock is released; this code path cannot
      be reached twice, as crng_fast_load() aborts early if crng_init > 0.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
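The locking pattern described here, promote crng_init while the lock is held and defer the notice until after unlock, can be sketched with userspace stand-ins (a pthread mutex for primary_crng->lock, printf for pr_notice):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t crng_lock = PTHREAD_MUTEX_INITIALIZER;
static int crng_init;

/* Sketch only: returns 1 on the first successful load, 0 otherwise. */
int crng_fast_load(const void *cp, size_t len)
{
    int notify = 0;

    pthread_mutex_lock(&crng_lock);
    if (crng_init != 0) {             /* aborts early: path taken once */
        pthread_mutex_unlock(&crng_lock);
        return 0;
    }
    /* ... mix cp[0..len) into the primary crng here ... */
    (void)cp; (void)len;
    crng_init = 1;                    /* incremented under the lock */
    notify = 1;
    pthread_mutex_unlock(&crng_lock);

    if (notify)                       /* pr_notice after the lock is released */
        printf("random: fast init done\n");
    return 1;
}
```

Keeping the print outside the critical section avoids holding a spinlock across a potentially slow console write, while the state change itself stays protected.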
    • random: remove batched entropy locking · 77760fd7
      Authored by Jason A. Donenfeld
      Rather than use spinlocks to protect batched entropy, we can instead
      disable interrupts locally, since we're dealing with per-cpu data, and
      manage resets with a basic generation counter. At the same time, we
      can't quite do this on PREEMPT_RT, where we still want spinlocks-as-
      mutexes semantics. So we use a local_lock_t, which provides the right
      behavior for each. Because this is a per-cpu lock, that generation
      counter is still doing the necessary CPU-to-CPU communication.
      
      This should improve performance a bit. It will also fix the linked splat
      that Jonathan received with PROVE_RAW_LOCK_NESTING=y.
      Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Reported-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
      Tested-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
      Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
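The generation-counter idea can be sketched in userspace. This is a single-threaded illustration: the interrupt disabling and local_lock_t from the actual patch have no direct analogue here, and the refill is a stand-in rather than a real crng extraction. A batch is only valid while its recorded generation matches the global one, so one counter bump on reseed invalidates every batch without any per-batch lock:

```c
#include <stdint.h>

static uint64_t base_crng_generation;

/* Notionally per-cpu batch of pre-extracted entropy. */
struct batch {
    uint64_t entropy[16];
    unsigned int position;
    uint64_t generation;   /* snapshot of base_crng_generation */
};

static void refill_batch(struct batch *b)
{
    /* Stand-in for extracting fresh bytes from the crng. */
    for (unsigned int i = 0; i < 16; i++)
        b->entropy[i] = 0x1234 + i;
    b->position = 0;
    b->generation = base_crng_generation;
}

uint64_t get_random_u64(struct batch *b)
{
    /* A stale generation or an exhausted batch forces a refill. */
    if (b->generation != base_crng_generation || b->position >= 16)
        refill_batch(b);
    return b->entropy[b->position++];
}

void crng_reseed(void)
{
    /* One increment invalidates all batches: this is the
     * CPU-to-CPU communication mentioned in the commit. */
    base_crng_generation++;
}
```

The consumer never takes a lock; it merely compares its snapshot against the global counter before handing out bytes.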
    • random: remove use_input_pool parameter from crng_reseed() · 5d58ea3a
      Authored by Eric Biggers
      The primary_crng is always reseeded from the input_pool, while the NUMA
      crngs are always reseeded from the primary_crng.  Remove the redundant
      'use_input_pool' parameter from crng_reseed() and just directly check
      whether the crng is the primary_crng.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: make credit_entropy_bits() always safe · a49c010e
      Authored by Jason A. Donenfeld
      This is called from various hwgenerator drivers, so rather than having
      one "safe" version for userspace and one "unsafe" version for the
      kernel, just make everything safe; the checks are cheap and sensible to
      have anyway.
      Reported-by: Sultan Alsawaf <sultan@kerneltoast.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: always wake up entropy writers after extraction · 489c7fc4
      Authored by Jason A. Donenfeld
      Now that POOL_BITS == POOL_MIN_BITS, we must unconditionally wake up
      entropy writers after every extraction. Therefore there is no point to
      write_wakeup_threshold, so we can move it to the dustbin of unused
      compatibility sysctls. While we're at it, we can fix a small comparison
      where we were waking up after <= min rather than < min.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Suggested-by: Eric Biggers <ebiggers@kernel.org>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: use linear min-entropy accumulation crediting · c5704490
      Authored by Jason A. Donenfeld
      30e37ec5 ("random: account for entropy loss due to overwrites")
      assumed that adding new entropy to the LFSR pool probabilistically
      cancelled out old entropy there, so entropy was credited asymptotically,
      approximating Shannon entropy of independent sources (rather than a
      stronger min-entropy notion) using 1/8th fractional bits and replacing
      a constant 2 - 2/√e term (~0.786938) with 3/4 (0.75) to slightly
      underestimate it. This wasn't superb, but it was perhaps better than
      nothing, so that's what was done. Which entropy specifically was being
      cancelled out and how much precisely each time is hard to tell, though
      as I showed with the attack code in my previous commit, a motivated
      adversary with sufficient information can actually cancel out
      everything.
      
      Since we're no longer using an LFSR for entropy accumulation, this
      probabilistic cancellation is no longer relevant. Rather, we're now
      using a computational hash function as the accumulator and we've
      switched to working in the random oracle model, from which we can now
      revisit the question of min-entropy accumulation, which is done in
      detail in <https://eprint.iacr.org/2019/198>.
      
      Consider a long input bit string that is built by concatenating various
      smaller independent input bit strings. Each one of these inputs has a
      designated min-entropy, which is what we're passing to
      credit_entropy_bits(h). When we pass the concatenation of these to a
      random oracle, it means that an adversary trying to receive back the
      same reply as us would need to become certain about each part of the
      concatenated bit string we passed in, which means becoming certain about
      all of those h values. That means we can estimate the accumulation by
      simply adding up the h values in calls to credit_entropy_bits(h);
      there's no probabilistic cancellation at play like there was said to be
      for the LFSR. Incidentally, this is also what other entropy accumulators
      based on computational hash functions do as well.
      
      So this commit replaces credit_entropy_bits(h) with essentially `total =
      min(POOL_BITS, total + h)`, done with a cmpxchg loop as before.
      
      What if we're wrong and the above is nonsense? It's not, but let's
      assume we don't want the actual _behavior_ of the code to change much.
      Currently that behavior is not extracting from the input pool until it
      has 128 bits of entropy in it. With the old algorithm, we'd hit that
      magic 128 number after roughly 256 calls to credit_entropy_bits(1). So,
      we can retain more or less the old behavior by waiting to extract from
      the input pool until it hits 256 bits of entropy using the new code. For
      people concerned about this change, it means that there's not that much
      practical behavioral change. And for folks actually trying to model
      the behavior rigorously, it means that we have an even higher margin
      against attacks.
      
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
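The new crediting logic, `total = min(POOL_BITS, total + h)` in a compare-exchange loop, can be sketched with C11 atomics as a userspace approximation of the kernel's cmpxchg idiom (POOL_BITS here reflects the 256-bit threshold the commit describes):

```c
#include <stdatomic.h>

#define POOL_BITS 256  /* extraction threshold described in the commit */

static _Atomic unsigned int entropy_count;

/* Linear min-entropy crediting: saturating add of h bits, done
 * lock-free so concurrent callers never lose a credit. */
void credit_entropy_bits(unsigned int h)
{
    unsigned int old, desired;

    old = atomic_load(&entropy_count);
    do {
        desired = old + h;
        if (desired > POOL_BITS)
            desired = POOL_BITS;   /* cap at the pool size */
        /* On failure, `old` is reloaded with the current value
         * and the add is retried against it. */
    } while (!atomic_compare_exchange_weak(&entropy_count, &old, desired));
}
```

The loop mirrors the cmpxchg pattern in the commit: each retry recomputes the saturated sum against the freshest value, so contributions simply add up until the cap, with no probabilistic discounting.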