1. 07 Jan 2022: 14 commits
    • random: harmonize "crng init done" messages · 161212c7
      Committed by Dominik Brodowski
      We print out "crng init done" for !TRUST_CPU, so we should also print
      out the same for TRUST_CPU.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: mix bootloader randomness into pool · 57826fee
      Committed by Jason A. Donenfeld
      If we're trusting bootloader randomness, crng_fast_load() is called by
      add_hwgenerator_randomness(), which sets us to crng_init==1. However,
      usually it is only called once for an initial 64-byte push, so bootloader
      entropy will not mix any bytes into the input pool. So it's conceivable
      that crng_init==1 when crng_initialize_primary() is called later, but
      then the input pool is empty. When that happens, the crng state key will
      be overwritten with extracted output from the empty input pool. That's
      bad.
      
      In contrast, if we're not trusting bootloader randomness, we call
      crng_slow_load() *and* we call mix_pool_bytes(), so that later
      crng_initialize_primary() isn't drawing on nothing.
      
      In order to prevent crng_initialize_primary() from extracting an empty
      pool, have the trusted bootloader case mirror that of the untrusted
      bootloader case, mixing the input into the pool.
      
      [linux@dominikbrodowski.net: rewrite commit message]
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: do not throw away excess input to crng_fast_load · 73c7733f
      Committed by Jason A. Donenfeld
      When crng_fast_load() is called by add_hwgenerator_randomness(), we
      currently will advance to crng_init==1 once we've acquired 64 bytes, and
      then throw away the rest of the buffer. Usually, that is not a problem:
      When add_hwgenerator_randomness() gets called via EFI or DT during
      setup_arch(), there won't be any IRQ randomness. Therefore, the 64 bytes
      passed by EFI exactly match what is needed to advance to crng_init==1.
      Usually, DT seems to pass 64 bytes as well -- with one notable exception
      being kexec, which hands over 128 bytes of entropy to the kexec'd kernel.
      In that case, we'll advance to crng_init==1 once 64 of those bytes are
      consumed by crng_fast_load(), but won't continue onward feeding in bytes
      to progress to crng_init==2. This commit fixes the issue by feeding
      any leftover bytes into the next phase in add_hwgenerator_randomness().
      
      [linux@dominikbrodowski.net: rewrite commit message]
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: do not re-init if crng_reseed completes before primary init · 9c3ddde3
      Committed by Jason A. Donenfeld
      If the bootloader supplies sufficient material and crng_reseed() is called
      very early on, but not so early that wqs aren't available yet, then we
      might transition to crng_init==2 before rand_initialize()'s call to
      crng_initialize_primary() is made. Then, when crng_initialize_primary() is
      called, if we're trusting the CPU's RDRAND instructions, we'll
      needlessly reinitialize the RNG and emit a message about it. This is
      mostly harmless, as numa_crng_init() will allocate and then free what it
      just allocated, and excessive calls to invalidate_batched_entropy()
      aren't so harmful. But it is funky and the extra message is confusing,
      so avoid the re-initialization altogether by checking for crng_init <
      2 in crng_initialize_primary(), just as we already do in crng_reseed().
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: fix crash on multiple early calls to add_bootloader_randomness() · f7e67b8e
      Committed by Dominik Brodowski
      Currently, if CONFIG_RANDOM_TRUST_BOOTLOADER is enabled, multiple calls
      to add_bootloader_randomness() are broken and can cause a NULL pointer
      dereference, as noted by Ivan T. Ivanov. This is not only a hypothetical
      problem, as qemu on arm64 may provide bootloader entropy via EFI and via
      devicetree.
      
      On the first call to add_hwgenerator_randomness(), crng_fast_load() is
      executed, and if the seed is long enough, crng_init will be set to 1.
      On subsequent calls to add_bootloader_randomness() and then to
      add_hwgenerator_randomness(), crng_fast_load() will be skipped. Instead,
      wait_event_interruptible() and then credit_entropy_bits() will be called.
      If the entropy count for that second seed is large enough, that proceeds
      to crng_reseed().
      
      However, both wait_event_interruptible() and crng_reseed() depend
      (at least in numa_crng_init()) on workqueues. Therefore, test whether
      system_wq is already initialized, which is a sufficient indicator that
      workqueue_init_early() has progressed far enough.
      
      If we wind up hitting the !system_wq case, we later want to do what
      would have been done there when wqs are up, so set a flag, and do that
      work later from the rand_initialize() call.
      Reported-by: Ivan T. Ivanov <iivanov@suse.de>
      Fixes: 18b915ac ("efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness")
      Cc: stable@vger.kernel.org
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      [Jason: added crng_need_done state and related logic.]
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: do not sign extend bytes for rotation when mixing · 0d9488ff
      Committed by Jason A. Donenfeld
      By using `char` instead of `unsigned char`, certain platforms will sign
      extend the byte when `w = rol32(*bytes++, input_rotate)` is called,
      meaning that bit 7 is overrepresented when mixing. This isn't a real
      problem (unless the mixer itself is already broken) since it's still
      invertible, but it's not quite correct either. Fix this by using an
      explicit unsigned type.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: use BLAKE2s instead of SHA1 in extraction · 9f9eff85
      Committed by Jason A. Donenfeld
      This commit addresses one of the lower hanging fruits of the RNG: its
      usage of SHA1.
      
      BLAKE2s is generally faster, and certainly more secure, than SHA1, which
      has [1] been [2] really [3] very [4] broken [5]. Additionally, the
      current construction in the RNG doesn't use the full SHA1 function, as
      specified, and allows overwriting the IV with RDRAND output in an
      undocumented way, even in the case when RDRAND isn't set to "trusted",
      which means potential malicious IV choices. And its short length means
      that keeping only half of it secret when feeding back into the mixer
      gives us only 2^80 bits of forward secrecy. In other words, not only is
      the choice of hash function dated, but the use of it isn't really great
      either.
      
      This commit aims to fix both of these issues while also keeping the
      general structure and semantics as close to the original as possible.
      Specifically:
      
         a) Rather than overwriting the hash IV with RDRAND, we put it into
            BLAKE2's documented "salt" and "personal" fields, which were
            specifically created for this type of usage.
         b) Since this function feeds the full hash result back into the
            entropy collector, we only return from it half the length of the
            hash, just as it was done before. This increases the
            construction's forward secrecy from 2^80 to a much more
            comfortable 2^128.
         c) Rather than using the raw "sha1_transform" function alone, we
            instead use the full proper BLAKE2s function, with finalization.
      
      This also has the advantage of supplying 16 bytes at a time rather than
      SHA1's 10 bytes, which, in addition to having a faster compression
      function to begin with, means faster extraction in general. On an Intel
      i7-11850H, this commit makes initial seeding around 131% faster.
      
      BLAKE2s itself has the nice property of internally being based on the
      ChaCha permutation, which the RNG is already using for expansion, so
      there shouldn't be any issue with newness, funkiness, or surprising CPU
      behavior, since it's based on something already in use.
      
      [1] https://eprint.iacr.org/2005/010.pdf
      [2] https://www.iacr.org/archive/crypto2005/36210017/36210017.pdf
      [3] https://eprint.iacr.org/2015/967.pdf
      [4] https://shattered.io/static/shattered.pdf
      [5] https://www.usenix.org/system/files/sec20-leurent.pdf
      Reviewed-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • lib/crypto: blake2s: include as built-in · 6048fdcc
      Committed by Jason A. Donenfeld
      In preparation for using blake2s in the RNG, we change the way that it
      is wired-in to the build system. Instead of using ifdefs to select the
      right symbol, we use weak symbols. And because ARM doesn't need the
      generic implementation, we make the generic one default only if an arch
      library doesn't need it already, and then have arch libraries that do
      need it opt-in. So that the arch libraries can remain tristate rather
      than bool, we then split the shash part from the glue code.
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: linux-kbuild@vger.kernel.org
      Cc: linux-crypto@vger.kernel.org
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: fix data race on crng init time · 009ba856
      Committed by Eric Biggers
      _extract_crng() does plain loads of crng->init_time and
      crng_global_init_time, which causes undefined behavior if
      crng_reseed() and RNDRESEEDCRNG modify these concurrently.
      
      Use READ_ONCE() and WRITE_ONCE() to make the behavior defined.
      
      Don't fix the race on crng->init_time by protecting it with crng->lock,
      since it's not a problem for duplicate reseedings to occur.  I.e., the
      lockless access with READ_ONCE() is fine.
      
      Fixes: d848e5f8 ("random: add new ioctl RNDRESEEDCRNG")
      Fixes: e192be9d ("random: replace non-blocking pool with a Chacha20-based CRNG")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: fix data race on crng_node_pool · 5d73d1e3
      Committed by Eric Biggers
      extract_crng() and crng_backtrack_protect() load crng_node_pool with a
      plain load, which causes undefined behavior if do_numa_crng_init()
      modifies it concurrently.
      
      Fix this by using READ_ONCE().  Note: as per the previous discussion
      https://lore.kernel.org/lkml/20211219025139.31085-1-ebiggers@kernel.org/T/#u,
      READ_ONCE() is believed to be sufficient here, and it was requested that
      it be used here instead of smp_load_acquire().
      
      Also change do_numa_crng_init() to set crng_node_pool using
      cmpxchg_release() instead of mb() + cmpxchg(), as the former is
      sufficient here but is more lightweight.
      
      Fixes: 1e7f583a ("random: make /dev/urandom scalable for silly userspace programs")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • irq: remove unused flags argument from __handle_irq_event_percpu() · 5320eb42
      Committed by Sebastian Andrzej Siewior
      The __IRQF_TIMER bit from the flags argument was used in
      add_interrupt_randomness() to distinguish the timer interrupt from other
      interrupts. This is no longer the case.
      
      Remove the flags argument from __handle_irq_event_percpu().
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: remove unused irq_flags argument from add_interrupt_randomness() · 703f7066
      Committed by Sebastian Andrzej Siewior
      Since commit
         ee3e00e9 ("random: use registers from interrupted code for CPU's w/o a cycle counter")
      
      the irq_flags argument is no longer used.
      
      Remove unused irq_flags.
      
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: linux-hyperv@vger.kernel.org
      Cc: x86@kernel.org
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Acked-by: Wei Liu <wei.liu@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: document add_hwgenerator_randomness() with other input functions · 2b6c6e3d
      Committed by Mark Brown
      The section at the top of random.c which documents the input functions
      available does not document add_hwgenerator_randomness() which might lead
      a reader to overlook it. Add a brief note about it.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [Jason: reorganize position of function in doc comment and also document
       add_bootloader_randomness() while we're at it.]
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • MAINTAINERS: add git tree for random.c · 9bafaa93
      Committed by Jason A. Donenfeld
      This is handy not just for humans, but also so that the 0-day bot can
      automatically test posted mailing list patches against the right tree.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  2. 06 Jan 2022: 5 commits
  3. 05 Jan 2022: 8 commits
  4. 04 Jan 2022: 13 commits