  1. 18 January 2022: 11 commits
  2. 07 January 2022: 15 commits
    • random: don't reset crng_init_cnt on urandom_read() · 6c8e11e0
      By Jann Horn
      At the moment, urandom_read() (used for /dev/urandom) resets crng_init_cnt
      to zero when it is called at crng_init<2. This is inconsistent: We do it
      for /dev/urandom reads, but not for the equivalent
      getrandom(GRND_INSECURE).
      
      (And worse, as Jason pointed out, we're only doing this as long as
      maxwarn>0.)
      
      crng_init_cnt is only read in crng_fast_load(); it is relevant at
      crng_init==0 for determining when to switch to crng_init==1 (and where in
      the RNG state array to write).
      
      As far as I understand:
      
       - crng_init==0 means "we have nothing, we might just be returning the same
         exact numbers on every boot on every machine, we don't even have
         non-cryptographic randomness; we should shove every bit of entropy we
         can get into the RNG immediately"
       - crng_init==1 means "well we have something, it might not be
         cryptographic, but at least we're not gonna return the same data every
         time or whatever, it's probably good enough for TCP and ASLR and stuff;
         we now have time to build up actual cryptographic entropy in the input
         pool"
       - crng_init==2 means "this is supposed to be cryptographically secure now,
         but we'll keep adding more entropy just to be sure".
      
      The current code means that if someone is pulling data from /dev/urandom
      fast enough at crng_init==0, we'll keep resetting crng_init_cnt, and we'll
      never make forward progress to crng_init==1. It seems to be intended to
      prevent an attacker from bruteforcing the contents of small individual RNG
      inputs on the way from crng_init==0 to crng_init==1, but that's misguided;
      crng_init==1 isn't supposed to provide proper cryptographic security
      anyway, RNG users who care about getting secure RNG output have to wait
      until crng_init==2.
      
      This code was inconsistent, and it probably made things worse - just get
      rid of it.
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
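
      As a rough illustration of the mechanism described above (the function
      and constant names here are assumptions for the sketch, not the literal
      kernel code), crng_fast_load()-style accumulation looks like this; note
      how resetting crng_init_cnt from a reader path stalls the 0 -> 1
      transition indefinitely:

        #include <stddef.h>
        #include <stdint.h>

        #define FAST_LOAD_GOAL 32  /* assumed: bytes needed for crng_init 0 -> 1 */

        static int crng_init;              /* 0: nothing, 1: some, 2: seeded */
        static unsigned int crng_init_cnt; /* bytes gathered; also write position */
        static uint8_t crng_key[FAST_LOAD_GOAL];

        /* XOR incoming bytes into the key at crng_init_cnt; once enough have
         * arrived, advance to crng_init == 1. Returns bytes consumed. */
        static size_t fast_load(const uint8_t *in, size_t len)
        {
                size_t used = 0;

                while (len > 0 && crng_init_cnt < FAST_LOAD_GOAL) {
                        crng_key[crng_init_cnt++] ^= *in++;
                        len--;
                        used++;
                }
                if (crng_init_cnt >= FAST_LOAD_GOAL)
                        crng_init = 1;
                return used;
        }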
    • random: avoid superfluous call to RDRAND in CRNG extraction · 2ee25b69
      By Jason A. Donenfeld
      RDRAND is not fast. RDRAND is actually quite slow. We've known this for
      a while, which is why functions like get_random_u{32,64} were converted
      to use batching of our ChaCha-based CRNG instead.
      
      Yet CRNG extraction still includes a call to RDRAND, in the hot path of
      every call to get_random_bytes(), /dev/urandom, and getrandom(2).
      
      This call to RDRAND here seems quite superfluous. CRNG is already
      extracting things based on a 256-bit key, based on good entropy, which
      is then reseeded periodically, updated, backtrack-mutated, and so
      forth. The CRNG extraction construction is something that we're already
      relying on to be secure and solid. If it's not, that's a serious
      problem, and it's unlikely that mixing in a measly 32 bits from RDRAND
      is going to alleviate things.
      
      And in the case where the CRNG doesn't have enough entropy yet, we're
      already initializing the ChaCha key row with RDRAND in
      crng_init_try_arch_early().
      
      Removing the call to RDRAND improves performance on an i7-11850H by
      370%. In other words, the vast majority of the work done by
      extract_crng() prior to this commit was devoted to fetching 32 bits of
      RDRAND.
      Reviewed-by: Theodore Ts'o <tytso@mit.edu>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
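
      Schematically (helper names assumed; this is not the literal diff), the
      change deletes a per-block RDRAND mix-in from the extraction hot path:

        /* extract_crng()-style output generation, sketched. */
        static void extract_sketch(uint32_t state[16], uint8_t out[64])
        {
                /* Removed, roughly:
                 *
                 *     unsigned long v;
                 *     if (arch_get_random_long(&v))   // RDRAND behind this hook
                 *             state[14] ^= v;
                 *
                 * That one call dominated the cost of the function while adding
                 * a mere 32 bits to an already-keyed ChaCha stream. Now we go
                 * straight to the block function. */
                chacha20_block(state, out);     /* assumed ChaCha20 helper */
        }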
    • random: early initialization of ChaCha constants · 96562f28
      By Dominik Brodowski
      Previously, the ChaCha constants for the primary pool were only
      initialized in crng_initialize_primary(), called by rand_initialize().
      However, some randomness is actually extracted from the primary pool
      beforehand, e.g. by kmem_cache_create(). Therefore, statically
      initialize the ChaCha constants for the primary pool.
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: <linux-crypto@vger.kernel.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
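
      The constant row is the fixed ChaCha string "expand 32-byte k"; a static
      initializer along these lines (the state layout shown is an assumption
      of the sketch) keeps the state well-formed even before rand_initialize()
      runs:

        #include <stdint.h>

        /* "expa" "nd 3" "2-by" "te k" as little-endian words. */
        #define CHACHA_CONSTANT_EXPA 0x61707865U
        #define CHACHA_CONSTANT_ND_3 0x3320646eU
        #define CHACHA_CONSTANT_2_BY 0x79622d32U
        #define CHACHA_CONSTANT_TE_K 0x6b206574U

        static struct {
                uint32_t state[16];
        } primary_crng = {
                .state = { CHACHA_CONSTANT_EXPA, CHACHA_CONSTANT_ND_3,
                           CHACHA_CONSTANT_2_BY, CHACHA_CONSTANT_TE_K },
                /* words 4..15 (key, counter, nonce) are filled at runtime */
        };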
    • random: use IS_ENABLED(CONFIG_NUMA) instead of ifdefs · 7b873241
      By Jason A. Donenfeld
      Rather than an awkward combination of ifdefs and __maybe_unused, we can
      ensure more source gets parsed, regardless of the configuration, by
      using IS_ENABLED for the CONFIG_NUMA conditional code. This makes things
      cleaner and easier to follow.
      
      I've confirmed that on !CONFIG_NUMA, we don't wind up with excess code
      by accident; the generated object file is the same.
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
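
      The pattern in question, sketched (the NUMA worker detail is assumed):

        /* Before: invisible to the compiler when CONFIG_NUMA is off.
         *
         *     #ifdef CONFIG_NUMA
         *     static void do_numa_crng_init(struct work_struct *work) { ... }
         *     #endif
         *
         * After: always parsed and type-checked; the dead branch is folded
         * away at compile time, so the !CONFIG_NUMA object file is identical. */
        static void numa_crng_init(void)
        {
                if (IS_ENABLED(CONFIG_NUMA))
                        schedule_work(&numa_crng_init_work);
        }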
    • random: harmonize "crng init done" messages · 161212c7
      By Dominik Brodowski
      We print out "crng init done" for !TRUST_CPU, so we should also print
      out the same for TRUST_CPU.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: mix bootloader randomness into pool · 57826fee
      By Jason A. Donenfeld
      If we're trusting bootloader randomness, crng_fast_load() is called by
      add_hwgenerator_randomness(), which sets us to crng_init==1. However,
      usually it is only called once for an initial 64-byte push, so bootloader
      entropy will not mix any bytes into the input pool. So it's conceivable
      that crng_init==1 when crng_initialize_primary() is called later, but
      then the input pool is empty. When that happens, the crng state key will
      be overwritten with extracted output from the empty input pool. That's
      bad.
      
      In contrast, if we're not trusting bootloader randomness, we call
      crng_slow_load() *and* we call mix_pool_bytes(), so that later
      crng_initialize_primary() isn't drawing on nothing.
      
      In order to prevent crng_initialize_primary() from extracting an empty
      pool, have the trusted bootloader case mirror that of the untrusted
      bootloader case, mixing the input into the pool.
      
      [linux@dominikbrodowski.net: rewrite commit message]
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
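
      Schematically (helper names assumed, not the literal diff), the trusted
      path now mixes the fast-loaded bytes into the input pool too, so a later
      crng_initialize_primary() has something to extract:

        void add_hwgenerator_sketch(const uint8_t *buf, size_t len)
        {
                if (crng_init == 0) {
                        crng_fast_load(buf, len);        /* may advance to 1 */
                        mix_pool_bytes_sketch(buf, len); /* new: pool no longer
                                                          * left empty */
                        return;
                }
                /* ... otherwise credit entropy / reseed as before ... */
        }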
    • random: do not throw away excess input to crng_fast_load · 73c7733f
      By Jason A. Donenfeld
      When crng_fast_load() is called by add_hwgenerator_randomness(), we
      currently will advance to crng_init==1 once we've acquired 64 bytes, and
      then throw away the rest of the buffer. Usually, that is not a problem:
      When add_hwgenerator_randomness() gets called via EFI or DT during
      setup_arch(), there won't be any IRQ randomness. Therefore, the 64 bytes
      passed by EFI exactly matches what is needed to advance to crng_init==1.
      Usually, DT seems to pass 64 bytes as well -- with one notable exception
      being kexec, which hands over 128 bytes of entropy to the kexec'd kernel.
      In that case, we'll advance to crng_init==1 once 64 of those bytes are
      consumed by crng_fast_load(), but won't continue onward feeding in bytes
      to progress to crng_init==2. This commit fixes the issue by feeding
      any leftover bytes into the next phase in add_hwgenerator_randomness().
      
      [linux@dominikbrodowski.net: rewrite commit message]
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
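
      Refining the sketch under 57826fee above: with crng_fast_load()
      reporting how many bytes it consumed (as in the fast_load() sketch under
      6c8e11e0), the leftover is fed onward instead of dropped, e.g. the
      second half of kexec's 128-byte handoff:

        void add_hwgenerator_sketch(const uint8_t *buf, size_t len)
        {
                if (crng_init == 0 && len) {
                        size_t used = fast_load(buf, len); /* bytes consumed */
                        buf += used;
                        len -= used;
                }
                if (len)
                        mix_pool_bytes_sketch(buf, len); /* assumed: remainder
                                                          * still counts toward
                                                          * crng_init == 2 */
        }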
    • random: do not re-init if crng_reseed completes before primary init · 9c3ddde3
      By Jason A. Donenfeld
      If the bootloader supplies sufficient material and crng_reseed() is called
      very early on, but not so early that wqs aren't available yet, then we
      might transition to crng_init==2 before rand_initialize()'s call to
      crng_initialize_primary() is made. Then, when crng_initialize_primary() is
      called, if we're trusting the CPU's RDRAND instructions, we'll
      needlessly reinitialize the RNG and emit a message about it. This is
      mostly harmless, as numa_crng_init() will allocate and then free what it
      just allocated, and excessive calls to invalidate_batched_entropy()
      aren't so harmful. But it is funky and the extra message is confusing,
      so avoid the re-initialization altogether by checking for crng_init <
      2 in crng_initialize_primary(), just as we already do in crng_reseed().
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
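
      The guard, schematically (surrounding details assumed):

        static void crng_initialize_primary_sketch(struct crng_state *crng)
        {
                extract_from_input_pool_sketch(crng);   /* assumed helper */
                /* The added "crng_init < 2" check skips re-initialization when
                 * an early crng_reseed() has already finished the job. */
                if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) {
                        invalidate_batched_entropy();
                        crng_init = 2;
                        pr_notice("crng init done (trusting CPU's manufacturer)\n");
                }
        }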
    • random: fix crash on multiple early calls to add_bootloader_randomness() · f7e67b8e
      By Dominik Brodowski
      Currently, if CONFIG_RANDOM_TRUST_BOOTLOADER is enabled, multiple calls
      to add_bootloader_randomness() are broken and can cause a NULL pointer
      dereference, as noted by Ivan T. Ivanov. This is not only a hypothetical
      problem, as qemu on arm64 may provide bootloader entropy via EFI and via
      devicetree.
      
      On the first call to add_hwgenerator_randomness(), crng_fast_load() is
      executed, and if the seed is long enough, crng_init will be set to 1.
      On subsequent calls to add_bootloader_randomness() and then to
      add_hwgenerator_randomness(), crng_fast_load() will be skipped. Instead,
      wait_event_interruptible() and then credit_entropy_bits() will be called.
      If the entropy count for that second seed is large enough, that proceeds
      to crng_reseed().
      
      However, both wait_event_interruptible() and crng_reseed() depend
      (at least in numa_crng_init()) on workqueues. Therefore, test whether
      system_wq is already initialized, which is a sufficient indicator that
      workqueue_init_early() has progressed far enough.
      
      If we wind up hitting the !system_wq case, we later want to do what
      would have been done there when wqs are up, so set a flag, and do that
      work later from the rand_initialize() call.
      Reported-by: Ivan T. Ivanov <iivanov@suse.de>
      Fixes: 18b915ac ("efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness")
      Cc: stable@vger.kernel.org
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      [Jason: added crng_need_done state and related logic.]
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
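
      A sketch of the described fix; crng_need_done is named in the commit,
      while the two helpers are assumptions:

        extern struct workqueue_struct *system_wq; /* NULL until
                                                    * workqueue_init_early() */
        static bool crng_need_done;

        void add_bootloader_sketch(const void *buf, unsigned int size)
        {
                mix_pool_bytes_sketch(buf, size);        /* always safe */
                if (system_wq)
                        credit_entropy_sketch(size * 8); /* may wait / reseed */
                else
                        crng_need_done = true; /* finish from rand_initialize()
                                                * once workqueues exist */
        }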
    • random: do not sign extend bytes for rotation when mixing · 0d9488ff
      By Jason A. Donenfeld
      By using `char` instead of `unsigned char`, certain platforms will sign
      extend the byte when `w = rol32(*bytes++, input_rotate)` is called,
      meaning that bit 7 is overrepresented when mixing. This isn't a real
      problem (unless the mixer itself is already broken) since it's still
      invertible, but it's not quite correct either. Fix this by using an
      explicit unsigned type.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
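
      A self-contained demonstration of the promotion hazard (the printed
      values hold on ABIs where plain char is signed):

        #include <stdint.h>
        #include <stdio.h>

        static uint32_t rol32(uint32_t w, unsigned int s)
        {
                return (w << s) | (w >> ((32 - s) & 31));
        }

        int main(void)
        {
                const char sc = (char)0x80;     /* plain char may be signed */
                const unsigned char uc = 0x80;

                /* Signed promotion turns 0x80 into 0xffffff80 before the
                 * rotation, so bit 7 bleeds into the 24 high bits as well. */
                printf("%08x\n", (unsigned int)rol32((uint32_t)sc, 7)); /* ffffc07f */
                printf("%08x\n", (unsigned int)rol32((uint32_t)uc, 7)); /* 00004000 */
                return 0;
        }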
    • random: use BLAKE2s instead of SHA1 in extraction · 9f9eff85
      By Jason A. Donenfeld
      This commit addresses one of the lower hanging fruits of the RNG: its
      usage of SHA1.
      
      BLAKE2s is generally faster, and certainly more secure, than SHA1, which
      has [1] been [2] really [3] very [4] broken [5]. Additionally, the
      current construction in the RNG doesn't use the full SHA1 function, as
      specified, and allows overwriting the IV with RDRAND output in an
      undocumented way, even in the case when RDRAND isn't set to "trusted",
      which means potential malicious IV choices. And its short length means
      that keeping only half of it secret when feeding back into the mixer
      gives us only 2^80 bits of forward secrecy. In other words, not only is
      the choice of hash function dated, but the use of it isn't really great
      either.
      
      This commit aims to fix both of these issues while also keeping the
      general structure and semantics as close to the original as possible.
      Specifically:
      
         a) Rather than overwriting the hash IV with RDRAND, we put it into
            BLAKE2's documented "salt" and "personal" fields, which were
            specifically created for this type of usage.
         b) Since this function feeds the full hash result back into the
            entropy collector, we only return from it half the length of the
            hash, just as it was done before. This increases the
            construction's forward secrecy from 2^80 to a much more
            comfortable 2^128.
         c) Rather than using the raw "sha1_transform" function alone, we
            instead use the full proper BLAKE2s function, with finalization.
      
      This also has the advantage of supplying 16 bytes at a time rather than
      SHA1's 10 bytes, which, in addition to having a faster compression
      function to begin with, means faster extraction in general. On an Intel
      i7-11850H, this commit makes initial seeding around 131% faster.
      
      BLAKE2s itself has the nice property of internally being based on the
      ChaCha permutation, which the RNG is already using for expansion, so
      there shouldn't be any issue with newness, funkiness, or surprising CPU
      behavior, since it's based on something already in use.
      
      [1] https://eprint.iacr.org/2005/010.pdf
      [2] https://www.iacr.org/archive/crypto2005/36210017/36210017.pdf
      [3] https://eprint.iacr.org/2015/967.pdf
      [4] https://shattered.io/static/shattered.pdf
      [5] https://www.usenix.org/system/files/sec20-leurent.pdf
      Reviewed-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
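
      A sketch of the new extraction shape using the kernel's one-shot
      blake2s() library helper (pool handling and the feedback helper are
      assumptions of the sketch):

        #include <crypto/blake2s.h>     /* blake2s(), BLAKE2S_HASH_SIZE */
        #include <linux/string.h>       /* memcpy(), memzero_explicit() */

        /* Hash the pool, feed the full 32-byte digest back into the pool,
         * but release only half of it to the caller, so 128 bits of the
         * feedback stay secret (vs. SHA-1's 80). */
        static void extract_buf_sketch(const u8 *pool, size_t poolsz, u8 out[16])
        {
                u8 digest[BLAKE2S_HASH_SIZE];   /* 32 bytes */

                blake2s(digest, pool, NULL, sizeof(digest), poolsz, 0);
                mix_pool_bytes_sketch(digest, sizeof(digest)); /* feedback */
                memcpy(out, digest, sizeof(digest) / 2);
                memzero_explicit(digest, sizeof(digest));
        }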
    • random: fix data race on crng init time · 009ba856
      By Eric Biggers
      _extract_crng() does plain loads of crng->init_time and
      crng_global_init_time, which causes undefined behavior if
      crng_reseed() and RNDRESEEDCRNG modify these concurrently.
      
      Use READ_ONCE() and WRITE_ONCE() to make the behavior defined.
      
      Don't fix the race on crng->init_time by protecting it with crng->lock,
      since it's not a problem for duplicate reseedings to occur.  I.e., the
      lockless access with READ_ONCE() is fine.
      
      Fixes: d848e5f8 ("random: add new ioctl RNDRESEEDCRNG")
      Fixes: e192be9d ("random: replace non-blocking pool with a Chacha20-based CRNG")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
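
      The pattern, sketched with an assumed struct for context:

        #include <linux/compiler.h>     /* READ_ONCE(), WRITE_ONCE() */
        #include <linux/jiffies.h>      /* jiffies, time_after() */

        struct crng_state_sketch {
                unsigned long init_time;
                /* ... */
        };

        /* Writer side (crng_reseed(), also reachable via RNDRESEEDCRNG). */
        static void mark_reseeded_sketch(struct crng_state_sketch *crng)
        {
                WRITE_ONCE(crng->init_time, jiffies);
        }

        /* Reader side (_extract_crng()): a plain load raced with the writer;
         * READ_ONCE() makes the access defined, and acting on a momentarily
         * stale value merely risks a harmless duplicate reseed. */
        static bool needs_reseed_sketch(struct crng_state_sketch *crng,
                                        unsigned long interval)
        {
                return time_after(jiffies,
                                  READ_ONCE(crng->init_time) + interval);
        }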
    • random: fix data race on crng_node_pool · 5d73d1e3
      By Eric Biggers
      extract_crng() and crng_backtrack_protect() load crng_node_pool with a
      plain load, which causes undefined behavior if do_numa_crng_init()
      modifies it concurrently.
      
      Fix this by using READ_ONCE().  Note: as per the previous discussion
      https://lore.kernel.org/lkml/20211219025139.31085-1-ebiggers@kernel.org/T/#u,
      READ_ONCE() is believed to be sufficient here, and it was requested that
      it be used here instead of smp_load_acquire().
      
      Also change do_numa_crng_init() to set crng_node_pool using
      cmpxchg_release() instead of mb() + cmpxchg(), as the former is
      sufficient here but is more lightweight.
      
      Fixes: 1e7f583a ("random: make /dev/urandom scalable for silly userspace programs")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
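
      Both sides, sketched (the fallback and cleanup helpers are assumptions):

        static struct crng_state **crng_node_pool; /* one crng per NUMA node */

        /* Writer, do_numa_crng_init()-style: cmpxchg_release() publishes the
         * fully initialized array; release ordering replaces mb() + cmpxchg(). */
        static void publish_pool_sketch(struct crng_state **pool)
        {
                if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL)
                        free_pool_sketch(pool); /* another CPU won the race */
        }

        /* Reader, extract_crng()-style: READ_ONCE() pairs with the release
         * above; per the cited thread, the dependency through the returned
         * pointer suffices, so smp_load_acquire() is unnecessary. */
        static struct crng_state *select_crng_sketch(void)
        {
                struct crng_state **pool = READ_ONCE(crng_node_pool);

                return pool ? pool[numa_node_id()] : primary_crng_sketch();
        }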
    • random: remove unused irq_flags argument from add_interrupt_randomness() · 703f7066
      By Sebastian Andrzej Siewior
      Since commit
         ee3e00e9 ("random: use registers from interrupted code for CPU's w/o a cycle counter")
      
      the irq_flags argument is no longer used.
      
      Remove unused irq_flags.
      
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: linux-hyperv@vger.kernel.org
      Cc: x86@kernel.org
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Acked-by: Wei Liu <wei.liu@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • random: document add_hwgenerator_randomness() with other input functions · 2b6c6e3d
      By Mark Brown
      The section at the top of random.c which documents the input functions
      available does not document add_hwgenerator_randomness() which might lead
      a reader to overlook it. Add a brief note about it.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [Jason: reorganize position of function in doc comment and also document
       add_bootloader_randomness() while we're at it.]
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  3. 02 April 2021: 2 commits
  4. 04 February 2021: 1 commit
  5. 21 January 2021: 1 commit
    • random: avoid arch_get_random_seed_long() when collecting IRQ randomness · 390596c9
      By Ard Biesheuvel
      When reseeding the CRNG periodically, arch_get_random_seed_long() is
      called to obtain entropy from an architecture specific source if one
      is implemented. In most cases, these are special instructions, but in
      some cases, such as on ARM, we may want to back this using firmware
      calls, which are considerably more expensive.
      
      Another call to arch_get_random_seed_long() exists in the CRNG driver,
      in add_interrupt_randomness(), which collects entropy by capturing
      inter-interrupt timing and relying on interrupt jitter to provide
      random bits. This is done by keeping a per-CPU state, and mixing in
      the IRQ number, the cycle counter and the return address every time an
      interrupt is taken, and mixing this per-CPU state into the entropy pool
      every 64 invocations, or at least once per second. The entropy that is
      gathered this way is credited as 1 bit of entropy. Every time this
      happens, arch_get_random_seed_long() is invoked, and the result is
      mixed in as well, and also credited with 1 bit of entropy.
      
      This means that arch_get_random_seed_long() is called at least once
      per second on every CPU, which seems excessive, and doesn't really
      scale, especially in a virtualization scenario where CPUs may be
      oversubscribed: in cases where arch_get_random_seed_long() is backed
      by an instruction that actually goes back to a shared hardware entropy
      source (such as RNDRRS on ARM), we will end up hitting it hundreds of
      times per second.
      
      So let's drop the call to arch_get_random_seed_long() from
      add_interrupt_randomness(), and instead, rely on crng_reseed() to call
      the arch hook to get random seed material from the platform.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Andre Przywara <andre.przywara@arm.com>
      Tested-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
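
      After the change, the arch hook is consulted only on reseed. The key
      mixing loop, sketched from the description above:

        /* crng_reseed()-style: one arch-seed attempt per key word, at most
         * once per reseed interval, instead of once per second per CPU. */
        static void mix_arch_seed_sketch(uint32_t key[8])
        {
                unsigned long rv;
                int i;

                for (i = 0; i < 8; i++) {
                        if (!arch_get_random_seed_long(&rv) &&
                            !arch_get_random_long(&rv))
                                rv = random_get_entropy(); /* cycle counter */
                        key[i] ^= (uint32_t)rv;
                }
        }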
  6. 20 November 2020: 1 commit
    • crypto: sha - split sha.h into sha1.h and sha2.h · a24d22b2
      By Eric Biggers
      Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2,
      and <crypto/sha3.h> contains declarations for SHA-3.
      
      This organization is inconsistent, but more importantly SHA-1 is no
      longer considered to be cryptographically secure.  So to the extent
      possible, SHA-1 shouldn't be grouped together with any of the other SHA
      versions, and usage of it should be phased out.
      
      Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and
      <crypto/sha2.h>, and make everyone explicitly specify whether they want
      the declarations for SHA-1, SHA-2, or both.
      
      This avoids making the SHA-1 declarations visible to files that don't
      want anything to do with SHA-1.  It also prepares for potentially moving
      sha1.h into a new insecure/ or dangerous/ directory.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  7. 25 October 2020: 1 commit
    • random32: make prandom_u32() output unpredictable · c51f8f88
      By George Spelvin
      Non-cryptographic PRNGs may have great statistical properties, but
      are usually trivially predictable to someone who knows the algorithm,
      given a small sample of their output.  An LFSR like prandom_u32() is
      particularly simple, even if the sample consists of widely scattered bits.
      
      It turns out the network stack uses prandom_u32() for some things like
      random port numbers which it would prefer are *not* trivially predictable.
      Predictability led to a practical DNS spoofing attack.  Oops.
      
      This patch replaces the LFSR with a homebrew cryptographic PRNG based
      on the SipHash round function, which is in turn seeded with 128 bits
      of strong random key.  (The authors of SipHash have *not* been consulted
      about this abuse of their algorithm.)  Speed is prioritized over security;
      attacks are rare, while performance is always wanted.
      
      Replacing all callers of prandom_u32() is the quick fix.
      Whether to reinstate a weaker PRNG for uses which can tolerate it
      is an open question.
      
      Commit f227e3ec ("random32: update the net random state on interrupt
      and activity") was an earlier attempt at a solution.  This patch replaces
      it.
      Reported-by: Amit Klein <aksecurity@gmail.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: tytso@mit.edu
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Marc Plumb <lkml.mplumb@gmail.com>
      Fixes: f227e3ec ("random32: update the net random state on interrupt and activity")
      Signed-off-by: George Spelvin <lkml@sdf.org>
      Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/
      [ willy: partial reversal of f227e3ec; moved SIPROUND definitions
        to prandom.h for later use; merged George's prandom_seed() proposal;
        inlined siprand_u32(); replaced the net_rand_state[] array with 4
        members to fix a build issue; cosmetic cleanups to make checkpatch
        happy; fixed RANDOM32_SELFTEST build ]
      Signed-off-by: Willy Tarreau <w@1wt.eu>
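
      The core of the construction mirrors the merged siprand_u32(): two
      SipHash rounds over a 4-word state seeded with 128 bits, returning a
      mix of two words rather than raw state. A self-contained userspace
      rendering:

        #include <stdint.h>

        static uint64_t rol64(uint64_t v, unsigned int s)
        {
                return (v << s) | (v >> (64 - s));
        }

        /* The standard SipHash round over four 64-bit state words. */
        #define SIPROUND(v0, v1, v2, v3) do {                               \
                v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
                v2 += v3; v3 = rol64(v3, 16); v3 ^= v2;                     \
                v0 += v3; v3 = rol64(v3, 21); v3 ^= v0;                     \
                v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
        } while (0)

        struct siprand_state { uint64_t v0, v1, v2, v3; }; /* 128-bit seeded */

        static uint32_t siprand_u32_sketch(struct siprand_state *s)
        {
                SIPROUND(s->v0, s->v1, s->v2, s->v3);
                SIPROUND(s->v0, s->v1, s->v2, s->v3);
                /* v1 + v3 exposes neither word directly, unlike an LFSR,
                 * whose output *is* its internal state. */
                return (uint32_t)(s->v1 + s->v3);
        }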
  8. 30 July 2020: 1 commit
    • random32: update the net random state on interrupt and activity · f227e3ec
      By Willy Tarreau
      This modifies the first 32 bits out of the 128 bits of a random CPU's
      net_rand_state on interrupt or CPU activity to complicate remote
      observations that could lead to guessing the network RNG's internal
      state.
      
      Note that depending on some network devices' interrupt rate moderation
      or binding, this re-seeding might happen on every packet or even almost
      never.
      
      In addition, with NOHZ some CPUs might not even get timer interrupts,
      leaving their local state rarely updated, while they are running
      networked processes making use of the random state.  For this reason, we
      also perform this update in update_process_times() in order to at least
      update the state when there is user or system activity, since it's the
      only case we care about.
      Reported-by: Amit Klein <aksecurity@gmail.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 08 June 2020: 1 commit
  10. 08 May 2020: 2 commits
    • crypto: lib/sha1 - fold linux/cryptohash.h into crypto/sha.h · 228c4f26
      By Eric Biggers
      <linux/cryptohash.h> sounds very generic and important, like it's the
      header to include if you're doing cryptographic hashing in the kernel.
      But actually it only includes the library implementation of the SHA-1
      compression function (not even the full SHA-1).  This should basically
      never be used anymore; SHA-1 is no longer considered secure, and there
      are much better ways to do cryptographic hashing in the kernel.
      
      Remove this header and fold it into <crypto/sha.h> which already
      contains constants and functions for SHA-1 (along with SHA-2).
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lib/sha1 - rename "sha" to "sha1" · 6b0b0fa2
      By Eric Biggers
      The library implementation of the SHA-1 compression function is
      confusingly called just "sha_transform()".  Alongside it are some "SHA_"
      constants and "sha_init()".  Presumably these are left over from a time
      when SHA just meant SHA-1.  But now there are also SHA-2 and SHA-3, and
      moreover SHA-1 is now considered insecure and thus shouldn't be used.
      
      Therefore, rename these functions and constants to make it very clear
      that they are for SHA-1.  Also add a comment to make it clear that these
      shouldn't be used.
      
      For the extra-misleadingly named "SHA_MESSAGE_BYTES", rename it to
      SHA1_BLOCK_SIZE and define it to just '64' rather than '(512/8)' so that
      it matches the same definition in <crypto/sha.h>.  This prepares for
      merging <linux/cryptohash.h> into <crypto/sha.h>.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  11. 27 April 2020: 1 commit
  12. 19 March 2020: 1 commit
    • random: avoid warnings for !CONFIG_NUMA builds · ab9a7e27
      By Mark Rutland
      As crng_initialize_secondary() is only called by do_numa_crng_init(),
      and the latter is under ifdeffery for CONFIG_NUMA, when CONFIG_NUMA is
      not selected the compiler will warn that the former is unused:
      
      | drivers/char/random.c:820:13: warning: 'crng_initialize_secondary' defined but not used [-Wunused-function]
      |   820 | static void crng_initialize_secondary(struct crng_state *crng)
      |       |             ^~~~~~~~~~~~~~~~~~~~~~~~~
      
      Stephen reports that this happens for x86_64 allnoconfig builds.
      
      We could move crng_initialize_secondary() and crng_init_try_arch() under
      the CONFIG_NUMA ifdeffery, but this has the unfortunate property of
      separating them from crng_initialize_primary() and
      crng_init_try_arch_early() respectively. Instead, let's mark
      crng_initialize_secondary() as __maybe_unused.
      
      Link: https://lore.kernel.org/r/20200310121747.GA49602@lakrids.cambridge.arm.com
      Fixes: 5cbe0f13 ("random: split primary/secondary crng init paths")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
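
      The resulting one-line annotation, in context:

        /* Stays adjacent to crng_initialize_primary(); __maybe_unused tells
         * the compiler the !CONFIG_NUMA case is intentional, no ifdeffery. */
        static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
        {
                /* ... */
        }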
  13. 28 February 2020: 2 commits
    • random: fix data races at timer_rand_state · e00d996a
      By Qian Cai
      Fields in "struct timer_rand_state" could be accessed concurrently.
      Lockless plain reads and writes result in data races. Fix them by adding
      pairs of READ|WRITE_ONCE(). The data races were reported by KCSAN,
      
       BUG: KCSAN: data-race in add_timer_randomness / add_timer_randomness
      
       write to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 22:
        add_timer_randomness+0x100/0x190
        add_timer_randomness at drivers/char/random.c:1152
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/22/0.
       irq event stamp: 32871382
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       read to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 2:
        add_timer_randomness+0xe8/0x190
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/2/0.
       irq event stamp: 37846304
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       Reported by Kernel Concurrency Sanitizer on:
       Hardware name: HP ProLiant BL660c Gen9, BIOS I38 10/17/2018
      
      Link: https://lore.kernel.org/r/1582648024-13111-1-git-send-email-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • random: always use batched entropy for get_random_u{32,64} · 69efea71
      By Jason A. Donenfeld
      It turns out that RDRAND is pretty slow. Comparing these two
      constructions:
      
        for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
          arch_get_random_long(&ret);
      
      and
      
        long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
        extract_crng((u8 *)buf);
      
      it amortizes out to 352 cycles per long for the top one and 107 cycles
      per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H.
      
      And importantly, the top one has the drawback of not benefiting from the
      real rng, whereas the bottom one has all the nice benefits of using our
      own chacha rng. As get_random_u{32,64} gets used in more places (perhaps
      beyond what it was originally intended for when it was introduced as
      get_random_{int,long} back in the md5 monstrosity era), it seems like it
      might be a good thing to strengthen its posture a tiny bit. Doing this
      should only be stronger and not any weaker because that pool is already
      initialized with a bunch of rdrand data (when available). This way, we
      get the benefits of the hardware rng as well as our own rng.
      
      Another benefit of this is that we no longer hit pitfalls of the recent
      stream of AMD bugs in RDRAND. One often used code pattern for various
      things is:
      
        do {
        	val = get_random_u32();
        } while (hash_table_contains_key(val));
      
      That recent AMD bug rendered that pattern useless, whereas we're really
      very certain that chacha20 output will give pretty distributed numbers,
      no matter what.
      
      So, this simplification seems better both from a security perspective
      and from a performance perspective.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
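
      The batching shape, sketched (per-CPU in the kernel, a plain static
      here; extract_crng() stands in for producing one ChaCha block):

        #include <stdint.h>

        #define CHACHA_BLOCK_SIZE 64    /* one ChaCha20 output block */
        #define BATCH_WORDS (CHACHA_BLOCK_SIZE / sizeof(uint32_t))

        extern void extract_crng(uint8_t out[CHACHA_BLOCK_SIZE]); /* assumed */

        static struct {
                uint32_t entropy[BATCH_WORDS];
                unsigned int position;
        } batch;

        /* Refill from one extracted block, then hand out 4-byte slices,
         * amortizing the ~107-cycles-per-long construction even further. */
        static uint32_t get_random_u32_sketch(void)
        {
                if (batch.position == 0) {
                        extract_crng((uint8_t *)batch.entropy);
                        batch.position = BATCH_WORDS;
                }
                return batch.entropy[--batch.position];
        }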