1. 07 January 2022, 3 commits
  2. 02 April 2021, 2 commits
  3. 04 February 2021, 1 commit
  4. 21 January 2021, 1 commit
    • random: avoid arch_get_random_seed_long() when collecting IRQ randomness · 390596c9
      Authored by Ard Biesheuvel
      When reseeding the CRNG periodically, arch_get_random_seed_long() is
      called to obtain entropy from an architecture specific source if one
      is implemented. In most cases, these are special instructions, but in
      some cases, such as on ARM, we may want to back this using firmware
      calls, which are considerably more expensive.
      
      Another call to arch_get_random_seed_long() exists in the CRNG driver,
      in add_interrupt_randomness(), which collects entropy by capturing
      inter-interrupt timing and relying on interrupt jitter to provide
      random bits. This is done by keeping a per-CPU state, and mixing in
      the IRQ number, the cycle counter and the return address every time an
      interrupt is taken, and mixing this per-CPU state into the entropy pool
      every 64 invocations, or at least once per second. The entropy that is
      gathered this way is credited as 1 bit of entropy. Every time this
      happens, arch_get_random_seed_long() is invoked, and the result is
      mixed in as well, and also credited with 1 bit of entropy.
      
      This means that arch_get_random_seed_long() is called at least once
      per second on every CPU, which seems excessive, and doesn't really
      scale, especially in a virtualization scenario where CPUs may be
      oversubscribed: in cases where arch_get_random_seed_long() is backed
      by an instruction that actually goes back to a shared hardware entropy
      source (such as RNDRRS on ARM), we will end up hitting it hundreds of
      times per second.
      
      So let's drop the call to arch_get_random_seed_long() from
      add_interrupt_randomness(), and instead, rely on crng_reseed() to call
      the arch hook to get random seed material from the platform.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Andre Przywara <andre.przywara@arm.com>
      Tested-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      390596c9
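The collection scheme the commit describes — per-CPU state, mixed on every interrupt, spilled into the entropy pool every 64 events — can be sketched in plain userspace C. This is a hedged stand-in: fast_mix() below is a toy stirring function, not the kernel's, and the once-per-second spill path is omitted.

```c
#include <stdint.h>

/* Hypothetical stand-in for the kernel's per-CPU fast pool. */
struct fast_pool {
    uint32_t pool[4];
    unsigned count;   /* events mixed since the last spill */
};

/* Toy mixing function: NOT the kernel's fast_mix(), just a stand-in
 * that stirs the IRQ number, cycle counter and return address in. */
static void fast_mix(struct fast_pool *f, uint32_t irq,
                     uint64_t cycles, uintptr_t ret_ip)
{
    f->pool[0] ^= irq;
    f->pool[1] ^= (uint32_t)cycles;
    f->pool[2] ^= (uint32_t)(cycles >> 32);
    f->pool[3] ^= (uint32_t)ret_ip;
    for (int i = 0; i < 4; i++) {
        f->pool[i] = f->pool[i] * 2654435761u + 1;  /* stir */
        f->pool[(i + 1) % 4] ^= f->pool[i] >> 16;
    }
    f->count++;
}

/* Mirrors the policy in the commit message: spill the per-CPU state
 * every 64 events, crediting 1 bit of entropy per spill. */
static int add_interrupt_randomness(struct fast_pool *f, uint32_t irq,
                                    uint64_t cycles, uintptr_t ret_ip)
{
    fast_mix(f, irq, cycles, ret_ip);
    if (f->count >= 64) {
        f->count = 0;
        return 1;  /* caller mixes f->pool into the input pool */
    }
    return 0;
}
```

With the arch_get_random_seed_long() call removed from this path, the expensive firmware-backed seed fetch happens only in crng_reseed(), not on every 64th interrupt per CPU.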
  5. 20 November 2020, 1 commit
    • crypto: sha - split sha.h into sha1.h and sha2.h · a24d22b2
      Authored by Eric Biggers
      Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2,
      and <crypto/sha3.h> contains declarations for SHA-3.
      
      This organization is inconsistent, but more importantly SHA-1 is no
      longer considered to be cryptographically secure.  So to the extent
      possible, SHA-1 shouldn't be grouped together with any of the other SHA
      versions, and usage of it should be phased out.
      
      Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and
      <crypto/sha2.h>, and make everyone explicitly specify whether they want
      the declarations for SHA-1, SHA-2, or both.
      
      This avoids making the SHA-1 declarations visible to files that don't
      want anything to do with SHA-1.  It also prepares for potentially moving
      sha1.h into a new insecure/ or dangerous/ directory.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a24d22b2
  6. 25 October 2020, 1 commit
    • random32: make prandom_u32() output unpredictable · c51f8f88
      Authored by George Spelvin
      Non-cryptographic PRNGs may have great statistical properties, but
      are usually trivially predictable to someone who knows the algorithm,
      given a small sample of their output.  An LFSR like prandom_u32() is
      particularly simple, even if the sample is widely scattered bits.
      
      It turns out the network stack uses prandom_u32() for some things like
      random port numbers which it would prefer are *not* trivially predictable.
      Predictability led to a practical DNS spoofing attack.  Oops.
      
      This patch replaces the LFSR with a homebrew cryptographic PRNG based
      on the SipHash round function, which is in turn seeded with 128 bits
      of strong random key.  (The authors of SipHash have *not* been consulted
      about this abuse of their algorithm.)  Speed is prioritized over security;
      attacks are rare, while performance is always wanted.
      
      Replacing all callers of prandom_u32() is the quick fix.
      Whether to reinstate a weaker PRNG for uses which can tolerate it
      is an open question.
      
      Commit f227e3ec ("random32: update the net random state on interrupt
      and activity") was an earlier attempt at a solution.  This patch replaces
      it.
      Reported-by: Amit Klein <aksecurity@gmail.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: tytso@mit.edu
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Marc Plumb <lkml.mplumb@gmail.com>
      Fixes: f227e3ec ("random32: update the net random state on interrupt and activity")
      Signed-off-by: George Spelvin <lkml@sdf.org>
      Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/
      [ willy: partial reversal of f227e3ec; moved SIPROUND definitions
        to prandom.h for later use; merged George's prandom_seed() proposal;
        inlined siprand_u32(); replaced the net_rand_state[] array with 4
        members to fix a build issue; cosmetic cleanups to make checkpatch
        happy; fixed RANDOM32_SELFTEST build ]
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      c51f8f88
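The replacement generator is built around the standard SipHash round function. Below is a userspace sketch of the idea; the seeding and output rule follow the commit's description only loosely, and the kernel's actual siprand_u32() and key handling differ in detail.

```c
#include <stdint.h>

static inline uint64_t rol64(uint64_t v, int n)
{
    return (v << n) | (v >> (64 - n));
}

/* The standard SipHash round over the 4x64-bit state; the kernel keeps
 * an equivalent SIPROUND macro in prandom.h for reuse. */
#define SIPROUND(v0, v1, v2, v3) do {                           \
    v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32);  \
    v2 += v3; v3 = rol64(v3, 16); v3 ^= v2;                      \
    v0 += v3; v3 = rol64(v3, 21); v3 ^= v0;                      \
    v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32);  \
} while (0)

/* Per-CPU generator state, seeded from 128 bits of strong key
 * material (a sketch of the approach, not the kernel's exact seeding). */
struct siprand_state { uint64_t v0, v1, v2, v3; };

static void siprand_seed(struct siprand_state *s, uint64_t k0, uint64_t k1)
{
    /* SipHash key-injection constants */
    s->v0 = k0 ^ 0x736f6d6570736575ULL;
    s->v1 = k1 ^ 0x646f72616e646f6dULL;
    s->v2 = k0 ^ 0x6c7967656e657261ULL;
    s->v3 = k1 ^ 0x7465646279746573ULL;
}

static uint32_t siprand_u32(struct siprand_state *s)
{
    SIPROUND(s->v0, s->v1, s->v2, s->v3);
    SIPROUND(s->v0, s->v1, s->v2, s->v3);
    return (uint32_t)(s->v1 + s->v3);
}
```

Unlike the old LFSR, recovering this state from outputs requires inverting SipHash rounds keyed by 128 unknown bits, which is what buys the unpredictability the commit is after.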
  7. 30 July 2020, 1 commit
    • random32: update the net random state on interrupt and activity · f227e3ec
      Authored by Willy Tarreau
      This modifies the first 32 bits out of the 128 bits of a random CPU's
      net_rand_state on interrupt or CPU activity to complicate remote
      observations that could lead to guessing the network RNG's internal
      state.
      
      Note that depending on some network devices' interrupt rate moderation
      or binding, this re-seeding might happen on every packet or even almost
      never.
      
      In addition, with NOHZ some CPUs might not even get timer interrupts,
      leaving their local state rarely updated, while they are running
      networked processes making use of the random state.  For this reason, we
      also perform this update in update_process_times() in order to at least
      update the state when there is user or system activity, since it's the
      only case we care about.
      Reported-by: Amit Klein <aksecurity@gmail.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f227e3ec
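A minimal model of that update: only the first 32-bit word of the 128-bit per-CPU state is perturbed. The field names below are illustrative, and the noise argument stands in for the interrupt-time values the kernel folds in.

```c
#include <stdint.h>

/* Old-style 4x32-bit prandom state (128 bits total). */
struct rnd_state { uint32_t s1, s2, s3, s4; };

/* Sketch of the interrupt-path update: fold hard-to-observe timing
 * noise into just the first word of a CPU's state, complicating
 * remote reconstruction of the full 128-bit state. */
static void prandom_perturb_sketch(struct rnd_state *st, uint32_t noise)
{
    st->s1 ^= noise;  /* only the first 32 bits are touched */
}
```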
  8. 08 June 2020, 1 commit
  9. 08 May 2020, 2 commits
    • crypto: lib/sha1 - fold linux/cryptohash.h into crypto/sha.h · 228c4f26
      Authored by Eric Biggers
      <linux/cryptohash.h> sounds very generic and important, like it's the
      header to include if you're doing cryptographic hashing in the kernel.
      But actually it only includes the library implementation of the SHA-1
      compression function (not even the full SHA-1).  This should basically
      never be used anymore; SHA-1 is no longer considered secure, and there
      are much better ways to do cryptographic hashing in the kernel.
      
      Remove this header and fold it into <crypto/sha.h> which already
      contains constants and functions for SHA-1 (along with SHA-2).
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      228c4f26
    • crypto: lib/sha1 - rename "sha" to "sha1" · 6b0b0fa2
      Authored by Eric Biggers
      The library implementation of the SHA-1 compression function is
      confusingly called just "sha_transform()".  Alongside it are some "SHA_"
      constants and "sha_init()".  Presumably these are left over from a time
      when SHA just meant SHA-1.  But now there are also SHA-2 and SHA-3, and
      moreover SHA-1 is now considered insecure and thus shouldn't be used.
      
      Therefore, rename these functions and constants to make it very clear
      that they are for SHA-1.  Also add a comment to make it clear that these
      shouldn't be used.
      
      For the extra-misleadingly named "SHA_MESSAGE_BYTES", rename it to
      SHA1_BLOCK_SIZE and define it to just '64' rather than '(512/8)' so that
      it matches the same definition in <crypto/sha.h>.  This prepares for
      merging <linux/cryptohash.h> into <crypto/sha.h>.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      6b0b0fa2
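The renamed constant is trivial to sanity-check. The digest size shown alongside is the standard SHA-1 value, included here for context rather than taken from this commit.

```c
/* After the rename: the block size is spelled SHA1_BLOCK_SIZE and
 * defined as plain 64, matching the definition in <crypto/sha.h>. */
#define SHA1_BLOCK_SIZE  64   /* was SHA_MESSAGE_BYTES, i.e. (512/8) */
#define SHA1_DIGEST_SIZE 20   /* standard 160-bit SHA-1 digest */
```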
  10. 27 April 2020, 1 commit
  11. 19 March 2020, 1 commit
    • random: avoid warnings for !CONFIG_NUMA builds · ab9a7e27
      Authored by Mark Rutland
      As crng_initialize_secondary() is only called by do_numa_crng_init(),
      and the latter is under ifdeffery for CONFIG_NUMA, when CONFIG_NUMA is
      not selected the compiler will warn that the former is unused:
      
      | drivers/char/random.c:820:13: warning: 'crng_initialize_secondary' defined but not used [-Wunused-function]
      |   820 | static void crng_initialize_secondary(struct crng_state *crng)
      |       |             ^~~~~~~~~~~~~~~~~~~~~~~~~
      
      Stephen reports that this happens for x86_64 noallconfig builds.
      
      We could move crng_initialize_secondary() and crng_init_try_arch() under
      the CONFIG_NUMA ifdeffery, but this has the unfortunate property of
      separating them from crng_initialize_primary() and
      crng_init_try_arch_early() respectively. Instead, let's mark
      crng_initialize_secondary() as __maybe_unused.
      
      Link: https://lore.kernel.org/r/20200310121747.GA49602@lakrids.cambridge.arm.com
      Fixes: 5cbe0f13 ("random: split primary/secondary crng init paths")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      ab9a7e27
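The fix pattern is easy to reproduce outside the kernel; __maybe_unused is a kernel macro for the compiler attribute shown below (CONFIG_NUMA here is just a placeholder config switch).

```c
/* The kernel's __maybe_unused expands to this GCC/Clang attribute. */
#define __maybe_unused __attribute__((__unused__))

/* With the attribute, the compiler stays quiet under -Wunused-function
 * even when no caller is compiled in (the !CONFIG_NUMA case above). */
static __maybe_unused int helper_only_used_sometimes(int x)
{
    return x * 2;
}

/* A caller guarded by a config macro, mimicking do_numa_crng_init(). */
#ifdef CONFIG_NUMA
static int do_numa_init(void) { return helper_only_used_sometimes(21); }
#endif
```

This keeps the helper next to its primary counterpart in the source, which is exactly why the commit preferred the attribute over moving the function under the ifdeffery.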
  12. 28 February 2020, 4 commits
    • random: fix data races at timer_rand_state · e00d996a
      Authored by Qian Cai
      Fields in "struct timer_rand_state" could be accessed concurrently.
      Lockless plain reads and writes result in data races. Fix them by adding
      pairs of READ|WRITE_ONCE(). The data races were reported by KCSAN,
      
       BUG: KCSAN: data-race in add_timer_randomness / add_timer_randomness
      
       write to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 22:
        add_timer_randomness+0x100/0x190
        add_timer_randomness at drivers/char/random.c:1152
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/22/0.
       irq event stamp: 32871382
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       read to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 2:
        add_timer_randomness+0xe8/0x190
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/2/0.
       irq event stamp: 37846304
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       Reported by Kernel Concurrency Sanitizer on:
       Hardware name: HP ProLiant BL660c Gen9, BIOS I38 10/17/2018
      
      Link: https://lore.kernel.org/r/1582648024-13111-1-git-send-email-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      e00d996a
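Userspace equivalents of the annotations used in the fix: the kernel macros are more elaborate, but these volatile casts capture the idea of a single marked access that the compiler cannot tear, fuse, or re-load. The struct and function are illustrative, not the kernel's.

```c
#include <stdint.h>

/* Simplified stand-ins for the kernel's READ_ONCE/WRITE_ONCE: force
 * one untorn access through a volatile-qualified pointer. */
#define READ_ONCE(x)      (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))

struct timer_rand_state_sketch {
    uint64_t last_time;  /* accessed from concurrent interrupt paths */
};

static uint64_t record_time(struct timer_rand_state_sketch *s, uint64_t now)
{
    /* Plain loads/stores here were the KCSAN-reported races; the
     * annotated pair makes each access a single marked operation. */
    uint64_t prev = READ_ONCE(s->last_time);
    WRITE_ONCE(s->last_time, now);
    return now - prev;
}
```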
    • random: always use batched entropy for get_random_u{32,64} · 69efea71
      Authored by Jason A. Donenfeld
      It turns out that RDRAND is pretty slow. Comparing these two
      constructions:
      
        for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
          arch_get_random_long(&ret);
      
      and
      
        long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
        extract_crng((u8 *)buf);
      
      it amortizes out to 352 cycles per long for the top one and 107 cycles
      per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H.
      
      And importantly, the top one has the drawback of not benefiting from the
      real rng, whereas the bottom one has all the nice benefits of using our
      own chacha rng. As get_random_u{32,64} gets used in more places (perhaps
      beyond what it was originally intended for when it was introduced as
      get_random_{int,long} back in the md5 monstrosity era), it seems like it
      might be a good thing to strengthen its posture a tiny bit. Doing this
      should only be stronger and not any weaker because that pool is already
      initialized with a bunch of rdrand data (when available). This way, we
      get the benefits of the hardware rng as well as our own rng.
      
      Another benefit of this is that we no longer hit pitfalls of the recent
      stream of AMD bugs in RDRAND. One often used code pattern for various
      things is:
      
        do {
        	val = get_random_u32();
        } while (hash_table_contains_key(val));
      
      That recent AMD bug rendered that pattern useless, whereas we're really
      very certain that chacha20 output will give pretty distributed numbers,
      no matter what.
      
      So, this simplification seems better both from a security perspective
      and from a performance perspective.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      69efea71
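The batching idea — fill a whole block from the CRNG once, then hand it out in u32-sized pieces — can be sketched as below. extract_block() is a placeholder for extract_crng(); a trivial byte counter stands in for real random output so the sketch stays self-contained.

```c
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE 64  /* bytes per refill, like CHACHA_BLOCK_SIZE */

/* Placeholder for extract_crng(): fills buf with "random" bytes.
 * A counter here, so the sketch is self-contained and deterministic. */
static void extract_block(uint8_t *buf, size_t len)
{
    static uint8_t ctr;
    for (size_t i = 0; i < len; i++)
        buf[i] = ctr++;
}

struct batched_u32 {
    uint32_t buf[BLOCK_SIZE / sizeof(uint32_t)];
    unsigned position;  /* next unconsumed word; 0 means "refill first" */
};

/* One cheap block refill amortized over 16 calls, instead of one
 * expensive RDRAND-style fetch per call: the ~3x win the commit
 * measured (107 vs 352 cycles per long). */
static uint32_t get_random_u32_sketch(struct batched_u32 *b)
{
    if (b->position == 0)
        extract_block((uint8_t *)b->buf, sizeof(b->buf));
    uint32_t ret = b->buf[b->position];
    b->position = (b->position + 1) % (BLOCK_SIZE / sizeof(uint32_t));
    return ret;
}
```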
    • random: add arch_get_random_*long_early() · 253d3194
      Authored by Mark Rutland
      Some architectures (e.g. arm64) can have heterogeneous CPUs, and the
      boot CPU may be able to provide entropy while secondary CPUs cannot. On
      such systems, arch_get_random_long() and arch_get_random_seed_long()
      will fail unless support for RNG instructions has been detected on all
      CPUs. This prevents the boot CPU from being able to provide
      (potentially) trusted entropy when seeding the primary CRNG.
      
      To make it possible to seed the primary CRNG from the boot CPU without
      adversely affecting the runtime versions of arch_get_random_long() and
      arch_get_random_seed_long(), this patch adds new early versions of the
      functions used when initializing the primary CRNG.
      
      Default implementations are provided atop of the existing
      arch_get_random_long() and arch_get_random_seed_long() so that only
      architectures with such constraints need to provide the new helpers.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Link: https://lore.kernel.org/r/20200210130015.17664-3-mark.rutland@arm.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      253d3194
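The default-atop-runtime wiring the commit describes looks roughly like this. The bool-returning shape follows the kernel's arch hooks, but the "RNG" is stubbed and the override mechanism is reduced to a macro guard for illustration.

```c
#include <stdbool.h>

/* Runtime hook: on a heterogeneous system this only succeeds once
 * every CPU is known to support the RNG instructions. Stubbed here. */
static bool arch_get_random_long(unsigned long *v)
{
    *v = 0x1234;  /* pretend the instruction worked */
    return true;
}

/* Architectures whose boot CPU can provide entropy before all
 * secondaries are validated define their own early variant; everyone
 * else falls through to the runtime behaviour via this default. */
#ifndef arch_get_random_long_early
static bool arch_get_random_long_early(unsigned long *v)
{
    return arch_get_random_long(v);
}
#endif
```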
    • random: split primary/secondary crng init paths · 5cbe0f13
      Authored by Mark Rutland
      Currently crng_initialize() is used for both the primary CRNG and
      secondary CRNGs. While we wish to share common logic, we need to do a
      number of additional things for the primary CRNG, and this would be
      easier to deal with were these handled in separate functions.
      
      This patch splits crng_initialize() into crng_initialize_primary() and
      crng_initialize_secondary(), with common logic factored out into a
      crng_init_try_arch() helper.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Link: https://lore.kernel.org/r/20200210130015.17664-2-mark.rutland@arm.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      5cbe0f13
  13. 08 January 2020, 14 commits
  14. 18 December 2019, 1 commit
  15. 17 November 2019, 1 commit
  16. 23 October 2019, 1 commit
  17. 03 October 2019, 1 commit
    • char/random: Add a newline at the end of the file · 3fd57e7a
      Authored by Borislav Petkov
      On Tue, Oct 01, 2019 at 10:14:40AM -0700, Linus Torvalds wrote:
      > The previous state of the file didn't have that 0xa at the end, so you get that
      >
      >
      >   -EXPORT_SYMBOL_GPL(add_bootloader_randomness);
      >   \ No newline at end of file
      >   +EXPORT_SYMBOL_GPL(add_bootloader_randomness);
      >
      > which is "the '-' line doesn't have a newline, the '+' line does" marker.
      
      Aaha, that makes total sense, thanks for explaining. Oh well, let's fix
      it then so that people don't scratch heads like me.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3fd57e7a
  18. 30 September 2019, 1 commit
    • random: try to actively add entropy rather than passively wait for it · 50ee7529
      Authored by Linus Torvalds
      For 5.3 we had to revert a nice ext4 IO pattern improvement, because it
      caused a bootup regression due to lack of entropy at bootup together
      with arguably broken user space that was asking for secure random
      numbers when it really didn't need to.
      
      See commit 72dbcf72 (Revert "ext4: make __ext4_get_inode_loc plug").
      
      This aims to solve the issue by actively generating entropy noise using
      the CPU cycle counter when waiting for the random number generator to
      initialize.  This only works when you have a high-frequency time stamp
      counter available, but that's the case on all modern x86 CPUs, and on
      most other modern CPUs too.
      
      What we do is to generate jitter entropy from the CPU cycle counter
      under a somewhat complex load: calling the scheduler while also
      guaranteeing a certain amount of timing noise by also triggering a
      timer.
      
      I'm sure we can tweak this, and that people will want to look at other
      alternatives, but there's been a number of papers written on jitter
      entropy, and this should really be fairly conservative by crediting one
      bit of entropy for every timer-induced jump in the cycle counter.  Not
      because the timer itself would be all that unpredictable, but because
      the interaction between the timer and the loop is going to be.
      
      Even if (and perhaps particularly if) the timer actually happens on
      another CPU, the cacheline interaction between the loop that reads the
      cycle counter and the timer itself firing is going to add perturbations
      to the cycle counter values that get mixed into the entropy pool.
      
      As Thomas pointed out, with a modern out-of-order CPU, even quite simple
      loops show a fair amount of hard-to-predict timing variability even in
      the absence of external interrupts.  But this tries to take that further
      by actually having a fairly complex interaction.
      
      This is not going to solve the entropy issue for architectures that have
      no CPU cycle counter, but it's not clear how (and if) that is solvable,
      and the hardware in question is largely starting to be irrelevant.  And
      by doing this we can at least avoid some of the even more contentious
      approaches (like making the entropy waiting time out in order to avoid
      the possibly unbounded waiting).
      
      Cc: Ahmed Darwish <darwish.07@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Nicholas Mc Guire <hofrat@opentech.at>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Alexander E. Patrakov <patrakov@gmail.com>
      Cc: Lennart Poettering <mzxreary@0pointer.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      50ee7529
  19. 09 September 2019, 1 commit
    • random: Use wait_event_freezable() in add_hwgenerator_randomness() · 59b56948
      Authored by Stephen Boyd
      Sebastian reports that after commit ff296293 ("random: Support freezable
      kthreads in add_hwgenerator_randomness()") we can call might_sleep() when the
      task state is TASK_INTERRUPTIBLE (state=1). This leads to the following warning.
      
       do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000349d1489>] prepare_to_wait_event+0x5a/0x180
       WARNING: CPU: 0 PID: 828 at kernel/sched/core.c:6741 __might_sleep+0x6f/0x80
       Modules linked in:
      
       CPU: 0 PID: 828 Comm: hwrng Not tainted 5.3.0-rc7-next-20190903+ #46
       RIP: 0010:__might_sleep+0x6f/0x80
      
       Call Trace:
        kthread_freezable_should_stop+0x1b/0x60
        add_hwgenerator_randomness+0xdd/0x130
        hwrng_fillfn+0xbf/0x120
        kthread+0x10c/0x140
        ret_from_fork+0x27/0x50
      
      We shouldn't call kthread_freezable_should_stop() from deep within the
      wait_event code because the task state is still set as
      TASK_INTERRUPTIBLE instead of TASK_RUNNING and
      kthread_freezable_should_stop() will try to call into the freezer with
      the task in the wrong state. Use wait_event_freezable() instead so that
      it calls schedule() in the right place and tries to enter the freezer
      when the task state is TASK_RUNNING instead.
      Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Keerthy <j-keerthy@ti.com>
      Fixes: ff296293 ("random: Support freezable kthreads in add_hwgenerator_randomness()")
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      59b56948
  20. 23 August 2019, 1 commit
    • fdt: add support for rng-seed · 428826f5
      Authored by Hsin-Yi Wang
      Introduce a chosen node, rng-seed: entropy that the bootloader can pass
      to the kernel very early to increase initial device randomness. The
      bootloader should provide this entropy, and the value is read from
      /chosen/rng-seed in the DT.

      Obtain of_fdt_crc32 for the CRC check after early_init_dt_scan_nodes(),
      since early_init_dt_scan_chosen() modifies the fdt to erase rng-seed.

      Add a new interface, add_bootloader_randomness(), for the rng-seed use
      case. Depending on whether the seed is trustworthy, it is passed to
      add_hwgenerator_randomness(); otherwise it is passed to
      add_device_randomness(). The decision is controlled by the kernel
      config option RANDOM_TRUST_BOOTLOADER.
      Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org>
      Reviewed-by: Stephen Boyd <swboyd@chromium.org>
      Reviewed-by: Rob Herring <robh@kernel.org>
      Reviewed-by: Theodore Ts'o <tytso@mit.edu> # drivers/char/random.c
      Signed-off-by: Will Deacon <will@kernel.org>
      428826f5