1. 20 Sep 2019, 1 commit
    • hwrng: core - don't wait on add_early_randomness() · 78887832
      Authored by Laurent Vivier
      add_early_randomness() is called by hwrng_register() when the
      hardware is added. If this hardware and its module are present
      at boot and no data is available, the boot hangs until data
      become available and cannot be interrupted.
      
      For instance, in the case of virtio-rng, the host may not be
      able to provide enough entropy for all the guests.
      
      There are two easy ways to reproduce the problem, but both rely on
      misconfiguration of the hypervisor or the egd daemon:
      
      - the virtio-rng device is configured to connect to the egd daemon of
      the host, but the daemon is not connected when the virtio-rng driver
      asks for data;
      
      - the virtio-rng device is configured to connect to the egd daemon of
      the host, but the egd daemon doesn't provide data.
      
      The guest kernel will hang at boot until the virtio-rng driver provides
      enough data.
      
      To avoid that, call rng_get_data() in non-blocking mode (wait=0)
      from add_early_randomness().
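      
      A minimal sketch of the non-blocking variant (the helper names follow
      the hwrng core, but treat this as an illustration of the idea rather
      than the exact upstream diff):
      
      	/* Read a little early randomness without blocking: wait = 0 makes
      	 * rng_get_data() return immediately with whatever is available
      	 * (possibly nothing) instead of sleeping until the device has data.
      	 */
      	static void add_early_randomness(struct hwrng *rng)
      	{
      		unsigned char bytes[16];
      		int bytes_read;
      
      		mutex_lock(&reading_mutex);
      		bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 0);
      		mutex_unlock(&reading_mutex);
      		if (bytes_read > 0)
      			add_device_randomness(bytes, bytes_read);
      	}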
      Signed-off-by: Laurent Vivier <lvivier@redhat.com>
      Fixes: d9e79726 ("hwrng: add randomness to system from rng...")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  2. 15 Aug 2019, 1 commit
    • hwrng: core - Freeze khwrng thread during suspend · 03a3bb7a
      Authored by Stephen Boyd
      The hwrng_fill() function can run while devices are suspending and
      resuming. If the hwrng is behind a bus such as i2c or SPI and that bus
      is suspended, the hwrng may hang the bus while attempting to add some
      randomness. It's been observed on ChromeOS devices with suspend-to-idle
      (s2idle) and an i2c based hwrng that this kthread may run and ask the
      hwrng device for randomness before the i2c bus has been resumed.
      
      Let's make this kthread freezable so that we don't try to touch the
      hwrng during suspend/resume. This ensures that we can't cause the hwrng
      backing driver to get into a bad state because the device is guaranteed
      to be resumed before the hwrng kthread is thawed.
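      
      A minimal sketch of the idea using the kernel's freezer helpers
      (illustrative; not the exact upstream diff):
      
      	#include <linux/freezer.h>
      	#include <linux/kthread.h>
      
      	/* Mark the fill thread freezable so the freezer parks it across
      	 * suspend/resume; the backing bus (i2c, SPI, ...) is then resumed
      	 * before the thread can ask the hwrng for more data.
      	 */
      	static int hwrng_fillfn(void *unused)
      	{
      		set_freezable();
      
      		while (!kthread_freezable_should_stop(NULL)) {
      			/* ... read from the current rng and feed the pool ... */
      		}
      		return 0;
      	}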
      
      Cc: Andrey Pronin <apronin@chromium.org>
      Cc: Duncan Laurie <dlaurie@chromium.org>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guenter Roeck <groeck@chromium.org>
      Cc: Alexander Steffen <Alexander.Steffen@infineon.com>
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  3. 15 Jul 2019, 1 commit
  4. 05 Oct 2018, 1 commit
    • hwrng: core - document the quality field · fae29f13
      Authored by Michael S. Tsirkin
      The quality field is currently documented as being 'per mill'. In fact,
      the math involved is:
      
                      add_hwgenerator_randomness((void *)rng_fillbuf, rc,
                                                 rc * current_quality * 8 >> 10);
      
      thus the actual definition is "bits of entropy per 1024 bits of input".
      
      The current documentation seems to have confused multiple people
      in the past; let's fix the documentation to match the code.
      
      An alternative is to change the core to match driver expectations, replacing
      	rc * current_quality * 8 >> 10
      with
      	rc * current_quality / 1000
      but that has performance costs, so it probably isn't a good option.
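      
      For illustration (the numbers below are invented for the example, not
      taken from any driver): with rc = 32 bytes read and current_quality =
      700, the core credits
      
      	32 * 700 * 8 >> 10 = 179200 >> 10 = 175 bits of entropy,
      
      i.e. 700 bits of entropy per 1024 bits of input, whereas a literal
      "per mill" reading (32 * 8 * 700 / 1000) would credit 179 bits.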
      
      Fixes: 0f734e6e ("hwrng: add per-device entropy derating")
      Reported-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  5. 15 Jun 2018, 1 commit
    • hwrng: core - Always drop the RNG in hwrng_unregister() · 837bf7cc
      Authored by Michael Büsch
      enable_best_rng() is used in hwrng_unregister() to switch away from the
      currently active RNG if that is the one being removed. However,
      enable_best_rng() might fail if the next RNG's init routine fails. In
      that case enable_best_rng() will return an error code and the currently
      active RNG will remain active. After unregistering, this might lead to
      crashes due to use-after-free.
      
      Fix this by dropping the currently active RNG if enable_best_rng()
      failed. This will result in no RNG being active if the next-best one
      failed to initialize.
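      
      Sketched against the hwrng core's helpers (an illustration of the
      approach, not the exact upstream diff):
      
      	/* In hwrng_unregister(), after taking the rng off the list: if the
      	 * removed rng was the active one and no replacement could be
      	 * initialized, drop it instead of keeping a soon-to-be-freed
      	 * device active.
      	 */
      	if (current_rng == rng) {
      		if (enable_best_rng()) {
      			drop_current_rng();
      			cur_rng_set_by_user = 0;
      		}
      	}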
      
      This problem was introduced by 142a27f0
      
      Fixes: 142a27f0 ("hwrng: core - Reset user selected rng by...")
      Reported-by: Wirz <spam@lukas-wirz.de>
      Tested-by: Wirz <spam@lukas-wirz.de>
      Signed-off-by: Michael Büsch <m@bues.ch>
      Cc: stable@vger.kernel.org
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  6. 22 Dec 2017, 1 commit
  7. 03 Nov 2017, 1 commit
  8. 12 Oct 2017, 1 commit
  9. 18 Jul 2017, 2 commits
  10. 02 Mar 2017, 1 commit
  11. 09 Feb 2017, 1 commit
    • Revert "hwrng: core - zeroize buffers with random data" · 3b802c94
      Authored by David Daney
      This reverts commit 2cc75154.
      
      With this commit in place, I get the following on a Cavium ThunderX (arm64) system:
      
      $ dd if=/dev/hwrng bs=256 count=1 | od -t x1 -A x -v > rng-bad.txt
      1+0 records in
      1+0 records out
      256 bytes (256 B) copied, 9.1171e-05 s, 2.8 MB/s
      $ dd if=/dev/hwrng bs=256 count=1 | od -t x1 -A x -v >> rng-bad.txt
      1+0 records in
      1+0 records out
      256 bytes (256 B) copied, 9.6141e-05 s, 2.7 MB/s
      $ cat rng-bad.txt
      000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000050 00 00 00 00 37 20 46 ae d0 fc 1c 55 25 6e b0 b8
      000060 7c 7e d7 d4 00 0f 6f b2 91 1e 30 a8 fa 3e 52 0e
      000070 06 2d 53 30 be a1 20 0f aa 56 6e 0e 44 6e f4 35
      000080 b7 6a fe d2 52 70 7e 58 56 02 41 ea d1 9c 6a 6a
      000090 d1 bd d8 4c da 35 45 ef 89 55 fc 59 d5 cd 57 ba
      0000a0 4e 3e 02 1c 12 76 43 37 23 e1 9f 7a 9f 9e 99 24
      0000b0 47 b2 de e3 79 85 f6 55 7e ad 76 13 4f a0 b5 41
      0000c0 c6 92 42 01 d9 12 de 8f b4 7b 6e ae d7 24 fc 65
      0000d0 4d af 0a aa 36 d9 17 8d 0e 8b 7a 3b b6 5f 96 47
      0000e0 46 f7 d8 ce 0b e8 3e c6 13 a6 2c b6 d6 cc 17 26
      0000f0 e3 c3 17 8e 9e 45 56 1e 41 ef 29 1a a8 65 c8 3a
      000100
      000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      000050 00 00 00 00 f4 90 65 aa 8b f2 5e 31 01 53 b4 d4
      000060 06 c0 23 a2 99 3d 01 e4 b0 c1 b1 55 0f 80 63 cf
      000070 33 24 d8 3a 1d 5e cd 2c ba c0 d0 18 6f bc 97 46
      000080 1e 19 51 b1 90 15 af 80 5e d1 08 0d eb b0 6c ab
      000090 6a b4 fe 62 37 c5 e1 ee 93 c3 58 78 91 2a d5 23
      0000a0 63 50 eb 1f 3b 84 35 18 cf b2 a4 b8 46 69 9e cf
      0000b0 0c 95 af 03 51 45 a8 42 f1 64 c9 55 fc 69 76 63
      0000c0 98 9d 82 fa 76 85 24 da 80 07 29 fe 4e 76 0c 61
      0000d0 ff 23 94 4f c8 5c ce 0b 50 e8 31 bc 9d ce f4 ca
      0000e0 be ca 28 da e6 fa cc 64 1c ec a8 41 db fe 42 bd
      0000f0 a0 e2 4b 32 b4 52 ba 03 70 8e c1 8e d0 50 3a c6
      000100
      
      To my untrained mental entropy detector, the first several bytes of
      each read from /dev/hwrng do not seem very random (i.e. they are all
      zero).
      
      When I revert the patch (apply this patch), I get back to what we have
      in v4.9, which looks much more random:
      
      $ dd if=/dev/hwrng bs=256 count=1 | od -t x1 -A x -v > rng-good.txt
      1+0 records in
      1+0 records out
      256 bytes (256 B) copied, 0.000252233 s, 1.0 MB/s
      $ dd if=/dev/hwrng bs=256 count=1 | od -t x1 -A x -v >> rng-good.txt
      1+0 records in
      1+0 records out
      256 bytes (256 B) copied, 0.000113571 s, 2.3 MB/s
      $ cat rng-good.txt
      000000 75 d1 2d 19 68 1f d2 26 a1 49 22 61 66 e8 09 e5
      000010 e0 4e 10 d0 1a 2c 45 5d 59 04 79 8e e2 b7 2c 2e
      000020 e8 ad da 34 d5 56 51 3d 58 29 c7 7a 8e ed 22 67
      000030 f9 25 b9 fb c6 b7 9c 35 1f 84 21 35 c1 1d 48 34
      000040 45 7c f6 f1 57 63 1a 88 38 e8 81 f0 a9 63 ad 0e
      000050 be 5d 3e 74 2e 4e cb 36 c2 01 a8 14 e1 38 e1 bb
      000060 23 79 09 56 77 19 ff 98 e8 44 f3 27 eb 6e 0a cb
      000070 c9 36 e3 2a 96 13 07 a0 90 3f 3b bd 1d 04 1d 67
      000080 be 33 14 f8 02 c2 a4 02 ab 8b 5b 74 86 17 f0 5e
      000090 a1 d7 aa ef a6 21 7b 93 d1 85 86 eb 4e 8c d0 4c
      0000a0 56 ac e4 45 27 44 84 9f 71 db 36 b9 f7 47 d7 b3
      0000b0 f2 9c 62 41 a3 46 2b 5b e3 80 63 a4 35 b5 3c f4
      0000c0 bc 1e 3a ad e4 59 4a 98 6c e8 8d ff 1b 16 f8 52
      0000d0 05 5c 2f 52 2a 0f 45 5b 51 fb 93 97 a4 49 4f 06
      0000e0 f3 a0 d1 1e ba 3d ed a7 60 8f bb 84 2c 21 94 2d
      0000f0 b3 66 a6 61 1e 58 30 24 85 f8 c8 18 c3 77 00 22
      000100
      000000 73 ca cc a1 d9 bb 21 8d c3 5c f3 ab 43 6d a7 a4
      000010 4a fd c5 f4 9c ba 4a 0f b1 2e 19 15 4e 84 26 e0
      000020 67 c9 f2 52 4d 65 1f 81 b7 8b 6d 2b 56 7b 99 75
      000030 2e cd d0 db 08 0c 4b df f3 83 c6 83 00 2e 2b b8
      000040 0f af 61 1d f2 02 35 74 b5 a4 6f 28 f3 a1 09 12
      000050 f2 53 b5 d2 da 45 01 e5 12 d6 46 f8 0b db ed 51
      000060 7b f4 0d 54 e0 63 ea 22 e2 1d d0 d6 d0 e7 7e e0
      000070 93 91 fb 87 95 43 41 28 de 3d 8b a3 a8 8f c4 9e
      000080 30 95 12 7a b2 27 28 ff 37 04 2e 09 7c dd 7c 12
      000090 e1 50 60 fb 6d 5f a8 65 14 40 89 e3 4c d2 87 8f
      0000a0 34 76 7e 66 7a 8e 6b a3 fc cf 38 52 2e f9 26 f0
      0000b0 98 63 15 06 34 99 b2 88 4f aa d8 14 88 71 f1 81
      0000c0 be 51 11 2b f4 7e a0 1e 12 b2 44 2e f6 8d 84 ea
      0000d0 63 82 2b 66 b3 9a fd 08 73 5a c2 cc ab 5a af b1
      0000e0 88 e3 a6 80 4b fc db ed 71 e0 ae c0 0a a4 8c 35
      0000f0 eb 89 f9 8a 4b 52 59 6f 09 7c 01 3f 56 e7 c7 bf
      000100
      Signed-off-by: David Daney <david.daney@cavium.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 27 Dec 2016, 6 commits
  13. 25 Dec 2016, 1 commit
  14. 01 Nov 2016, 1 commit
  15. 19 Oct 2016, 1 commit
  16. 13 Sep 2016, 1 commit
  17. 04 Dec 2015, 1 commit
    • hwrng: core - sleep interruptible in read · 1ab87298
      Authored by Jiri Slaby
      The hwrng kthread can be waiting in hwrng_fillfn for data from an rng
      such as virtio-rng:
      hwrng           D ffff880093e17798     0   382      2 0x00000000
      ...
      Call Trace:
       [<ffffffff817339c6>] wait_for_completion_killable+0x96/0x210
       [<ffffffffa00aa1b7>] virtio_read+0x57/0xf0 [virtio_rng]
       [<ffffffff814f4a35>] hwrng_fillfn+0x75/0x130
       [<ffffffff810aa243>] kthread+0xf3/0x110
      
      And when some user program tries to read the /dev node in this state,
      we get:
      rngd            D ffff880093e17798     0   762      1 0x00000004
      ...
      Call Trace:
       [<ffffffff817351ac>] mutex_lock_nested+0x15c/0x3e0
       [<ffffffff814f478e>] rng_dev_read+0x6e/0x240
       [<ffffffff81231958>] __vfs_read+0x28/0xe0
       [<ffffffff81232393>] vfs_read+0x83/0x130
      
      And this is indeed unkillable. So use mutex_lock_interruptible
      instead of mutex_lock in rng_dev_read and exit immediately when
      interrupted, possibly returning any data already read (as POSIX
      allows).
      
      v2: use ERESTARTSYS instead of EINTR
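      
      A rough sketch of the resulting rng_dev_read() locking (simplified and
      illustrative; the label and variable names are placeholders, not the
      exact upstream code):
      
      	while (size) {
      		/* ... wait for the current rng to have data ... */
      
      		if (mutex_lock_interruptible(&reading_mutex)) {
      			err = -ERESTARTSYS;
      			goto out;
      		}
      		/* ... copy the available bytes to userspace, advance ret ... */
      		mutex_unlock(&reading_mutex);
      	}
      out:
      	/* as POSIX allows, report a short read if some bytes were copied */
      	return ret ? : err;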
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: <linux-crypto@vger.kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  18. 21 Sep 2015, 1 commit
  19. 28 Jul 2015, 1 commit
  20. 25 Mar 2015, 1 commit
  21. 18 Mar 2015, 1 commit
  22. 16 Mar 2015, 1 commit
  23. 26 Dec 2014, 5 commits
  24. 22 Dec 2014, 6 commits
  25. 24 Oct 2014, 1 commit