1. 12 Nov 2013 (1 commit)
  2. 04 Nov 2013 (5 commits)
  3. 11 Oct 2013 (14 commits)
    • random: convert DEBUG_ENT to tracepoints · f80bbd8b
      Theodore Ts'o committed
      Instead of using the random driver's ad-hoc DEBUG_ENT() mechanism, use
      tracepoints.  This allows much finer-grained control over which
      debugging messages a developer might need, and unifies the debugging
      messages with all of the existing tracepoints.  (A kernel-style sketch
      of such a tracepoint follows this entry.)
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      f80bbd8b
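For readers unfamiliar with kernel tracepoints, here is a kernel-style sketch of what one such event definition looks like (the event name and fields are illustrative, not copied from include/trace/events/random.h, and the usual trace-header boilerplate is omitted):

```c
/* Kernel-style sketch only: this lives in a trace header, not in random.c. */
TRACE_EVENT(random_credit_entropy,	/* illustrative event name */
	TP_PROTO(const char *pool_name, int bits, int entropy_count),
	TP_ARGS(pool_name, bits, entropy_count),

	TP_STRUCT__entry(
		__field(const char *,	pool_name)
		__field(int,		bits)
		__field(int,		entropy_count)
	),

	TP_fast_assign(
		__entry->pool_name	= pool_name;
		__entry->bits		= bits;
		__entry->entropy_count	= entropy_count;
	),

	TP_printk("%s pool: credited %d bits, entropy now %d",
		  __entry->pool_name, __entry->bits, __entry->entropy_count)
);
```

At a call site, a DEBUG_ENT() printk then becomes a trace_random_credit_entropy(...) call, and each event can be enabled or disabled individually at runtime through the tracing filesystem.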
    • random: push extra entropy to the output pools · 6265e169
      Theodore Ts'o committed
      As the input pool gets filled, start transferring entropy to the output
      pools until they get filled.  This allows us to use the output pools
      to store more system entropy.  Waste not, want not....
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      6265e169
    • random: drop trickle mode · 95b709b6
      Theodore Ts'o committed
      The add_timer_randomness() function used to drop into trickle mode when
      the entropy pool was estimated to be 87.5% full.  This was important
      when add_timer_randomness() was used to sample interrupts.  It's not
      used for this any more --- add_interrupt_randomness() now uses
      fast_mix() instead.  Eliminating trickle mode allows us to fully
      utilize entropy provided by add_input_randomness() and
      add_disk_randomness() even when the input pool is above the old
      trickle threshold of 87.5%.
      
      This helps to answer the criticism in [1], which posits a hypothetical
      scenario in which our entropy estimator is inaccurate, even though the
      measurements in [2] seem to indicate that our entropy estimator on
      real-life entropy collection is actually pretty good, albeit on the
      conservative side (which is how it was designed).
      
      [1] http://eprint.iacr.org/2013/338.pdf
      [2] http://eprint.iacr.org/2012/251.pdf
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      95b709b6
    • random: adjust the generator polynomials in the mixing function slightly · 6e9fa2c8
      Theodore Ts'o committed
      Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
      Videau in their paper, "The Linux Pseudorandom Number Generator
      Revisited" (see: http://eprint.iacr.org/2012/251.pdf).
      
      They suggested a slight change to improve our mixing functions.  I also
      adjusted the comments to better explain what is going on, and to
      document why the polynomials were changed.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      6e9fa2c8
    • random: speed up the fast_mix function by a factor of four · 655b2264
      Theodore Ts'o committed
      By mixing the entropy in chunks of 32-bit words instead of byte by
      byte, we can speed up the fast_mix function significantly.  Since it
      is called on every single interrupt, on systems with a very heavy
      interrupt load, this can make a noticeable difference (a userspace
      sketch of word-at-a-time mixing follows this entry).
      
      Also fix a compilation warning in add_interrupt_randomness() and avoid
      xor'ing cycles and jiffies together just in case we have an
      architecture which tries to define random_get_entropy() by returning
      jiffies.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Reported-by: Jörn Engel <joern@logfs.org>
      655b2264
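A minimal userspace illustration of the word-at-a-time idea (this is not the driver's actual fast_mix; the pool layout and the rotation schedule are invented for demonstration):

```c
#include <stdint.h>
#include <stdio.h>

struct fast_pool_sketch {
	uint32_t pool[4];	/* 128-bit per-CPU pool, as four 32-bit words */
	uint32_t rotate;
};

static inline uint32_t rol32(uint32_t w, unsigned int s)
{
	s &= 31;
	return s ? (w << s) | (w >> (32 - s)) : w;
}

/* Mix 16 bytes of input as four 32-bit words: one loop iteration per
 * word instead of one per byte, i.e. a quarter of the work. */
static void fast_mix_words(struct fast_pool_sketch *f, const uint32_t in[4])
{
	unsigned int r = f->rotate;
	int i;

	for (i = 0; i < 4; i++) {
		f->pool[i] = rol32(f->pool[i], r) ^ in[i] ^ f->pool[(i + 3) & 3];
		r = (r + 7) & 31;
	}
	f->rotate = r;
}

int main(void)
{
	struct fast_pool_sketch f = { { 1, 2, 3, 4 }, 5 };
	uint32_t sample[4] = { 0xdeadbeef, 0x12345678, 0x0, 0xffffffff };

	fast_mix_words(&f, sample);
	printf("%08x %08x %08x %08x\n",
	       f.pool[0], f.pool[1], f.pool[2], f.pool[3]);
	return 0;
}
```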
    • random: cap the rate which the /dev/urandom pool gets reseeded · f5c2742c
      Theodore Ts'o committed
      In order to avoid draining the input pool of its entropy at too high a
      rate, enforce a minimum time interval between reseedings of the urandom
      pool.  This is set to 60 seconds by default (a minimal sketch of the
      check follows this entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      f5c2742c
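A minimal sketch of the rate check, with wall-clock time standing in for the jiffies-based timestamp the driver actually keeps per pool (may_reseed_now() is an invented helper name):

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define RESEED_MIN_SECONDS 60	/* default minimum interval between reseeds */

static time_t last_reseed;

/* Returns true only if enough time has passed since the last reseed. */
static bool may_reseed_now(void)
{
	time_t now = time(NULL);

	if (last_reseed != 0 && now - last_reseed < RESEED_MIN_SECONDS)
		return false;	/* too soon: keep generating from current state */

	last_reseed = now;
	return true;
}

int main(void)
{
	printf("first call: %d, immediate second call: %d\n",
	       may_reseed_now(), may_reseed_now());
	return 0;
}
```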
    • random: optimize the entropy_store structure · c59974ae
      Theodore Ts'o committed
      Use smaller types to slightly shrink the size of the entropy store
      structure.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      c59974ae
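An illustrative before/after of the kind of shrinkage meant here; the field names are made up rather than the actual entropy_store members:

```c
#include <stdint.h>
#include <stdio.h>

struct store_before {			/* every field a full int */
	int input_rotate;
	int add_ptr;
	int limit;			/* used as a boolean */
	int initialized;		/* used as a boolean */
};

struct store_after {			/* same information, smaller types */
	uint16_t input_rotate;
	uint16_t add_ptr;
	uint8_t limit;
	uint8_t initialized;
};

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct store_before), sizeof(struct store_after));
	return 0;
}
```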
    • random: optimize spinlock use in add_device_randomness() · 3ef4cb2d
      Theodore Ts'o committed
      The add_device_randomness() function calls mix_pool_bytes() twice each
      for the input pool and the non-blocking pool, for a total of four
      times.  By using _mix_pool_bytes() and taking the spinlock in
      add_device_randomness() itself, we can halve the number of times we
      need to take each pool's spinlock (a userspace analogy follows this
      entry).
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      3ef4cb2d
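A userspace analogy of the change, with a pthread mutex standing in for the pool spinlock; the structure and helper names are invented, but the shape matches the description above (one lock acquisition covering both unlocked mix calls):

```c
#include <pthread.h>
#include <stddef.h>

struct pool_sketch {
	pthread_mutex_t lock;
	unsigned char state[64];
	size_t pos;
};

/* Unlocked helper: caller must already hold p->lock. */
static void mix_bytes_unlocked(struct pool_sketch *p, const void *buf, size_t n)
{
	const unsigned char *b = buf;
	size_t i;

	for (i = 0; i < n; i++) {
		p->state[p->pos] ^= b[i];
		p->pos = (p->pos + 1) % sizeof(p->state);
	}
}

/* Before: two locked mix calls per pool (data, then timestamp), so the
 * lock was taken twice.  After: take it once and do both mixes under it. */
static void add_device_randomness_sketch(struct pool_sketch *p,
					 const void *buf, size_t n,
					 unsigned long timestamp)
{
	pthread_mutex_lock(&p->lock);
	mix_bytes_unlocked(p, buf, n);
	mix_bytes_unlocked(p, &timestamp, sizeof(timestamp));
	pthread_mutex_unlock(&p->lock);
}

int main(void)
{
	struct pool_sketch p = { .lock = PTHREAD_MUTEX_INITIALIZER };
	const char id[] = "example-serial-0001";

	add_device_randomness_sketch(&p, id, sizeof(id), 12345UL);
	return 0;
}
```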
    • random: fix the tracepoint for get_random_bytes(_arch) · 5910895f
      Theodore Ts'o committed
      Fix a problem where get_random_bytes_arch() was calling the tracepoint
      for get_random_bytes().  Add a new tracepoint for
      get_random_bytes_arch(), and make get_random_bytes() and
      get_random_bytes_arch() each call their correct tracepoint.
      
      Also, add a new tracepoint for add_device_randomness().
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      5910895f
    • random: account for entropy loss due to overwrites · 30e37ec5
      H. Peter Anvin committed
      When we write entropy into a non-empty pool, we currently don't
      account at all for the fact that we will probabilistically overwrite
      some of the entropy in that pool.  This means that unless the pool is
      fully empty, we are currently *guaranteed* to overestimate the amount
      of entropy in the pool!
      
      Assuming Shannon entropy with zero correlations we end up with an
      exponentially decaying value of new entropy added:
      
      	entropy <- entropy + (pool_size - entropy) *
      		(1 - exp(-add_entropy/pool_size))
      
      However, calculations involving fractional exponentials are not
      practical in the kernel, so apply a piecewise linearization:
      
      	  For add_entropy <= pool_size/2 then
      
      	  (1 - exp(-add_entropy/pool_size)) >= (add_entropy/pool_size)*0.7869...
      
      	  ... so we can approximate the exponential with
      	  3/4*add_entropy/pool_size and still be on the
      	  safe side by adding at most pool_size/2 at a time.
      
      In order for the loop not to take arbitrary amounts of time if a bad
      ioctl is received, terminate if we are within one bit of full.  This
      way the loop is guaranteed to terminate after no more than
      log2(poolsize) iterations, no matter what the input value is.  The
      vast majority of the time the loop will be executed exactly once.
      
      The piecewise linearization is very conservative, approaching 3/4 of
      the usable input value for small inputs.  However, our entropy
      estimation is pretty weak at best, especially for small values; we
      have no handle on correlation; and the Shannon entropy measure (Rényi
      entropy of order 1) is not the correct one to use in the first place;
      the correct entropy measure is the min-entropy, the Rényi entropy of
      infinite order.
      
      As such, this conservatism seems more than justified.
      
      This does introduce fractional bit values.  I have left it at 3 bits
      of fraction, so that with a pool of 2^12 bits the multiply in
      credit_entropy_bits() can still fit into an int, as 2*(3+12) < 31.  It
      is definitely possible to allow for more fractional accounting, but
      that multiply would then have to be turned into a 32*32 -> 64 multiply.
      (A standalone sketch of the crediting rule follows this entry.)
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: DJ Johnston <dj.johnston@intel.com>
      30e37ec5
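A standalone sketch of the crediting rule described above, using 3 fractional bits and a 4096-bit pool. The constant handling is simplified relative to the driver's credit_entropy_bits(), but the arithmetic is the same piecewise-linear 3/4 rule with the same termination condition:

```c
#include <stdio.h>

#define ENTROPY_SHIFT	3				/* 3 fractional bits */
#define POOL_BITS	4096				/* input pool: 2^12 bits */
#define POOL_FRACBITS	(POOL_BITS << ENTROPY_SHIFT)

/* Credit nbits of estimated entropy against a fractional counter,
 * never crediting more than the pool can actually absorb. */
static int credit_entropy_frac(int entropy_count, int nbits)
{
	int nfrac = nbits << ENTROPY_SHIFT;

	while (nfrac > 0) {
		/* stay in the linear regime: at most pool_size/2 per step */
		int anfrac = nfrac < POOL_FRACBITS / 2 ? nfrac : POOL_FRACBITS / 2;
		/* entropy += (pool - entropy) * (3/4) * (anfrac / pool) */
		long long add = (long long)(POOL_FRACBITS - entropy_count)
				* anfrac * 3 / (4LL * POOL_FRACBITS);

		entropy_count += (int)add;
		nfrac -= anfrac;

		/* stop once within one bit of full, so a bogus ioctl value
		 * cannot keep the loop running for long */
		if (entropy_count >= POOL_FRACBITS - (1 << ENTROPY_SHIFT))
			break;
	}
	return entropy_count;
}

int main(void)
{
	int ec = 2048 << ENTROPY_SHIFT;		/* pool half full */

	ec = credit_entropy_frac(ec, 512);	/* claim 512 fresh bits */
	printf("entropy after credit: %d.%03d bits\n",
	       ec >> ENTROPY_SHIFT, (ec & 7) * 125);
	return 0;
}
```

Starting half full (2048 bits) and claiming 512 bits credits only 192 bits, matching 2048 * 3/4 * (512/4096) from the linearized formula.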
    • random: allow fractional bits to be tracked · a283b5c4
      H. Peter Anvin committed
      Allow fractional bits of entropy to be tracked by scaling the entropy
      counter (fixed point).  This will be used in a subsequent patch that
      accounts for entropy lost due to overwrites.
      
      [ Modified by tytso to fix up a few missing places where the
        entropy_count wasn't properly converted from fractional bits to
        bits. ]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      a283b5c4
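In other words, the counter now stores eighths of a bit. A tiny sketch of the fixed-point convention (the macro names here are illustrative):

```c
#include <stdio.h>

#define ENTROPY_SHIFT 3				      /* 2^3 = 8 counter units per bit */
#define FRAC_TO_BITS(frac) ((frac) >> ENTROPY_SHIFT) /* fractional units -> whole bits */
#define BITS_TO_FRAC(bits) ((bits) << ENTROPY_SHIFT) /* whole bits -> fractional units */

int main(void)
{
	int count = BITS_TO_FRAC(100) + 5;	/* 100 bits plus 5/8 of a bit */

	printf("counter %d = %d whole bits\n", count, FRAC_TO_BITS(count));
	return 0;
}
```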
    • random: statically compute poolbitshift, poolbytes, poolbits · 9ed17b70
      H. Peter Anvin committed
      Use a macro to statically compute poolbitshift (which will be used in
      a subsequent patch), poolbytes, and poolbits.  On virtually all
      architectures the cost of a memory load with an offset is the same as
      that of a plain memory load.
      
      It is still possible for this to generate worse code, since the C
      compiler doesn't know the fixed relationship between these fields, but
      that is somewhat unlikely.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      9ed17b70
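A sketch of the idea in isolation, with invented names and a 128-word pool: because the byte count, bit count, and bit shift are all derived from one constant, the compiler can fold them at build time instead of loading them from per-pool fields.

```c
#include <stdio.h>

#define POOL_WORDS	128			/* pool size in 32-bit words */
#define POOL_BYTES	(POOL_WORDS * 4)	/* 512 bytes */
#define POOL_BITS	(POOL_BYTES * 8)	/* 4096 bits */
#define POOL_BITSHIFT	(7 + 5)			/* log2(128) + log2(32) = log2(POOL_BITS) */

int main(void)
{
	printf("%d words = %d bytes = %d bits (1 << %d = %d)\n",
	       POOL_WORDS, POOL_BYTES, POOL_BITS, POOL_BITSHIFT, 1 << POOL_BITSHIFT);
	return 0;
}
```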
    • random: mix in architectural randomness earlier in extract_buf() · 85a1f777
      Theodore Ts'o committed
      Previously, if the CPU had a built-in random number generator (i.e.,
      RDRAND on newer x86 chips), we mixed it in at the very end of
      extract_buf() using an XOR operation.
      
      We now mix it in right after we calculate a hash across the entire
      pool.  This has the advantage that any contribution of entropy from
      the CPU's HWRNG will get mixed back into the pool.  In addition, it
      means that if the HWRNG has any defects (either accidentally or
      maliciously introduced), they will be mitigated via the non-linear
      transform of the SHA-1 hash function before we hand out generated
      output.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      85a1f777
    • random: allow architectures to optionally define random_get_entropy() · 61875f30
      Theodore Ts'o committed
      Allow architectures which have a disabled get_cycles() function to
      provide a random_get_entropy() function which provides a fine-grained,
      rapidly changing counter that can be used by the /dev/random driver.
      
      For example, an architecture might have a rapidly changing register
      used to control random TLB cache eviction, or DRAM refresh that
      doesn't meet the requirements of get_cycles(), but which is good
      enough for the needs of the random driver.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      61875f30
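One way such an opt-in hook can be structured, shown as a standalone compile-time sketch (the kernel's real default and per-architecture overrides live in its timex headers, not here): an architecture that has a suitable counter defines random_get_entropy() itself, and everyone else falls back to get_cycles().

```c
#include <stdio.h>

/* Stand-in for the architecture cycle counter. */
static unsigned long get_cycles_sketch(void)
{
	return 123456789UL;
}

/* An architecture with a better fine-grained counter would have defined
 * random_get_entropy() already; everyone else gets this fallback. */
#ifndef random_get_entropy
#define random_get_entropy() get_cycles_sketch()
#endif

int main(void)
{
	printf("interrupt timestamp: %lu\n", random_get_entropy());
	return 0;
}
```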
  4. 23 Sep 2013 (1 commit)
  5. 13 Sep 2013 (1 commit)
  6. 18 Jun 2013 (1 commit)
  7. 25 May 2013 (2 commits)
    • random: fix accounting race condition with lockless irq entropy_count update · 10b3a32d
      Jiri Kosina committed
      Commit 902c098a ("random: use lockless techniques in the interrupt
      path") turned the IRQ path from being spinlock protected into a
      lockless cmpxchg-retry update.
      
      That commit removed r->lock serialization between crediting entropy bits
      from IRQ context and accounting when extracting entropy on userspace
      read path, but didn't turn the r->entropy_count reads/updates in
      account() to use cmpxchg as well.
      
      It has been observed that under certain circumstances this leads to
      read() on /dev/urandom returning 0 (EOF), as r->entropy_count gets
      corrupted and becomes negative, which in turn results in 0 being
      propagated all the way from account() to the actual read() call.
      
      Convert the accounting code to be the proper lockless counterpart of
      what was partially done by 902c098a (a userspace analogy of the retry
      loop follows this entry).
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Greg KH <greg@kroah.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      10b3a32d
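A userspace analogy of the retry loop using C11 atomics in place of the kernel's cmpxchg(); the reservation policy is simplified and the names are illustrative:

```c
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int entropy_count = 4096;	/* bits currently available */

/* Reserve up to nbits for a reader without ever driving the counter
 * negative; retries if another CPU updated the counter concurrently. */
static int account_sketch(int nbits)
{
	int old = atomic_load(&entropy_count);
	int granted, newval;

	do {
		granted = nbits < old ? nbits : old;
		newval = old - granted;		/* never below zero */
	} while (!atomic_compare_exchange_weak(&entropy_count, &old, newval));

	return granted;
}

int main(void)
{
	printf("granted %d bits, %d left\n",
	       account_sketch(512), atomic_load(&entropy_count));
	return 0;
}
```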
    • drivers/char/random.c: fix priming of last_data · 1e7e2e05
      Jarod Wilson committed
      Commit ec8f02da ("random: prime last_data value per fips
      requirements") added priming of last_data per fips requirements.
      
      Unfortunately, it did so in a way that can lead to multiple threads
      all incrementing nbytes, but only one actually doing anything with the
      extra data, which leads to some fun random corruption and panics.
      
      The fix is to simply do everything needed to prime last_data in a single
      shot, so there's no window for multiple cpus to increment nbytes -- in
      fact, we won't even increment or decrement nbytes anymore, we'll just
      extract the needed EXTRACT_SIZE one time per pool and then carry on with
      the normal routine.
      
      All these changes have been tested across multiple hosts and
      architectures where panics were previously encountered.  The code
      changes are strictly limited to areas only touched when booted in FIPS
      mode.
      
      This change should also go into 3.8-stable, to make the myriad FIPS
      users on 3.8.x happy.
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Tested-by: Jan Stancek <jstancek@redhat.com>
      Tested-by: Jan Stodola <jstodola@redhat.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1e7e2e05
  8. 01 May 2013 (1 commit)
  9. 24 Mar 2013 (1 commit)
  10. 05 Mar 2013 (1 commit)
  11. 19 Feb 2013 (1 commit)
  12. 08 Nov 2012 (2 commits)
    • random: prime last_data value per fips requirements · ec8f02da
      Jarod Wilson committed
      The value stored in last_data must be primed for FIPS 140-2 purposes. Upon
      first use, either on system startup or after an RNDCLEARPOOL ioctl, we
      need to take an initial random sample, store it internally in last_data,
      then pass along the value after that to the requester, so that consistency
      checks aren't being run against stale and possibly known data.
      
      CC: Herbert Xu <herbert@gondor.apana.org.au>
      CC: "David S. Miller" <davem@davemloft.net>
      CC: Matt Mackall <mpm@selenic.com>
      CC: linux-crypto@vger.kernel.org
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      ec8f02da
    • random: fix debug format strings · 8eb2ffbf
      Jiri Kosina committed
      Fix the following warnings in formatting debug output:
      
      drivers/char/random.c: In function ‘xfer_secondary_pool’:
      drivers/char/random.c:827: warning: format ‘%d’ expects type ‘int’, but argument 7 has type ‘size_t’
      drivers/char/random.c: In function ‘account’:
      drivers/char/random.c:859: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
      drivers/char/random.c:881: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
      drivers/char/random.c: In function ‘random_read’:
      drivers/char/random.c:1141: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
      drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
      drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 6 has type ‘long unsigned int’
      
      by using '%zd' instead of '%d' to properly denote ssize_t/size_t conversion.
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      8eb2ffbf
  13. 16 Oct 2012 (1 commit)
    • random: make it possible to enable debugging without rebuild · be5b779a
      Jiri Kosina committed
      The module parameter that turns on debugging mode (which basically
      means printing a few extra lines at runtime) is inside an '#if 0'
      block.  Forcing everyone who would like to see how entropy is behaving
      on their system to rebuild the kernel seems a little too harsh.
      
      If we were concerned about speed, we could potentially turn 'debug' into a
      static key, but I don't think it's necessary.
      
      Drop the '#if 0' block so the 'debug' parameter can be used without
      rebuilding (a sketch of the resulting parameter follows this entry).
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      be5b779a
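A kernel-module-style sketch of what the now-unconditional parameter amounts to (this only builds against a kernel tree, and the message text is illustrative):

```c
/* Kernel-style sketch: builds only as part of a kernel module/tree. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static bool debug;
module_param(debug, bool, 0644);	/* settable via random.debug=1 on the
					 * kernel command line, or at runtime
					 * through sysfs */

static void example_debug_message(size_t nbytes)
{
	if (debug)
		pr_notice("random: transferred %zu bytes to output pool\n",
			  nbytes);
}
```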
  14. 28 Jul 2012 (1 commit)
  15. 25 Jul 2012 (1 commit)
    • random: Add comment to random_initialize() · cbc96b75
      Tony Luck committed
      Many platforms have per-machine instance data (serial numbers,
      asset tags, etc.) squirreled away in areas that are accessed
      during early system bringup. Mixing this data into the random
      pools has a very high value in providing better random data,
      so we should allow (and even encourage) architecture code to
      call add_device_randomness() from the setup_arch() paths.
      
      However, this limits our options for internal structure of
      the random driver since random_initialize() is not called
      until long after setup_arch().
      
      Add a big fat comment to rand_initialize() spelling out
      this requirement.
      Suggested-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      cbc96b75
  16. 19 Jul 2012 (1 commit)
  17. 15 Jul 2012 (5 commits)
    • 00ce1db1
    • random: add new get_random_bytes_arch() function · c2557a30
      Theodore Ts'o committed
      Create a new function, get_random_bytes_arch(), which will use the
      architecture-specific hardware random number generator if it is
      present.  Change get_random_bytes() to not use the HW RNG, even if it
      is available.
      
      The reason for this is that the hw random number generator is fast (if
      it is present), but it requires that we trust the hardware
      manufacturer to have not put in a back door.  (For example, an
      increasing counter encrypted by an AES key known to the NSA.)
      
      It's unlikely that Intel (for example) was paid off by the US
      Government to do this, but it's impossible for them to prove otherwise
      --- especially since Bull Mountain is documented to use AES as a
      whitener.  Hence, the output of an evil, trojan-horse version of
      RDRAND is statistically indistinguishable from an RDRAND implemented
      to the specifications claimed by Intel.  Short of using a tunneling
      electron microscope to reverse engineer an Ivy Bridge chip and
      disassembling and analyzing the CPU microcode, there's no way for us
      to tell for sure.
      
      Since users of get_random_bytes() in the Linux kernel need to be able
      to support hardware systems where the HW RNG is not present, most
      time-sensitive users of this interface have already created their own
      cryptographic RNG interface which uses get_random_bytes() as a seed.
      So it's much better to use the HW RNG to improve the existing random
      number generator, by mixing in any entropy returned by the HW RNG into
      /dev/random's entropy pool, but to always _use_ /dev/random's entropy
      pool.
      
      This way we get almost all of the benefits of the HW RNG without any
      potential liabilities.  The only benefit we forgo is the
      speed/performance enhancement --- and generic kernel code can't depend
      on get_random_bytes() having the speed of a HW RNG anyway.
      
      For those places that really want access to the arch-specific HW RNG,
      if it is available, we provide get_random_bytes_arch().
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      c2557a30
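A standalone sketch of the resulting split; arch_rng_sketch() and pool_rng_sketch() are stand-ins for the hardware instruction and the pool-backed generator, and this particular stub reports that no HW RNG is present:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins: a HW RNG probe that may not exist (returns 0 here), and
 * the software pool-backed generator. */
static int arch_rng_sketch(unsigned long *v)     { (void)v; return 0; }
static void pool_rng_sketch(void *buf, size_t n) { memset(buf, 0x5a, n); }

/* Fill buf from the HW RNG while it keeps delivering, then fall back to
 * the software pool for whatever is left.  get_random_bytes() itself no
 * longer touches the HW RNG at all. */
static void get_random_bytes_arch_sketch(void *buf, size_t nbytes)
{
	char *p = buf;

	while (nbytes) {
		unsigned long v;
		size_t chunk = nbytes < sizeof(v) ? nbytes : sizeof(v);

		if (!arch_rng_sketch(&v))
			break;
		memcpy(p, &v, chunk);
		p += chunk;
		nbytes -= chunk;
	}
	if (nbytes)
		pool_rng_sketch(p, nbytes);
}

int main(void)
{
	unsigned char buf[16];

	get_random_bytes_arch_sketch(buf, sizeof(buf));
	printf("first byte: %02x\n", buf[0]);
	return 0;
}
```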
    • random: use the arch-specific rng in xfer_secondary_pool · e6d4947b
      Theodore Ts'o committed
      If the CPU supports a hardware random number generator, use it in
      xfer_secondary_pool(), where it will significantly improve things and
      where we can afford it.
      
      Also, remove the use of the arch-specific rng in
      add_timer_randomness(), since the call is significantly slower than
      get_cycles(), and we're much better off using it in
      xfer_secondary_pool() anyway.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      e6d4947b
    • random: create add_device_randomness() interface · a2080a67
      Linus Torvalds committed
      Add a new interface, add_device_randomness() for adding data to the
      random pool that is likely to differ between two devices (or possibly
      even per boot).  This would be things like MAC addresses or serial
      numbers, or the read-out of the RTC. This does *not* add any actual
      entropy to the pool, but it initializes the pool to different values
      for devices that might otherwise be identical and have very little
      entropy available to them (particularly common in the embedded world).
      
      [ Modified by tytso to mix in a timestamp, since there may be some
        variability caused by the time needed to detect/configure the hardware
        in question. ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      a2080a67
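A hypothetical caller, to show the shape of the interface (the device structure and fields are invented; the commit's own examples of suitable data are MAC addresses, serial numbers, and the RTC read-out):

```c
/* Kernel-style sketch of a device probe mixing in identity data. */
#include <linux/random.h>
#include <linux/types.h>

struct example_nic {			/* invented device structure */
	unsigned char mac_addr[6];
	u32 serial;
};

static void example_nic_probe(struct example_nic *nic)
{
	/* ... read mac_addr and serial out of the hardware ... */

	/* No entropy is credited; this only personalizes the pool state so
	 * that otherwise-identical devices diverge early in boot. */
	add_device_randomness(nic->mac_addr, sizeof(nic->mac_addr));
	add_device_randomness(&nic->serial, sizeof(nic->serial));
}
```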
    • random: use lockless techniques in the interrupt path · 902c098a
      Theodore Ts'o committed
      The real-time Linux folks don't like add_interrupt_randomness() taking
      a spinlock since it is called in the low-level interrupt routine.
      Going lockless also reduces the overhead in the random driver's fast
      path, which is the interrupt collection path.
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      902c098a