- 15 July 2014, 1 commit
Committed by Theodore Ts'o
Instead of using the lockless techniques introduced in commit 902c098a, use spin_trylock to try to grab the entropy pool's lock. If we can't get the lock, then just try again on the next interrupt. Based on discussions with George Spelvin.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
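The pattern is simple enough to sketch. The entropy_store layout and the __mix_pool_bytes() helper below are assumed from the random.c of that era; treat this as an illustration of the trylock idea, not the exact patch:

```c
#include <linux/spinlock.h>

/* Sketch: if the pool lock is contended, skip mixing on this
 * interrupt instead of spinning. The per-CPU fast_pool keeps its
 * contents, so nothing is lost; we simply retry on a later interrupt. */
static void mix_fast_pool(struct entropy_store *r,
			  const void *in, int nbytes)
{
	if (!spin_trylock(&r->lock))
		return;			/* contended: try again later */
	__mix_pool_bytes(r, in, nbytes, NULL);
	spin_unlock(&r->lock);
}
```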
-
- 16 June 2014, 1 commit
Committed by Theodore Ts'o
Commit 0fb7a01a "random: simplify accounting code", introduced in v3.15, has a very nasty accounting problem when the entropy pool has fewer bytes of entropy than the number of requested reserved bytes. In that case, "have_bytes - reserved" goes negative, and since size_t is unsigned, the expression: ibytes = min_t(size_t, ibytes, have_bytes - reserved); ... does not do the right thing. This is rather bad, because it defeats the catastrophic reseeding feature in the xfer_secondary_pool() path.

It can also cause the "BUG: spinlock trylock failure on UP" for some kernel configurations when prandom_reseed() calls get_random_bytes() in early init, since when the entropy count gets corrupted, credit_entropy_bits() erroneously believes that the nonblocking pool has been fully initialized (when in fact it is not), and so it calls prandom_reseed(true) recursively, leading to the spinlock BUG.

The logic is *not* the same as it was originally, but in the cases where it matters the behavior is the same, and the resulting code is hopefully easier to read and understand.

Fixes: 0fb7a01a "random: simplify accounting code"
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: Greg Price <price@mit.edu>
Cc: stable@vger.kernel.org #v3.15
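The failure mode is plain unsigned wraparound, which can be reproduced outside the kernel. A self-contained userspace demonstration (the variable names mirror the commit message; this is not the kernel code):

```c
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	size_t have_bytes = 10, reserved = 16, ibytes = 32;

	/* have_bytes - reserved wraps to a huge value instead of -6,
	 * so the "clamp" keeps the full 32-byte request. */
	size_t broken = ibytes < have_bytes - reserved
			? ibytes : have_bytes - reserved;
	printf("broken clamp: %zu\n", broken);	/* prints 32 */

	/* Doing the subtraction in signed space first behaves. */
	ssize_t avail = (ssize_t)have_bytes - (ssize_t)reserved;
	size_t safe = avail <= 0 ? 0 :
		      ((size_t)avail < ibytes ? (size_t)avail : ibytes);
	printf("safe clamp:   %zu\n", safe);	/* prints 0 */
	return 0;
}
```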
-
- 7 June 2014, 1 commit
Committed by Joe Perches
This typedef is unnecessary and should just be removed.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 May 2014, 1 commit
Committed by Theodore Ts'o
Commit ee1de406 ("random: simplify accounting logic") simplified things too much, in that it allows the following to trigger an overflow that results in a BUG_ON crash:

  dd if=/dev/urandom of=/dev/zero bs=67108707 count=1

Thanks to Peter Zijlstra for discovering the crash, and Hannes Frederic Sowa for analyzing the root cause.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Greg Price <price@mit.edu>
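The block size is not arbitrary: with the driver's 3 fractional bits, converting a byte count to fixed-point bits is a left shift by 6, and 67108707 << 6 exceeds INT_MAX. A hedged userspace illustration of that arithmetic (assumes the usual two's-complement conversion behavior; ENTROPY_SHIFT mirrors random.c):

```c
#include <stdio.h>

#define ENTROPY_SHIFT 3	/* fixed-point fraction bits, as in random.c */

int main(void)
{
	long long nbytes = 67108707;	/* the dd block size above */

	/* bytes -> bits is << 3, then << ENTROPY_SHIFT for fixed point */
	long long exact = nbytes << (ENTROPY_SHIFT + 3);
	int wrapped = (int)(unsigned int)(nbytes << (ENTROPY_SHIFT + 3));

	/* exact needs ~2^32 values; a 32-bit int wraps negative, so
	 * subtracting it *increases* the entropy count and trips the
	 * BUG_ON sanity check. */
	printf("exact: %lld  wrapped: %d\n", exact, wrapped);
	return 0;
}
```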
-
- 28 April 2014, 1 commit
Committed by Christoph Hellwig
This will be needed for pending changes to the scsi midlayer that now calls lower level block APIs, as well as any blk-mq driver that wants to contribute to the random pool.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 20 March 2014, 15 commits
Committed by H. Peter Anvin
Add predicate functions for having arch_get_random[_seed]*(). The only current use is to avoid the loop in arch_random_refill() when arch_get_random_seed_long() is unavailable.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
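On x86 such predicates reduce to CPU-feature tests. A sketch of that shape (the exact macro spelling and header placement are assumptions here, not quoted from the patch):

```c
#include <asm/cpufeature.h>

/* x86-flavored sketch: the predicate is just a CPUID feature test,
 * so callers can skip RDRAND/RDSEED loops on hardware without them. */
static inline bool arch_has_random(void)
{
	return static_cpu_has(X86_FEATURE_RDRAND);
}

static inline bool arch_has_random_seed(void)
{
	return static_cpu_has(X86_FEATURE_RDSEED);
}
```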
-
Committed by H. Peter Anvin
If we have arch_get_random_seed*(), try to use it for emergency refill of the entropy pool before giving up and blocking on /dev/random. It may or may not work in the moment, but if it does work, it will give the user better service than blocking will.
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
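A sketch of what such an emergency refill loop can look like; the signature is an assumption for illustration, not the exact patch:

```c
/* Sketch: pull seed-grade words from the arch RNG until it reports
 * exhaustion, and return how many we got so the caller can credit
 * entropy or fall back to blocking. */
static size_t arch_random_refill(unsigned long *buf, size_t nwords)
{
	size_t i;

	if (!arch_has_random_seed())
		return 0;		/* no RDSEED-class source: skip */
	for (i = 0; i < nwords; i++)
		if (!arch_get_random_seed_long(&buf[i]))
			break;		/* the DRNG can run dry under load */
	return i;
}
```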
-
Committed by H. Peter Anvin
Use arch_get_random_seed*() in two places in the Linux random driver (drivers/char/random.c):

1. During entropy pool initialization, use RDSEED in preference to RDRAND, with a fallback to the latter. Entropy exhaustion is unlikely to happen there on physical hardware, as the machine is single-threaded at that point, but it could happen in a virtual machine. In that case, the fallback to RDRAND will still provide more than adequate entropy pool initialization.

2. Once a second, issue RDSEED and, if successful, feed it to the entropy pool. To ensure an extra layer of security, only credit half the entropy, just in case.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
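The initialization-time preference order can be sketched as follows. This is close to what init_std_data() does after this series, with field names taken from the random.c of that era, but it is illustrative only:

```c
/* Sketch: RDSEED if available, then RDRAND, then a timestamp as a
 * weak last resort, mixed word by word across the whole pool. */
static void seed_pool_from_arch(struct entropy_store *r)
{
	unsigned long rv;
	int i;

	for (i = r->poolinfo->poolbytes; i > 0; i -= sizeof(rv)) {
		if (!arch_get_random_seed_long(&rv) &&
		    !arch_get_random_long(&rv))
			rv = random_get_entropy();	/* weak fallback */
		mix_pool_bytes(r, &rv, sizeof(rv), NULL);
	}
}
```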
-
Committed by Theodore Ts'o
To help assuage the fears of those who think the NSA can introduce a massive hack into the instruction decode and out-of-order execution engine in the CPU without hundreds of Intel engineers knowing about it (only one of whom would need to have the conscience and courage of Edward Snowden to spill the beans to the public), use the HWRNG to initialize the SHA starting value, instead of xor'ing it in afterwards.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
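A sketch of the extract_buf() change; the LONGS() helper is defined inline so the fragment stands alone, and the hashing of the pool that follows is elided:

```c
#define LONGS(x) (((x) + sizeof(unsigned long) - 1) / sizeof(unsigned long))

/* Sketch: overwrite the SHA-1 initial vector with HWRNG output before
 * hashing the pool, instead of XOR-ing arch randomness into the
 * result afterwards. */
union {
	__u32 w[5];			/* SHA-1 chaining value */
	unsigned long l[LONGS(20)];
} hash;
unsigned long v;
int i;

sha_init(hash.w);			/* standard IV first... */
for (i = 0; i < LONGS(20); i++) {	/* ...then HWRNG words */
	if (!arch_get_random_long(&v))
		break;
	hash.l[i] = v;
}
/* extract_buf() then hashes the pool starting from this IV */
```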
-
Committed by Greg Price
These are a recurring cause of confusion, so rename them to hopefully be clearer.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
The variable 'entropy_bytes' is set from an expression that actually counts bits. Fortunately it's also only compared to values that also count bits. Rename it accordingly.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
With this we handle "reserved" in just one place. As a bonus the code becomes less nested, and the "wakeup_write" flag variable becomes unnecessary. The variable "flags" was already unused.

This code behaves identically to the previous version except in two pathological cases that don't occur. If the argument "nbytes" is already less than "min", then we didn't previously enforce "min". If r->limit is false while "reserved" is nonzero, then we previously applied "reserved" in checking whether we had enough bits, even though we don't apply it to actually limit how many we take. The callers of account() never exercise either of these cases. Before the previous commit, it was possible for "nbytes" to be less than "min" if userspace chose a pathological configuration, but no longer.

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
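A sketch of the resulting single-exit shape of account(), with the signed clamp from the June 2014 fix above folded in so the sketch does not reproduce the later-discovered underflow. The ACCESS_ONCE/cmpxchg retry mirrors the driver, but treat this as illustrative, not a verbatim copy (callers are assumed to bound nbytes):

```c
static size_t account(struct entropy_store *r, size_t nbytes,
		      int min, int reserved)
{
	int entropy_count, orig;
	size_t ibytes;

retry:
	entropy_count = orig = ACCESS_ONCE(r->entropy_count);
	ibytes = nbytes;

	if (r->limit) {		/* never pull more than we have */
		int have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);

		if ((have_bytes -= reserved) < 0)
			have_bytes = 0;
		ibytes = min_t(size_t, ibytes, have_bytes);
	}
	if ((int)ibytes < min)	/* not worth a partial extraction */
		ibytes = 0;

	entropy_count = max_t(int, 0,
		entropy_count - (int)(ibytes << (ENTROPY_SHIFT + 3)));
	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
		goto retry;	/* raced with another update */

	return ibytes;
}
```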
-
Committed by Greg Price
We use this value in a few places other than its literal meaning, in particular in _xfer_secondary_pool() as a minimum number of bits to pull from the input pool at a time into either output pool. It doesn't make sense to pull more bits than the whole size of an output pool.

We could, and possibly should, separate the quantities "how much should the input pool have to have to wake up /dev/random readers" and "how much should we transfer from the input to an output pool at a time", but nobody is likely to be sad they can't set the first quantity to more than 1024 bits, so for now just limit them both.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
The only mutable data accessed here is ->entropy_count, but since 10b3a32d ("random: fix accounting race condition") we use cmpxchg to protect our accesses to ->entropy_count here. Drop the use of the lock.
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
This logic is exactly equivalent to the old logic, but it should be easier to see what it's doing. The equivalence depends on one fact from outside this function: when 'r->limit' is false, 'reserved' is zero. (Well, two facts; the other is that 'reserved' is never negative.)
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
This comment didn't quite keep up as extract_entropy() was split into four functions. Put each bit by the function it describes.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
The loop condition never changes until just before a break, so we might as well write it as a constant. Also, since a996996d ("random: drop weird m_time/a_time manipulation") we don't do anything after the loop finishes, so the 'break's might as well be direct returns. Some other simplifications. There should be no change in behavior introduced by this commit.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
After this remark was written, commit d2e7c96a added a use of arch_get_random_long() inside the get_random_bytes codepath. The main point stands, but it needs to be reworded.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
There's only one function here now, as uuid_strategy is long gone. Also make the bit about "If accesses via ..." clearer.
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Greg Price
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
- 12 November 2013, 1 commit
Committed by Hannes Frederic Sowa
The Tausworthe PRNG is initialized at late_initcall time. At that time the entropy pool serving get_random_bytes is not filled sufficiently. This patch adds an additional reseeding step as soon as the nonblocking pool gets marked as initialized. On some machines it might be possible that late_initcall gets called after the pool has been initialized. In this situation we won't reseed again. (A call to prandom_seed_late blocks later invocations of early reseed attempts.) Joint work with Daniel Borkmann.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
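The reseed-once behavior comes down to a latch. A minimal sketch of the idea (the names here are illustrative, not the prandom internals):

```c
#include <linux/spinlock.h>

static bool reseed_latched;
static DEFINE_SPINLOCK(reseed_lock);

/* Sketch: whichever reseed path runs first (the pool-initialized
 * callback or the late_initcall) claims the latch; the other path
 * then becomes a no-op. */
static bool claim_reseed(void)
{
	unsigned long flags;
	bool claimed;

	spin_lock_irqsave(&reseed_lock, flags);
	claimed = !reseed_latched;
	reseed_latched = true;
	spin_unlock_irqrestore(&reseed_lock, flags);

	return claimed;
}
```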
-
- 4 November 2013, 5 commits
Committed by Theodore Ts'o
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
Since we initialize jiffies to wrap five minutes before boot (see INITIAL_JIFFIES defined in include/linux/jiffies.h), it's important to make sure the last_time field is initialized to INITIAL_JIFFIES. Otherwise, the entropy estimator will overestimate the amount of entropy resulting from the first call to add_timer_randomness(), generally by about 8 bits.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
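A sketch of the fix, with the struct trimmed to the fields the example needs:

```c
#include <linux/jiffies.h>
#include <linux/timex.h>

/* Sketch: start each timer-entropy source's timestamp at
 * INITIAL_JIFFIES, so the first delta is small rather than the
 * ~5 minutes implied by a zero-initialized field. */
struct timer_rand_state {
	cycles_t last_time;
	long last_delta, last_delta2;
};

static struct timer_rand_state input_timer_state = {
	.last_time = INITIAL_JIFFIES,
};
```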
-
Committed by Theodore Ts'o
The rand_initialize() function was being run fairly late in the kernel boot sequence. This was unfortunate, since it zeroed the entropy counters, thus throwing away credit that was accumulated earlier in the boot sequence, and it also meant that initcall functions run before rand_initialize were using a minimally initialized pool. To fix this, change init_std_data() to no longer zap the entropy counter (it wasn't necessary), and make rand_initialize() an early initcall.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
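A sketch of the initcall change; this closely follows the shape of rand_initialize() in the driver:

```c
#include <linux/init.h>

/* Sketch: register pool initialization as an early initcall so it
 * runs before ordinary initcalls that may call get_random_bytes().
 * init_std_data() seeds each pool without zeroing entropy_count. */
static int __init rand_initialize(void)
{
	init_std_data(&input_pool);
	init_std_data(&blocking_pool);
	init_std_data(&nonblocking_pool);
	return 0;
}
early_initcall(rand_initialize);
```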
-
Committed by Theodore Ts'o
Print a notification to the console when the nonblocking pool is initialized. Also printk a warning when a process tries reading from /dev/urandom before it is fully initialized.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
Change add_timer_randomness() so that it directs incoming entropy to the nonblocking pool first if it hasn't been fully initialized yet. This matches the strategy we use in add_interrupt_randomness(), which allows us to push the randomness where we need it most when the system is first booting up, so that get_random_bytes() and /dev/urandom become safe to use as soon as possible.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
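The routing decision itself is small. A sketch of the fragment inside add_timer_randomness(), where 'sample' stands for the timing record it mixes and the pool names follow random.c:

```c
/* Sketch: until the nonblocking pool reports itself initialized,
 * aim the sample at it; afterwards feed the input pool as usual. */
struct entropy_store *r;

r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
mix_pool_bytes(r, &sample, sizeof(sample), NULL);
```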
-
- 11 October 2013, 14 commits
Committed by Theodore Ts'o
Instead of using the random driver's ad-hoc DEBUG_ENT() mechanism, use tracepoints. This allows for much more fine-grained control of which debugging mechanism a developer might need, and unifies the debugging messages with all of the existing tracepoints.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
As the input pool gets filled, start transferring entropy to the output pools until they get filled. This allows us to use the output pools to store more system entropy. Waste not, want not....
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
add_timer_randomness() used to drop into trickle mode when the entropy pool was estimated to be 87.5% full. This was important when add_timer_randomness() was used to sample interrupts. It's not used for this any more --- add_interrupt_randomness() now uses fast_mix() instead. Eliminating trickle mode allows us to fully utilize entropy provided by add_input_randomness() and add_disk_randomness() even when the input pool is above the old trickle threshold of 87.5%.

This helps to answer the criticism in [1], in their hypothetical scenario where our entropy estimator was inaccurate, even though the measurements in [2] seem to indicate that our entropy estimator, given real-life entropy collection, is actually pretty good, albeit on the conservative side (which is as it was designed).

[1] http://eprint.iacr.org/2013/338.pdf
[2] http://eprint.iacr.org/2012/251.pdf

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and Videau in their paper, "The Linux Pseudorandom Number Generator Revisited" (see: http://eprint.iacr.org/2012/251.pdf). They suggested a slight change to improve our mixing functions. I also adjusted the comments to better explain what is going on, and to document why the polynomials were changed.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
By mixing the entropy in chunks of 32-bit words instead of byte by byte, we can speed up the fast_mix function significantly. Since it is called on every single interrupt, on systems with a very heavy interrupt load this can make a noticeable difference. Also fix a compilation warning in add_interrupt_randomness() and avoid xor'ing cycles and jiffies together, just in case we have an architecture which tries to define random_get_entropy() by returning jiffies.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Jörn Engel <joern@logfs.org>
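A sketch of the word-at-a-time idea; the rotation constant and feedback below are placeholders, not the real fast_mix() polynomial:

```c
#include <linux/bitops.h>
#include <linux/types.h>

struct fast_pool {
	__u32 pool[4];
	unsigned long last;
	unsigned short count;
};

/* Sketch: fold four 32-bit input words into the tiny per-CPU pool
 * with rotates and XORs, touching whole words instead of bytes. */
static void fast_mix(struct fast_pool *f, __u32 input[4])
{
	__u32 acc = f->pool[3];
	int i;

	for (i = 0; i < 4; i++) {
		acc = rol32(acc, 7) ^ input[i] ^ f->pool[i];
		f->pool[i] = acc;
	}
	f->count++;
}
```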
-
Committed by Theodore Ts'o
In order to avoid draining the input pool of its entropy at too high a rate, enforce a minimum time interval between reseedings of the urandom pool. This is set to 60 seconds by default.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
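A sketch of the interval check; the last_pulled field name follows random.c of that era, but the helper itself is illustrative:

```c
#include <linux/jiffies.h>

static int random_min_urandom_seed = 60;	/* seconds, sysctl-tunable */

/* Sketch: skip the transfer from the input pool if the output pool
 * was reseeded within the last random_min_urandom_seed seconds. */
static bool reseed_due(struct entropy_store *r)
{
	unsigned long interval = random_min_urandom_seed * HZ;

	if (time_before(jiffies, r->last_pulled + interval))
		return false;		/* too soon since the last reseed */
	r->last_pulled = jiffies;
	return true;
}
```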
-
Committed by Theodore Ts'o
Use smaller types to slightly shrink the size of the entropy store structure.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
The add_device_randomness() function calls mix_pool_bytes() twice each for the input pool and the non-blocking pool, for a total of four times. By using _mix_pool_bytes() and taking the spinlock in add_device_randomness() itself, we can halve the number of times we need to take each pool's spinlock.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
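This is close to the shape of the resulting function: mix_pool_bytes() takes the pool lock internally, while the underscore variant assumes the caller already holds it. Still, treat it as a sketch rather than the verbatim patch:

```c
/* Sketch: take each pool's lock once and use the unlocked helper,
 * instead of calling the self-locking mix_pool_bytes() twice per pool. */
void add_device_randomness(const void *buf, unsigned int size)
{
	unsigned long time = random_get_entropy() ^ jiffies;
	unsigned long flags;

	spin_lock_irqsave(&input_pool.lock, flags);
	_mix_pool_bytes(&input_pool, buf, size, NULL);
	_mix_pool_bytes(&input_pool, &time, sizeof(time), NULL);
	spin_unlock_irqrestore(&input_pool.lock, flags);

	spin_lock_irqsave(&nonblocking_pool.lock, flags);
	_mix_pool_bytes(&nonblocking_pool, buf, size, NULL);
	_mix_pool_bytes(&nonblocking_pool, &time, sizeof(time), NULL);
	spin_unlock_irqrestore(&nonblocking_pool.lock, flags);
}
```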
-
Committed by Theodore Ts'o
Fix a problem where get_random_bytes_arch() was calling the tracepoint get_random_bytes(). So add a new tracepoint for get_random_bytes_arch(), and make get_random_bytes() and get_random_bytes_arch() each call the correct tracepoint. Also, add a new tracepoint for add_device_randomness().
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by H. Peter Anvin
When we write entropy into a non-empty pool, we currently don't account at all for the fact that we will probabilistically overwrite some of the entropy in that pool. This means that unless the pool is fully empty, we are currently *guaranteed* to overestimate the amount of entropy in the pool! Assuming Shannon entropy with zero correlations, we end up with an exponentially decaying value of new entropy added:

  entropy <- entropy + (pool_size - entropy) * (1 - exp(-add_entropy/pool_size))

However, calculations involving fractional exponentials are not practical in the kernel, so apply a piecewise linearization: for add_entropy <= pool_size/2, we have

  (1 - exp(-add_entropy/pool_size)) >= (add_entropy/pool_size)*0.7869...

... so we can approximate the exponential with 3/4*add_entropy/pool_size and still be on the safe side by adding at most pool_size/2 at a time.

In order for the loop not to take arbitrary amounts of time if a bad ioctl is received, terminate if we are within one bit of full. This way the loop is guaranteed to terminate after no more than log2(poolsize) iterations, no matter what the input value is. The vast majority of the time the loop will be executed exactly once.

The piecewise linearization is very conservative, approaching 3/4 of the usable input value for small inputs; however, our entropy estimation is pretty weak at best, especially for small values; we have no handle on correlation; and the Shannon entropy measure (Rényi entropy of order 1) is not the correct one to use in the first place, but rather the correct entropy measure is the min-entropy, the Rényi entropy of infinite order. As such, this conservatism seems more than justified.

This does introduce fractional bit values. I have left it at 3 bits of fraction, so that with a pool of 2^12 bits the multiply in credit_entropy_bits() can still fit into an int, as 2*(3+12) < 31. It is definitely possible to allow for more fractional accounting, but that multiply would then have to be turned into a 32*32 -> 64 multiply.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: DJ Johnston <dj.johnston@intel.com>
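The linearized credit works out to a short fixed-point loop. This sketch follows the structure described above, with the surrounding cmpxchg retry omitted; poolfracbits, poolbitshift, and ENTROPY_SHIFT are the driver's fixed-point pool parameters:

```c
/* Sketch: credit nbits of fresh input, scaled by the pool's free
 * space. Each pass adds 3/4 * (free fraction) of at most half a
 * pool's worth of input, then loops on the remainder; it stops once
 * within one bit of full, bounding the iteration count. */
const int pool_size = r->poolinfo->poolfracbits;
int nfrac = nbits << ENTROPY_SHIFT;	/* fixed point: eighth-bits */
int pnfrac = nfrac;
/* poolbitshift + ENTROPY_SHIFT rescale the product; the extra +2
 * implements the /4 of the 3/4 factor */
const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2;

do {
	unsigned int anfrac = min(pnfrac, pool_size / 2);
	unsigned int add =
		((pool_size - entropy_count) * anfrac * 3) >> s;

	entropy_count += add;
	pnfrac -= anfrac;
} while (unlikely(entropy_count < pool_size - 2 && pnfrac));
```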
-
Committed by H. Peter Anvin
Allow fractional bits of entropy to be tracked by scaling the entropy counter (fixed point). This will be used in a subsequent patch that accounts for entropy lost due to overwrites.

[ Modified by tytso to fix up a few missing places where the entropy_count wasn't properly converted from fractional bits to bits. ]

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
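The fixed-point convention this introduces, with ENTROPY_SHIFT and ENTROPY_BITS() matching the macros added to random.c (the call at the end is only a usage illustration):

```c
/* entropy_count now stores eighth-bits: 3 fraction bits per bit. */
#define ENTROPY_SHIFT 3
#define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT)

/* e.g. crediting two whole bits of entropy: */
credit_entropy_bits(r, 2);	/* internally scaled by << ENTROPY_SHIFT */
```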
-
Committed by H. Peter Anvin
Use a macro to statically compute poolbitshift (which will be used in a subsequent patch), poolbytes, and poolbits. On virtually all architectures the cost of a memory load with an offset is the same as that of a plain memory load. It is still possible for this to generate worse code, since the C compiler doesn't know the fixed relationship between these fields, but that is somewhat unlikely.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
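A sketch of the static-geometry idea: one macro expands the word count into every derived field, so the fields can never disagree. The spelling below is illustrative (the macro in random.c is named differently):

```c
#include <linux/log2.h>

/* Sketch: derive all pool geometry from the word count at compile
 * time, for use in designated initializers of the poolinfo table. */
#define POOL_GEOMETRY(words)				\
	.poolbitshift	= ilog2(words) + 5,	/* log2(words * 32) */ \
	.poolwords	= (words),			\
	.poolbytes	= (words) * 4,			\
	.poolbits	= (words) * 32
```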
-
Committed by Theodore Ts'o
Previously, if the CPU chip had a built-in random number generator (i.e., RDRAND on newer x86 chips), we mixed it in at the very end of extract_buf() using an XOR operation. We now mix it in right after we calculate a hash across the entire pool. This has the advantage that any contribution of entropy from the CPU's HWRNG will get mixed back into the pool. In addition, it means that if the HWRNG has any defects (either accidentally or maliciously introduced), these will be mitigated via the non-linear transform of the SHA-1 hash function before we hand out generated output.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Theodore Ts'o
Allow architectures which have a disabled get_cycles() function to provide a random_get_entropy() function, which provides a fine-grained, rapidly changing counter that can be used by the /dev/random driver. For example, an architecture might have a rapidly changing register used to control random TLB cache eviction, or DRAM refresh, that doesn't meet the requirements of get_cycles() but which is good enough for the needs of the random driver.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
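The generic fallback lives in <linux/timex.h>; an architecture that wants something else simply defines random_get_entropy() in its own asm/timex.h before this point:

```c
#include <asm/timex.h>

/* Default: use the cycle counter. Architectures with a disabled
 * get_cycles() can #define random_get_entropy() to any fast-changing
 * counter that is cheap to read from interrupt context. */
#ifndef random_get_entropy
#define random_get_entropy()	get_cycles()
#endif
```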
-