Commit dc9d4399 authored by Jason A. Donenfeld, committed by Zheng Zengkai

random: use hash function for crng_slow_load()

stable inclusion
from stable-v5.10.119
commit 655a69cb41e008943ea2f0990141863159126b82
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5L6BB

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=655a69cb41e008943ea2f0990141863159126b82

--------------------------------

commit 66e4c2b9 upstream.

Since we have a hash function that's really fast, and the goal of
crng_slow_load() is reportedly to "touch all of the crng's state", we
can just hash the old state together with the new state and call it a
day. This way we don't need to reason about another LFSR or worry about
various attacks there. This code is only ever used at early boot and
then never again.
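The rekeying pattern the patch adopts can be sketched outside the kernel. The snippet below is a minimal illustration, not the kernel code: it uses Python's stdlib `hashlib.blake2s` (the same BLAKE2s primitive the kernel calls via `blake2s_init`/`blake2s_update`/`blake2s_final`), models `base_crng.key` as a plain 32-byte value, and omits the locking and `crng_init` checks.

```python
import hashlib

KEY_LEN = 32  # length of base_crng.key (a ChaCha key)

def crng_slow_load(key: bytes, buf: bytes) -> bytes:
    """Hash the old key together with the incoming bytes and use the
    digest as the new key, so every byte of the state is touched
    regardless of how short buf is."""
    h = hashlib.blake2s(digest_size=KEY_LEN)
    h.update(key)  # old state
    h.update(buf)  # new input, e.g. a fixed DMI table
    return h.digest()

key = bytes(KEY_LEN)  # stand-in initial key, all zeros
key = crng_slow_load(key, b"dmi-table-bytes")
assert len(key) == KEY_LEN
```

Because the digest depends on both the previous key and the input, repeated calls keep folding new material into the full state without any LFSR bookkeeping.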

Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Parent 7881c19a
@@ -477,42 +477,30 @@ static size_t crng_fast_load(const u8 *cp, size_t len)
  * all), and (2) it doesn't have the performance constraints of
  * crng_fast_load().
  *
- * So we do something more comprehensive which is guaranteed to touch
- * all of the primary_crng's state, and which uses a LFSR with a
- * period of 255 as part of the mixing algorithm. Finally, we do
- * *not* advance crng_init_cnt since buffer we may get may be something
- * like a fixed DMI table (for example), which might very well be
- * unique to the machine, but is otherwise unvarying.
+ * So, we simply hash the contents in with the current key. Finally,
+ * we do *not* advance crng_init_cnt since buffer we may get may be
+ * something like a fixed DMI table (for example), which might very
+ * well be unique to the machine, but is otherwise unvarying.
  */
-static int crng_slow_load(const u8 *cp, size_t len)
+static void crng_slow_load(const u8 *cp, size_t len)
 {
 	unsigned long flags;
-	static u8 lfsr = 1;
-	u8 tmp;
-	unsigned int i, max = sizeof(base_crng.key);
-	const u8 *src_buf = cp;
-	u8 *dest_buf = base_crng.key;
+	struct blake2s_state hash;
+
+	blake2s_init(&hash, sizeof(base_crng.key));
 
 	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return 0;
+		return;
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
-		return 0;
+		return;
 	}
-	if (len > max)
-		max = len;
 
-	for (i = 0; i < max; i++) {
-		tmp = lfsr;
-		lfsr >>= 1;
-		if (tmp & 1)
-			lfsr ^= 0xE1;
-		tmp = dest_buf[i % sizeof(base_crng.key)];
-		dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr;
-		lfsr += (tmp << 3) | (tmp >> 5);
-	}
+	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
+	blake2s_update(&hash, cp, len);
+	blake2s_final(&hash, base_crng.key);
 
 	spin_unlock_irqrestore(&base_crng.lock, flags);
-	return 1;
 }
 
 static void crng_reseed(void)