author	Linus Torvalds <torvalds@linux-foundation.org>	2022-03-21 14:55:32 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2022-03-21 14:55:32 -0700
commit	5628b8de1228436d47491c662dc521bc138a3d43 (patch)
tree	50371169cec13bff5ca3f663baf1c66968eb1889 /lib
parent	f400bea2d44beec76f7e7f45e5372ef790336a4d (diff)
parent	3e504d2026eb6c8762cd6040ae57db166516824a (diff)
Merge tag 'random-5.18-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:
 "There have been a few important changes to the RNG's crypto, but the intent for 5.18 has been to shore up the existing design as much as possible with modern cryptographic functions and proven constructions, rather than actually changing up anything fundamental to the RNG's design. So it's still the same old RNG at its core as before: it still counts entropy bits, and collects from the various sources with the same heuristics as before, and so forth. However, the cryptographic algorithms that transform that entropic data into safe random numbers have been modernized.

 Just as important, if not more, is that the code has been cleaned up and re-documented. As one of the first drivers in Linux, going back to 1.3.30, its general style and organization was showing its age and becoming both a maintenance burden and an auditability impediment. Hopefully this provides a more solid foundation to build on for the future. I encourage you to open up the file in full, and maybe you'll remark, "oh, that's what it's doing," and enjoy reading it. That, at least, is the eventual goal, which this pull begins working toward.

 Here's a summary of the various patches in this pull:

  - /dev/urandom and /dev/random now do the same thing, per the patch we discussed on the list. I think this is worth trying out. If it does appear problematic, I've made sure to keep it standalone and revertible without any conflicts.

  - Fixes and cleanups for numerous integer type problems, locking issues, and general code quality concerns.

  - The input pool's LFSR has been replaced with a cryptographically secure hash function, which has security and performance benefits alike, and consequently allows us to count entropy bits linearly.

  - The pre-init injection now uses a real hash function too, instead of an LFSR or vanilla xor.

  - The interrupt handler's fast_mix() function now uses one round of SipHash, rather than the fake crypto that was there before.

  - All additions of RDRAND and RDSEED now go through the input pool's hash function, in part to mitigate ridiculous hypothetical CPU backdoors, but more so to have a consistent interface for ingesting entropy that's easy to analyze, making everything happen one way, instead of a potpourri of different ways.

  - The crng now works on per-cpu data, while also being in accordance with the actual "fast key erasure RNG" design (a rough sketch of that construction follows the commit list below). This allows us to fix several boot-time race complications associated with the prior dynamically allocated model, eliminates much locking, and makes our backtrack protection more robust.

  - Batched entropy now erases doled out values so that it's backtrack resistant.

  - Working closely with Sebastian, the interrupt handler no longer needs to take any locks at all, as we punt the synchronized/expensive operations to a workqueue. This is especially nice for PREEMPT_RT, where taking spinlocks in irq context is problematic. It also makes the handler faster for the rest of us.

  - Also working with Sebastian, we now do the right thing on CPU hotplug, so that we don't use stale entropy or fail to accumulate new entropy when CPUs come back online.

  - We handle virtual machines that fork / clone / snapshot, using the "vmgenid" ACPI specification for retrieving a unique new RNG seed, which we can use to also make WireGuard (and in the future, other things) safe across VM forks.

  - Around boot time, we now try to reseed more often if enough entropy is available, before settling on the usual 5 minute schedule.

  - Last, but certainly not least, the documentation in the file has been updated considerably"

* tag 'random-5.18-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (60 commits)
  random: check for signal and try earlier when generating entropy
  random: reseed more often immediately after booting
  random: make consistent usage of crng_ready()
  random: use SipHash as interrupt entropy accumulator
  wireguard: device: clear keys on VM fork
  random: provide notifier for VM fork
  random: replace custom notifier chain with standard one
  random: do not export add_vmfork_randomness() unless needed
  virt: vmgenid: notify RNG of VM fork and supply generation ID
  ACPI: allow longer device IDs
  random: add mechanism for VM forks to reinitialize crng
  random: don't let 644 read-only sysctls be written to
  random: give sysctl_random_min_urandom_seed a more sensible value
  random: block in /dev/urandom
  random: do crng pre-init loading in worker rather than irq
  random: unify cycles_t and jiffies usage and types
  random: cleanup UUID handling
  random: only wake up writers after zap if threshold was passed
  random: round-robin registers as ulong, not u32
  random: clear fast pool, crng, and batches in cpuhp bring up
  ...
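For readers unfamiliar with the "fast key erasure RNG" construction named above, here is a minimal standalone sketch of the idea, assuming a generic 64-byte stream-cipher block function. Everything here is illustrative: chacha20_block() is a stand-in declaration rather than the kernel's interface, and struct fke_rng / fke_draw() are hypothetical names. Each draw generates one block, overwrites the key with the block's first half, and only then hands out the second half, so a later memory compromise cannot reconstruct earlier outputs.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for any stream cipher that expands (key, counter) into 64 bytes. */
void chacha20_block(const uint8_t key[32], uint64_t counter, uint8_t out[64]);

struct fke_rng {
	uint8_t key[32];    /* overwritten on every draw, hence "key erasure" */
	uint64_t counter;
};

static void fke_draw(struct fke_rng *rng, uint8_t *out, size_t len)
{
	uint8_t block[64];

	while (len) {
		size_t n = len < 32 ? len : 32;

		chacha20_block(rng->key, rng->counter++, block);
		memcpy(rng->key, block, 32);  /* first half becomes the new key */
		memcpy(out, block + 32, n);   /* second half is the caller's output */
		out += n;
		len -= n;
	}
	memset(block, 0, sizeof(block));      /* scrub the temporary block; the
					       * kernel uses memzero_explicit() */
}

The actual driver does this per CPU with ChaCha20 and periodically reseeds the per-CPU key from the base crng, but the erase-before-output ordering shown here is the property the pull message refers to as backtrack protection.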
Diffstat (limited to 'lib')
-rw-r--r--	lib/random32.c	14
-rw-r--r--	lib/vsprintf.c	10
2 files changed, 13 insertions, 11 deletions
diff --git a/lib/random32.c b/lib/random32.c
index a57a0e18819d..976632003ec6 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -41,7 +41,6 @@
#include <linux/bitops.h>
#include <linux/slab.h>
#include <asm/unaligned.h>
-#include <trace/events/random.h>
/**
* prandom_u32_state - seeded pseudo-random number generator.
@@ -387,7 +386,6 @@ u32 prandom_u32(void)
struct siprand_state *state = get_cpu_ptr(&net_rand_state);
u32 res = siprand_u32(state);
- trace_prandom_u32(res);
put_cpu_ptr(&net_rand_state);
return res;
}
@@ -553,9 +551,11 @@ static void prandom_reseed(struct timer_list *unused)
* To avoid worrying about whether it's safe to delay that interrupt
* long enough to seed all CPUs, just schedule an immediate timer event.
*/
-static void prandom_timer_start(struct random_ready_callback *unused)
+static int prandom_timer_start(struct notifier_block *nb,
+ unsigned long action, void *data)
{
mod_timer(&seed_timer, jiffies);
+ return 0;
}
#ifdef CONFIG_RANDOM32_SELFTEST
@@ -619,13 +619,13 @@ core_initcall(prandom32_state_selftest);
*/
static int __init prandom_init_late(void)
{
- static struct random_ready_callback random_ready = {
- .func = prandom_timer_start
+ static struct notifier_block random_ready = {
+ .notifier_call = prandom_timer_start
};
- int ret = add_random_ready_callback(&random_ready);
+ int ret = register_random_ready_notifier(&random_ready);
if (ret == -EALREADY) {
- prandom_timer_start(&random_ready);
+ prandom_timer_start(&random_ready, 0, NULL);
ret = 0;
}
return ret;
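Both files in this diffstat make the same mechanical conversion: the old struct random_ready_callback / add_random_ready_callback() API is replaced by a standard notifier chain. Distilled into one hypothetical consumer (my_random_ready(), my_random_ready_nb and my_init() are illustrative names; register_random_ready_notifier() and the -EALREADY fallback come straight from the hunks above and below), the pattern looks roughly like this:

#include <linux/notifier.h>
#include <linux/random.h>
#include <linux/init.h>
#include <linux/errno.h>

static int my_random_ready(struct notifier_block *nb,
			   unsigned long action, void *data)
{
	/* The RNG is now fully seeded; kick off whatever was waiting. */
	return 0;
}

static struct notifier_block my_random_ready_nb = {
	.notifier_call = my_random_ready,
};

static int __init my_init(void)
{
	int ret = register_random_ready_notifier(&my_random_ready_nb);

	if (ret == -EALREADY) {
		/* RNG was ready before we registered: run the handler now. */
		my_random_ready(&my_random_ready_nb, 0, NULL);
		ret = 0;
	}
	return ret;
}
late_initcall(my_init);

The vsprintf.c hunk below follows the same shape, but defers its real work to a workqueue because, as its comment notes, the notifier may fire from interrupt context.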
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 3b8129dd374c..36574a806a81 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -757,14 +757,16 @@ static void enable_ptr_key_workfn(struct work_struct *work)
static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
-static void fill_random_ptr_key(struct random_ready_callback *unused)
+static int fill_random_ptr_key(struct notifier_block *nb,
+ unsigned long action, void *data)
{
/* This may be in an interrupt handler. */
queue_work(system_unbound_wq, &enable_ptr_key_work);
+ return 0;
}
-static struct random_ready_callback random_ready = {
- .func = fill_random_ptr_key
+static struct notifier_block random_ready = {
+ .notifier_call = fill_random_ptr_key
};
static int __init initialize_ptr_random(void)
@@ -778,7 +780,7 @@ static int __init initialize_ptr_random(void)
return 0;
}
- ret = add_random_ready_callback(&random_ready);
+ ret = register_random_ready_notifier(&random_ready);
if (!ret) {
return 0;
} else if (ret == -EALREADY) {