author     Linus Torvalds <torvalds@linux-foundation.org>  2022-05-24 11:58:10 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-05-24 11:58:10 -0700
commit     ac2ab99072cce553c78f326ea22d72856f570d88 (patch)
tree       6c3e9edca79ae971f89c598105212434e3946fb7 /lib
parent     eadb2f47a3ced5c64b23b90fd2a3463f63726066 (diff)
parent     1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb (diff)
download   linux-ac2ab99072cce553c78f326ea22d72856f570d88.tar.bz2
Merge tag 'random-5.19-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:
 "These updates continue to refine the work begun in 5.17 and 5.18 of modernizing the RNG's crypto and streamlining and documenting its code.

  New for 5.19, the updates aim to improve entropy collection methods and make some initial decisions regarding the "premature next" problem and our threat model. The cloc utility now reports that random.c is 931 lines of code and 466 lines of comments, not that basic metrics like that mean all that much, but at the very least it tells you that this is very much a manageable driver now.

  Here's a summary of the various updates:

   - The random_get_entropy() function now always returns something at least minimally useful. This is the primary entropy source in most collectors, which in the best case expands to something like RDTSC, but prior to this change, in the worst case it would just return 0, contributing nothing. For 5.19, additional architectures are wired up, and architectures that are entirely missing a cycle counter now have a generic fallback path, which uses the highest resolution clock available from the timekeeping subsystem. Some of those clocks can actually be quite good, despite the CPU not having a cycle counter of its own, and going off-core for a stamp is generally thought to increase jitter, something positive from the perspective of entropy gathering. Done very early on in the development cycle, this has been sitting in next getting some testing for a while now and has relevant acks from the archs, so it should be pretty well tested and fine, but is nonetheless the thing I'll be keeping my eye on most closely.

   - Of particular note with the random_get_entropy() improvements is MIPS, which, on CPUs that lack the c0 count register, will now combine the high-speed but short-cycle c0 random register with the lower-speed but long-cycle generic fallback path.

   - With random_get_entropy() now always returning something useful, the interrupt handler now collects entropy in a consistent construction.

   - Rather than comparing two samples of random_get_entropy() for the jitter dance, the algorithm now tests many samples, and uses the number of differing ones to determine whether or not jitter entropy is usable and how laborious it must be. The problem with comparing only two samples was that if the cycle counter was extremely slow, but just so happened to be on the cusp of a change, the slowness wouldn't be detected. Taking many samples fixes that to some degree. (A hedged sketch of this sampling idea appears after the commit list below.) This, combined with the other improvements to random_get_entropy(), should make future unification of /dev/random and /dev/urandom more feasible. At the very least, were we to attempt it again today (we're not), it wouldn't break any of Guenter's test rigs that broke when we tried it with 5.18. So, not today, but perhaps down the road, that's something we can revisit.

   - We attempt to reseed the RNG immediately upon waking up from system suspend or hibernation, making use of the various timestamps available about suspend time and such, as well as the usual inputs such as RDRAND when available.

   - Batched randomness now falls back to ordinary randomness before the RNG is initialized. This provides more consistent guarantees about the types of random numbers being returned by the various accessors.

   - The "pre-init injection" code is now gone for good. I suspect you in particular will be happy to read that, as I recall you expressing your distaste for it a few months ago. Instead, to avoid a "premature first" issue, while still allowing for a maximal amount of entropy availability during system boot, the first 128 bits of estimated entropy are used immediately as they arrive, with the next 128 bits being buffered. And, as before, after the RNG has been fully initialized, it winds up reseeding anyway a few seconds later in most cases. This resulted in a pretty big simplification of the initialization code and let us remove various ad-hoc mechanisms like the ugly crng_pre_init_inject().

   - The RNG no longer pretends to handle the "premature next" security model, something that various academics and other RNG designs have tried to care about in the past. After an interesting mailing list thread, these issues are thought to be a) mainly academic and not practical at all, and b) actively harming the real security of the RNG by delaying new entropy additions after a potential compromise, making a potentially bad situation even worse. As well, in the first place, our RNG never even properly handled the premature next issue, so removing an incomplete solution to a fake problem was particularly nice. This allowed for numerous other simplifications in the code, which is a lot cleaner as a consequence. If you didn't see it before, https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ may be a thread worth skimming through.

   - While the interrupt handler received a separate code path years ago that avoids locks by using per-cpu data structures and a faster mixing algorithm, in order to reduce interrupt latency, input and disk events that are triggered in hardirq handlers were still hitting locks and more expensive algorithms. Those are now redirected to use the faster per-cpu data structures.

   - Rather than having the fake-crypto almost-siphash-based random32 implementation be used right and left, and in many places where cryptographically secure randomness is desirable, the batched entropy code is now fast enough to replace that.

   - As usual, numerous code quality and documentation cleanups. For example, the initialization state machine now uses enum symbolic constants instead of just hard coding numbers everywhere.

   - Since the RNG initializes once, and then is always initialized thereafter, a pretty heavy amount of code used during that initialization is never used again. It is now completely cordoned off using static branches and it winds up in the .text.unlikely section so that it doesn't reduce cache compactness after the RNG is ready. (A sketch of this static-branch pattern appears after the commit list below.)

   - A variety of functions meant for waiting on the RNG to be initialized were only used by vsprintf, and in not a particularly optimal way. Replacing that usage with a more ordinary setup made it possible to remove those functions.

   - A cleanup of how we warn userspace about the use of uninitialized /dev/urandom and uninitialized get_random_bytes() usage. Interestingly, with the change you merged for 5.18 that attempts to use jitter (but does not block if it can't), the majority of users should never see those warnings for /dev/urandom at all now, and the one for in-kernel usage is mainly a debug thing.

   - The file_operations struct for /dev/[u]random now implements .read_iter and .write_iter instead of .read and .write, allowing it to also implement .splice_read and .splice_write, which makes splice(2) work again after it was broken here (and in many other places in the tree) during the set_fs() removal. This was a bit of a last minute arrival from Jens that hasn't had as much time to bake, so I'll be keeping my eye on this as well, but it seems fairly ordinary. (A sketch of the general read_iter shape appears after the commit list below.) Unfortunately, read_iter() is around 3% slower than read() in my tests, which I'm not thrilled about. But Jens and Al, spurred by this observation, seem to be making progress in removing the bottlenecks on the iter paths in the VFS layer in general, which should remove the performance gap for all drivers.

   - Assorted other bug fixes, cleanups, and optimizations.

   - A small SipHash cleanup"

* tag 'random-5.19-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (49 commits)
  random: check for signals after page of pool writes
  random: wire up fops->splice_{read,write}_iter()
  random: convert to using fops->write_iter()
  random: convert to using fops->read_iter()
  random: unify batched entropy implementations
  random: move randomize_page() into mm where it belongs
  random: remove mostly unused async readiness notifier
  random: remove get_random_bytes_arch() and add rng_has_arch_random()
  random: move initialization functions out of hot pages
  random: make consistent use of buf and len
  random: use proper return types on get_random_{int,long}_wait()
  random: remove extern from functions in header
  random: use static branch for crng_ready()
  random: credit architectural init the exact amount
  random: handle latent entropy and command line from random_init()
  random: use proper jiffies comparison macro
  random: remove ratelimiting for in-kernel unseeded randomness
  random: move initialization out of reseeding hot path
  random: avoid initializing twice in credit race
  random: use symbolic constants for crng_init states
  ...
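To ground the jitter bullet above: a minimal sketch of the many-sample idea, with an illustrative sample count, threshold, and function name that are assumptions, not the kernel's actual code (the real logic lives around try_to_generate_entropy() in drivers/char/random.c). A very slow cycle counter that happens to sit on the cusp of a change can fool a two-sample comparison, but a run of many samples exposes it:

/*
 * Hedged sketch, not the kernel's code: count how many adjacent reads
 * of the cycle counter differ across a run of samples.  In the kernel,
 * random_get_entropy() actually returns cycles_t; unsigned long is
 * used here for simplicity.
 */
#define JITTER_SAMPLES 32	/* illustrative sample count */

static bool jitter_counter_looks_usable(void)
{
	unsigned long prev = random_get_entropy();
	unsigned int i, differing = 0;

	for (i = 0; i < JITTER_SAMPLES; i++) {
		unsigned long cur = random_get_entropy();

		if (cur != prev)
			differing++;
		prev = cur;
	}

	/*
	 * Illustrative threshold: many differing adjacent samples means
	 * the counter ticks fast enough for jitter entropy; few means
	 * the jitter loop must work harder, or give up entirely.
	 */
	return differing >= JITTER_SAMPLES / 2;
}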
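Similarly, the static-branch cordoning of initialization-only code mentioned above follows the usual jump-label pattern. The sketch below assumes the key and work-function names; it mirrors the shape of the "random: use static branch for crng_ready()" commit but is not a verbatim excerpt. Marking the one-shot enable path __cold pushes it into .text.unlikely, keeping the hot text compact once the RNG is ready:

#include <linux/jump_label.h>
#include <linux/workqueue.h>

/* Sketch: a static key that flips exactly once, when the RNG is ready. */
static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
#define crng_ready() static_branch_likely(&crng_is_ready)

/*
 * Runs from a workqueue (preemptible context, as static_branch_enable()
 * requires) and only once per boot, so it is marked __cold to land in
 * .text.unlikely.
 */
static void __cold crng_set_ready(struct work_struct *work)
{
	static_branch_enable(&crng_is_ready);
}

After the key flips, every crng_ready() test compiles down to a patched unconditional branch rather than a load and compare.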
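Finally, the .read_iter conversion mentioned in the last bullet has roughly the following shape. This is a simplified assumption of the structure: the chunk size, error handling, and the stand-in get_random_bytes() call are illustrative, not the driver's real internals:

/* Hedged sketch of a read_iter-style read method for a char device. */
static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
	u8 block[64];
	ssize_t ret = 0;

	while (iov_iter_count(iter)) {
		size_t n = min_t(size_t, iov_iter_count(iter), sizeof(block));

		get_random_bytes(block, n);	/* fill one chunk */
		n = copy_to_iter(block, n, iter);
		if (!n)
			return ret ? ret : -EFAULT;
		ret += n;
	}
	return ret;
}

Because an iov_iter can be backed by user buffers or by pipe pages, implementing .read_iter/.write_iter is what allows the driver to wire .splice_read and .splice_write to the generic iter-based helpers, which is how splice(2) support returns here.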
Diffstat (limited to 'lib')
-rw-r--r--  lib/Kconfig.debug    3
-rw-r--r--  lib/random32.c     347
-rw-r--r--  lib/siphash.c       32
-rw-r--r--  lib/vsprintf.c      67
4 files changed, 40 insertions, 409 deletions
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 55b9acb2f524..a30d5279efda 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1617,8 +1617,7 @@ config WARN_ALL_UNSEEDED_RANDOM
so architecture maintainers really need to do what they can
to get the CRNG seeded sooner after the system is booted.
However, since users cannot do anything actionable to
- address this, by default the kernel will issue only a single
- warning for the first use of unseeded randomness.
+ address this, by default this option is disabled.
Say Y here if you want to receive warnings for all uses of
unseeded randomness. This will be of use primarily for
diff --git a/lib/random32.c b/lib/random32.c
index 976632003ec6..d5d9029362cb 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -245,25 +245,13 @@ static struct prandom_test2 {
{ 407983964U, 921U, 728767059U },
};
-static u32 __extract_hwseed(void)
-{
- unsigned int val = 0;
-
- (void)(arch_get_random_seed_int(&val) ||
- arch_get_random_int(&val));
-
- return val;
-}
-
-static void prandom_seed_early(struct rnd_state *state, u32 seed,
- bool mix_with_hwseed)
+static void prandom_state_selftest_seed(struct rnd_state *state, u32 seed)
{
#define LCG(x) ((x) * 69069U) /* super-duper LCG */
-#define HWSEED() (mix_with_hwseed ? __extract_hwseed() : 0)
- state->s1 = __seed(HWSEED() ^ LCG(seed), 2U);
- state->s2 = __seed(HWSEED() ^ LCG(state->s1), 8U);
- state->s3 = __seed(HWSEED() ^ LCG(state->s2), 16U);
- state->s4 = __seed(HWSEED() ^ LCG(state->s3), 128U);
+ state->s1 = __seed(LCG(seed), 2U);
+ state->s2 = __seed(LCG(state->s1), 8U);
+ state->s3 = __seed(LCG(state->s2), 16U);
+ state->s4 = __seed(LCG(state->s3), 128U);
}
static int __init prandom_state_selftest(void)
@@ -274,7 +262,7 @@ static int __init prandom_state_selftest(void)
for (i = 0; i < ARRAY_SIZE(test1); i++) {
struct rnd_state state;
- prandom_seed_early(&state, test1[i].seed, false);
+ prandom_state_selftest_seed(&state, test1[i].seed);
prandom_warmup(&state);
if (test1[i].result != prandom_u32_state(&state))
@@ -289,7 +277,7 @@ static int __init prandom_state_selftest(void)
for (i = 0; i < ARRAY_SIZE(test2); i++) {
struct rnd_state state;
- prandom_seed_early(&state, test2[i].seed, false);
+ prandom_state_selftest_seed(&state, test2[i].seed);
prandom_warmup(&state);
for (j = 0; j < test2[i].iteration - 1; j++)
@@ -310,324 +298,3 @@ static int __init prandom_state_selftest(void)
}
core_initcall(prandom_state_selftest);
#endif
-
-/*
- * The prandom_u32() implementation is now completely separate from the
- * prandom_state() functions, which are retained (for now) for compatibility.
- *
- * Because of (ab)use in the networking code for choosing random TCP/UDP port
- * numbers, which open DoS possibilities if guessable, we want something
- * stronger than a standard PRNG. But the performance requirements of
- * the network code do not allow robust crypto for this application.
- *
- * So this is a homebrew Junior Spaceman implementation, based on the
- * lowest-latency trustworthy crypto primitive available, SipHash.
- * (The authors of SipHash have not been consulted about this abuse of
- * their work.)
- *
- * Standard SipHash-2-4 uses 2n+4 rounds to hash n words of input to
- * one word of output. This abbreviated version uses 2 rounds per word
- * of output.
- */
-
-struct siprand_state {
- unsigned long v0;
- unsigned long v1;
- unsigned long v2;
- unsigned long v3;
-};
-
-static DEFINE_PER_CPU(struct siprand_state, net_rand_state) __latent_entropy;
-DEFINE_PER_CPU(unsigned long, net_rand_noise);
-EXPORT_PER_CPU_SYMBOL(net_rand_noise);
-
-/*
- * This is the core CPRNG function. As "pseudorandom", this is not used
- * for truly valuable things, just intended to be a PITA to guess.
- * For maximum speed, we do just two SipHash rounds per word. This is
- * the same rate as 4 rounds per 64 bits that SipHash normally uses,
- * so hopefully it's reasonably secure.
- *
- * There are two changes from the official SipHash finalization:
- * - We omit some constants XORed with v2 in the SipHash spec as irrelevant;
- * they are there only to make the output rounds distinct from the input
- * rounds, and this application has no input rounds.
- * - Rather than returning v0^v1^v2^v3, return v1+v3.
- * If you look at the SipHash round, the last operation on v3 is
- * "v3 ^= v0", so "v0 ^ v3" just undoes that, a waste of time.
- * Likewise "v1 ^= v2". (The rotate of v2 makes a difference, but
- * it still cancels out half of the bits in v2 for no benefit.)
- * Second, since the last combining operation was xor, continue the
- * pattern of alternating xor/add for a tiny bit of extra non-linearity.
- */
-static inline u32 siprand_u32(struct siprand_state *s)
-{
- unsigned long v0 = s->v0, v1 = s->v1, v2 = s->v2, v3 = s->v3;
- unsigned long n = raw_cpu_read(net_rand_noise);
-
- v3 ^= n;
- PRND_SIPROUND(v0, v1, v2, v3);
- PRND_SIPROUND(v0, v1, v2, v3);
- v0 ^= n;
- s->v0 = v0; s->v1 = v1; s->v2 = v2; s->v3 = v3;
- return v1 + v3;
-}
-
-
-/**
- * prandom_u32 - pseudo random number generator
- *
- * A 32 bit pseudo-random number is generated using a fast
- * algorithm suitable for simulation. This algorithm is NOT
- * considered safe for cryptographic use.
- */
-u32 prandom_u32(void)
-{
- struct siprand_state *state = get_cpu_ptr(&net_rand_state);
- u32 res = siprand_u32(state);
-
- put_cpu_ptr(&net_rand_state);
- return res;
-}
-EXPORT_SYMBOL(prandom_u32);
-
-/**
- * prandom_bytes - get the requested number of pseudo-random bytes
- * @buf: where to copy the pseudo-random bytes to
- * @bytes: the requested number of bytes
- */
-void prandom_bytes(void *buf, size_t bytes)
-{
- struct siprand_state *state = get_cpu_ptr(&net_rand_state);
- u8 *ptr = buf;
-
- while (bytes >= sizeof(u32)) {
- put_unaligned(siprand_u32(state), (u32 *)ptr);
- ptr += sizeof(u32);
- bytes -= sizeof(u32);
- }
-
- if (bytes > 0) {
- u32 rem = siprand_u32(state);
-
- do {
- *ptr++ = (u8)rem;
- rem >>= BITS_PER_BYTE;
- } while (--bytes > 0);
- }
- put_cpu_ptr(&net_rand_state);
-}
-EXPORT_SYMBOL(prandom_bytes);
-
-/**
- * prandom_seed - add entropy to pseudo random number generator
- * @entropy: entropy value
- *
- * Add some additional seed material to the prandom pool.
- * The "entropy" is actually our IP address (the only caller is
- * the network code), not for unpredictability, but to ensure that
- * different machines are initialized differently.
- */
-void prandom_seed(u32 entropy)
-{
- int i;
-
- add_device_randomness(&entropy, sizeof(entropy));
-
- for_each_possible_cpu(i) {
- struct siprand_state *state = per_cpu_ptr(&net_rand_state, i);
- unsigned long v0 = state->v0, v1 = state->v1;
- unsigned long v2 = state->v2, v3 = state->v3;
-
- do {
- v3 ^= entropy;
- PRND_SIPROUND(v0, v1, v2, v3);
- PRND_SIPROUND(v0, v1, v2, v3);
- v0 ^= entropy;
- } while (unlikely(!v0 || !v1 || !v2 || !v3));
-
- WRITE_ONCE(state->v0, v0);
- WRITE_ONCE(state->v1, v1);
- WRITE_ONCE(state->v2, v2);
- WRITE_ONCE(state->v3, v3);
- }
-}
-EXPORT_SYMBOL(prandom_seed);
-
-/*
- * Generate some initially weak seeding values to allow
- * the prandom_u32() engine to be started.
- */
-static int __init prandom_init_early(void)
-{
- int i;
- unsigned long v0, v1, v2, v3;
-
- if (!arch_get_random_long(&v0))
- v0 = jiffies;
- if (!arch_get_random_long(&v1))
- v1 = random_get_entropy();
- v2 = v0 ^ PRND_K0;
- v3 = v1 ^ PRND_K1;
-
- for_each_possible_cpu(i) {
- struct siprand_state *state;
-
- v3 ^= i;
- PRND_SIPROUND(v0, v1, v2, v3);
- PRND_SIPROUND(v0, v1, v2, v3);
- v0 ^= i;
-
- state = per_cpu_ptr(&net_rand_state, i);
- state->v0 = v0; state->v1 = v1;
- state->v2 = v2; state->v3 = v3;
- }
-
- return 0;
-}
-core_initcall(prandom_init_early);
-
-
-/* Stronger reseeding when available, and periodically thereafter. */
-static void prandom_reseed(struct timer_list *unused);
-
-static DEFINE_TIMER(seed_timer, prandom_reseed);
-
-static void prandom_reseed(struct timer_list *unused)
-{
- unsigned long expires;
- int i;
-
- /*
- * Reinitialize each CPU's PRNG with 128 bits of key.
- * No locking on the CPUs, but then somewhat random results are,
- * well, expected.
- */
- for_each_possible_cpu(i) {
- struct siprand_state *state;
- unsigned long v0 = get_random_long(), v2 = v0 ^ PRND_K0;
- unsigned long v1 = get_random_long(), v3 = v1 ^ PRND_K1;
-#if BITS_PER_LONG == 32
- int j;
-
- /*
- * On 32-bit machines, hash in two extra words to
- * approximate 128-bit key length. Not that the hash
- * has that much security, but this prevents a trivial
- * 64-bit brute force.
- */
- for (j = 0; j < 2; j++) {
- unsigned long m = get_random_long();
-
- v3 ^= m;
- PRND_SIPROUND(v0, v1, v2, v3);
- PRND_SIPROUND(v0, v1, v2, v3);
- v0 ^= m;
- }
-#endif
- /*
- * Probably impossible in practice, but there is a
- * theoretical risk that a race between this reseeding
- * and the target CPU writing its state back could
- * create the all-zero SipHash fixed point.
- *
- * To ensure that never happens, ensure the state
- * we write contains no zero words.
- */
- state = per_cpu_ptr(&net_rand_state, i);
- WRITE_ONCE(state->v0, v0 ? v0 : -1ul);
- WRITE_ONCE(state->v1, v1 ? v1 : -1ul);
- WRITE_ONCE(state->v2, v2 ? v2 : -1ul);
- WRITE_ONCE(state->v3, v3 ? v3 : -1ul);
- }
-
- /* reseed every ~60 seconds, in [40 .. 80) interval with slack */
- expires = round_jiffies(jiffies + 40 * HZ + prandom_u32_max(40 * HZ));
- mod_timer(&seed_timer, expires);
-}
-
-/*
- * The random ready callback can be called from almost any interrupt.
- * To avoid worrying about whether it's safe to delay that interrupt
- * long enough to seed all CPUs, just schedule an immediate timer event.
- */
-static int prandom_timer_start(struct notifier_block *nb,
- unsigned long action, void *data)
-{
- mod_timer(&seed_timer, jiffies);
- return 0;
-}
-
-#ifdef CONFIG_RANDOM32_SELFTEST
-/* Principle: True 32-bit random numbers will all have 16 differing bits on
- * average. For each 32-bit number, there are 601M numbers differing by 16
- * bits, and 89% of the numbers differ by at least 12 bits. Note that more
- * than 16 differing bits also implies a correlation with inverted bits. Thus
- * we take 1024 random numbers and compare each of them to the other ones,
- * counting the deviation of correlated bits to 16. Constants report 32,
- * counters 32-log2(TEST_SIZE), and pure randoms, around 6 or lower. With the
- * u32 total, TEST_SIZE may be as large as 4096 samples.
- */
-#define TEST_SIZE 1024
-static int __init prandom32_state_selftest(void)
-{
- unsigned int x, y, bits, samples;
- u32 xor, flip;
- u32 total;
- u32 *data;
-
- data = kmalloc(sizeof(*data) * TEST_SIZE, GFP_KERNEL);
- if (!data)
- return 0;
-
- for (samples = 0; samples < TEST_SIZE; samples++)
- data[samples] = prandom_u32();
-
- flip = total = 0;
- for (x = 0; x < samples; x++) {
- for (y = 0; y < samples; y++) {
- if (x == y)
- continue;
- xor = data[x] ^ data[y];
- flip |= xor;
- bits = hweight32(xor);
- total += (bits - 16) * (bits - 16);
- }
- }
-
- /* We'll return the average deviation as 2*sqrt(corr/samples), which
- * is also sqrt(4*corr/samples) which provides a better resolution.
- */
- bits = int_sqrt(total / (samples * (samples - 1)) * 4);
- if (bits > 6)
- pr_warn("prandom32: self test failed (at least %u bits"
- " correlated, fixed_mask=%#x fixed_value=%#x\n",
- bits, ~flip, data[0] & ~flip);
- else
- pr_info("prandom32: self test passed (less than %u bits"
- " correlated)\n",
- bits+1);
- kfree(data);
- return 0;
-}
-core_initcall(prandom32_state_selftest);
-#endif /* CONFIG_RANDOM32_SELFTEST */
-
-/*
- * Start periodic full reseeding as soon as strong
- * random numbers are available.
- */
-static int __init prandom_init_late(void)
-{
- static struct notifier_block random_ready = {
- .notifier_call = prandom_timer_start
- };
- int ret = register_random_ready_notifier(&random_ready);
-
- if (ret == -EALREADY) {
- prandom_timer_start(&random_ready, 0, NULL);
- ret = 0;
- }
- return ret;
-}
-late_initcall(prandom_init_late);
diff --git a/lib/siphash.c b/lib/siphash.c
index 72b9068ab57b..71d315a6ad62 100644
--- a/lib/siphash.c
+++ b/lib/siphash.c
@@ -18,19 +18,13 @@
#include <asm/word-at-a-time.h>
#endif
-#define SIPROUND \
- do { \
- v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
- v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
- v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
- v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
- } while (0)
+#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)
#define PREAMBLE(len) \
- u64 v0 = 0x736f6d6570736575ULL; \
- u64 v1 = 0x646f72616e646f6dULL; \
- u64 v2 = 0x6c7967656e657261ULL; \
- u64 v3 = 0x7465646279746573ULL; \
+ u64 v0 = SIPHASH_CONST_0; \
+ u64 v1 = SIPHASH_CONST_1; \
+ u64 v2 = SIPHASH_CONST_2; \
+ u64 v3 = SIPHASH_CONST_3; \
u64 b = ((u64)(len)) << 56; \
v3 ^= key->key[1]; \
v2 ^= key->key[0]; \
@@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
}
EXPORT_SYMBOL(hsiphash_4u32);
#else
-#define HSIPROUND \
- do { \
- v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
- v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
- v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
- v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
- } while (0)
+#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)
#define HPREAMBLE(len) \
- u32 v0 = 0; \
- u32 v1 = 0; \
- u32 v2 = 0x6c796765U; \
- u32 v3 = 0x74656462U; \
+ u32 v0 = HSIPHASH_CONST_0; \
+ u32 v1 = HSIPHASH_CONST_1; \
+ u32 v2 = HSIPHASH_CONST_2; \
+ u32 v3 = HSIPHASH_CONST_3; \
u32 b = ((u32)(len)) << 24; \
v3 ^= key->key[1]; \
v2 ^= key->key[0]; \
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 40d26a07a133..fb77f7bfd126 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -750,61 +750,38 @@ static int __init debug_boot_weak_hash_enable(char *str)
}
early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
-static DEFINE_STATIC_KEY_TRUE(not_filled_random_ptr_key);
-static siphash_key_t ptr_key __read_mostly;
+static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
static void enable_ptr_key_workfn(struct work_struct *work)
{
- get_random_bytes(&ptr_key, sizeof(ptr_key));
- /* Needs to run from preemptible context */
- static_branch_disable(&not_filled_random_ptr_key);
+ static_branch_enable(&filled_random_ptr_key);
}
-static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
-
-static int fill_random_ptr_key(struct notifier_block *nb,
- unsigned long action, void *data)
-{
- /* This may be in an interrupt handler. */
- queue_work(system_unbound_wq, &enable_ptr_key_work);
- return 0;
-}
-
-static struct notifier_block random_ready = {
- .notifier_call = fill_random_ptr_key
-};
-
-static int __init initialize_ptr_random(void)
-{
- int key_size = sizeof(ptr_key);
- int ret;
-
- /* Use hw RNG if available. */
- if (get_random_bytes_arch(&ptr_key, key_size) == key_size) {
- static_branch_disable(&not_filled_random_ptr_key);
- return 0;
- }
-
- ret = register_random_ready_notifier(&random_ready);
- if (!ret) {
- return 0;
- } else if (ret == -EALREADY) {
- /* This is in preemptible context */
- enable_ptr_key_workfn(&enable_ptr_key_work);
- return 0;
- }
-
- return ret;
-}
-early_initcall(initialize_ptr_random);
-
/* Maps a pointer to a 32 bit unique identifier. */
static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
{
+ static siphash_key_t ptr_key __read_mostly;
unsigned long hashval;
- if (static_branch_unlikely(&not_filled_random_ptr_key))
- return -EAGAIN;
+ if (!static_branch_likely(&filled_random_ptr_key)) {
+ static bool filled = false;
+ static DEFINE_SPINLOCK(filling);
+ static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
+ unsigned long flags;
+
+ if (!system_unbound_wq ||
+ (!rng_is_initialized() && !rng_has_arch_random()) ||
+ !spin_trylock_irqsave(&filling, flags))
+ return -EAGAIN;
+
+ if (!filled) {
+ get_random_bytes(&ptr_key, sizeof(ptr_key));
+ queue_work(system_unbound_wq, &enable_ptr_key_work);
+ filled = true;
+ }
+ spin_unlock_irqrestore(&filling, flags);
+ }
+
#ifdef CONFIG_64BIT
hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);