author    | Jason A. Donenfeld <Jason@zx2c4.com> | 2022-05-22 22:25:41 +0200
committer | Jason A. Donenfeld <Jason@zx2c4.com> | 2022-05-22 22:34:31 +0200
commit    | 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb (patch)
tree      | 99a878d151a501287409d01650887c7e198874e8 /lib/ratelimit.c
parent    | 79025e727a846be6fd215ae9cdb654368ac3f9a6 (diff)
download  | linux-1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb.tar.bz2
random: check for signals after page of pool writes
get_random_bytes_user() checks for signals after producing a PAGE_SIZE
worth of output, just like /dev/zero does. write_pool() does basically the
same work (in fact, slightly more expensive work), and so it should stop to
check for signals in the same way. Let's also rename it to write_pool_user()
to match get_random_bytes_user(), so it won't be misused in the future.
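For context, the shape of the resulting check looks roughly like the sketch
below. This is not the literal patch: write_pool_user(), signal_pending() and
cond_resched() follow the description above, but the iov_iter-based write
path, the block buffer, the mix_pool_bytes() helper and the return-value
handling are assumptions based on the /dev/urandom write code of that era.
Only the signal check after each PAGE_SIZE of input is the point being
illustrated.
#include <linux/uio.h>
#include <linux/sched/signal.h>
#include <linux/string.h>
#include <crypto/blake2s.h>
/*
 * Rough sketch only (assumed surroundings, see above): feed user-supplied
 * bytes into the input pool one block at a time, and after every PAGE_SIZE
 * of input check for a pending signal and give the scheduler a chance to
 * run, mirroring get_random_bytes_user().
 */
static ssize_t write_pool_user(struct iov_iter *iter)
{
	u8 block[BLAKE2S_BLOCK_SIZE];
	ssize_t ret = 0;
	size_t copied;

	if (unlikely(!iov_iter_count(iter)))
		return 0;

	for (;;) {
		copied = copy_from_iter(block, sizeof(block), iter);
		ret += copied;
		mix_pool_bytes(block, copied);	/* assumed pool-mixing helper */
		if (!iov_iter_count(iter) || copied != sizeof(block))
			break;

		/* BLAKE2S_BLOCK_SIZE divides PAGE_SIZE, so this fires exactly
		 * on page boundaries, just like get_random_bytes_user(). */
		if (ret % PAGE_SIZE == 0) {
			if (signal_pending(current))
				break;
			cond_resched();
		}
	}

	memzero_explicit(block, sizeof(block));
	return ret ? ret : -EFAULT;
}
Only the "if (ret % PAGE_SIZE == 0)" block is what this commit is about; the
surrounding code is there just to make the sketch read as a whole.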
Before this patch, massive writes to /dev/urandom would tie up the process
for an extremely long time and leave it unable to be terminated. Afterwards,
the write can be successfully interrupted. The following test program can be
used to verify that this works as intended:
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>

/* ~4 GiB source buffer, far more than one write() call will accept. */
static unsigned char x[~0U];

/* Empty handler: its only job is to let SIGUSR1 interrupt the write(). */
static void handle(int sig) { }

int main(int argc, char *argv[])
{
	pid_t pid = getpid(), child;
	int fd;

	signal(SIGUSR1, handle);

	/* The child floods the parent with SIGUSR1. */
	if (!(child = fork())) {
		for (;;)
			kill(pid, SIGUSR1);
	}

	fd = open("/dev/urandom", O_WRONLY);
	pause();
	printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x)));
	close(fd);
	kill(child, SIGTERM);
	return 0;
}
Result before: "interrupted after writing 2147479552 bytes"
Result after: "interrupted after writing 4096 bytes"
That is, before the change the write only stops at the kernel's MAX_RW_COUNT
cap on a single write() (INT_MAX rounded down to a page boundary), while
after it the pending signal interrupts the write at the first PAGE_SIZE
boundary.
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>