path: root/crypto/chacha20_generic.c
Age | Commit message | Author | Files | Lines
2018-11-20 | crypto: chacha20-generic - refactor to allow varying number of rounds | Eric Biggers | 1 | -185/+0
In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support".

The only difference between ChaCha{8,12,20} is the number of rounds; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions.

Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
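Roughly, the refactor boils down to a round-count field in the context plus a thin per-variant ->setkey() wrapper. A minimal sketch, assuming the post-rename names (chacha_ctx, chacha_setkey) and treating the exact prototypes as illustrative rather than the tree's literal code:

    #include <linux/kernel.h>
    #include <crypto/internal/skcipher.h>
    #include <asm/unaligned.h>

    #define CHACHA_KEY_SIZE	32	/* 256-bit key, as before */

    struct chacha_ctx {
    	u32 key[8];
    	int nrounds;		/* fixed by the chosen ->setkey(): 12 or 20 */
    };

    static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
    			 unsigned int keysize, int nrounds)
    {
    	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
    	int i;

    	if (keysize != CHACHA_KEY_SIZE)
    		return -EINVAL;

    	for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
    		ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

    	ctx->nrounds = nrounds;	/* later passed through to chacha_block() */
    	return 0;
    }

    static int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
    				  unsigned int keysize)
    {
    	return chacha_setkey(tfm, key, keysize, 20);
    }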
2018-11-20 | crypto: chacha20-generic - add XChaCha20 support | Eric Biggers | 1 | -36/+84
Add support for the XChaCha20 stream cipher. XChaCha20 is the application of the XSalsa20 construction (https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or 96 bits, depending on convention) to 192 bits, while provably retaining ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the key and first 128 nonce bits to a 256-bit subkey. Then, it does the ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.

We need XChaCha support in order to add support for the Adiantum encryption mode. Note that to meet our performance requirements, we actually plan to primarily use the variant XChaCha12. But we believe it's wise to first add XChaCha20 as a baseline with a higher security margin, in case there are any situations where it can be used. Supporting both variants is straightforward.

Since XChaCha20's subkey differs for each request, XChaCha20 can't be a template that wraps ChaCha20; that would require re-keying the underlying ChaCha20 for every request, which wouldn't be thread-safe. Instead, we make XChaCha20 its own top-level algorithm which calls the ChaCha20 streaming implementation internally.

Similar to the existing ChaCha20 implementation, we define the IV to be the nonce and stream position concatenated together. This allows users to seek to any position in the stream.

I considered splitting the code into separate chacha20-common, chacha20, and xchacha20 modules, so that chacha20 and xchacha20 could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity of separate modules.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
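The per-request subkey derivation is the key point: run the HChaCha step over the key and the first 128 nonce bits, then feed the result into the normal ChaCha20 stream code with the remaining 64 nonce bits. A sketch of that flow, assuming helper names such as hchacha20_block() and chacha20_stream_xor() (illustrative, not necessarily the exact ones in the tree):

    #include <linux/string.h>
    #include <crypto/internal/skcipher.h>

    static int crypto_xchacha20_crypt(struct skcipher_request *req)
    {
    	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
    	struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
    	struct chacha20_ctx subctx;
    	u32 state[16];
    	u8 real_iv[16];

    	/* Derive the 256-bit subkey from the key and first 128 nonce bits */
    	crypto_chacha20_init(state, ctx, req->iv);
    	hchacha20_block(state, subctx.key);	/* assumed helper */

    	/* Build the real IV: stream position, then the remaining 64 nonce bits */
    	memcpy(&real_iv[0], req->iv + 24, 8);
    	memcpy(&real_iv[8], req->iv + 16, 8);

    	/* Run the ordinary ChaCha20 stream XOR using the per-request subkey */
    	return chacha20_stream_xor(req, &subctx, real_iv);
    }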
2018-11-20 | crypto: chacha20-generic - don't unnecessarily use atomic walk | Eric Biggers | 1 | -1/+1
chacha20-generic doesn't use SIMD instructions or otherwise disable preemption, so passing atomic=true to skcipher_walk_virt() is unnecessary.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
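The whole change is the third argument to skcipher_walk_virt(); something along these lines (surrounding context trimmed):

    	struct skcipher_walk walk;
    	int err;

    	/* No SIMD and no preemption-disabled region in chacha20-generic,
    	 * so a sleeping (non-atomic) walk is sufficient. */
    	err = skcipher_walk_virt(&walk, req, false);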
2018-09-21 | crypto: chacha20 - Fix chacha20_block() keystream alignment (again) | Eric Biggers | 1 | -3/+4
In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems.

Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too.

Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
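With put_unaligned_le32(), the keystream output step no longer depends on the alignment of the caller's buffer. A sketch of the tail of chacha20_block() under that scheme (the round computation itself is elided; treat this as a shape, not the tree's literal code):

    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <asm/unaligned.h>

    void chacha20_block(u32 *state, u8 *stream)
    {
    	u32 x[16];
    	int i;

    	memcpy(x, state, sizeof(x));
    	/* ... 20 rounds of quarter-round mixing on x[] elided ... */

    	/* Keystream word = permuted word + initial state word, written with
    	 * an unaligned little-endian store so 'stream' can be any u8 buffer. */
    	for (i = 0; i < ARRAY_SIZE(x); i++)
    		put_unaligned_le32(x[i] + state[i], &stream[i * sizeof(u32)]);

    	state[12]++;	/* advance the 32-bit block counter for the next call */
    }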
2017-11-29 | crypto: chacha20 - Fix keystream alignment for chacha20_block() | Eric Biggers | 1 | -3/+3
When chacha20_block() outputs the keystream block, it uses 'u32' stores directly. However, the callers (crypto/chacha20_generic.c and drivers/char/random.c) declare the keystream buffer as a 'u8' array, which is not guaranteed to have the needed alignment.

Fix it by having both callers declare the keystream as a 'u32' array. For now this is preferable to switching over to the unaligned access macros because chacha20_block() is only being used in cases where we can easily control the alignment (stack buffers).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
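In other words, the alignment guarantee is pushed to the call sites: a buffer declared as a u32 array is naturally 4-byte aligned, so chacha20_block() can keep doing plain u32 stores. A hypothetical caller-side fragment illustrating the pattern (variable names are illustrative):

    	u32 stream[CHACHA20_BLOCK_SIZE / sizeof(u32)];	/* 4-byte aligned by type */

    	chacha20_block(state, stream);
    	crypto_xor(dst, (const u8 *)stream, bytes);	/* XOR keystream into data */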
2017-11-29 | crypto: chacha20 - Remove cra_alignmask | Eric Biggers | 1 | -1/+0
Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the unaligned access macros and crypto_xor() also accepts unaligned buffers, there is no need to have a cra_alignmask set for chacha20-generic.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 | crypto: chacha20 - Use unaligned access macros when loading key and IV | Eric Biggers | 1 | -10/+6
The generic ChaCha20 implementation has a cra_alignmask of 3, which ensures that the key passed into crypto_chacha20_setkey() and the IV passed into crypto_chacha20_init() are 4-byte aligned. However, these functions are also called from the ARM and ARM64 implementations of ChaCha20, which intentionally do not have a cra_alignmask set. This is broken because 32-bit words are being loaded from potentially-unaligned buffers without the unaligned access macros.

Fix it by using the unaligned access macros when loading the key and IV.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
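Concretely, the fix swaps direct u32 loads for get_unaligned_le32() when the key and IV bytes are read into the state; for example, on the IV side (the key loop in ->setkey() is analogous):

    	/* The caller's IV may have any alignment, so use unaligned loads */
    	state[12] = get_unaligned_le32(iv +  0);	/* block counter */
    	state[13] = get_unaligned_le32(iv +  4);	/* nonce word 0 */
    	state[14] = get_unaligned_le32(iv +  8);	/* nonce word 1 */
    	state[15] = get_unaligned_le32(iv + 12);	/* nonce word 2 */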
2017-11-29 | crypto: chacha20 - Fix unaligned access when loading constants | Eric Biggers | 1 | -6/+4
The four 32-bit constants for the initial state of ChaCha20 were loaded from a char array which is not guaranteed to have the needed alignment. Fix it by just assigning the constants directly instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
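After the change, the first row of the state is simply assigned the well-known "expand 32-byte k" words as integer literals, so no loads from a (possibly unaligned) char array are involved:

    	state[0] = 0x61707865;	/* "expa" */
    	state[1] = 0x3320646e;	/* "nd 3" */
    	state[2] = 0x79622d32;	/* "2-by" */
    	state[3] = 0x6b206574;	/* "te k" */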
2017-08-22 | crypto: chacha20 - fix handling of chunked input | Ard Biesheuvel | 1 | -2/+7
Commit 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 versions to skcipher") ported the existing chacha20 code to use the new skcipher API, and introduced a bug along the way. Unfortunately, the tcrypt tests did not catch the error, and it was only found recently by Tobias.

Stefan kindly diagnosed the error, and proposed a fix which is similar to the one below, with the exception that 'walk.stride' is used rather than the hardcoded block size. This does not actually matter in this case, but it's a better example of how to use the skcipher walk API.

Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 versions to skcipher")
Cc: <stable@vger.kernel.org> # v4.11+
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Reported-by: Tobias Brunner <tobias@strongswan.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
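The fix rounds every chunk except the last down to a multiple of walk.stride before processing it, and reports the leftover back to skcipher_walk_done(). A sketch of the resulting loop (helper names such as chacha20_docrypt() are illustrative):

    	while (walk.nbytes > 0) {
    		unsigned int nbytes = walk.nbytes;

    		/* Only the final chunk may be a partial block; everything
    		 * else must be processed in multiples of the walk stride. */
    		if (nbytes < walk.total)
    			nbytes = round_down(nbytes, walk.stride);

    		chacha20_docrypt(state, walk.dst.virt.addr,
    				 walk.src.virt.addr, nbytes);

    		/* Hand any unprocessed remainder back to the walk */
    		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
    	}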
2016-12-27 | crypto: chacha20 - convert generic and x86 versions to skcipher | Ard Biesheuvel | 1 | -43/+30
This converts the ChaCha20 code from a blkcipher to a skcipher, which is now the preferred way to implement symmetric block and stream ciphers. This ports the generic and x86 versions at the same time because the latter reuses routines of the former.

Note that the skcipher_walk() API guarantees that all presented blocks except the final one are a multiple of the chunk size, so we can simplify the encrypt() routine somewhat.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
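For reference, a skcipher-based registration for chacha20-generic can look roughly like the sketch below. Field values are an assumption based on the parameters stated elsewhere in this log (256-bit key, 16-byte IV, 64-byte block used as the chunk size, cra_alignmask of 3 until its later removal); treat it as illustrative rather than the committed code:

    #include <linux/module.h>
    #include <crypto/internal/skcipher.h>
    #include <crypto/chacha20.h>

    static struct skcipher_alg alg = {
    	.base.cra_name		= "chacha20",
    	.base.cra_driver_name	= "chacha20-generic",
    	.base.cra_priority	= 100,
    	.base.cra_blocksize	= 1,	/* stream cipher */
    	.base.cra_ctxsize	= sizeof(struct chacha20_ctx),
    	.base.cra_alignmask	= sizeof(u32) - 1,
    	.base.cra_module	= THIS_MODULE,

    	.min_keysize		= CHACHA20_KEY_SIZE,
    	.max_keysize		= CHACHA20_KEY_SIZE,
    	.ivsize			= CHACHA20_IV_SIZE,
    	.chunksize		= CHACHA20_BLOCK_SIZE,
    	.setkey			= crypto_chacha20_setkey,
    	.encrypt		= crypto_chacha20_crypt,
    	.decrypt		= crypto_chacha20_crypt,
    };

    static int __init chacha20_generic_mod_init(void)
    {
    	return crypto_register_skcipher(&alg);
    }
    module_init(chacha20_generic_mod_init);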
2016-07-03 | random: replace non-blocking pool with a Chacha20-based CRNG | Theodore Ts'o | 1 | -61/+0
The CRNG is faster, and we don't pretend to track entropy usage in the CRNG any more.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-07-17 | crypto: chacha20 - Export common ChaCha20 helpers | Martin Willi | 1 | -16/+12
As architecture specific drivers need a software fallback, export a ChaCha20 en-/decryption function together with some helpers in a header file.

Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
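A rough sketch of what such a shared header can provide; the size constants follow the parameters stated in this log (256-bit key, 16-byte IV, 64-byte block), while the exact prototypes are illustrative:

    /* include/crypto/chacha20.h (sketch) */
    #ifndef _CRYPTO_CHACHA20_H
    #define _CRYPTO_CHACHA20_H

    #include <linux/types.h>

    #define CHACHA20_IV_SIZE	16
    #define CHACHA20_KEY_SIZE	32
    #define CHACHA20_BLOCK_SIZE	64

    struct chacha20_ctx {
    	u32 key[8];
    };

    /* Generate one 64-byte keystream block and advance the block counter */
    void chacha20_block(u32 *state, void *stream);

    /* Shared helpers that architecture-specific drivers can fall back to */
    void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv);
    int crypto_chacha20_setkey(struct crypto_tfm *tfm, const u8 *key,
    			   unsigned int keysize);

    #endif /* _CRYPTO_CHACHA20_H */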
2015-06-04 | crypto: chacha20 - Add a generic ChaCha20 stream cipher implementation | Martin Willi | 1 | -0/+216
ChaCha20 is a high speed 256-bit key size stream cipher algorithm designed by Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF protocols as a building block for the ChaCha20-Poly1305 AEAD.

This is a portable C implementation without any architecture specific optimizations. It uses a 16-byte IV, which includes the 12-byte ChaCha20 nonce prepended by the initial block counter. Some algorithms require an explicit counter value, for example the mentioned AEAD construction.

Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
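Putting the pieces together, the 16-word ChaCha20 state is seeded from the constants, the 256-bit key, and that 16-byte IV (block counter first, then the 12-byte nonce). A compact sketch; the function name is hypothetical and the unaligned-load helpers reflect the later cleanups in this log:

    #include <linux/kernel.h>
    #include <asm/unaligned.h>

    static void chacha20_seed_state(u32 state[16], const u32 key[8],
    				const u8 iv[16])
    {
    	int i;

    	/* Row 0: the "expand 32-byte k" constants */
    	state[0] = 0x61707865;
    	state[1] = 0x3320646e;
    	state[2] = 0x79622d32;
    	state[3] = 0x6b206574;

    	/* Rows 1-2: the 256-bit key as eight little-endian words */
    	for (i = 0; i < 8; i++)
    		state[4 + i] = key[i];

    	/* Row 3: the 16-byte IV = 32-bit initial block counter
    	 * followed by the 96-bit (12-byte) nonce */
    	state[12] = get_unaligned_le32(iv + 0);
    	state[13] = get_unaligned_le32(iv + 4);
    	state[14] = get_unaligned_le32(iv + 8);
    	state[15] = get_unaligned_le32(iv + 12);
    }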