path: root/include/crypto
Age  Commit message  Author  Files  Lines
2020-07-16  crypto: algapi - introduce the flag CRYPTO_ALG_ALLOCATES_MEMORY  [Eric Biggers, 1 file, -1/+2]
Introduce a new algorithm flag CRYPTO_ALG_ALLOCATES_MEMORY. If this flag is set, then the driver allocates memory in its request routine. Such drivers are not suitable for disk encryption because GFP_ATOMIC allocation can fail anytime (causing random I/O errors) and GFP_KERNEL allocation can recurse into the block layer, causing a deadlock. For now, this flag is only implemented for some algorithm types. We also assume some usage constraints for it to be meaningful, since there are lots of edge cases the crypto API allows (e.g., misaligned or fragmented scatterlists) that mean that nearly any crypto algorithm can allocate memory in some case. See the comment for details. Also add this flag to CRYPTO_ALG_INHERITED_FLAGS so that when a template is instantiated, this flag is set on the template instance if it is set on any algorithm the instance uses. Based on a patch by Mikulas Patocka <mpatocka@redhat.com> (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
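A minimal, hedged sketch of how a caller can use the new flag (not taken from the commit; the helper name is illustrative): passing CRYPTO_ALG_ALLOCATES_MEMORY in the mask argument requests an algorithm with the flag clear, i.e. one that will not allocate memory in its request routine.

    #include <crypto/skcipher.h>

    /* Sketch: a disk-encryption-style user asks for a cipher that is
     * guaranteed not to allocate memory per request, by putting the flag
     * in the mask (the flag must be clear for an algorithm to match). */
    static struct crypto_skcipher *alloc_io_safe_cipher(void)
    {
            return crypto_alloc_skcipher("xts(aes)", 0,
                                         CRYPTO_ALG_ALLOCATES_MEMORY);
    }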
2020-07-16  crypto: algapi - add NEED_FALLBACK to INHERITED_FLAGS  [Eric Biggers, 1 file, -1/+2]
CRYPTO_ALG_NEED_FALLBACK is handled inconsistently. When it's requested to be clear, some templates propagate that request to child algorithms, while others don't. It's apparently desired for NEED_FALLBACK to be propagated, to avoid deadlocks where a module tries to load itself while it's being initialized, and to avoid unnecessarily complex fallback chains where we have e.g. cbc-aes-$driver falling back to cbc(aes-$driver) where aes-$driver itself falls back to aes-generic, instead of cbc-aes-$driver simply falling back to cbc(aes-generic). There have been a number of fixes to this effect:

    commit 89027579bc6c ("crypto: xts - Propagate NEED_FALLBACK bit")
    commit d2c2a85cfe82 ("crypto: ctr - Propagate NEED_FALLBACK bit")
    commit e6c2e65c70a6 ("crypto: cbc - Propagate NEED_FALLBACK bit")

But it seems that other templates can have the same problems too. To avoid this whack-a-mole, just add NEED_FALLBACK to INHERITED_FLAGS so that it's always inherited.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16  crypto: algapi - use common mechanism for inheriting flags  [Eric Biggers, 1 file, -7/+16]
The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a template is instantiated, the template will have CRYPTO_ALG_ASYNC set if any of the algorithms it uses has CRYPTO_ALG_ASYNC set. We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets "inherited" in the same way. This is difficult because the handling of CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

- Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that have these inheritance semantics.
- Add crypto_algt_inherited_mask(), for use by template ->create() methods. It returns any of these flags that the user asked to be unset and thus must be passed in the 'mask' to crypto_grab_*().
- Also modify crypto_check_attr_type() to handle computing the 'mask' so that most templates can just use this.
- Make crypto_grab_*() propagate these flags to the template instance being created so that templates don't have to do this themselves.

Make crypto/simd.c propagate these flags too, since it "wraps" another algorithm, similar to a template.

Based on a patch by Mikulas Patocka <mpatocka@redhat.com> (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
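A hedged sketch of what a template ->create() can look like with the common mechanism (not taken from the patch; the use of a skcipher template and the variable names are illustrative): crypto_check_attr_type() computes the 'mask', and crypto_grab_skcipher() propagates the inner algorithm's inherited flags onto the instance.

    #include <crypto/algapi.h>
    #include <crypto/internal/skcipher.h>
    #include <linux/slab.h>

    static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
    {
            struct crypto_skcipher_spawn *spawn;
            struct skcipher_instance *inst;
            u32 mask;
            int err;

            /* Fills in 'mask' with any inherited flags the user asked to be
             * clear, so they get passed down to crypto_grab_*(). */
            err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
            if (err)
                    return err;

            inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
            if (!inst)
                    return -ENOMEM;
            spawn = skcipher_instance_ctx(inst);

            /* Grabbing the inner algorithm also copies its inherited flags
             * onto the instance being created. */
            err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                                       crypto_attr_alg_name(tb[1]), 0, mask);
            if (err) {
                    kfree(inst);
                    return err;
            }

            /* ... fill in inst->alg and register the instance ... */
            return 0;
    }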
2020-07-16  crypto: geniv - remove unneeded arguments from aead_geniv_alloc()  [Eric Biggers, 1 file, -1/+1]
The type and mask arguments to aead_geniv_alloc() are always 0, so remove them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16  crypto: lib/sha256 - add sha256() function  [Eric Biggers, 1 file, -0/+1]
Add a function sha256() which computes a SHA-256 digest in one step, combining sha256_init() + sha256_update() + sha256_final(). This is similar to how we also have blake2s(). Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
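A hedged usage sketch of the new one-shot helper (the surrounding function is illustrative):

    #include <crypto/sha.h>

    static void fingerprint_blob(const u8 *data, unsigned int len,
                                 u8 digest[SHA256_DIGEST_SIZE])
    {
            /* Equivalent to sha256_init() + sha256_update() + sha256_final(). */
            sha256(data, len, digest);
    }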
2020-07-16  crypto: x86/chacha-sse3 - use unaligned loads for state array  [Ard Biesheuvel, 1 file, -4/+0]
Due to the fact that the x86 port does not support allocating objects on the stack with an alignment that exceeds 8 bytes, we have a rather ugly hack in the x86 code for ChaCha to ensure that the state array is aligned to 16 bytes, allowing the SSE3 implementation of the algorithm to use aligned loads. Given that the performance benefit of using aligned loads appears to be limited (~0.25% for 1k blocks using tcrypt on a Corei7-8650U), and the fact that this hack has leaked into generic ChaCha code, let's just remove it. Cc: Martin Willi <martin@strongswan.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Martin Willi <martin@strongswan.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16  crypto: lib/chacha20poly1305 - Add missing function declaration  [Herbert Xu, 1 file, -0/+2]
This patch adds a declaration for chacha20poly1305_selftest to silence a sparse warning. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-09  crypto: api - permit users to specify numa node of acomp hardware  [Barry Song, 1 file, -0/+18]
For a Linux server with NUMA, there are possibly multiple (de)compressors which are either local or remote to a given NUMA node. Some drivers will automatically use the (de)compressor near the CPU calling acomp_alloc(). However, this is not necessarily correct, because the users sending the acomp_req may be on a different NUMA node than the CPU which allocated the acomp. Just as the kernel has kmalloc() and kmalloc_node(), the crypto API can provide the same kind of support. Cc: Seth Jennings <sjenning@redhat.com> Cc: Dan Streetman <ddstreet@ieee.org> Cc: Vitaly Wool <vitaly.wool@konsulko.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
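A hedged sketch of the node-aware allocation this enables; the function name crypto_alloc_acomp_node() and its parameter order are my reading of the patch, not quoted from it.

    #include <crypto/acompress.h>

    static struct crypto_acomp *alloc_compressor_near(int node)
    {
            /* Analogous to kmalloc_node(): prefer a (de)compressor that is
             * local to 'node', where the acomp requests will be issued. */
            return crypto_alloc_acomp_node("lz4", 0, 0, node);
    }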
2020-06-19  docs: crypto: convert asymmetric-keys.txt to ReST  [Mauro Carvalho Chehab, 1 file, -1/+1]
This file is almost compatible with ReST. Just minor changes were needed:

- Adjust document and titles markups;
- Adjust numbered list markups;
- Add a comments markup for the Contents section;
- Add markups for literal blocks.

Acked-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Link: https://lore.kernel.org/r/c2275ea94e0507a01b020ab66dfa824d8b1c2545.1592203650.git.mchehab+huawei@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2020-06-18  crypto: algif_aead - Only wake up when ctx->more is zero  [Herbert Xu, 1 file, -1/+3]
AEAD does not support partial requests so we must not wake up while ctx->more is set. In order to distinguish between the case of no data sent yet and a zero-length request, a new init flag has been added to ctx. SKCIPHER has also been modified to ensure that at least a block of data is available if there is more data to come. Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-06-18  crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock()  [Herbert Xu, 1 file, -2/+2]
The locking in af_alg_release_parent is broken as the BH socket lock can only be taken if there is a code-path to handle the case where the lock is owned by process-context. Instead of adding such handling, we can fix this by changing the ref counts to atomic_t. This patch also modifies the main refcnt to include both normal and nokey sockets. This way we don't have to fudge the nokey ref count when a socket changes from nokey to normal. Credits go to Mauricio Faria de Oliveira who diagnosed this bug and sent a patch for it: https://lore.kernel.org/linux-crypto/20200605161657.535043-1-mfo@canonical.com/ Reported-by: Brian Moyles <bmoyles@netflix.com> Reported-by: Mauricio Faria de Oliveira <mfo@canonical.com> Fixes: 37f96694cf73 ("crypto: af_alg - Use bh_lock_sock in...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-05-08  crypto: lib/sha1 - fold linux/cryptohash.h into crypto/sha.h  [Eric Biggers, 1 file, -0/+10]
<linux/cryptohash.h> sounds very generic and important, like it's the header to include if you're doing cryptographic hashing in the kernel. But actually it only includes the library implementation of the SHA-1 compression function (not even the full SHA-1). This should basically never be used anymore; SHA-1 is no longer considered secure, and there are much better ways to do cryptographic hashing in the kernel. Remove this header and fold it into <crypto/sha.h> which already contains constants and functions for SHA-1 (along with SHA-2). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-05-08  crypto: hash - introduce crypto_shash_tfm_digest()  [Eric Biggers, 1 file, -0/+19]
Currently the simplest use of the shash API is to use crypto_shash_digest() to digest a whole buffer. However, this still requires allocating a hash descriptor (struct shash_desc). Many users don't really want to preallocate one and instead just use a one-off descriptor on the stack like the following:

    {
            SHASH_DESC_ON_STACK(desc, tfm);
            int err;

            desc->tfm = tfm;
            err = crypto_shash_digest(desc, data, len, out);
            shash_desc_zero(desc);
    }

Wrap this in a new helper function crypto_shash_tfm_digest() that can be used instead of the above.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
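A hedged sketch of the same digest done with the new helper (the wrapper function is illustrative):

    #include <crypto/hash.h>

    static int digest_buffer(struct crypto_shash *tfm, const u8 *data,
                             unsigned int len, u8 *out)
    {
            /* Replaces the on-stack SHASH_DESC_ON_STACK() boilerplate above. */
            return crypto_shash_tfm_digest(tfm, data, len, out);
    }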
2020-05-08  crypto: lib/sha256 - return void  [Eric Biggers, 2 files, -14/+10]
The SHA-256 / SHA-224 library functions can't fail, so remove the useless return value. As long as the declarations are being changed anyway, also fix some parameter names in the declarations to match the definitions. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-05-08  crypto: acomp - search acomp with scomp backend in crypto_has_acomp  [Barry Song, 1 file, -1/+1]
Users may call crypto_has_acomp() to confirm the existence of an acomp algorithm before using the crypto_acomp APIs. Right now, many acomp algorithms have an scomp backend, for example lz4, lzo, deflate, etc. crypto_has_acomp() will return false for them even though they support the acomp APIs. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
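A hedged usage sketch (illustrative): after this change the check succeeds for algorithms such as lz4 that only have an scomp backend.

    #include <crypto/acompress.h>

    static bool compressor_available(void)
    {
            /* True if an acomp-usable "lz4" exists, including scomp-backed ones. */
            return crypto_has_acomp("lz4", 0, 0);
    }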
2020-05-08  crypto: engine - support for batch requests  [Iuliana Prodan, 1 file, -0/+5]
Added support for batch requests, per crypto engine. A new callback is added, do_batch_requests, which executes a batch of requests. This has the crypto_engine structure as argument (for cases when more than one crypto-engine is used). The crypto_engine_alloc_init_and_set function initializes the crypto-engine, but also sets the do_batch_requests callback. On crypto_pump_requests, if the do_batch_requests callback is implemented in a driver, it will be executed. The link between the requests will be done in the driver, if possible. do_batch_requests is available only if the hardware has support for multiple requests. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-05-08  crypto: engine - support for parallel requests based on retry mechanism  [Iuliana Prodan, 1 file, -2/+8]
Added support for executing multiple requests, in parallel, for crypto engine based on a retry mechanism. If the hardware was unable to execute a backlog request, enqueue it back in front of the crypto-engine queue, to keep the order of requests.

A new variable is added, retry_support (this is to keep the backward compatibility of crypto-engine), which keeps track of whether the hardware has support for the retry mechanism and, also, whether it can run multiple requests.

If do_one_request() returns:
- >= 0: the hardware executed the request successfully;
- < 0: this is the old error path. If the hardware has support for the retry mechanism, the request is put back in front of the crypto-engine queue.

For backwards compatibility, if the retry support is not available, the crypto-engine will work as before. If the hardware queue is full (-ENOSPC), requeue the request regardless of the MAY_BACKLOG flag. If the hardware throws any other error code (like -EIO, -EINVAL, -ENOMEM, etc.), only MAY_BACKLOG requests are enqueued back into the crypto-engine's queue, since the others can be dropped.

The new crypto_engine_alloc_init_and_set function initializes the crypto-engine, sets the maximum size for the crypto-engine software queue (not hardcoded anymore), and sets the retry_support variable, by default, to false.

On crypto_pump_requests(), if do_one_request() returns >= 0, a new request is sent to the hardware until there is no space left in the hardware and do_one_request() returns < 0. By default, retry_support is false and the crypto-engine will work as before - it will send requests to the hardware, one-by-one, on crypto_pump_requests(), complete them on crypto_finalize_request(), and so on.

To support multiple requests, in each driver, retry_support must be set to true, and if do_one_request() returns an error the request must not be freed, since it will be enqueued back into the crypto-engine's queue. Once all drivers that use crypto-engine have been updated for the retry mechanism, the retry_support variable can be removed.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
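A hedged driver-side sketch combining this commit with the batch-requests commit above; the parameter order of crypto_engine_alloc_init_and_set() is an assumption based on the description, and my_do_batch() is a hypothetical callback.

    #include <crypto/engine.h>

    /* Hypothetical callback: kick off everything queued to the hardware. */
    static int my_do_batch(struct crypto_engine *engine)
    {
            return 0;
    }

    static struct crypto_engine *my_engine_setup(struct device *dev)
    {
            /* retry_support = true (hardware can take several requests and
             * report -ENOSPC when full); software queue length = 128
             * (assumed parameter order). */
            return crypto_engine_alloc_init_and_set(dev, true, my_do_batch,
                                                    false, 128);
    }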
2020-05-08  crypto: algapi - create function to add request in front of queue  [Iuliana Prodan, 1 file, -0/+2]
Add the crypto_enqueue_request_head function, which enqueues a request in front of the queue. This will be used in crypto-engine, on the error path. In case a request was not executed by the hardware, enqueue it back in front of the queue (to keep the order of requests). Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-04-24  crypto: drbg - always seeded with SP800-90B compliant noise source  [Stephan Müller, 1 file, -5/+1]
As the Jitter RNG provides an SP800-90B compliant noise source, use this noise source always for the (re)seeding of the DRBG. To make sure the DRBG is always properly seeded, the reseed threshold is reduced to 1<<20 generate operations. The Jitter RNG may report health test failures. Such health test failures are treated as transient as follows. The DRBG will not reseed from the Jitter RNG (but from get_random_bytes) in case of a health test failure. Though, it produces the requested random number. The Jitter RNG has a failure counter where at most 1024 consecutive resets due to a health test failure are considered as a transient error. If more consecutive resets are required, the Jitter RNG will return a permanent error which is returned to the caller by the DRBG. With this approach, the worst case reseed threshold is significantly lower than mandated by SP800-90A in order to seed with an SP800-90B noise source: the DRBG has a reseed threshold of 2^20 * 1024 = 2^30 generate requests. Yet, in case of a transient Jitter RNG health test failure, the DRBG is seeded with the data obtained from get_random_bytes. However, if the Jitter RNG fails during the initial seeding operation even due to a health test error, the DRBG will send an error to the caller because at that time, the DRBG has received no seed that is SP800-90B compliant. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-04-09  crypto: curve25519 - do not pollute dispatcher based on assembler  [Jason A. Donenfeld, 1 file, -4/+2]
Since we're doing a static inline dispatch here, we normally branch based on whether or not there's an arch implementation. That would have been fine in general, except the crypto Makefile previously used to turn things off -- despite the Kconfig -- resulting in us needing to also hard-code various assembler things into the dispatcher too. The horror! Now that the assembler config options are done by Kconfig, we can get rid of the inconsistency. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2020-04-01  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  [Linus Torvalds, 2 files, -22/+28]
Pull crypto updates from Herbert Xu:
 "API:
   - Fix out-of-sync IVs in self-test for IPsec AEAD algorithms

  Algorithms:
   - Use formally verified implementation of x86/curve25519

  Drivers:
   - Enhance hwrng support in caam
   - Use crypto_engine for skcipher/aead/rsa/hash in caam
   - Add Xilinx AES driver
   - Add uacce driver
   - Register zip engine to uacce in hisilicon
   - Add support for OCTEON TX CPT engine in marvell"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
  crypto: af_alg - bool type cosmetics
  crypto: arm[64]/poly1305 - add artifact to .gitignore files
  crypto: caam - limit single JD RNG output to maximum of 16 bytes
  crypto: caam - enable prediction resistance in HRWNG
  bus: fsl-mc: add api to retrieve mc version
  crypto: caam - invalidate entropy register during RNG initialization
  crypto: caam - check if RNG job failed
  crypto: caam - simplify RNG implementation
  crypto: caam - drop global context pointer and init_done
  crypto: caam - use struct hwrng's .init for initialization
  crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
  crypto: ccree - remove duplicated include from cc_aead.c
  crypto: chelsio - remove set but not used variable 'adap'
  crypto: marvell - enable OcteonTX cpt options for build
  crypto: marvell - add the Virtual Function driver for CPT
  crypto: marvell - add support for OCTEON TX CPT engine
  crypto: marvell - create common Kconfig and Makefile for Marvell
  crypto: arm/neon - memzero_explicit aes-cbc key
  crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
  crypto: atmel-i2c - Fix wakeup fail
  ...
2020-03-12  crypto: aead - improve documentation for scatterlist layout  [Eric Biggers, 1 file, -21/+27]
Properly document the scatterlist layout for AEAD ciphers. Reported-by: Gilad Ben-Yossef <gilad@benyossef.com> Cc: Stephan Mueller <smueller@chronox.de> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-03-06  crypto: Replace zero-length array with flexible-array member  [Gustavo A. R. Silva, 1 file, -1/+1]
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99:

    struct foo {
            int stuff;
            struct boo array[];
    };

By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
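An illustrative before/after on a hypothetical struct (not taken from the patch):

    #include <linux/types.h>

    struct msg_old {
            u32 len;
            u8  data[0];    /* GNU zero-length array: sizeof(data) == 0 */
    };

    struct msg_new {
            u32 len;
            u8  data[];     /* C99 flexible array member: must come last */
    };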
2020-03-05  crypto: x86/curve25519 - support assemblers with no adx support  [Jason A. Donenfeld, 1 file, -2/+4]
Some older versions of GAS do not support the ADX instructions, similarly to how they also don't support AVX and such. This commit adds the same build-time detection mechanisms we use for AVX and others for ADX, and then makes sure that the curve25519 library dispatcher calls the right functions. Reported-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-16  crypto: poly1305 - add new 32 and 64-bit generic versions  [Jason A. Donenfeld, 3 files, -40/+35]
These two C implementations from Zinc -- a 32x32 one and a 64x64 one, depending on the platform -- come from Andrew Moon's public domain poly1305-donna portable code, modified for usage in the kernel. The precomputation in the 32-bit version and the use of 64x64 multiplies in the 64-bit version make these perform better than the code it replaces. Moon's code is also very widespread and has received many eyeballs of scrutiny. There's a bit of interference between the x86 implementation, which relies on internal details of the old scalar implementation. In the next commit, the x86 implementation will be replaced with a faster one that doesn't rely on this, so none of this matters much. But for now, to keep this passing the tests, we inline the bits of the old implementation that the x86 implementation relied on. Also, since we now support a slightly larger key space, via the union, some offsets had to be fixed up. Nonce calculation was folded in with the emit function, to take advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no nonce handling in emit, so this path was conditionalized. We also introduced a new struct, poly1305_core_key, to represent the precise amount of space that particular implementation uses. Testing with kbench9000, depending on the CPU, the update function for the 32x32 version has been improved by 4%-7%, and for the 64x64 by 19%-30%. The 32x32 gains are small, but I think there's great value in having a parallel implementation to the 64x64 one so that the two can be compared side-by-side as nice stand-alone units. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: algapi - remove crypto_template::{alloc,free}()  [Eric Biggers, 1 file, -2/+0]
Now that all templates provide a ->create() method which creates an instance, installs a strongly-typed ->free() method directly to it, and registers it, the older ->alloc() and ->free() methods in 'struct crypto_template' are no longer used. Remove them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - convert shash_free_instance() to new style  [Eric Biggers, 1 file, -1/+1]
Convert shash_free_instance() and its users to the new way of freeing instances, where a ->free() method is installed to the instance struct itself. This replaces the weakly-typed method crypto_template::free(). This will allow removing support for the old way of freeing instances. Also give shash_free_instance() a more descriptive name to reflect that it's only for instances with a single spawn, not for any instance. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: geniv - convert to new way of freeing instances  [Eric Biggers, 1 file, -1/+0]
Convert the "seqiv" template to the new way of freeing instances where a ->free() method is installed to the instance struct itself. Also remove the unused implementation of the old way of freeing instances from the "echainiv" template, since it's already using the new way too. In doing this, also simplify the code by making the helper function aead_geniv_alloc() install the ->free() method, instead of making seqiv and echainiv do this themselves. This is analogous to how skcipher_alloc_instance_simple() works. This will allow removing support for the old way of freeing instances. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: hash - add support for new way of freeing instances  [Eric Biggers, 1 file, -0/+2]
Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace. This will allow removing support for the old way of freeing instances. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: algapi - fold crypto_init_spawn() into crypto_grab_spawn()  [Eric Biggers, 1 file, -3/+0]
Now that crypto_init_spawn() is only called by crypto_grab_spawn(), simplify things by moving its functionality into crypto_grab_spawn(). In the process of doing this, also be more consistent about when the spawn and instance are updated, and remove the crypto_spawn::dropref flag since now it's always set. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: ahash - unexport crypto_ahash_type  [Eric Biggers, 1 file, -2/+0]
Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: algapi - remove obsoleted instance creation helpers  [Eric Biggers, 2 files, -53/+0]
Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*().
- crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*().
- crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: cipher - make crypto_spawn_cipher() take a crypto_cipher_spawn  [Eric Biggers, 1 file, -2/+2]
Now that all users of single-block cipher spawns have been converted to use 'struct crypto_cipher_spawn' rather than the less specifically typed 'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a 'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: skcipher - use crypto_grab_cipher() and simplify error paths  [Eric Biggers, 1 file, -2/+2]
Make skcipher_alloc_instance_simple() use the new function crypto_grab_cipher() to initialize its cipher spawn. This is needed to make all spawns be initialized in a consistent way. Also simplify the error handling by taking advantage of crypto_drop_*() now accepting (as a no-op) spawns that haven't been initialized yet, and by taking advantage of crypto_grab_*() now handling ERR_PTR() names. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: cipher - introduce crypto_cipher_spawn and crypto_grab_cipher()  [Eric Biggers, 1 file, -0/+25]
Currently, "cipher" (single-block cipher) spawns are usually initialized by using crypto_get_attr_alg() to look up the algorithm, then calling crypto_init_spawn(). In one case, crypto_grab_spawn() is used directly. The former way is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. Also, the cipher spawns are not strongly typed; e.g., the API requires that the user manually specify the flags CRYPTO_ALG_TYPE_CIPHER and CRYPTO_ALG_TYPE_MASK. Though the "cipher" algorithm type itself isn't yet strongly typed, we can start by making the spawns strongly typed. So, let's introduce a new 'struct crypto_cipher_spawn', and functions crypto_grab_cipher() and crypto_drop_cipher() to grab and drop them. Later patches will convert all cipher spawns to use these, then make crypto_spawn_cipher() take 'struct crypto_cipher_spawn' as well, instead of a bare 'struct crypto_spawn' as it currently does. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: ahash - introduce crypto_grab_ahash()  [Eric Biggers, 1 file, -0/+10]
Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - introduce crypto_grab_shash()  [Eric Biggers, 1 file, -0/+10]
Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: algapi - pass instance to crypto_grab_spawn()  [Eric Biggers, 1 file, -8/+2]
Currently, crypto_spawn::inst is first used temporarily to pass the instance to crypto_grab_spawn(). Then crypto_init_spawn() overwrites it with crypto_spawn::next, which shares the same union. Finally, crypto_spawn::inst is set again when the instance is registered. Make this less convoluted by just passing the instance as an argument to crypto_grab_spawn() instead. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: akcipher - pass instance to crypto_grab_akcipher()  [Eric Biggers, 1 file, -9/+3]
Initializing a crypto_akcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_akcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst"). So just make crypto_grab_akcipher() take the instance as an argument. To keep the function call from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into pkcs1pad_create().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: aead - pass instance to crypto_grab_aead()  [Eric Biggers, 1 file, -8/+3]
Initializing a crypto_aead_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst"). So just make crypto_grab_aead() take the instance as an argument. To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: skcipher - pass instance to crypto_grab_skcipher()  [Eric Biggers, 1 file, -8/+3]
Initializing a crypto_skcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst"). So just make crypto_grab_skcipher() take the instance as an argument. To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
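A hedged sketch of the simplified initialization (variable names are illustrative): the instance is now passed straight to crypto_grab_skcipher() instead of being stashed in spawn->base.inst first.

    #include <crypto/internal/skcipher.h>

    static int example_grab(struct skcipher_instance *inst,
                            struct crypto_skcipher_spawn *spawn,
                            const char *name, u32 mask)
    {
            /* One call; no separate "spawn->base.inst = ..." step anymore. */
            return crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                                        name, 0, mask);
    }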
2020-01-09  crypto: ahash - make struct ahash_instance be the full size  [Eric Biggers, 1 file, -3/+9]
Define struct ahash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end. This is needed to allow allocating ahash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere. Also take advantage of the addition of the base instance to struct ahash_instance by simplifying the ahash_crypto_instance() and ahash_instance() functions. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - make struct shash_instance be the full size  [Eric Biggers, 1 file, -4/+9]
Define struct shash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end. This is needed to allow allocating shash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere. Also take advantage of the addition of the base instance to struct shash_instance by simplifying the shash_crypto_instance() and shash_instance() functions. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
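A hedged sketch of the allocation pattern this makes possible (struct example_ictx is hypothetical):

    #include <crypto/internal/hash.h>
    #include <linux/slab.h>

    struct example_ictx {
            struct crypto_shash_spawn spawn;   /* per-instance context */
    };

    static struct shash_instance *example_alloc(void)
    {
            /* The algorithm, the crypto_instance fields and the per-instance
             * context are now carved out of a single allocation. */
            return kzalloc(sizeof(struct shash_instance) +
                           sizeof(struct example_ictx), GFP_KERNEL);
    }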
2020-01-09  crypto: remove CRYPTO_TFM_RES_WEAK_KEY  [Eric Biggers, 2 files, -21/+5]
The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless. There are also no tests that verify that all algorithms actually set (or don't set) it correctly. This is also the last remaining CRYPTO_TFM_RES_* flag, which means that it's the only thing still needing all the boilerplate code which propagates these flags around from child => parent tfms. And if someone ever needs to distinguish this error in the future (which is somewhat unlikely, as it's been unneeded for a long time), it would be much better to just define a new return value like -EKEYREJECTED. That would be much simpler, less error-prone, and easier to test. So just remove this flag. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN  [Eric Biggers, 4 files, -15/+6]
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless. Also, many algorithms fail to set this flag when given a bad length key. Reviewing just the generic implementations, this is the case for aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309, rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably many more in arch/*/crypto/ and drivers/crypto/. Some algorithms can even set this flag when the key is the correct length. For example, authenc and authencesn set it when the key payload is malformed in any way (not just a bad length), the atmel-sha and ccree drivers can set it if a memory allocation fails, and the chelsio driver sets it for bad auth tag lengths, not just bad key lengths. So even if someone actually wanted to start checking this flag (which seems unlikely, since it's been unused for a long time), there would be a lot of work needed to get it working correctly. But it would probably be much better to go back to the drawing board and just define different return values, like -EINVAL if the key is invalid for the algorithm vs. -EKEYREJECTED if the key was rejected by a policy like "no weak keys". That would be much simpler, less error-prone, and easier to test. So just remove this flag. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: skcipher - remove skcipher_walk_aead()  [Eric Biggers, 1 file, -2/+0]
skcipher_walk_aead() is unused and is identical to skcipher_walk_aead_encrypt(), so remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-27  crypto: skcipher - Add skcipher_ialg_simple helper  [Herbert Xu, 1 file, -3/+11]
This patch introduces the skcipher_ialg_simple helper which fetches the crypto_alg structure from a simple skcipher instance's spawn. This allows us to remove the third argument from the function skcipher_alloc_instance_simple. In doing so the reference count to the algorithm is now maintained by the Crypto API and the caller no longer needs to drop the alg refcount. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-27  crypto: api - Retain alg refcount in crypto_grab_spawn  [Herbert Xu, 1 file, -2/+15]
This patch changes crypto_grab_spawn to retain the reference count on the algorithm. This is because the caller needs to access the algorithm parameters and without the reference count the algorithm can be freed at any time. The reference count will be subsequently dropped by the crypto API once the instance has been registered. The helper crypto_drop_spawn will also conditionally drop the reference count depending on whether it has been registered. Note that the code is actually added to crypto_init_spawn. However, unless the caller activates this by setting spawn->dropref beforehand then nothing happens. The only caller that sets dropref is currently crypto_grab_spawn. Once all legacy users of crypto_init_spawn disappear, then we can kill the dropref flag. Internally each instance will maintain a list of its spawns prior to registration. This memory used by this list is shared with other fields that are only used after registration. In order for this to work a new flag spawn->registered is added to indicate whether spawn->inst can be used. Fixes: d6ef2f198d4c ("crypto: api - Add crypto_grab_spawn primitive") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-20  crypto: algapi - make unregistration functions return void  [Eric Biggers, 4 files, -10/+6]
Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0. Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11  crypto: hmac - Use init_tfm/exit_tfm interface  [Herbert Xu, 1 file, -0/+6]
This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum. This patch also adds the API helper shash_alg_instance. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
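A hedged sketch of the new interface (not the hmac code itself; names are illustrative): ->init_tfm()/->exit_tfm() in struct shash_alg take the place of cra_init/cra_exit, and shash_alg_instance() fetches the instance from a tfm.

    #include <crypto/internal/hash.h>

    static int example_init_tfm(struct crypto_shash *tfm)
    {
            struct shash_instance *inst = shash_alg_instance(tfm);

            /* ... set up per-tfm state using the instance's context ... */
            (void)inst;
            return 0;
    }

    static void example_exit_tfm(struct crypto_shash *tfm)
    {
            /* ... tear down per-tfm state ... */
    }

    /* In the instance's shash_alg:
     *     .init_tfm = example_init_tfm,
     *     .exit_tfm = example_exit_tfm,
     */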