Age  Commit message  (Author, files changed, lines -/+)

2020-07-23  crypto: testmgr - delete duplicated words  (Randy Dunlap, 1 file, -5/+5)

Delete the doubled word "from" in multiple places.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: Replace HTTP links with HTTPS ones  (Alexander A. Klimov, 16 files, -43/+43)

Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
            If both the HTTP and HTTPS versions return 200 OK and serve the same content:
              Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: skcipher - drop duplicated word in kernel-doc  (Randy Dunlap, 1 file, -1/+1)

Drop the doubled word "request" in a kernel-doc comment.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: hash - drop duplicated word in a comment  (Randy Dunlap, 1 file, -1/+1)

Drop the doubled word "in" in a comment.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: omap-aes - Fix sparse and compiler warnings  (Herbert Xu, 1 file, -3/+3)

This patch fixes all the sparse and W=1 compiler warnings in the driver.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  hwrng: imx-rngc - enable driver for i.MX6  (Horia Geantă, 1 file, -1/+1)

i.MX6 SL, SLL, ULL and ULZ SoCs have an RNGB block. Since the imx-rngc
driver also supports RNGB, let's enable it for these SoCs too.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Reviewed-by: Martin Kaiser <martin@kaiser.cx>
Reviewed-by: Marco Felsch <m.felsch@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  dt-bindings: rng: add RNGB compatibles for i.MX6 SoCs  (Horia Geantă, 1 file, -0/+3)

An RNGB block is found in some i.MX6 SoCs - 6SL, 6SLL, 6ULL, 6ULZ.
Add the corresponding compatible strings.

Note: several NXP SoCs from the QorIQ family (P1010, P1023, P4080, P3041,
P5020) also have an RNGB, however it is part of the CAAM (Cryptographic
Accelerator and Assurance Module) crypto accelerator. In that case, RNGB
is managed by the caam driver (drivers/crypto/caam/), since it is tightly
related to the caam "job ring" interface, not to mention that CAAM
internally relies on RNGB as a source of randomness. On the other hand,
the i.MX6 SoCs with RNGB have a DCP (Data Co-Processor) crypto
accelerator, and this block and RNGB are independent.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: remove padata_parallel_queue  (Daniel Jordan, 2 files, -39/+22)

Only its reorder field is actually used now, so remove the struct and
embed @reorder directly in parallel_data.

No functional change, just a cleanup.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: fold padata_alloc_possible() into padata_alloc()  (Daniel Jordan, 4 files, -31/+8)

There's no reason to have two interfaces when there's only one caller.
Removing _possible saves text and simplifies future changes.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: remove effective cpumasks from the instance  (Daniel Jordan, 2 files, -29/+3)

A padata instance has effective cpumasks that store the user-supplied
masks ANDed with the online mask, but this middleman is unnecessary.
parallel_data keeps the same information around. Removing it saves text
and avoids code churn in future changes.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: inline single call of pd_setup_cpumasks()  (Daniel Jordan, 1 file, -23/+9)

pd_setup_cpumasks() has only one caller. Move its contents inline to
prepare for the next cleanup.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: remove stop function  (Daniel Jordan, 4 files, -38/+5)

padata_stop() has two callers and is unnecessary in both cases. When
pcrypt calls it before padata_free(), it's being unloaded so there are no
outstanding padata jobs [0]. When __padata_free() calls it, it's either
along the same path or else pcrypt initialization failed, which of course
means there are also no outstanding jobs.

Removing it simplifies padata and saves text.

[0] https://lore.kernel.org/linux-crypto/20191119225017.mjrak2fwa5vccazl@gondor.apana.org.au/

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  padata: remove start function  (Daniel Jordan, 3 files, -29/+1)

padata_start() is only used right after pcrypt allocates an instance with
all possible CPUs, when PADATA_INVALID can't happen, so there's no need
for a separate "start" step. It can be done during allocation to save
text, make using padata easier, and avoid unneeded calls in the future.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: qat - fix double free in qat_uclo_create_batch_init_list  (Tom Rix, 1 file, -2/+7)

clang static analysis flags this error:

  qat_uclo.c:297:3: warning: Attempt to free released memory [unix.Malloc]
          kfree(*init_tab_base);
          ^~~~~~~~~~~~~~~~~~~~~

When the input *init_tab_base is null, the function allocates memory for
the head of the list. When there is a problem allocating the other list
elements, the list is unwound and freed. Then a check is made whether the
list head was allocated, and it is freed as well. The variable 'tail_old'
keeps track of what may need to be freed. The unwinding/freeing block is:

  while (tail_old) {
          mem_init = tail_old->next;
          kfree(tail_old);
          tail_old = mem_init;
  }

The problem is that the first element of tail_old is also what was
allocated for the list head:

  init_header = kzalloc(sizeof(*init_header), GFP_KERNEL);
  ...
          *init_tab_base = init_header;
          flag = 1;
  }
  tail_old = init_header;

So *init_tab_base/init_header are freed twice.

There is another problem. When the input *init_tab_base is non-null,
tail_old is calculated by travelling down the list to its last entry:

  tail_old = init_header;
  while (tail_old->next)
          tail_old = tail_old->next;

When the unwinding free happens, the last entry of the input list will be
freed. So the freeing needs a general change: if it was locally allocated,
the first element of tail_old is freed, otherwise it is skipped. As a bit
of cleanup, reset *init_tab_base if it came in as null.

Fixes: b4b7e67c917f ("crypto: qat - Intel(R) QAT ucode part of fw loader")
Cc: <stable@vger.kernel.org>
Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

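For illustration, a sketch of the reworked error path implied by the
description above (reconstructed from the commit text, not the literal
upstream diff; 'flag' marks whether the list head was allocated locally):

  out_err:
          /* Do not free the first element unless it was allocated here. */
          tail_old = tail_old->next;
          if (flag) {
                  kfree(*init_tab_base);
                  *init_tab_base = NULL;
          }
          while (tail_old) {
                  mem_init = tail_old->next;
                  kfree(tail_old);
                  tail_old = mem_init;
          }
          return -ENOMEM;

When the head was allocated locally, it is skipped by the loop and freed
exactly once via *init_tab_base; when the caller passed an existing list,
its last element is skipped and only the newly appended nodes are freed.
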
2020-07-23  crypto: sa2ul - add device links to child devices  (Tero Kristo, 1 file, -0/+11)

The child devices of sa2ul (like the RNG) have a hard dependency on the
parent; they can't function without the parent enabled. Add device links
for this purpose so that the dependencies are taken care of properly.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: sa2ul - Add AEAD algorithm support  (Keerthy, 2 files, -21/+518)

Add support for sa2ul hardware AEAD for the hmac(sha256),cbc(aes) and
hmac(sha1),cbc(aes) algorithms.

Signed-off-by: Keerthy <j-keerthy@ti.com>
[t-kristo@ti.com: number of bug fixes, major refactoring and cleanup of code]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: sa2ul - add sha1/sha256/sha512 support  (Keerthy, 2 files, -13/+560)

Add support for sa2ul-based hardware authentication with sha1/sha256/sha512.
For the hash update mechanism we always use the software fallback for now,
as there is no way to fetch the partial hash state from the HW accelerator.
The HW accelerator is only used when digest is called for a data chunk of
known size.

Signed-off-by: Keerthy <j-keerthy@ti.com>
[t-kristo@ti.com: various bug fixes, major cleanups and refactoring of code]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: sa2ul - Add crypto driver  (Keerthy, 4 files, -0/+1783)

Add a basic crypto driver that currently supports AES/3DES in CBC mode for
both encryption and decryption.

Signed-off-by: Keerthy <j-keerthy@ti.com>
[t-kristo@ti.com: major re-work to fix various bugs in the driver and to cleanup the code]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  dt-bindings: crypto: Add TI SA2UL crypto accelerator documentation  (Keerthy, 1 file, -0/+76)

The Security Accelerator Ultra Lite (SA2UL) subsystem provides hardware
cryptographic acceleration for the following use cases:

* Encryption and authentication for secure boot
* Encryption and authentication of content in applications requiring DRM
  (digital rights management) and content/asset protection

SA2UL provides support for a number of different cryptographic algorithms,
including SHA1, SHA256, SHA512, AES, 3DES, and various combinations of the
previous for AEAD use.

Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Keerthy <j-keerthy@ti.com>
[t-kristo@ti.com: converted documentation to yaml]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-23  crypto: x86/crc32c - fix building with clang ias  (Arnd Bergmann, 1 file, -1/+1)

The clang integrated assembler complains about movzxw:

  arch/x86/crypto/crc32c-pcl-intel-asm_64.S:173:2: error: invalid instruction mnemonic 'movzxw'

It seems that movzwq is the mnemonic that it expects instead, and this is
what objdump prints when disassembling the file.

Fixes: 6a8ce1ef3940 ("crypto: crc32c - Optimize CRC32C calculation with PCLMULQDQ instruction")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: chelsio - Fix some pr_xxx messages  (Christophe JAILLET, 1 file, -10/+9)

At the top of this file we have:

  #define pr_fmt(fmt) "chcr:" fmt

So there is no need to repeat "chcr : " in error messages when the pr_xxx
macros are used. That would lead to logs like "chcr:chcr : blabla".

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

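A minimal stand-alone illustration of the mechanism (kernel-style sketch,
not the chcr driver itself; the message text is made up): pr_err() expands
to printk(KERN_ERR pr_fmt(fmt), ...), so any prefix repeated in the format
string is printed twice.

  /* must be defined before the first include, as in the chelsio driver */
  #define pr_fmt(fmt) "chcr:" fmt

  #include <linux/printk.h>

  static void report_key_error(void)
  {
          pr_err("chcr : invalid key length\n"); /* logs "chcr:chcr : invalid key length" */
          pr_err("invalid key length\n");        /* logs "chcr:invalid key length" */
  }
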
2020-07-16  crypto: chelsio - Avoid some code duplication  (Christophe JAILLET, 1 file, -3/+1)

This error handling code is the same as the error handling path of
'chcr_authenc_setkey()'. So just 'goto out', as done everywhere else in
the function, to simplify the code.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: lrw - prefix function and struct names with "lrw"  (Eric Biggers, 1 file, -58/+61)

Overly-generic names can cause problems like naming collisions, confusing
crash reports, and reduced grep-ability. E.g. see commit d099ea6e6fde
("crypto - Avoid free() namespace collision").

Clean this up for the lrw template by prefixing the names with "lrw_".
(I didn't use "crypto_lrw_" instead because that seems overkill.)

Also constify the tfm context in a couple places.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: xts - prefix function and struct names with "xts"  (Eric Biggers, 1 file, -65/+72)

Overly-generic names can cause problems like naming collisions, confusing
crash reports, and reduced grep-ability. E.g. see commit d099ea6e6fde
("crypto - Avoid free() namespace collision").

Clean this up for the xts template by prefixing the names with "xts_".
(I didn't use "crypto_xts_" instead because that seems overkill.)

Also constify the tfm context in a couple places, and make
xts_free_instance() use the instance context structure so that it doesn't
just assume the crypto_skcipher_spawn is at the beginning.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - disable FLR triggered by hardware  (Hui Tang, 1 file, -4/+22)

For Hi1620 hardware, we should disable these hardware FLRs:
1. BME_FLR - bit 7
2. PM_FLR - bit 11
3. SRIOV_FLR - bit 12

Otherwise HPRE may go to the D3 state when we bind and unbind HPRE
quickly, as it performs the FLR triggered by BME/PM/SRIOV.

Fixes: c8b4b477079d ("crypto: hisilicon - add HiSilicon HPRE accelerator")
Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - update debugfs interface parameters  (Meng Yu, 1 file, -33/+26)

Update debugfs interface parameters, and adjust the processing logic
inside the corresponding function.

Fixes: 848974151618 ("crypto: hisilicon - Add debugfs for HPRE")
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - Add a switch in sriov_configure  (Meng Yu, 1 file, -1/+2)

If CONFIG_PCI_IOV is not enabled, we cannot use "sriov_configure".

Fixes: 5ec302a364bf ("crypto: hisilicon - add SRIOV support for HPRE")
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Shukun Tan <tanshukun1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - Modify the Macro definition and format  (Meng Yu, 1 file, -7/+5)

1. Bits 1 to 5 are NFE, not CE.
2. The macro 'HPRE_VF_NUM' is defined in 'qm.h', so delete it here.
3. Delete multiple blank lines.
4. Adjust format alignment.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Longfang Liu <liulongfang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - HPRE_OVERTIME_THRHLD can be written by debugfs  (Hui Tang, 1 file, -4/+6)

Registers in "hpre_dfx_files" can only be cleared to zero, except for
HPRE_OVERTIME_THRHLD, which can be written with any value.

Fixes: 64a6301ebee7 ("crypto: hisilicon/hpre - add debugfs for ...")
Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: hisilicon/hpre - Init the value of current_q of debugfs  (Meng Yu, 1 file, -0/+1)

Initialize the current queue number to HPRE_PF_DEF_Q_NUM; otherwise it is
zero and we can't set its value via "current_q_write".

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: drivers - set the flag CRYPTO_ALG_ALLOCATES_MEMORY  (Mikulas Patocka, 34 files, -143/+361)

Set the flag CRYPTO_ALG_ALLOCATES_MEMORY in the crypto drivers that
allocate memory.

drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c: sun8i_ce_cipher
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c: sun8i_ss_cipher
drivers/crypto/amlogic/amlogic-gxl-core.c: meson_cipher
drivers/crypto/axis/artpec6_crypto.c: artpec6_crypto_common_init
drivers/crypto/bcm/cipher.c: spu_skcipher_rx_sg_create
drivers/crypto/caam/caamalg.c: aead_edesc_alloc
drivers/crypto/caam/caamalg_qi.c: aead_edesc_alloc
drivers/crypto/caam/caamalg_qi2.c: aead_edesc_alloc
drivers/crypto/caam/caamhash.c: hash_digest_key
drivers/crypto/cavium/cpt/cptvf_algs.c: process_request
drivers/crypto/cavium/nitrox/nitrox_aead.c: nitrox_process_se_request
drivers/crypto/cavium/nitrox/nitrox_skcipher.c: nitrox_process_se_request
drivers/crypto/ccp/ccp-crypto-aes-cmac.c: ccp_do_cmac_update
drivers/crypto/ccp/ccp-crypto-aes-galois.c: ccp_crypto_enqueue_request
drivers/crypto/ccp/ccp-crypto-aes-xts.c: ccp_crypto_enqueue_request
drivers/crypto/ccp/ccp-crypto-aes.c: ccp_crypto_enqueue_request
drivers/crypto/ccp/ccp-crypto-des3.c: ccp_crypto_enqueue_request
drivers/crypto/ccp/ccp-crypto-sha.c: ccp_crypto_enqueue_request
drivers/crypto/chelsio/chcr_algo.c: create_cipher_wr
drivers/crypto/hisilicon/sec/sec_algs.c: sec_alloc_and_fill_hw_sgl
drivers/crypto/hisilicon/sec2/sec_crypto.c: sec_alloc_req_id
drivers/crypto/inside-secure/safexcel_cipher.c: safexcel_queue_req
drivers/crypto/inside-secure/safexcel_hash.c: safexcel_ahash_enqueue
drivers/crypto/ixp4xx_crypto.c: ablk_perform
drivers/crypto/marvell/cesa/cipher.c: mv_cesa_skcipher_dma_req_init
drivers/crypto/marvell/cesa/hash.c: mv_cesa_ahash_dma_req_init
drivers/crypto/marvell/octeontx/otx_cptvf_algs.c: create_ctx_hdr
drivers/crypto/n2_core.c: n2_compute_chunks
drivers/crypto/picoxcell_crypto.c: spacc_sg_to_ddt
drivers/crypto/qat/qat_common/qat_algs.c: qat_alg_skcipher_encrypt
drivers/crypto/qce/skcipher.c: qce_skcipher_async_req_handle
drivers/crypto/talitos.c: talitos_edesc_alloc
drivers/crypto/virtio/virtio_crypto_algs.c: __virtio_crypto_skcipher_do_req
drivers/crypto/xilinx/zynqmp-aes-gcm.c: zynqmp_aes_aead_cipher

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
[EB: avoid overly-long lines]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

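For illustration, a hedged sketch of what "setting the flag" looks like in
a driver's algorithm definition (the "exampledrv" names and context struct
are made up; none of the drivers listed above is quoted literally). The
flag simply joins the existing cra_flags of an algorithm whose request
path may allocate memory:

  #include <crypto/aes.h>
  #include <crypto/skcipher.h>
  #include <linux/module.h>

  struct exampledrv_ctx {
          u8 key[AES_MAX_KEY_SIZE];
          unsigned int keylen;
  };

  static struct skcipher_alg exampledrv_cbc_aes = {
          .base = {
                  .cra_name        = "cbc(aes)",
                  .cra_driver_name = "cbc-aes-exampledrv",
                  .cra_priority    = 300,
                  .cra_flags       = CRYPTO_ALG_ASYNC |
                                     CRYPTO_ALG_ALLOCATES_MEMORY |
                                     CRYPTO_ALG_KERN_DRIVER_ONLY,
                  .cra_blocksize   = AES_BLOCK_SIZE,
                  .cra_ctxsize     = sizeof(struct exampledrv_ctx),
                  .cra_module      = THIS_MODULE,
          },
          .min_keysize = AES_MIN_KEY_SIZE,
          .max_keysize = AES_MAX_KEY_SIZE,
          .ivsize      = AES_BLOCK_SIZE,
          /* .setkey / .encrypt / .decrypt omitted in this sketch */
  };
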
2020-07-16  crypto: algapi - introduce the flag CRYPTO_ALG_ALLOCATES_MEMORY  (Eric Biggers, 2 files, -1/+34)

Introduce a new algorithm flag CRYPTO_ALG_ALLOCATES_MEMORY. If this flag
is set, then the driver allocates memory in its request routine. Such
drivers are not suitable for disk encryption because GFP_ATOMIC allocation
can fail anytime (causing random I/O errors) and GFP_KERNEL allocation can
recurse into the block layer, causing a deadlock.

For now, this flag is only implemented for some algorithm types. We also
assume some usage constraints for it to be meaningful, since there are
lots of edge cases the crypto API allows (e.g., misaligned or fragmented
scatterlists) that mean that nearly any crypto algorithm can allocate
memory in some case. See the comment for details.

Also add this flag to CRYPTO_ALG_INHERITED_FLAGS so that when a template
is instantiated, this flag is set on the template instance if it is set on
any algorithm the instance uses.

Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

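A hedged sketch of how a caller with the constraints described above (for
example a storage-encryption user) can request an implementation that does
not allocate in its request path, by putting the flag into the mask
argument of crypto_alloc_skcipher() (the helper name is illustrative):

  #include <crypto/skcipher.h>

  static struct crypto_skcipher *get_nonallocating_skcipher(const char *alg)
  {
          /*
           * type = 0 and mask = CRYPTO_ALG_ALLOCATES_MEMORY: only match
           * implementations that have the flag clear, i.e. ones that do
           * not allocate memory while processing a request.
           */
          return crypto_alloc_skcipher(alg, 0, CRYPTO_ALG_ALLOCATES_MEMORY);
  }
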
2020-07-16  crypto: algapi - add NEED_FALLBACK to INHERITED_FLAGS  (Eric Biggers, 5 files, -9/+4)

CRYPTO_ALG_NEED_FALLBACK is handled inconsistently. When it's requested to
be clear, some templates propagate that request to child algorithms, while
others don't.

It's apparently desired for NEED_FALLBACK to be propagated, to avoid
deadlocks where a module tries to load itself while it's being
initialized, and to avoid unnecessarily complex fallback chains where we
have e.g. cbc-aes-$driver falling back to cbc(aes-$driver) where
aes-$driver itself falls back to aes-generic, instead of cbc-aes-$driver
simply falling back to cbc(aes-generic). There have been a number of
fixes to this effect:

  commit 89027579bc6c ("crypto: xts - Propagate NEED_FALLBACK bit")
  commit d2c2a85cfe82 ("crypto: ctr - Propagate NEED_FALLBACK bit")
  commit e6c2e65c70a6 ("crypto: cbc - Propagate NEED_FALLBACK bit")

But it seems that other templates can have the same problems too. To
avoid this whack-a-mole, just add NEED_FALLBACK to INHERITED_FLAGS so that
it's always inherited.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: algapi - use common mechanism for inheriting flags  (Eric Biggers, 23 files, -234/+153)

The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a template
is instantiated, the template will have CRYPTO_ALG_ASYNC set if any of the
algorithms it uses has CRYPTO_ALG_ASYNC set.

We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
"inherited" in the same way. This is difficult because the handling of
CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by (see the sketch
after this list):

- Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
  have these inheritance semantics.

- Add crypto_algt_inherited_mask(), for use by template ->create()
  methods. It returns any of these flags that the user asked to be unset
  and thus must be passed in the 'mask' to crypto_grab_*().

- Also modify crypto_check_attr_type() to handle computing the 'mask' so
  that most templates can just use this.

- Make crypto_grab_*() propagate these flags to the template instance
  being created so that templates don't have to do this themselves.

Make crypto/simd.c propagate these flags too, since it "wraps" another
algorithm, similar to a template.

Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

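A hedged sketch of how a template ->create() method would use this
mechanism, assuming crypto_check_attr_type() gained a mask output
parameter as the list above states (the template name "foo", the error
handling, and the omission of instance registration are all illustrative
simplifications, not the upstream code):

  #include <crypto/internal/skcipher.h>

  static int foo_create(struct crypto_template *tmpl, struct rtattr **tb)
  {
          struct skcipher_instance *inst;
          u32 mask;
          int err;

          /* computes the inherited flags the user asked to be clear */
          err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
          if (err)
                  return err;

          inst = kzalloc(sizeof(*inst) + sizeof(struct crypto_skcipher_spawn),
                         GFP_KERNEL);
          if (!inst)
                  return -ENOMEM;

          /*
           * crypto_grab_*() both honours 'mask' and propagates the
           * CRYPTO_ALG_INHERITED_FLAGS of the underlying algorithm onto
           * the instance being created.
           */
          err = crypto_grab_skcipher(skcipher_instance_ctx(inst),
                                     skcipher_crypto_instance(inst),
                                     crypto_attr_alg_name(tb[1]), 0, mask);
          if (err)
                  kfree(inst);
          /* remaining instance setup and registration omitted */
          return err;
  }
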
2020-07-16  crypto: seqiv - remove seqiv_create()  (Eric Biggers, 1 file, -15/+1)

seqiv_create() is pointless because it just checks that the template is
being instantiated as an AEAD, then calls seqiv_aead_create(). But
seqiv_aead_create() does the exact same check, via aead_geniv_alloc().
Just remove seqiv_create() and use seqiv_aead_create() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: geniv - remove unneeded arguments from aead_geniv_alloc()  (Eric Biggers, 4 files, -6/+7)

The type and mask arguments to aead_geniv_alloc() are always 0, so remove
them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: x86 - Remove include/asm/inst.h  (Uros Bizjak, 7 files, -716/+399)

The current minimum required version of binutils is 2.23, which supports
the PSHUFB, PCLMULQDQ, PEXTRD, AESKEYGENASSIST, AESIMC, AESENC, AESENCLAST,
AESDEC, AESDECLAST and MOVQ instruction mnemonics.

Substitute the macros from include/asm/inst.h with the proper instruction
mnemonics in various assembly files in the x86/crypto directory, and
remove the now unneeded file.

The patch was tested by calculating and comparing sha256sum hashes of the
stripped object files before and after the patch, to be sure that the
executable code didn't change.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: "David S. Miller" <davem@davemloft.net>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: Borislav Petkov <bp@alien8.de>
CC: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: ccp - Silence strncpy warning  (Herbert Xu, 1 file, -1/+2)

This patch kills an strncpy by using strscpy instead. The name would be
silently truncated if it is too long.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: John Allen <john.allen@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  hwrng: ks-sa - Replace HTTP links with HTTPS ones  (Alexander A. Klimov, 1 file, -1/+1)

Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
            If both the HTTP and HTTPS versions return 200 OK and serve the same content:
              Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  ASoC: cros_ec_codec: use sha256() instead of open coding  (Eric Biggers, 1 file, -25/+2)

Now that there's a function that calculates the SHA-256 digest of a buffer
in one step, use it instead of sha256_init() + sha256_update() +
sha256_final().

Also simplify the code by inlining calculate_sha256() into its caller and
switching a debug log statement to use %*phN instead of bin2hex().

Acked-by: Tzung-Bi Shih <tzungbi@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: alsa-devel@alsa-project.org
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Cheng-Yi Chiang <cychiang@chromium.org>
Cc: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Cc: Guenter Roeck <groeck@chromium.org>
Cc: Tzung-Bi Shih <tzungbi@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  mptcp: use sha256() instead of open coding  (Eric Biggers, 1 file, -12/+3)

Now that there's a function that calculates the SHA-256 digest of a buffer
in one step, use it instead of sha256_init() + sha256_update() +
sha256_final().

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Cc: mptcp@lists.01.org
Cc: Mat Martineau <mathew.j.martineau@linux.intel.com>
Cc: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  efi: use sha256() instead of open coding  (Eric Biggers, 1 file, -6/+3)

Now that there's a function that calculates the SHA-256 digest of a buffer
in one step, use it instead of sha256_init() + sha256_update() +
sha256_final().

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: linux-efi@vger.kernel.org
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: lib/sha256 - add sha256() function  (Eric Biggers, 2 files, -0/+11)

Add a function sha256() which computes a SHA-256 digest in one step,
combining sha256_init() + sha256_update() + sha256_final(). This is
similar to how we also have blake2s().

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

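A hedged before/after sketch of the conversion performed by the three
patches above (the caller and buffer are placeholders, not code from any
of those files):

  #include <crypto/sha.h>

  static void digest_blob(const u8 *buf, unsigned int len,
                          u8 out[SHA256_DIGEST_SIZE])
  {
          /* open-coded three-step sequence being replaced */
          struct sha256_state state;

          sha256_init(&state);
          sha256_update(&state, buf, len);
          sha256_final(&state, out);

          /* new one-step helper */
          sha256(buf, len, out);
  }
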
2020-07-16  crypto: sparc - rename sha256 to sha256_alg  (Eric Biggers, 1 file, -7/+7)

To avoid a naming collision when we add a sha256() library function,
rename the "sha256" static variable in sha256_glue.c to "sha256_alg".
For consistency, also rename "sha224" to "sha224_alg".

Reported-by: kernel test robot <lkp@intel.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: x86/chacha-sse3 - use unaligned loads for state array  (Ard Biesheuvel, 3 files, -27/+10)

Due to the fact that the x86 port does not support allocating objects on
the stack with an alignment that exceeds 8 bytes, we have a rather ugly
hack in the x86 code for ChaCha to ensure that the state array is aligned
to 16 bytes, allowing the SSE3 implementation of the algorithm to use
aligned loads.

Given that the performance benefit of using aligned loads appears to be
limited (~0.25% for 1k blocks using tcrypt on a Corei7-8650U), and the
fact that this hack has leaked into generic ChaCha code, let's just
remove it.

Cc: Martin Willi <martin@strongswan.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin Willi <martin@strongswan.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: lib/chacha20poly1305 - Add missing function declaration  (Herbert Xu, 2 files, -2/+2)

This patch adds a declaration for chacha20poly1305_selftest to silence a
sparse warning.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: mediatek - use AES library for GCM key derivation  (Ard Biesheuvel, 2 files, -57/+9)

The Mediatek accelerator driver calls into a dynamically allocated
skcipher of the ctr(aes) variety to perform GCM key derivation, which
involves AES encryption of a single block consisting of NUL bytes. There
is no point in using the skcipher API for this, so use the AES library
interface instead.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

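A hedged sketch of the library-based derivation (the helper name is made
up and this is not the literal mediatek code): the value described above
is AES applied to a single all-zero block, which the AES library computes
directly, with no ctr(aes) skcipher and no request allocation.

  #include <crypto/aes.h>
  #include <linux/string.h>

  static int derive_gcm_hash_key(const u8 *key, unsigned int keylen,
                                 u8 out[AES_BLOCK_SIZE])
  {
          struct crypto_aes_ctx ctx;
          u8 zeroes[AES_BLOCK_SIZE] = {};
          int err;

          err = aes_expandkey(&ctx, key, keylen);
          if (err)
                  return err;

          aes_encrypt(&ctx, out, zeroes);      /* out = AES_K(0^128) */
          memzero_explicit(&ctx, sizeof(ctx)); /* don't leave the key schedule around */
          return 0;
  }
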
2020-07-16  crypto: sahara - permit asynchronous skcipher as fallback  (Ard Biesheuvel, 1 file, -51/+45)

Even though the sahara driver implements asynchronous versions of ecb(aes)
and cbc(aes), the fallbacks it allocates are required to be synchronous.
Given that SIMD based software implementations are usually asynchronous as
well, even though they rarely complete asynchronously (this typically only
happens in cases where the request was made from softirq context, while
SIMD was already in use in the task context that it interrupted), these
implementations are disregarded, and either the generic C version or
another table based version implemented in assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue, but
potentially a security issue as well (due to the fact that table based AES
is not time invariant), let's fix this, by allocating an ordinary skcipher
as the fallback, and invoke it with the completion routine that was given
to the outer request.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

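A hedged sketch of the pattern this and the two following patches switch
to (illustrative names, not the literal driver code): the fallback is an
ordinary, possibly asynchronous skcipher, and the sub-request reuses the
outer request's completion callback so the caller is notified whenever the
fallback finishes.

  #include <crypto/internal/skcipher.h>

  struct drv_ctx {
          struct crypto_skcipher *fallback;
  };

  /* At init time (not shown):
   *   ctx->fallback = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
   *   crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
   *                               crypto_skcipher_reqsize(ctx->fallback));
   */
  static int drv_fallback_encrypt(struct skcipher_request *req)
  {
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
          struct drv_ctx *ctx = crypto_skcipher_ctx(tfm);
          struct skcipher_request *subreq = skcipher_request_ctx(req);

          skcipher_request_set_tfm(subreq, ctx->fallback);
          skcipher_request_set_callback(subreq, req->base.flags,
                                        req->base.complete, req->base.data);
          skcipher_request_set_crypt(subreq, req->src, req->dst,
                                     req->cryptlen, req->iv);
          return crypto_skcipher_encrypt(subreq);
  }
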
2020-07-16  crypto: qce - permit asynchronous skcipher as fallback  (Ard Biesheuvel, 2 files, -21/+24)

Even though the qce driver implements asynchronous versions of ecb(aes),
cbc(aes) and xts(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue, but
potentially a security issue as well (due to the fact that table based AES
is not time invariant), let's fix this, by allocating an ordinary skcipher
as the fallback, and invoke it with the completion routine that was given
to the outer request.

While at it, remove the pointless memset() from qce_skcipher_init(), and
remove the call to it from qce_skcipher_init_fallback().

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-07-16  crypto: picoxcell - permit asynchronous skcipher as fallback  (Ard Biesheuvel, 1 file, -16/+22)

Even though the picoxcell driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue, but
potentially a security issue as well (due to the fact that table based AES
is not time invariant), let's fix this, by allocating an ordinary skcipher
as the fallback, and invoke it with the completion routine that was given
to the outer request.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>