|
Merge tag 'asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic
Pull asm-generic cleanups from Arnd Bergmann:
"The asm-generic changes for 4.4 are mostly a series from Christoph
Hellwig to clean up various abuses of headers in there. The patch to
rename the io-64-nonatomic-*.h headers caused some conflicts with new
users, so I added a workaround that we can remove in the next merge
window.
The only other patch is a warning fix from Marek Vasut"
* tag 'asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
asm-generic: temporarily add back asm-generic/io-64-nonatomic*.h
asm-generic: cmpxchg: avoid warnings from macro-ized cmpxchg() implementations
gpio-mxc: stop including <asm-generic/bug>
n_tracesink: stop including <asm-generic/bug>
n_tracerouter: stop including <asm-generic/bug>
mlx5: stop including <asm-generic/kmap_types.h>
hifn_795x: stop including <asm-generic/kmap_types.h>
drbd: stop including <asm-generic/kmap_types.h>
move count_zeros.h out of asm-generic
move io-64-nonatomic*.h out of asm-generic
|
|
Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto update from Herbert Xu:
"API:
- Add support for cipher output IVs in testmgr
- Add missing crypto_ahash_blocksize helper
- Mark authenc and des ciphers as not allowed under FIPS.
Algorithms:
- Add CRC support to 842 compression
- Add keywrap algorithm
- A number of changes to the akcipher interface:
+ Separate functions for setting public/private keys.
+ Use SG lists.
Drivers:
- Add Intel SHA Extension optimised SHA1 and SHA256
- Use dma_map_sg instead of custom functions in crypto drivers
- Add support for STM32 RNG
- Add support for ST RNG
- Add Device Tree support to exynos RNG driver
- Add support for mxs-dcp crypto device on MX6SL
- Add xts(aes) support to caam
- Add ctr(aes) and xts(aes) support to qat
- A large set of fixes from Russell King for the marvell/cesa driver"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (115 commits)
crypto: asymmetric_keys - Fix unaligned access in x509_get_sig_params()
crypto: akcipher - Don't #include crypto/public_key.h as the contents aren't used
hwrng: exynos - Add Device Tree support
hwrng: exynos - Fix missing configuration after suspend to RAM
hwrng: exynos - Add timeout for waiting on init done
dt-bindings: rng: Describe Exynos4 PRNG bindings
crypto: marvell/cesa - use __le32 for hardware descriptors
crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op()
crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio()
crypto: marvell/cesa - use gfp_t for gfp flags
crypto: marvell/cesa - use dma_addr_t for cur_dma
crypto: marvell/cesa - use readl_relaxed()/writel_relaxed()
crypto: caam - fix indentation of close braces
crypto: caam - only export the state we really need to export
crypto: caam - fix non-block aligned hash calculation
crypto: caam - avoid needlessly saving and restoring caam_hash_ctx
crypto: caam - print errno code when hash registration fails
crypto: marvell/cesa - fix memory leak
crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
crypto: marvell/cesa - rearrange handling for sw padded hashes
...
|
|
Much of the driver uses cpu_to_le32() to convert values for descriptors
to little endian before writing. Use __le32 to define the hardware-
accessed parts of the descriptors, and ensure most places where it's
reasonable to do so use cpu_to_le32() when assigning to these.
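A minimal sketch of the pattern, using hypothetical example_* names rather than
the driver's real descriptor layout:

  #include <linux/types.h>        /* __le32, u32, dma_addr_t */
  #include <asm/byteorder.h>      /* cpu_to_le32() */

  /* hardware-visible part of a DMA descriptor: always little endian */
  struct example_tdma_desc {
          __le32 byte_cnt;
          __le32 src;
          __le32 dst;
          __le32 next_dma;
  };

  static void example_fill_desc(struct example_tdma_desc *desc,
                                dma_addr_t src, dma_addr_t dst, u32 len)
  {
          desc->byte_cnt = cpu_to_le32(len);
          desc->src = cpu_to_le32(src);
          desc->dst = cpu_to_le32(dst);
          desc->next_dma = 0;             /* end of chain */
  }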
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When tdma->src is freed in mv_cesa_dma_cleanup(), we convert the DMA
address from a little-endian value prior to calling dma_pool_free().
However, mv_cesa_dma_add_op() assigns tdma->src without first converting
the DMA address to little endian. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the IO memcpy() functions when copying from/to MMIO memory.
These locations were found via sparse.
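Roughly the pattern involved, under hypothetical names; the engine's SRAM is an
ioremap()ed region, so the IO copy helpers are the right tool:

  #include <linux/io.h>           /* memcpy_toio(), memcpy_fromio() */

  /* copy an operation context into the engine's on-chip SRAM ... */
  static void example_op_to_sram(void __iomem *sram, const void *op, size_t len)
  {
          memcpy_toio(sram, op, len);
  }

  /* ... and read the result back out of it */
  static void example_op_from_sram(void *op, const void __iomem *sram, size_t len)
  {
          memcpy_fromio(op, sram, len);
  }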
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use gfp_t not u32 for the GFP flags.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
cur_dma is part of the software state, not read by the hardware.
Storing it in LE32 format is wrong, use dma_addr_t for this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use relaxed IO accessors where appropriate.
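For illustration only (the register name and offset are made up): the _relaxed()
variants skip the heavy barriers that readl()/writel() imply, which is enough
for register accesses that need no ordering against DMA memory.

  #include <linux/io.h>

  #define EXAMPLE_INT_STATUS      0x0e20  /* hypothetical register offset */

  static u32 example_read_int_status(void __iomem *regs)
  {
          return readl_relaxed(regs + EXAMPLE_INT_STATUS);
  }

  static void example_ack_int(void __iomem *regs, u32 mask)
  {
          writel_relaxed(mask, regs + EXAMPLE_INT_STATUS);
  }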
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The kernel's coding style suggests that closing braces for initialisers
should not be aligned to the open brace column. The CodingStyle doc
shows how this should be done. Remove the additional tab.
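For illustration (hypothetical structure), the closing brace returns to the
starting column of the declaration rather than gaining an extra tab:

  struct example_alg {
          const char *name;
          unsigned int digestsize;
  };

  /* preferred: closing brace back at the declaration's column */
  static struct example_alg example_good = {
          .name           = "sha1",
          .digestsize     = 20,
  };

  /* not this: the extra tab before the closing brace is what gets removed */
  static struct example_alg example_bad = {
          .name           = "sha1",
          .digestsize     = 20,
          };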
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Avoid exporting lots of state by only exporting what we really require,
which is the buffer containing the set of pending bytes to be hashed,
number of pending bytes, the context buffer, and the function pointer
state. This reduces the exported state size from 576 bytes to 216 bytes.
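A sketch of the idea, with illustrative field names and sizes rather than the
driver's real layout: the exported blob carries only what import needs to
resume the hash.

  #include <linux/types.h>

  struct ahash_request;   /* only needed for the function-pointer types */

  /* hypothetical export layout */
  struct example_hash_export_state {
          u8      buf[128];       /* pending, not-yet-hashed bytes    */
          u8      ctx[64];        /* running hash context from the HW */
          int     buflen;         /* number of pending bytes in buf[] */
          int     (*update)(struct ahash_request *req);
          int     (*final)(struct ahash_request *req);
          int     (*finup)(struct ahash_request *req);
  };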
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
caam does not properly calculate the size of the retained state
when non-block aligned hashes are requested - it uses the wrong
buffer sizes, which results in errors such as:
caam_jr 2102000.jr1: 40000501: DECO: desc idx 5: SGT Length Error. The descriptor is trying to read more data than is contained in the SGT table.
We end up here with:
in_len 0x46 blocksize 0x40 last_bufsize 0x0 next_bufsize 0x6
to_hash 0x40 ctx_len 0x28 nbytes 0x20
which results in a job descriptor of:
jobdesc@889: ed03d918: b0861c08 3daa0080 f1400000 3d03d938
jobdesc@889: ed03d928: 00000068 f8400000 3cde2a40 00000028
where the word at 0xed03d928 is the expected data size (0x68), and a
scatterlist containing:
sg@892: ed03d938: 00000000 3cde2a40 00000028 00000000
sg@892: ed03d948: 00000000 3d03d100 00000006 00000000
sg@892: ed03d958: 00000000 7e8aa700 40000020 00000000
0x68 comes from 0x28 (the context size) plus the "in_len" rounded down
to a block size (0x40). in_len comes from 0x26 bytes of unhashed data
from the previous operation, plus the 0x20 bytes from the latest
operation.
The fixed version would create:
sg@892: ed03d938: 00000000 3cde2a40 00000028 00000000
sg@892: ed03d948: 00000000 3d03d100 00000026 00000000
sg@892: ed03d958: 00000000 7e8aa700 40000020 00000000
which replaces the 0x06 length with the correct 0x26 bytes of previously
unhashed data.
This fixes a previous commit which erroneously "fixed" this due to a
DMA-API bug report; that commit indicates that the bug was caused via a
test_ahash_pnum() function in the tcrypt module. No such function has
ever existed in the mainline kernel. Given that the change in this
commit has been tested with DMA API debug enabled and shows no issue,
I can only conclude that test_ahash_pnum() was triggering that bad
behaviour by CAAM.
Fixes: 7d5196aba3c8 ("crypto: caam - Correct DMA unmap size in ahash_update_ctx()")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When exporting and importing the hash state, we will only export and
import into hashes which share the same struct crypto_ahash pointer.
(See hash_accept->af_alg_accept->hash_accept_parent.)
This means that saving the caam_hash_ctx structure on export, and
restoring it on import is a waste of resources. So, remove this code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Print the errno code when hash registration fails, so we know why the
failure occurred. This aids debugging.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The local chain variable is not cleaned up if an error occurs in the middle
of DMA chain creation. Fix that by dropping the local chain variable and
using the dreq->chain field which will be cleaned up by
mv_cesa_dma_cleanup() in case of errors.
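Roughly the shape of the fix, with hypothetical example_* names: descriptors
are appended directly to the request's own chain, so one cleanup helper can
free whatever was built before the failure.

  #include <linux/list.h>
  #include <linux/gfp.h>

  struct example_dma_req {
          struct list_head chain;         /* descriptors owned by the request */
  };

  int example_add_desc(struct list_head *chain, gfp_t flags);
  void example_dma_cleanup(struct example_dma_req *dreq);

  static int example_build_chain(struct example_dma_req *dreq, gfp_t flags)
  {
          int ret;

          /* append descriptors straight onto dreq->chain ... */
          ret = example_add_desc(&dreq->chain, flags);
          if (ret) {
                  /* ... so one helper can release everything built so far */
                  example_dma_cleanup(dreq);
                  return ret;
          }

          return 0;
  }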
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In mv_cesa_ahash_dma_last_req(), when adding the software padding, this must
be done using the first/mid fragment mode, and any subsequent operation needs
to be a mid-fragment. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rearrange the last request handling for hashes which require software
padding.
We prepare the padding to be appended, and then append as much of the
padding to any existing data that's already queued up, adding an
operation block and launching the operation.
Any remainder is then appended as a separate operation.
This ensures that the hardware only ever sees multiples of the hash
block size to be operated on for software padded hashes, thus ensuring
that the engine always indicates that it has finished the calculation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rearrange the last request handling for hardware finished hashes
by moving the generation of the fragment operation into this path.
This results in a simplified sequence to handle this case, and
allows us to move the software padded case further down into the
function. Add comments describing these parts.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the test for the last request out of mv_cesa_ahash_dma_last_req()
to its caller, and move the mv_cesa_dma_add_frag() down into this
function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Avoid adding the final operation within the loop, but instead add it
outside. We combine this with the handling for the no-data case.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When we process the last request of data, and the request contains user
data, the loop in mv_cesa_ahash_dma_req_init() marks the first data size
as being iter.base.op_len which does not include the size of the cache
data. This means we end up hashing an insufficient amount of data.
Fix this by always including the cache size in the first operation
length of any request.
This has the effect that for a request containing no user data,
iter.base.op_len == iter.src.op_offset == creq->cache_ptr
As a result, we include one further change to use iter.base.op_len in
the cache-but-no-user-data case to make the next change clearer.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the presence of the scatterlist to determine whether we should load
any new user data to the engine. The following shall always be true at
this point:
iter.base.op_len == 0 if and only if iter.src.sg == NULL
In doing so, we can:
1. eliminate the test for iter.base.op_len inside the loop, which
makes the loop operation more obvious and understandable.
2. move the operation generation for the cache-only case.
This prepares the code for the next step in its transformation, and also
uncovers a bug that will be fixed in the next patch.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the calls to mv_cesa_dma_add_frag() into the parent function,
mv_cesa_ahash_dma_req_init(). This is in preparation to changing
when we generate the operation blocks, as we need to avoid generating
a block for a partial hash block at the end of the user data.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If we add a template first-fragment operation, always update the
template to be a mid-fragment. This ensures that mid-fragments
always follow on from a first fragment in every case.
This means we can move the first to mid-fragment update code out of
mv_cesa_ahash_dma_add_data().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a helper to add the fragment operation block followed by the DMA
entry to launch the operation.
Although at the moment this pattern only strictly appears at one site,
two other sites can be factored as well by slightly changing the order
in which the DMA operations are performed. This should be harmless as
the only thing which matters is to have all the data loaded into SRAM
prior to launching the operation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Multiple locations in the driver test the operation context fragment
type, checking whether it is a first fragment or not. Introduce a
mv_cesa_mac_op_is_first_frag() helper, which returns true if the
fragment operation is for a first fragment.
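A sketch of such a helper (constants and struct layout are hypothetical):

  #include <linux/types.h>

  #define EXAMPLE_CFG_FRAG_MSK    (3U << 30)      /* hypothetical bits */
  #define EXAMPLE_CFG_FIRST_FRAG  (1U << 30)

  struct example_op_ctx {
          struct {
                  u32 config;
          } desc;
  };

  static inline bool example_mac_op_is_first_frag(const struct example_op_ctx *op)
  {
          return (op->desc.config & EXAMPLE_CFG_FRAG_MSK) ==
                 EXAMPLE_CFG_FIRST_FRAG;
  }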
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
mv_cesa_get_op_cfg() does not write to its argument, it only reads.
So, let's make it const.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ensure that the template operation is fully initialised, otherwise we
end up loading data from the kernel stack into the engines, which can
upset the hash results.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The endianness of the bit length used in the final stage depends on the
endianness of the algorithm - md5 hashes need it to be in little endian
format, whereas SHA hashes need it in big endian format. Use the
previously added algorithm endianness flag to control this.
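The kind of switch this describes, sketched with a hypothetical per-request
flag rather than the driver's actual field:

  #include <linux/types.h>
  #include <linux/string.h>
  #include <asm/byteorder.h>

  /* append the 64-bit message bit count in the endianness the algorithm
   * expects; 'algo_le' stands in for the per-request endianness flag */
  static void example_set_bit_length(u8 *pad, u64 bits, bool algo_le)
  {
          if (algo_le) {                  /* MD5 */
                  __le64 v = cpu_to_le64(bits);

                  memcpy(pad, &v, sizeof(v));
          } else {                        /* SHA-1/SHA-2 */
                  __be64 v = cpu_to_be64(bits);

                  memcpy(pad, &v, sizeof(v));
          }
  }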
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than determining whether we're using a MD5 hash by looking at
the digest size, switch to a cleaner solution using a per-request flag
initialised by the method type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently, we read/write the state in CPU endian, but on the final
request, we convert its endian according to the requested algorithm.
(md5 is little endian, SHA are big endian.)
Always keep creq->state in CPU native endian format, and perform the
necessary conversion when copying the hash to the result.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There's an easier way to get at the hash transform - rather than
using crypto_ahash_tfm(ahash), we can get it directly from
req->base.tfm.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The qat_crypto_get_instance_node() function needs to handle the situation
where the first device in the list is not started.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some arrays of const char are not declared as const.
This patch fixes that.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some arrays of const char are not declared as const.
This patch fixes that.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
<linux/highmem.h> is the place to get the kmap type flags; asm-generic
files are generic implementations only to be used by architecture code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the devm_xxx() managed functions to simplify error handling and remove
code. At the same time, replace request_mem_region()/ioremap() with the
unified devm_ioremap_resource() function.
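A sketch of what a devm-based probe looks like (driver name and private data
are hypothetical): the managed core releases the region and unmaps the base
automatically on probe failure and on remove.

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/gfp.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>

  struct example_priv {
          void __iomem *base;
  };

  static int example_probe(struct platform_device *pdev)
  {
          struct example_priv *priv;
          struct resource *res;

          priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
          if (!priv)
                  return -ENOMEM;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          priv->base = devm_ioremap_resource(&pdev->dev, res);
          if (IS_ERR(priv->base))
                  return PTR_ERR(priv->base);

          platform_set_drvdata(pdev, priv);

          /* no explicit release_mem_region()/iounmap() in any error path
           * or in remove(): the devm core undoes both automatically */
          return 0;
  }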
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the devm_xxx() managed functions to simplify error handling and remove
code.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The mxs-dcp driver relies on the stmp_reset_block() helper function, which
is provided by CONFIG_STMP_DEVICE. This symbol is always set on MXS,
but the driver can now also be built for MXC (i.MX6), which results
in a build error if no other driver selects STMP_DEVICE:
drivers/built-in.o: In function `mxs_dcp_probe':
vf610-ocotp.c:(.text+0x3df302): undefined reference to `stmp_reset_block'
This adds the 'select', as all other stmp drivers already have it.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: a2712e6c75f ("crypto: mxs-dcp - Allow MXS_DCP to be used on MX6SL")
Acked-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As all the import functions and export functions are virtually
identical, factor out their common parts into a generic
mv_cesa_ahash_import() and mv_cesa_ahash_export() respectively. This
performs the actual import or export, and we pass the data pointers and
length into these functions.
We have to switch a % const operation to do_div() in the common import
function to avoid provoking gcc to use the expensive 64-bit by 64-bit
modulus operation.
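As a reminder of the idiom (hypothetical helper): do_div() divides a 64-bit
value by a 32-bit one in place and returns the remainder, avoiding the libgcc
64-by-64 modulus.

  #include <asm/div64.h>          /* do_div() */
  #include <linux/types.h>

  /* how many bytes of a 64-bit total length sit past the last full block */
  static unsigned int example_partial_block(u64 len, unsigned int blocksize)
  {
          /* do_div() divides 'len' in place and returns the remainder */
          return do_div(len, blocksize);
  }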
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Attempting to use the sha1 digest for openssh via openssl reveals that
the result from the hash is wrong: this happens when we export the
state from one socket and import it into another via calling accept().
The reason for this is that the operation is reset to "initial block"
state, whereas we may be past the first fragment of data to be hashed.
Arrange for the operation code to avoid the initialisation of the state,
thereby preserving the imported state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When an AF_ALG fd is accepted a second time (hence hash_accept() is
used), hash_accept_parent() allocates a new private context using
sock_kmalloc(). This context is uninitialised. After use of the new
fd, we eventually end up with the kernel complaining:
marvell-cesa f1090000.crypto: dma_pool_free cesa_padding, c0627770/0 (bad dma)
where c0627770 is a random address. Poisoning the memory allocated by
the above sock_kmalloc() produces kernel oopses within the marvell hash
code, particularly the interrupt handling.
The following simplified call sequence occurs:
hash_accept()
crypto_ahash_export()
marvell hash export function
af_alg_accept()
hash_accept_parent() <== allocates uninitialised struct hash_ctx
crypto_ahash_import()
marvell hash import function
hash_ctx contains the struct mv_cesa_ahash_req in its req.__ctx member,
and, as the marvell hash import function only partially initialises
this structure, we end up with a lot of members which are left with
whatever data was in memory prior to sock_kmalloc().
Add zero-initialisation of this structure.
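A sketch of the resulting import path, with hypothetical example_* types
standing in for the driver's request context and export state:

  #include <crypto/hash.h>        /* ahash_request_ctx() */
  #include <linux/string.h>
  #include <linux/types.h>

  struct example_ahash_req {
          u64 len;
          unsigned int cache_ptr;
          u8 cache[64];
          /* ...plus fields only valid while a request is in flight */
  };

  struct example_export_state {
          u64 len;
          unsigned int cache_ptr;
          u8 cache[64];
  };

  static int example_ahash_import(struct ahash_request *req, const void *in)
  {
          struct example_ahash_req *creq = ahash_request_ctx(req);
          const struct example_export_state *state = in;

          /* the backing memory comes from sock_kmalloc() and is not zeroed,
           * so clear everything before restoring the exported fields */
          memset(creq, 0, sizeof(*creq));

          creq->len = state->len;
          creq->cache_ptr = state->cache_ptr;
          memcpy(creq->cache, state->cache, sizeof(creq->cache));

          return 0;
  }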
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electronc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Several of the algorithms in marvell/hash.c have a statesize of zero.
When an AF_ALG accept() on an already-accepted file descriptor calls into
hash_accept(), this causes:
char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req))];
to be zero-sized, but we still pass this to:
err = crypto_ahash_export(req, state);
which proceeds to write to 'state' as if it was a "struct md5_state",
"struct sha1_state" etc. Add the necessary initialisers for the
.statesize member.
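A skeleton of the kind of initialiser involved (only the relevant members are
shown; a real registration also fills in the operations and the .halg.base
fields):

  #include <crypto/hash.h>
  #include <crypto/md5.h>         /* struct md5_state, MD5_DIGEST_SIZE */

  /* without .statesize, crypto_ahash_statesize() is 0 and AF_ALG's
   * on-stack export buffer is zero-sized */
  static struct ahash_alg example_md5_alg = {
          .halg = {
                  .digestsize     = MD5_DIGEST_SIZE,
                  .statesize      = sizeof(struct md5_state),
          },
  };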
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds CRC generation and validation support for nx-842.
Add a CRC flag so that the nx842 coprocessor includes a CRC during
compression and validates it during decompression.
Also change the 842 software compression to append the CRC value at the end
of the template and check it during decompression.
Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The setkey function has been split into set_priv_key and set_pub_key.
Akcipher requests now take scatterlists for src and dst instead of void *.
Users of the API, i.e. the two existing RSA implementations and the
testmgr code, have been updated accordingly.
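A hedged sketch of how a caller now uses the interface, synchronous path only
(error paths are trimmed, and the async -EINPROGRESS case and callback setup
are omitted):

  #include <crypto/akcipher.h>
  #include <linux/err.h>
  #include <linux/errno.h>
  #include <linux/gfp.h>
  #include <linux/scatterlist.h>

  static int example_rsa_encrypt(const u8 *der_pub, unsigned int der_len,
                                 void *src, unsigned int src_len,
                                 void *dst, unsigned int dst_len)
  {
          struct crypto_akcipher *tfm;
          struct akcipher_request *req;
          struct scatterlist sg_src, sg_dst;
          int ret;

          tfm = crypto_alloc_akcipher("rsa", 0, 0);
          if (IS_ERR(tfm))
                  return PTR_ERR(tfm);

          /* separate setter for the public key (set_priv_key is analogous) */
          ret = crypto_akcipher_set_pub_key(tfm, der_pub, der_len);
          if (ret)
                  goto out;

          req = akcipher_request_alloc(tfm, GFP_KERNEL);
          if (!req) {
                  ret = -ENOMEM;
                  goto out;
          }

          /* src/dst are now scatterlists rather than void * buffers */
          sg_init_one(&sg_src, src, src_len);
          sg_init_one(&sg_dst, dst, dst_len);
          akcipher_request_set_crypt(req, &sg_src, &sg_dst, src_len, dst_len);

          ret = crypto_akcipher_encrypt(req);
          akcipher_request_free(req);
  out:
          crypto_free_akcipher(tfm);
          return ret;
  }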
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
clk_prepare_enable() can fail, so add a check for this and return the
error code if it fails.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for AES working in XEX-based Tweaked-codebook mode with
ciphertext Stealing (XTS).
Sector index - HW limitation: the CAAM device supports a sector index of only
8 bytes inside the IV, instead of the whole 16 bytes received with the
request. This still allows 2^64 (16,777,216 tera) possible sector index
values.
Signed-off-by: Cristian Hristea <cristi.hristea@gmail.com>
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Alex Porosanu <alexandru.porosanu@freescale.com>
Signed-off-by: Catalin Vasile <catalin.vasile@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The qce driver uses two dma_map_sg() paths depending on whether the SG
list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing all
references to chained SGs, and with them the qce_mapsg(), qce_unmapsg() and
qce_countsg() functions.
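A thin illustration with hypothetical wrappers: dma_map_sg() itself walks the
scatterlist, chained or not, so no driver-side split is required.

  #include <linux/dma-mapping.h>
  #include <linux/scatterlist.h>

  static int example_map_sg(struct device *dev, struct scatterlist *sg,
                            int nents, enum dma_data_direction dir)
  {
          /* works for both flat and chained scatterlists */
          return dma_map_sg(dev, sg, nents, dir);
  }

  static void example_unmap_sg(struct device *dev, struct scatterlist *sg,
                               int nents, enum dma_data_direction dir)
  {
          dma_unmap_sg(dev, sg, nents, dir);
  }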
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The convention is to use the name of the module in the driver structures
that are used for registering the device. The CCP module is currently
using a descriptive name. Replace the descriptive name with the module name.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The CCP is meant to be more of an offload engine than an accelerator
engine. To avoid any confusion, change references to accelerator to
offload.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
With the creation of the device_dma_is_coherent API the "use_acpi" field
is no longer needed, so remove it.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|