|
The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
any of the algorithms it uses has CRYPTO_ALG_ASYNC set.
We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
"inherited" in the same way. This is difficult because the handling of
CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:
- Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
have these inheritance semantics.
- Add crypto_algt_inherited_mask(), for use by template ->create()
methods. It returns any of these flags that the user asked to be
unset and thus must be passed in the 'mask' to crypto_grab_*().
- Also modify crypto_check_attr_type() to handle computing the 'mask'
so that most templates can just use this.
- Make crypto_grab_*() propagate these flags to the template instance
being created so that templates don't have to do this themselves.
Make crypto/simd.c propagate these flags too, since it "wraps" another
algorithm, similar to a template.
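As a conceptual sketch of the inheritance rule (not the exact kernel
code; CRYPTO_ALG_ALLOCATES_MEMORY is the flag being added here):

    #define CRYPTO_ALG_INHERITED_FLAGS \
            (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY)

    /* When a spawn is grabbed, the inherited flags of the algorithm it
     * uses are OR'ed into the flags of the instance being created: */
    inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_INHERITED_FLAGS;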
Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Simplify the error handling in crypto_rfc4309_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
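A minimal sketch of the resulting pattern (illustrative, with names
abbreviated):

    err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
                           crypto_attr_alg_name(tb[1]), 0, mask);
    if (err)
        goto err_free_inst;     /* the error path can call
                                 * crypto_drop_aead() unconditionally;
                                 * it is a no-op on a spawn that was
                                 * never grabbed */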
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself. This replaces the weakly-typed method crypto_template::free().
This will allow removing support for the old way of freeing instances.
Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.
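Sketched, the new style looks roughly like this (assuming the renamed
helper is shash_free_singlespawn_instance(); details may differ):

    static void shash_free_singlespawn_instance(struct shash_instance *inst)
    {
            crypto_drop_spawn(shash_instance_ctx(inst));
            kfree(inst);
    }

    /* in the template's ->create(), instead of crypto_template::free: */
    inst->free = shash_free_singlespawn_instance;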
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Now that all users of single-block cipher spawns have been converted to
use 'struct crypto_cipher_spawn' rather than the less specifically typed
'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Make the cbcmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed so that all spawns are initialized in a consistent way.
This required making cbcmac_create() allocate the instance directly
rather than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Make the ccm and ccm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed so that all spawns are initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Initializing a crypto_aead_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").
So just make crypto_grab_aead() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
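Schematically (illustrative, following the two steps described above):

    /* Before: two separate steps, easy to get wrong */
    spawn->base.inst = aead_crypto_instance(inst);
    err = crypto_grab_aead(spawn, name, type, mask);

    /* After: one call that also records the instance */
    err = crypto_grab_aead(spawn, aead_crypto_instance(inst),
                           name, type, mask);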
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Initializing a crypto_skcipher_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").
So just make crypto_grab_skcipher() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
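For illustration, that return-value style would look something like this
(key_is_weak() is a hypothetical policy check, not an existing kernel
helper):

    static int example_setkey(struct crypto_skcipher *tfm,
                              const u8 *key, unsigned int keylen)
    {
            if (keylen != AES_KEYSIZE_128)
                    return -EINVAL;        /* invalid for the algorithm */
            if (key_is_weak(key, keylen))  /* hypothetical policy check */
                    return -EKEYREJECTED;  /* rejected by "no weak keys" */
            return 0;
    }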
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The wusb code takes a very peculiar approach to implementing CBC-MAC,
by using plain CBC into a scratch buffer, and taking the output IV
as the MAC.
We can clean up this code substantially by switching to the cbcmac
shash, as exposed by the CCM template. To ensure that the module is
loaded on demand, add the cbcmac template name as a module alias.
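Concretely, something along these lines in the module that provides the
template (illustrative):

    /* let requests for the "cbcmac" template auto-load this module */
    MODULE_ALIAS_CRYPTO("cbcmac");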
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
CCM instances can be created by either the "ccm" template, which only
allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base",
which allows choosing the ctr and cbcmac implementations, e.g.
"ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
However, a "ccm_base" instance prevents a "ccm" instance from being
registered using the same implementations. Nor will the instance be
found by lookups of "ccm". This can be used as a denial of service.
Moreover, "ccm_base" instances are never tested by the crypto
self-tests, even if there are compatible "ccm" tests.
The root cause of these problems is that instances of the two templates
use different cra_names. Therefore, fix these problems by making
"ccm_base" instances set the same cra_name as "ccm" instances, e.g.
"ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
This requires extracting the block cipher name from the name of the ctr
and cbcmac algorithms. It also requires starting to verify that the
algorithms are really ctr and cbcmac using the same block cipher, not
something else entirely. But it would be bizarre if anyone were
actually using non-ccm-compatible algorithms with ccm_base, so this
shouldn't break anyone in practice.
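Schematically, an instance created through either template now ends up
with (illustrative; blockcipher_name is extracted from the ctr/cbcmac
names as described, and length checks are omitted):

    /* one lookup name shared by "ccm" and "ccm_base" instances: */
    snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
             "ccm(%s)", blockcipher_name);
    /* the driver name still records the concrete implementations: */
    snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
             "ccm_base(%s,%s)", ctr->cra_driver_name, mac->cra_driver_name);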
Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Use subsys_initcall for registration of all templates and generic
algorithm implementations, rather than module_init. Then change
cryptomgr to use arch_initcall, to place it before the subsys_initcalls.
This is needed so that when both a generic and optimized implementation
of an algorithm are built into the kernel (not loadable modules), the
generic implementation is registered before the optimized one.
Otherwise, the self-tests for the optimized implementation are unable to
allocate the generic implementation for the new comparison fuzz tests.
Note that on arm, a side effect of this change is that self-tests for
generic implementations may run before the unaligned access handler has
been installed. So, unaligned accesses will crash the kernel. This is
arguably a good thing as it makes it easier to detect that type of bug.
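In code terms the change is a one-line swap per file, e.g. (illustrative;
function names assumed):

    /* crypto/cryptomgr.c: now runs before every subsys_initcall() */
    arch_initcall(cryptomgr_init);

    /* crypto/ccm.c and other generic implementations: was module_init() */
    subsys_initcall(crypto_ccm_module_init);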
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Use the crypto template array registration API to simplify the code.
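That is, roughly (matching the template names in crypto/ccm.c):

    static struct crypto_template crypto_ccm_tmpls[] = {
        { .name = "cbcmac",   .create = cbcmac_create,          .module = THIS_MODULE },
        { .name = "ccm_base", .create = crypto_ccm_base_create, .module = THIS_MODULE },
        { .name = "ccm",      .create = crypto_ccm_create,      .module = THIS_MODULE },
    };

    /* one call replaces three separate register/unregister pairs: */
    return crypto_register_templates(crypto_ccm_tmpls,
                                     ARRAY_SIZE(crypto_ccm_tmpls));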
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
In the quest to remove all stack VLA usage from the kernel[1], this drops
AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined
with the skcipher area (which are not used at the same time).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
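A sketch of the idea (field names illustrative):

    struct crypto_ccm_req_priv_ctx {
            /* ... */
            union {         /* the two steps never run concurrently */
                    struct ahash_request ahreq;     /* CBC-MAC step */
                    struct skcipher_request skreq;  /* CTR step     */
            };      /* allocated once, sized for the larger request */
    };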
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The IV buffer is used twice during CCM operations: in both the hashing
step and the ciphering step.
When using a hardware accelerator that updates the contents of the IV
buffer at the end of ciphering operations, the value will be modified.
In the decryption case, the subsequent setup of the hashing algorithm
will interpret the updated IV instead of the original value, which can
lead to out-of-bounds writes.
Reuse the idata buffer, which is only used in the hashing step, to preserve the
IV's value during the ciphering step in the decryption case.
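Roughly (illustrative, following the description above):

    /* the ciphering step may clobber req->iv on some hardware, so keep
     * a copy for the subsequent CBC-MAC setup in the decryption path: */
    memcpy(pctx->idata, req->iv, 16);
    /* ... CTR decryption runs here ... */
    /* the authentication step then uses pctx->idata, not req->iv */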
Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Commit f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
refactored the CCM driver to allow separate implementations of the
underlying MAC to be provided by a platform. However, in doing so, it
moved some data from the linear region to the stack, which violates the
SG constraints when the stack is virtually mapped.
So move idata/odata back to the request ctx struct, which we can
reasonably expect to have been allocated using kmalloc() et al.
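The rule of thumb, sketched:

    struct crypto_ccm_req_priv_ctx {
            u8 odata[16];   /* lives in the kmalloc'ed request ctx, so */
            u8 idata[16];   /* it is safe to hand to sg_set_buf(),     */
            /* ... */       /* unlike a variable on a vmapped stack    */
    };

With CONFIG_VMAP_STACK the stack is vmalloc'ed, and scatterlists need
addresses that can be translated with virt_to_page(), which rules out
stack buffers.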
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Fixes: f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The CCM driver forces 32-bit alignment even if the underlying ciphers
don't care about alignment. This is because crypto_xor() used to require
this, but since this is no longer the case, drop the hardcoded minimum
of 32 bits.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The CCM driver was recently updated to defer the MAC part of the algorithm
to a dedicated crypto transform, and a template for instantiating such
transforms was added at the same time.
However, this new cbcmac template fails to take the alignmask of the
encapsulated cipher into account, which may result in buffer addresses
being passed down that are not sufficiently aligned.
So update the code to ensure that the digest buffer in the desc ctx
appears at a sufficiently aligned offset, and tweak the code so that all
calls to crypto_cipher_encrypt_one() operate on this buffer exclusively.
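Conceptually (illustrative; the real code derives the offset from the
desc ctx layout):

    /* place the digest buffer at an alignmask-aligned offset: */
    u8 *dg = PTR_ALIGN((u8 *)shash_desc_ctx(desc),
                       crypto_cipher_alignmask(tfm) + 1);

    /* all single-block encryptions then read and write only *dg */
    crypto_cipher_encrypt_one(tfm, dg, dg);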
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Update the generic CCM driver to defer CBC-MAC processing to a
dedicated CBC-MAC ahash transform rather than open coding this
transform (and much of the associated scatterwalk plumbing) in
the CCM driver itself.
This cleans up the code considerably, but more importantly, it allows
the use of alternative CBC-MAC implementations that don't suffer from
performance degradation due to significant setup time (e.g., the NEON
based AES code needs to enable/disable the NEON, and load the S-box
into 16 SIMD registers, which cannot be amortized over the entire input
when using the cipher interface).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent. So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent. So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This patch converts ccm to use the new skcipher interface as opposed
to ablkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This patch removes the CRYPTO_ALG_AEAD_NEW flag now that everyone
has been converted.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This patch converts generic ccm and its associated transforms to
the new AEAD interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
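I.e. (illustrative):

    /* instead of reaching into the aead struct to set reqsize directly: */
    crypto_aead_set_reqsize(aead, sizeof(struct crypto_ccm_req_priv_ctx));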
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This adds the module loading prefix "crypto-" to the template lookup
as well.
For example, attempting to load 'vfat(blowfish)' via AF_ALG now
includes the "crypto-" prefix at every level, correctly rejecting "vfat":
net-pf-38
algif-hash
crypto-vfat(blowfish)
crypto-vfat(blowfish)-all
crypto-vfat
Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This prefixes all crypto module loading with "crypto-" so we never run
the risk of exposing module auto-loading to userspace via a crypto API,
as demonstrated by Mathias Krause:
https://lkml.org/lkml/2013/3/4/70
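The prefix comes from a new MODULE_ALIAS_CRYPTO() macro; a sketch of its
effect in its eventual form (a plain alias is also kept for userland
compatibility):

    MODULE_ALIAS_CRYPTO("ccm");
    /* emits the "crypto-ccm" alias that kernel-internal lookups use,
     * alongside the plain "ccm" alias */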
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
There are cases when cryptlen can be zero in crypto_ccm_auth():
- encryption: the input scatterlist length is zero (no plaintext)
- decryption: the input scatterlist contains only the MAC,
plus the condition of having different source and destination buffers
(otherwise the scatterlist length is max(plaintext_len, ciphertext_len)).
These are not handled correctly, leading to crashes like:
root@p4080ds:~/crypto# insmod tcrypt.ko mode=45
------------[ cut here ]------------
kernel BUG at crypto/scatterwalk.c:37!
Oops: Exception in kernel mode, sig: 5 [#1]
SMP NR_CPUS=8 P4080 DS
Modules linked in: tcrypt(+) crc32c xts xcbc vmac pcbc ecb gcm ghash_generic gf128mul ccm ctr seqiv
CPU: 3 PID: 1082 Comm: cryptomgr_test Not tainted 3.11.0 #14
task: ee12c5b0 ti: eecd0000 task.ti: eecd0000
NIP: c0204d98 LR: f9225848 CTR: c0204d80
REGS: eecd1b70 TRAP: 0700 Not tainted (3.11.0)
MSR: 00029002 <CE,EE,ME> CR: 22044022 XER: 20000000
GPR00: f9225c94 eecd1c20 ee12c5b0 eecd1c28 ee879400 ee879400 00000000 ee607464
GPR08: 00000001 00000001 00000000 006b0000 c0204d80 00000000 00000002 c0698e20
GPR16: ee987000 ee895000 fffffff4 ee879500 00000100 eecd1d58 00000001 00000000
GPR24: ee879400 00000020 00000000 00000000 ee5b2800 ee607430 00000004 ee607460
NIP [c0204d98] scatterwalk_start+0x18/0x30
LR [f9225848] get_data_to_compute+0x28/0x2f0 [ccm]
Call Trace:
[eecd1c20] [f9225974] get_data_to_compute+0x154/0x2f0 [ccm] (unreliable)
[eecd1c70] [f9225c94] crypto_ccm_auth+0x184/0x1d0 [ccm]
[eecd1cb0] [f9225d40] crypto_ccm_encrypt+0x60/0x2d0 [ccm]
[eecd1cf0] [c020d77c] __test_aead+0x3ec/0xe20
[eecd1e20] [c020f35c] test_aead+0x6c/0xe0
[eecd1e40] [c020f420] alg_test_aead+0x50/0xd0
[eecd1e60] [c020e5e4] alg_test+0x114/0x2e0
[eecd1ee0] [c020bd1c] cryptomgr_test+0x4c/0x60
[eecd1ef0] [c0047058] kthread+0xa8/0xb0
[eecd1f40] [c000eb0c] ret_from_kernel_thread+0x5c/0x64
Instruction dump:
0f080000 81290024 552807fe 0f080000 5529003a 4bffffb4 90830000 39400000
39000001 8124000c 2f890000 7d28579e <0f090000> 81240008 91230004 4e800020
---[ end trace 6d652dfcd1be37bd ]---
Cc: <stable@vger.kernel.org>
Cc: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
e.g. when communication happens over a network.
Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g.,
on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
and up to ~850 cyc for a full match (cold). This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.
This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). In other words, best- and
worst-case behaviour take the same amount of time to complete (in
contrast to memcmp).
Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.
We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.
crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.
Using #pragma or __attribute__ optimization annotations of the code
for disabling optimization was avoided as it seems to be considered
broken or unmaintained for a long time in GCC [1]. Therefore, we work
around that by specifying the compile flag for memneq.o directly in
the Makefile. We found that this seems to be most appropriate.
As we use -Os, this patch also provides a loop-free "fast-path" for
frequently used 16-byte digests. As with the kernel library string
functions, an option is left open for even further optimized,
architecture-specific assembler implementations in the future.
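A minimal sketch of the core loop (the real version additionally
provides the 16-byte fast path and word-at-a-time accesses):

    noinline unsigned long __crypto_memneq(const void *a, const void *b,
                                           size_t size)
    {
            const unsigned char *pa = a, *pb = b;
            unsigned long neq = 0;

            while (size--)
                    neq |= *pa++ ^ *pb++;  /* accumulate, never return early */
            return neq;                    /* zero iff the areas are equal */
    }

Callers only ever test the result against zero, e.g.
if (crypto_memneq(otag, tag, authsize)) err = -EBADMSG;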
This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].
[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131
Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
expression err,x;
@@
- err = PTR_ERR(x);
if (IS_ERR(x))
- return ERR_PTR(err);
+ return ERR_CAST(x);
// </smpl>
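Applied to typical crypto code, the transformation looks like:

    /* before */
    err = PTR_ERR(alg);
    if (IS_ERR(alg))
            return ERR_PTR(err);

    /* after */
    if (IS_ERR(alg))
            return ERR_CAST(alg);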
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Signed-off-by: Cong Wang <amwang@redhat.com>
|
It's a valid use case to have null associated data in a ccm vector, but
this case isn't being handled properly right now.
The following ccm decryption/verification test vector, using the
rfc4309 implementation regularly triggers a panic, as will any
other vector with null assoc data:
* key: ab2f8a74b71cd2b1ff802e487d82f8b9
* iv: c6fb7d800d13abd8a6b2d8
* Associated Data: [NULL]
* Tag Length: 8
* input: d5e8939fc7892e2b
The resulting panic looks like so:
Unable to handle kernel paging request at ffff810064ddaec0 RIP:
[<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
PGD 8063 PUD 0
Oops: 0002 [1] SMP
last sysfs file: /module/libata/version
CPU 0
Modules linked in: crypto_tester_kmod(U) seqiv krng ansi_cprng chainiv rng ctr aes_generic aes_x86_64 ccm cryptomgr testmgr_cipher testmgr aead crypto_blkcipher crypto_a
lgapi des ipv6 xfrm_nalgo crypto_api autofs4 hidp l2cap bluetooth nfs lockd fscache nfs_acl sunrpc ip_conntrack_netbios_ns ipt_REJECT xt_state ip_conntrack nfnetlink xt_
tcpudp iptable_filter ip_tables x_tables dm_mirror dm_log dm_multipath scsi_dh dm_mod video hwmon backlight sbs i2c_ec button battery asus_acpi acpi_memhotplug ac lp sg
snd_intel8x0 snd_ac97_codec ac97_bus snd_seq_dummy snd_seq_oss joydev snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_mixer_oss ide_cd snd_pcm floppy parport_p
c shpchp e752x_edac snd_timer e1000 i2c_i801 edac_mc snd soundcore snd_page_alloc i2c_core cdrom parport serio_raw pcspkr ata_piix libata sd_mod scsi_mod ext3 jbd uhci_h
cd ohci_hcd ehci_hcd
Pid: 12844, comm: crypto-tester Tainted: G 2.6.18-128.el5.fips1 #1
RIP: 0010:[<ffffffff8864c4d7>] [<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
RSP: 0018:ffff8100134434e8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8100104898b0 RCX: ffffffffab6aea10
RDX: 0000000000000010 RSI: ffff8100104898c0 RDI: ffff810064ddaec0
RBP: 0000000000000000 R08: ffff8100104898b0 R09: 0000000000000000
R10: ffff8100103bac84 R11: ffff8100104898b0 R12: ffff810010489858
R13: ffff8100104898b0 R14: ffff8100103bac00 R15: 0000000000000000
FS: 00002ab881adfd30(0000) GS:ffffffff803ac000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff810064ddaec0 CR3: 0000000012a88000 CR4: 00000000000006e0
Process crypto-tester (pid: 12844, threadinfo ffff810013442000, task ffff81003d165860)
Stack: ffff8100103bac00 ffff8100104898e8 ffff8100134436f8 ffffffff00000000
0000000000000000 ffff8100104898b0 0000000000000000 ffff810010489858
0000000000000000 ffff8100103bac00 ffff8100134436f8 ffffffff8864c634
Call Trace:
[<ffffffff8864c634>] :ccm:crypto_ccm_auth+0x12d/0x140
[<ffffffff8864cf73>] :ccm:crypto_ccm_decrypt+0x161/0x23a
[<ffffffff88633643>] :crypto_tester_kmod:cavs_test_rfc4309_ccm+0x4a5/0x559
[...]
The above is from a RHEL5-based kernel, but upstream is susceptible too.
The fix is trivial: in crypto/ccm.c:crypto_ccm_auth(), pctx->ilen contains
whatever was in memory when pctx was allocated if assoclen is 0. The tested
fix is to simply add an else clause setting pctx->ilen to 0 for the
assoclen == 0 case, so that get_data_to_compute() doesn't try doing
things it's not supposed to.
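The fix, roughly (abbreviated from the description above):

    /* crypto/ccm.c, crypto_ccm_auth(): */
    if (assoclen) {
            pctx->ilen = format_adata(idata, assoclen);
            get_data_to_compute(cipher, pctx, req->assoc, req->assoclen);
    } else {
            pctx->ilen = 0;  /* previously left holding stale memory */
    }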
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
This patch adds Counter with CBC-MAC (CCM) support.
RFC 3610 and NIST Special Publication 800-38C were referenced.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|