ecc.c has algorithms that can be shared by ecdh and ecrdsa. Make it a
separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL and document
what seems appropriate. Move the structs ecc_point and ecc_curve from
ecc_curve_defs.h into ecc.h.
No code changes.
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Group RSA, DH, and ECDH into the Public-key cryptography config section.
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some public key algorithms (such as EC-DSA) keep important data, such
as the digest and curve OIDs (possibly more for different EC-DSA
variants), in the parameters field. Thus, just setting a public key (as
for RSA) is not enough.
Append the parameters to the key stream for akcipher_set_{pub,priv}_key.
The appended data is: (u32) algo OID, (u32) parameters length,
parameters data.
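As an illustration only (the helper and buffer names below are
hypothetical, not the kernel's), the resulting key stream can be
pictured as follows:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical sketch: append (u32) algo OID, (u32) parameters
     * length, and the parameters data to the end of the raw key blob
     * passed to akcipher_set_{pub,priv}_key().  Returns the total
     * length of the combined key stream. */
    static size_t pack_key_with_params(uint8_t *buf,
                                       const uint8_t *key, size_t keylen,
                                       uint32_t algo_oid,
                                       const uint8_t *params, uint32_t paramlen)
    {
            size_t off = 0;

            memcpy(buf + off, key, keylen);                 off += keylen;
            memcpy(buf + off, &algo_oid, sizeof(algo_oid)); off += sizeof(algo_oid);
            memcpy(buf + off, &paramlen, sizeof(paramlen)); off += sizeof(paramlen);
            memcpy(buf + off, params, paramlen);            off += paramlen;

            return off;
    }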
This does not affect the current akcipher API or the RSA ciphers (they
can ignore it). The idea of appending the parameters to the key stream
is Herbert Xu's.
Cc: David Howells <dhowells@redhat.com>
Cc: Denis Kenzior <denkenz@gmail.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Treat (struct public_key_signature)'s digest the same as its signature
(s). Since the digest should already be in kmalloc'd memory, do not
kmemdup the digest value before calling {public,tpm}_key_verify_signature.
This patch is split from the previous one as suggested by Herbert Xu.
Suggested-by: David Howells <dhowells@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Previously, akcipher .verify() just `decrypted' the signature (using
RSA encrypt, which uses the public key) to uncover the message hash,
which was then compared in the upper-level public_key_verify_signature()
with the expected hash value; that hash value itself was never passed
into .verify().
This approach was incompatible with the EC-DSA family of algorithms:
to verify a signature, an EC-DSA algorithm also needs the hash value as
input; it is then used (together with the signature, split into the
halves `r||s') to produce a witness value, which is compared with `r'
to determine whether the signature is correct. Thus, for EC-DSA,
neither the requirements placed on .verify() itself nor its output
expectations in public_key_verify_signature() were sufficient.
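For reference, textbook EC-DSA verification (not taken from this patch)
of a signature (r, s) over a hash value e, with public key Q, base
point G, and group order n, computes

    $w = s^{-1} \bmod n,\quad u_1 = e\,w \bmod n,\quad u_2 = r\,w \bmod n,\quad (x_1, y_1) = u_1 G + u_2 Q$

and accepts iff $r \equiv x_1 \pmod{n}$; the witness $x_1 \bmod n$ is
what gets compared with `r', which is why the hash must be an input to
.verify().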
Introduce an improved .verify() call that takes the hash value as input
and performs the complete signature check, with no output besides the
status. Now, for top-level verification, only crypto_akcipher_verify()
needs to be called and its return value inspected.
Make sure that `digest' is in kmalloc'd memory (in place of `output')
in {public,tpm}_key_verify_signature(), as insisted upon by Herbert Xu;
this will be changed in the following commit.
Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for the new akcipher verify call, remove the sign/verify
callbacks from the RSA backends and make the PKCS1 driver call
encrypt/decrypt instead.
This also complies with the well-known principle that raw RSA should
never be used for sign/verify; it should only be used with a proper
padding scheme, such as the one the PKCS1 driver provides.
Cc: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Cc: qat-linux@intel.com
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the introduction of EC-RDSA and the change in how RSA handles
sign/verify, an akcipher may not have all callbacks defined. Check for
the presence of the callbacks in crypto_register_akcipher() and provide
a default implementation if a callback is not implemented.
This was suggested by Herbert Xu instead of checking for the presence
of the callback on every request.
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a requirement to the generic 3DES implementation
such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode.
We will also provide helpers that may be used by drivers that
implement 3DES to make the same check.
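A minimal sketch of the kind of check described above (the function
name and error value are illustrative, not the actual kernel helper);
the three 8-byte DES keys are laid out back to back, so 2-key 3DES is
detected by K1 == K3:

    #include <string.h>

    #define DES_KEY_SIZE 8          /* bytes per single-DES key */

    extern int fips_enabled;        /* stand-in for the kernel's FIPS flag */

    /* Reject 2-key 3DES (K1 == K3) when running in FIPS mode. */
    static int des3_fips_key_check(const unsigned char *key)
    {
            if (fips_enabled &&
                !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
                    return -1;      /* the kernel would return -EINVAL */

            return 0;
    }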
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.
salsa20-generic doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv. However, this is more
subtle than desired, and it was actually broken prior to the alignmask
being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup
and convert to skcipher API").
Since salsa20-generic does not update the IV and does not need any IV
alignment, update it to use req->iv instead of walk.iv.
Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.
Fix this in the LRW template by checking the return value of
skcipher_walk_virt().
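The shape of the fix is simply to propagate the error before walk.iv is
touched (a sketch of the pattern, not the exact diff):

    err = skcipher_walk_virt(&walk, req, false);
    if (err)
            return err;
    /* walk.iv is only guaranteed to be valid from this point on */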
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation. When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.
Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we perform a walk in the completion function, we need to ensure
that it is atomic.
Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we perform a walk in the completion function, we need to ensure
that it is atomic.
Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com
Fixes: 78105c7e769b ("crypto: xts - Drop use of auxiliary buffer")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The x86_64 implementation of Poly1305 produces the wrong result on some
inputs because poly1305_4block_avx2() incorrectly assumes that when
partially reducing the accumulator, the bits carried from limb 'd4' to
limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic
which processes only one block at a time. However, it's not true for
the AVX2 implementation, which processes 4 blocks at a time and
therefore can produce intermediate limbs about 4x larger.
Fix it by making the relevant calculations use 64-bit arithmetic rather
than 32-bit. Note that most of the carries already used 64-bit
arithmetic, but the d4 -> h0 carry was different for some reason.
To be safe I also made the same change to the corresponding SSE2 code,
though that only operates on 1 or 2 blocks at a time. I don't think
it's really needed for poly1305_block_sse2(), but it doesn't hurt
because it's already x86_64 code. It *might* be needed for
poly1305_2block_sse2(), but overflows aren't easy to reproduce there.
This bug was originally detected by my patches that improve testmgr to
fuzz algorithms against their generic implementation. But this patch
also adds a test vector which reproduces it directly (in the AVX2 case).
Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64")
Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64")
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Martin Willi <martin@strongswan.org>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a module parameter cryptomgr.panic_on_fail which causes the kernel
to panic if any crypto self-tests fail.
Use cases:
- More easily detect crypto self-test failures by boot testing,
e.g. on KernelCI.
- Get a bug report if syzkaller manages to use the template system to
instantiate an algorithm that fails its self-tests.
The command-line option "fips=1" already does this, but it also makes
other changes not wanted for general testing, such as disabling
"unapproved" algorithms. panic_on_fail just does what it says.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
My patches to make testmgr fuzz algorithms against their generic
implementation detected that the arm64 implementations of
"cts(cbc(aes))" handle empty messages differently from the cts template.
Namely, the arm64 implementations forbid (with -EINVAL) all messages
shorter than the block size, including the empty message, whereas the
cts template permits empty messages as a special case.
No user should be CTS-encrypting/decrypting empty messages, but we need
to keep the behavior consistent. Unfortunately, as noted in the source
of OpenSSL's CTS implementation [1], there's no common specification for
CTS. This makes it somewhat debatable what the behavior should be.
However, all CTS specifications seem to agree that messages shorter than
the block size are not allowed, and OpenSSL follows this in both CTS
conventions it implements. It would also simplify the user-visible
semantics to have empty messages no longer be a special case.
Therefore, make the cts template return -EINVAL on *all* messages
shorter than the block size, including the empty message.
[1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c
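Conceptually, the template's encrypt and decrypt entry points now start
with a check along these lines (a sketch, not the exact kernel code):

    if (req->cryptlen < crypto_skcipher_blocksize(tfm))
            return -EINVAL;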
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the rfc7539 template is instantiated with specific implementations,
e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than
"rfc7539(chacha20,poly1305)", then the implementation names end up
included in the instance's cra_name. This is incorrect because it then
prevents all users from allocating "rfc7539(chacha20,poly1305)", if the
highest priority implementations of chacha20 and poly1305 were selected.
Also, the self-tests aren't run on an instance allocated in this way.
Fix it by setting the instance's cra_name from the underlying
algorithms' actual cra_names, rather than from the requested names.
This matches what other templates do.
Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
skcipher_walk_done() assumes it's a bug if, after the "slow" path is
executed where the next chunk of data is processed via a bounce buffer,
the algorithm says it didn't process all bytes. Thus it WARNs on this.
However, this can happen legitimately when the message needs to be
evenly divisible into "blocks" but isn't, and the algorithm has a
'walksize' greater than the block size. For example, ecb-aes-neonbs
sets 'walksize' to 128 bytes and only supports messages evenly divisible
into 16-byte blocks. If, say, 17 message bytes remain but they straddle
scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer. But the
algorithm will only be able to process 16 bytes, triggering the WARN.
Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code
already does, is the right behavior.
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.
Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ->digest() method of crct10dif-generic reads the current CRC value
from the shash_desc context. But this value is uninitialized, causing
crypto_shash_digest() to compute the wrong result. Fix it.
Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest(). Likewise,
crypto_shash_digest() is not yet tested by the crypto self-tests because
those only test the ahash API which only uses shash init/update/final.
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.
Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
__cacheline_aligned places data in a special section. Such data cannot
be const at the same time because that section is not read-only, so it
gets no MMU protection. Mark the data ____cacheline_aligned instead,
which does not place it in a special section but simply aligns it, so
it can stay in .rodata.
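A minimal illustration (the array names and contents are made up):

    /* Placed in the .data..cacheline_aligned section, so it cannot be const: */
    static u32 table_a[8] __cacheline_aligned = { 0, 1, 2, 3, 4, 5, 6, 7 };

    /* Only carries an alignment attribute; stays const and lands in .rodata: */
    static const u32 table_b[8] ____cacheline_aligned = { 0, 1, 2, 3, 4, 5, 6, 7 };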
Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Two per-CPU variables are allocated as pointers to per-CPU memory,
which are then used as scratch buffers.
We can be smarter about this and instead use a per-CPU struct that
already contains the pointers; then we only need to allocate the
scratch buffers themselves.
Add a lock to the struct. By doing so we can avoid the get_cpu()
statement and gain lockdep coverage (if enabled) to ensure that the lock
is always acquired in the right context. On non-preemptible kernels the
lock vanishes.
It is okay to use raw_cpu_ptr() in order to get a pointer to the struct
since it is protected by the spinlock.
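A rough sketch of the resulting per-CPU struct (field names are
illustrative):

    struct scomp_scratch {
            spinlock_t lock;
            void *src;
            void *dst;
    };

    static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
            .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
    };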
The diffstat of this is negative and according to size scompress.o:
text data bss dec hex filename
1847 160 24 2031 7ef dbg_before.o
1754 232 4 1990 7c6 dbg_after.o
1799 64 24 1887 75f no_dbg-before.o
1703 88 4 1795 703 no_dbg-after.o
The overall size change is also negative. The increase in the data
section is only four bytes without lockdep.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If scomp_acomp_comp_decomp() fails to allocate memory for the
destination then we never copy back the data we compressed.
It is probably best to return an error code instead of 0 in case of
failure.
I haven't found any user that is using acomp_request_set_params()
without the `dst' buffer so there is probably no harm.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Spotted while reviewing patches from Eric Biggers.
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
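The transformation is essentially the following (variable names are
illustrative, not the exact diff):

    /* before: copy src to dst, then XOR the keystream in place */
    memcpy(dst, src, len);
    crypto_xor(dst, keystream, len);

    /* after: one pass, XORing src with the keystream straight into dst */
    crypto_xor_cpy(dst, src, keystream, len);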
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures. However, this isn't tested for by the
self-tests, causing bugs to go undetected.
Now that all algorithms have been converted to use crypto_simd_usable(),
update the self-tests to test the no-SIMD case. First, a bool
testvec_config::nosimd is added. When set, the crypto operation is
executed with preemption disabled and with crypto_simd_usable() mocked
out to return false on the current CPU.
A bool test_sg_division::nosimd is also added. For hash algorithms it's
honored by the corresponding ->update(). By setting just a subset of
these bools, the case where some ->update()s are done in SIMD context
and some are done in no-SIMD context is also tested.
These bools are then randomly set by generate_random_testvec_config().
For now, all no-SIMD testing is limited to the extra crypto self-tests,
because it might be a bit too invasive for the regular self-tests.
But this could be changed later.
This has already found bugs in the arm64 AES-GCM and ChaCha algorithms.
This would have found some past bugs as well.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.
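Roughly, the idea is (a sketch; the real definition is conditional on
the extra-tests config option):

    DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);

    #define crypto_simd_usable() \
            (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))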
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested. The problem is as follows:
When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists. Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm. Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.
This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride). round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.
This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:
    if (WARN_ON(err)) {
            /* unexpected case; didn't process all bytes */
            err = -EINVAL;
            goto finish;
    }
Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.
(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)
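To make the round_down() pitfall concrete, here is a small standalone
demonstration (the macros below are simplified stand-ins for the kernel
ones, valid only for this example):

    #include <stdio.h>

    /* Kernel-style round_down(): only correct when 'y' is a power of two. */
    #define round_down(x, y)  ((x) & ~((unsigned long)(y) - 1))
    /* rounddown(): correct for any non-zero 'y', but needs a division. */
    #define rounddown(x, y)   ((x) - ((x) % (y)))

    int main(void)
    {
            unsigned long nbytes = 500;
            unsigned long stride = 5 * 64;  /* 5*CHACHA_BLOCK_SIZE, not a power of two */

            printf("round_down(%lu, %lu) = %lu\n", nbytes, stride,
                   round_down(nbytes, stride));          /* 192 -- wrong */
            printf("rounddown(%lu, %lu)  = %lu\n", nbytes, stride,
                   rounddown(nbytes, stride));           /* 320 -- what was intended */
            printf("round_down(%lu, 64)  = %lu\n", nbytes,
                   round_down(nbytes, 64));              /* 448 -- the fix's behavior */
            return 0;
    }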
Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers. The code for each is similar.
I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this. Currently they each independently implement the same
functionality. This will not only simplify the code, but it will also
fix the bug detected by the improved self-tests: the user-provided
aead_request is modified. This is because these algorithms currently
reuse the original request, whereas the crypto_simd helpers build a new
request in the original request's context.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To prevent any issues with persistent data, separate lzo-rle from lzo so
that it is treated as a separate algorithm, and lzo is still available.
Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto update from Herbert Xu:
"API:
- Add helper for simple skcipher modes.
- Add helper to register multiple templates.
- Set CRYPTO_TFM_NEED_KEY when setkey fails.
- Require neither or both of export/import in shash.
- AEAD decryption test vectors are now generated from encryption
ones.
- New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random
fuzzing.
Algorithms:
- Conversions to skcipher and helper for many templates.
- Add more test vectors for nhpoly1305 and adiantum.
Drivers:
- Add crypto4xx prng support.
- Add xcbc/cmac/ecb support in caam.
- Add AES support for Exynos5433 in s5p.
- Remove sha384/sha512 from artpec7 as hardware cannot do partial
hash"
[ There is a merge of the Freescale SoC tree in order to pull in changes
required by patches to the caam/qi2 driver. ]
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
crypto: s5p - add AES support for Exynos5433
dt-bindings: crypto: document Exynos5433 SlimSSS
crypto: crypto4xx - add missing of_node_put after of_device_is_available
crypto: cavium/zip - fix collision with generic cra_driver_name
crypto: af_alg - use struct_size() in sock_kfree_s()
crypto: caam - remove redundant likely/unlikely annotation
crypto: s5p - update iv after AES-CBC op end
crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
crypto: caam - generate hash keys in-place
crypto: caam - fix DMA mapping xcbc key twice
crypto: caam - fix hash context DMA unmap size
hwrng: bcm2835 - fix probe as platform device
crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
crypto: chelsio - Fixed Traffic Stall
crypto: marvell - Remove set but not used variable 'ivsize'
crypto: ccp - Update driver messages to remove some confusion
crypto: adiantum - add 1536 and 4096-byte test vectors
crypto: nhpoly1305 - add a test vector with len % 16 != 0
crypto: arm/aes-ce - update IV after partial final CTR block
...
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, change the following form:
sizeof(*sgl) + sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1)
to:
struct_size(sgl, sg, MAX_SGL_ENTS + 1)
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add 1536 and 4096-byte Adiantum test vectors so that the case where
there are multiple NH hashes is tested. This is already tested by the
nhpoly1305 test vectors, but it should be tested at the Adiantum level
too. Moreover the 4096-byte case is especially important.
As with the other Adiantum test vectors, these were generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is needed to test that the end of the message is zero-padded when
the length is not a multiple of 16 (NH_MESSAGE_UNIT). It's already
tested indirectly by the 31-byte Adiantum test vector, but it should be
tested directly at the nhpoly1305 level too.
As with the other nhpoly1305 test vectors, this was generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Test that all CTR implementations update the IV buffer to contain the
next counter block, aka the IV to continue the encryption/decryption of
a larger message. When the length processed is a multiple of the block
size, users may rely on this for chaining.
When the length processed is *not* a multiple of the block size, simple
chaining doesn't work. However, as noted in commit 88a3f582bea9
("crypto: arm64/aes - don't use IV buffer to return final keystream
block"), the generic CCM implementation assumes that the CTR IV is
handled in some sane way, not e.g. overwritten with part of the
keystream. Since this was gotten wrong once already, it's desirable to
test for it. And, the most straightforward way to do this is to enforce
that all CTR implementations have the same behavior as the generic
implementation, which returns the *next* counter following the final
partial block. This behavior also has the advantage that if someone
does misuse this case for chaining, then the keystream won't be
repeated. Thus, this patch makes the tests expect this behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Test that all CBC implementations update the IV buffer to contain the
last ciphertext block, aka the IV to continue the encryption/decryption
of a larger message. Users may rely on this for chaining.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow skcipher test vectors to declare the value the IV buffer should be
updated to at the end of the encryption or decryption operation.
(This check actually used to be supported in testmgr, but it was never
used and therefore got removed except for the AES-Keywrap special case.
But it will be used by CBC and CTR now, so re-add it.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3DES only has an 8-byte block size, but the 3DES-CTR test vectors use
16-byte IVs. Remove the unused 8 bytes from the ends of the IVs.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some arc4 cipher algorithm defines show up in two places:
crypto/arc4.c and drivers/crypto/bcm/cipher.h.
Let's export them in a common header and update their users.
Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check that algorithms do not change the aead_request structure, as users
may rely on submitting the request again (e.g. after copying new data
into the same source buffer) without reinitializing everything.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check that algorithms do not change the skcipher_request structure, as
users may rely on submitting the request again (e.g. after copying new
data into the same source buffer) without reinitializing everything.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>