To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-512
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, int blocks)
and pass it to the SHA-512 base functions. For easy casting between the
prototype above and existing block functions that take a 'u64 state[]'
as their first argument, the 'state' member of struct sha512_state is
moved to the base of the struct.
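In sketch form, the intended usage looks like this (the block-function
and callback names below are placeholders, not part of this patch):

  /* Arch-specific block function with the prototype above. */
  static void my_sha512_block(struct sha512_state *sst, u8 const *src,
                              int blocks);

  static int my_sha512_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
  {
          return sha512_base_do_update(desc, data, len, my_sha512_block);
  }

  static int my_sha512_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
  {
          if (len)
                  sha512_base_do_update(desc, data, len, my_sha512_block);
          sha512_base_do_finalize(desc, my_sha512_block);
          return sha512_base_finish(desc, out);
  }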
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-256
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha256_block_fn)(struct sha256_state *sst, u8 const *src, int blocks)
and pass it to the SHA-256 base functions. For easy casting between the
prototype above and existing block functions that take a 'u32 state[]'
as their first argument, the 'state' member of struct sha256_state is
moved to the base of the struct.
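As a sketch of the casting point (the assembly routine name is only an
example of an existing block function that takes 'u32 state[]' first):

  void sha256_block_data_order(u32 *state, const u8 *src, int blocks);

  static int my_sha256_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
  {
          /* Valid because 'state' now sits at the base of
           * struct sha256_state. */
          return sha256_base_do_update(desc, data, len,
                          (sha256_block_fn *)sha256_block_data_order);
  }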
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-1
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha1_block_fn)(struct sha1_state *sst, u8 const *src, int blocks)
and pass it to the SHA-1 base functions. For easy casting between the
prototype above and existing block functions that take a 'u32 state[]'
as their first argument, the 'state' member of struct sha1_state is
moved to the base of the struct.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A cipher instance is added to the list of instances unconditionally
regardless of whether the associated test failed. However, a failed
test implies that during another lookup, the cipher instance will
be added to the list again as it will not be found by the lookup
code.
That means that the list can be filled up with instances whose tests
failed.
Note: in practice, tests only fail in FIPS mode, when a cipher is not
marked as fips_allowed=1. This can be seen with cmac(des3_ede), which
does not have fips_allowed=1. When allocating the cipher, the allocation
fails with -ENOENT due to the missing fips_allowed=1 flag (which
causes the testmgr to return -EINVAL). Yet, the instance of
cmac(des3_ede) is shown in /proc/crypto. Allocating the cipher again
fails again, but a second instance is listed in /proc/crypto.
The patch simply de-registers the instance when the testing failed.
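A minimal sketch of the idea (not the literal patch; the self-test
helper name is hypothetical and error handling is simplified):

  static int register_and_test(struct crypto_template *tmpl,
                               struct crypto_instance *inst)
  {
          int err = crypto_register_instance(tmpl, inst);

          if (err)
                  return err;

          err = run_selftest_for_instance(inst);  /* hypothetical */
          if (err)
                  crypto_unregister_instance(inst);

          return err;
  }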
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently initialise the crypto_alg ref count in the function
__crypto_register_alg. As crypto_register_instance, one of the callers
of that function, needs to obtain a ref count before it calls
__crypto_register_alg, we need to move the initialisation out of there.
Since both callers of __crypto_register_alg call crypto_check_alg,
this is the logical place to perform the initialisation.
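In simplified sketch form:

  int crypto_check_alg(struct crypto_alg *alg)
  {
          /* ... existing sanity checks ... */

          /* Both __crypto_register_alg callers pass through here,
           * so initialise the ref count in this common path. */
          atomic_set(&alg->cra_refcnt, 1);

          return crypto_set_driver_name(alg);
  }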
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
The AES implementation still assumes that hw_desc[0] has a valid
key as long as no new key needs to be set; consequently, it always
sets the AES key header for the first descriptor and puts data into
the second one (hw_desc[1]).
Change this to update the key in the hardware only when a new key is
to be set, and otherwise use the first descriptor for data.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With commit 7e77bdebff ("crypto: af_alg - fix backlog handling") in
place, the backlog works under all circumstances where it previously
failed, at least for the sahara driver. Use it.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_alg_match returns an algorithm without taking
any references on it. This means that the algorithm can be freed
at any time; therefore, all users of crypto_alg_match are buggy.
This patch fixes this by taking a reference count on the algorithm
to prevent such races.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The output buffer is used for CPU access, so the right API is
dma_sync_single_for_cpu, which invalidates the cache lines so that
the value is reloaded from memory.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The input and output buffers are mapped for DMA transfer in the Atmel
AES driver. But they are also used by the CPU when the requested crypt
length is not bigger than the threshold value of 16. The buffers will
be cached in cache lines when the CPU accesses them. When DMA uses the
buffers again, the cache may happen to flush its contents to memory
while the DMA transfer is already running. So use
dma_sync_single_for_device and dma_sync_single_for_cpu around the DMA
transfers to ensure DMA coherence, so that the CPU always accesses the
correct values. This fixes an issue where the encrypted result
periodically went wrong during performance tests with OpenSSH.
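The usual pattern looks roughly like this (placeholder variables, not
the driver's exact code):

  /* The CPU filled the input buffer; hand it over to the device. */
  dma_sync_single_for_device(dev, dma_addr_in, length, DMA_TO_DEVICE);
  /* ... start the DMA transfer and wait for completion ... */

  /* The device wrote the result; hand the output buffer back to the CPU. */
  dma_sync_single_for_cpu(dev, dma_addr_out, length, DMA_FROM_DEVICE);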
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. This is caused by an uninitialized
spinlock, so just initialize it in probe.
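The fix amounts to something like the following in the probe routine
(device structure and field names are illustrative):

  static int my_crypto_probe(struct platform_device *pdev)
  {
          struct my_crypto_dev *dd;

          dd = devm_kzalloc(&pdev->dev, sizeof(*dd), GFP_KERNEL);
          if (!dd)
                  return -ENOMEM;

          spin_lock_init(&dd->lock);      /* initialise before first use */
          /* ... rest of probe ... */
          return 0;
  }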
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. This is caused by an uninitialized
spinlock, so just initialize it in probe.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to the datasheet of the Atmel DMA controller, the maximum
source and destination burst size is 16, and the value is also checked
in the at_xdmac_csize function of the Atmel DMA driver. With this
restriction, a value beyond the maximum will not be processed by the
DMA driver, so SHA384 and SHA512 will not work and the program will
wait forever. So change the maximum burst size to 16 in all cases in
order to make SHA384 and SHA512 work and to stay consistent with the
DMA driver and the datasheet.
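With the dmaengine slave API this corresponds to capping the burst
settings at 16 (a sketch; the address widths are illustrative):

  struct dma_slave_config cfg = {};

  cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
  cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
  cfg.src_maxburst = 16;          /* hardware maximum per the datasheet */
  cfg.dst_maxburst = 16;
  err = dmaengine_slave_config(chan, &cfg);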
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. This is caused by an uninitialized
spinlock, so just initialize it in probe.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having a zero-length sg entry doesn't mean it is the end of the sg list.
This case happens when calculating the HMAC of an IPsec packet.
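In other words, the walk has to use the chain itself as the terminator
and skip empty entries (a sketch):

  struct scatterlist *sg;
  unsigned int total = 0;

  for (sg = sgl; sg; sg = sg_next(sg)) {
          if (!sg->length)        /* zero length is not the end */
                  continue;
          total += sg->length;
  }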
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When a hash is requested on data bigger than the buffer allocated by the
SHA driver, the way DMA transfers are performed is quite strange:
the buffer is filled at each update request. When it is full, a DMA
transfer is done. On the next update request, another DMA transfer is
done immediately. Only then do we wait for a full buffer (or the end of
the data) before performing the next DMA transfer. On SAMA5D4 such a
situation sometimes leads to a case where the DMA transfer is finished
but the data-ready irq never comes; moreover, the hash was incorrect in
this case.
With this patch, DMA transfers are only performed when the buffer is
full or when there is no more data. This removes the extra transfer,
whose size equals the update size, that used to follow the full-buffer
transfer.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add new version of atmel-sha available with SAMA5D4 devices.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add new version of atmel-aes available with SAMA5D4 devices.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
release_firmware() was called twice on the error path, causing an Oops.
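The shape of the bug was roughly the following (an illustrative
pattern with placeholder names, not the driver's exact code):

  if (request_firmware(&fw, fw_name, dev))
          return -EFAULT;

  if (parse_fw(fw)) {                     /* hypothetical loader step */
          release_firmware(fw);           /* first release ...        */
          goto out_err;
  }
  return 0;

  out_err:
          release_firmware(fw);           /* ... and a second one: Oops */
          return -EFAULT;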
Reported-by: Ahsan Atta <ahsan.atta@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ring name was allocated but never referenced. It was supposed to be
printed in the debug output.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a spelling typo in crypto/Kconfig.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_unregister_instance take a crypto_instance
instead of a crypto_alg. This allows us to remove a duplicate
CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are multiple problems in crypto_unregister_instance:
1) The cra_refcnt BUG_ON check is racy and can cause crashes.
2) The cra_refcnt check shouldn't exist at all.
3) There is no reference on tmpl to protect the tmpl->free call.
This patch rewrites the function using crypto_remove_spawn which
now morphs into crypto_remove_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kmap_atomic() gives only the page address of the input page.
The driver should take care of adding the offset of the scatterlist
within the page to the returned page address.
The omap-sham driver does not add the offset to the page and operates
directly on the return value of kmap_atomic(), because of which the
following error appears when running the crypto tests:
00000000: d9 a1 1b 7c aa 90 3b aa 11 ab cb 25 00 b8 ac bf
[ 2.338169] 00000010: c1 39 cd ff 48 d0 a8 e2 2b fa 33 a1
[ 2.344008] alg: hash: Chunking test 1 failed for omap-sha256
So add the scatterlist offset to vaddr.
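The fix boils down to (sketch):

  void *page_addr = kmap_atomic(sg_page(sg));
  void *vaddr = page_addr + sg->offset;   /* data starts at sg->offset */

  /* ... process the chunk at vaddr ... */

  kunmap_atomic(page_addr);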
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ERROR:CODE_INDENT: code indent should use tabs where possible
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written
"!device_reset_wq"
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CHECK:BIT_MACRO: Prefer using the BIT macro
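For example (the macro name and bit position are illustrative):

  /* before */
  #define MY_STATUS_STARTED       (1 << 3)
  /* after, using BIT() from <linux/bitops.h> */
  #define MY_STATUS_STARTED       BIT(3)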
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CHECK:CONCATENATED_STRING: Concatenated strings should use spaces between
elements
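I.e. (string contents are illustrative):

  /* before */
  #define MY_DEV_NAME     "qat""_dev"
  /* after */
  #define MY_DEV_NAME     "qat" "_dev"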
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Clean up the code to fix the checkpatch warnings named in the subject.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For cases where the total length of the input SG list is not the same as
the length of the input data for encryption, the omap-aes driver
crashes. This happens when IPsec is trying to use the omap-aes driver.
To avoid this, copy all the pages from the input SG list into a
contiguous buffer and prepare a single-element SG list for this buffer,
with the length set to the total bytes to crypt, similar to what is
already done for unaligned lengths.
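A sketch of that approach with the standard scatterlist helpers (buffer
allocation and variable names simplified):

  /* Flatten the input SG list into one contiguous buffer. */
  sg_copy_to_buffer(req->src, sg_nents(req->src), buf, total);

  /* Build a single-entry SG list covering the copied data. */
  sg_init_one(&dd->in_sgl, buf, total);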
Fixes: 6242332ff2 ("crypto: omap-aes - Add support for cases of unaligned lengths")
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
omap_sham_handle_queue() can be called as part of the done_task tasklet.
In that context it is atomic, and any calls to PM functions cannot
sleep. But there is a call to pm_runtime_get_sync() (which can sleep) in
omap_sham_handle_queue(), because of which the following appears:
" [ 116.169969] BUG: scheduling while atomic: kworker/0:2/2676/0x00000100"
Add pm_runtime_irq_safe() to avoid this.
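I.e. in probe (sketch):

  /* Mark runtime PM for this device as IRQ safe so that
   * pm_runtime_get_sync() called from the tasklet path does not sleep. */
  pm_runtime_irq_safe(dev);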
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all Multi buffer SHA1 helper ciphers as internal ciphers
to prevent them from being called by normal users.
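Marking works by setting CRYPTO_ALG_INTERNAL in cra_flags; such a
cipher can then only be allocated by callers that explicitly ask for
internal algorithms (a sketch; the driver name is hypothetical):

  /* In the helper cipher's algorithm definition: */
  .cra_flags = CRYPTO_ALG_TYPE_SHASH | CRYPTO_ALG_INTERNAL,

  /* Only wrappers that are allowed to use it request it explicitly: */
  tfm = crypto_alloc_shash("__sha1-mb", CRYPTO_ALG_INTERNAL,
                           CRYPTO_ALG_INTERNAL);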
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
mcryptd is used as a wrapper around internal ciphers. Therefore,
mcryptd must be able to process internal ciphers, and the mcryptd
instance must itself be marked as internal if the underlying cipher is
an internal cipher.
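In simplified sketch form, the mcryptd instance inherits the flag from
the algorithm it wraps:

  inst->alg.cra_flags = CRYPTO_ALG_ASYNC |
                        (alg->cra_flags & CRYPTO_ALG_INTERNAL);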
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all 64 bit ARMv8 AES helper ciphers as internal ciphers to
prevent them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all ARMv8 AES helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all NEON bit sliced AES helper ciphers as internal ciphers to
prevent them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all Twofish AVX helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all Serpent SSE2 helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all Serpent AVX helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all Serpent AVX2 helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all CAST6 helper ciphers as internal ciphers to prevent them
from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all AVX Camellia helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all CAST5 helper ciphers as internal ciphers to prevent them
from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all AES-NI Camellia helper ciphers as internal ciphers to
prevent them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all GHASH ARMv8 vmull.p64 helper ciphers as internal ciphers
to prevent them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all GHASH clmulni helper ciphers as internal ciphers to prevent
them from being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Flag all AES-NI helper ciphers as internal ciphers to prevent them from
being called by normal users.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>