Verify certificate chain in the X.509 certificates contained within the PKCS#7
message as far as possible. If any signature that we should be able to verify
fails, we reject the whole lot.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Find the appropriate key in the PKCS#7 key list and verify the signature with
it. There may be several keys in there forming a chain. Any link in that
chain or the root of that chain may be in our keyrings.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Digest the data in a PKCS#7 signed-data message and attach to the
public_key_signature struct contained in the pkcs7_message struct.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Implement a parser for a PKCS#7 signed-data message as described in part of
RFC 2315.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
This patch removes the build-time test that ensures at least one RNG
is set. Instead we will simply not build drbg if no options are set
through Kconfig.
This also fixes a typo in the name of the Kconfig option CRYTPO_DRBG
(should be CRYPTO_DRBG).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The DRBG-style linked list to manage input data that is fed into the
cipher invocations is replaced with the kernel linked list
implementation.
The change is transparent to users of the interfaces offered by the
DRBG. Therefore, no changes to the testmgr code are needed.
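As a hedged illustration of the pattern described above (the struct layout
and helper names below are a simplified sketch, not the exact driver code),
chaining input strings with the kernel list API looks roughly like this:

#include <linux/list.h>
#include <linux/types.h>

/* Sketch: one DRBG input string, linkable into a chain of inputs that is
 * walked when the data is fed into the backend cipher. */
struct drbg_string {
	const unsigned char *buf;
	size_t len;
	struct list_head list;
};

static inline void drbg_string_fill(struct drbg_string *string,
				    const unsigned char *buf, size_t len)
{
	string->buf = buf;
	string->len = len;
	INIT_LIST_HEAD(&string->list);
}

/* Example consumer: compute the total length of all chained inputs. */
static size_t drbg_input_len(struct list_head *inputs)
{
	struct drbg_string *entry;
	size_t total = 0;

	list_for_each_entry(entry, inputs, list)
		total += entry->len;
	return total;
}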
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For the CTR DRBG, the drbg_state->scratchpad temp buffer (i.e. the
memory location immediately before the drbg_state->tfm variable)
is the buffer that the BCC function operates on. BCC operates
blockwise. Making the temp buffer drbg_statelen(drbg) in size is
sufficient when the DRBG state length is a multiple of the block
size. For AES192 this is not the case and the length of temp is
insufficient (yes, that also means that for such ciphers, the final
output of all BCC rounds is truncated before it is used to update the
state of the DRBG!!).
The patch enlarges the temp buffer from drbg_statelen to
drbg_statelen + drbg_blocklen to have sufficient space.
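A minimal sketch of the sizing rule, assuming the driver's existing
drbg_statelen()/drbg_blocklen() helpers (the wrapper name below is
illustrative, not the actual patch):

static inline size_t drbg_ctr_scratchpad_len(struct drbg_state *drbg)
{
	/* one extra block so the final BCC round is never truncated, even
	 * when the state length (e.g. for AES192) is not a multiple of the
	 * block length */
	return drbg_statelen(drbg) + drbg_blocklen(drbg);
}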
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Print the driver name that is being tested. The driver name can be
inferred by parsing /proc/crypto, but having it in the output is
clearer.
Signed-off-by: Luca Clementi <luca.clementi@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per further discussion with NIST, the requirements for FIPS state that
we only need to panic the system on failed kernel module signature checks
for crypto subsystem modules. This moves the fips-mode-only module
signature check out of the generic module loading code, into the crypto
subsystem, at points where we can catch both algorithm module loads and
mode module loads. At the same time, make CONFIG_CRYPTO_FIPS dependent on
CONFIG_MODULE_SIG, as this is entirely necessary for FIPS mode.
v2: remove extraneous blank line, perform checks in static inline
function, drop no longer necessary fips.h include.
CC: "David S. Miller" <davem@davemloft.net>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Stephan Mueller <stephan.mueller@atsec.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
PKCS#7 validation requires access to the serial number and the raw names in an
X.509 certificate.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
As reported by a static code analyzer, the code for the ordering of
the linked list can be simplified.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kvfree() helper is now available; use it instead of open-coding it.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds 4 test vectors for GHASH (of which one for chunked mode), making
a total of 5.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The DRBG test code implements the CAVS test approach.
As discussed for the test vectors, all DRBG types are covered with
testing. However, not every backend cipher is covered by tests. To
prevent testmgr from reporting missing tests, the NULL test is
registered for all backend ciphers not covered by specific test cases.
All currently implemented DRBG types and backend ciphers are defined
in SP800-90A. Therefore, the fips_allowed flag is set for all.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All types of the DRBG (CTR, HMAC, Hash) are covered with test vectors.
In addition, all permutations of use cases of the DRBG are covered:
* with and without prediction resistance
* with and without additional information string
* with and without personalization string
As the DRBG implementation is agnostic of the specific backend cipher,
only test vectors for one specific backend cipher are used. For example:
the Hash DRBG uses the same code paths irrespective of whether SHA-256
or SHA-512 is used. Thus, the test vectors for SHA-256 also cover the
DRBG code paths used with SHA-512.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The different DRBG types of CTR, Hash, HMAC can be enabled or disabled
at compile time. At least one DRBG type shall be selected.
The default is the HMAC DRBG as its code base is the smallest.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a clean-room implementation of the DRBG defined in SP800-90A.
All three viable DRBGs defined in the standard are implemented:
* HMAC: This is the leanest DRBG and is compiled in by default
* Hash: The more complex DRBG can be enabled at compile time
* CTR: The most complex DRBG can also be enabled at compile time
The DRBG implementation offers the following:
* All three DRBG types are implemented with a derivation function.
* All DRBG types are available with and without prediction resistance.
* All SHA types of SHA-1, SHA-256, SHA-384, SHA-512 are available for
the HMAC and Hash DRBGs.
* All AES types of AES-128, AES-192 and AES-256 are available for the
CTR DRBG.
* A self test is implemented with drbg_healthcheck().
* The FIPS 140-2 continuous self test is implemented.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
zswap allocates one LZO context per online cpu.
Using vmalloc() for small (16KB) memory areas has the drawbacks of slowing
down /proc/vmallocinfo and /proc/meminfo reads, increasing TLB pressure
and giving poor NUMA locality, as the default NUMA policy at boot time is
to interleave pages:
edumazet:~# grep lzo /proc/vmallocinfo | head -4
0xffffc90006062000-0xffffc90006067000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
0xffffc90006067000-0xffffc9000606c000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
0xffffc9000606c000-0xffffc90006071000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
0xffffc90006071000-0xffffc90006076000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
This patch tries a regular kmalloc() first and falls back to vmalloc() in
case memory is too fragmented.
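The allocation pattern, as a hedged sketch (the function name is
hypothetical); freeing pairs with the kvfree() helper mentioned earlier:

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Try a physically contiguous, NUMA-local kmalloc() first; only fall back
 * to vmalloc() when memory is too fragmented for the request. */
static void *lzo_ctx_alloc(size_t size)
{
	void *ctx;

	ctx = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!ctx)
		ctx = vmalloc(size);
	return ctx;	/* release with kvfree(ctx) */
}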
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel
Pull LLVM patches from Behan Webster:
"Next set of patches to support compiling the kernel with clang.
They've been soaking in linux-next since the last merge window.
More still in the works for the next merge window..."
* tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel:
arm, unwind, LLVMLinux: Enable clang to be used for unwinding the stack
ARM: LLVMLinux: Change "extern inline" to "static inline" in glue-cache.h
all: LLVMLinux: Change DWARF flag to support gcc and clang
net: netfilter: LLVMLinux: vlais-netfilter
crypto: LLVMLinux: aligned-attribute.patch
__attribute__((aligned)) applies the default alignment for the largest scalar
type for the target ABI. gcc allows it to be applied inline to a defined type.
Clang only allows it to be applied to a type definition (PR11071).
Making it into 2 lines makes it more readable and works with both compilers.
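For reference, a small standalone C example (not the code touched by this
patch) showing what the argument-less attribute requests, i.e. the default
maximum alignment for the target ABI:

#include <stdio.h>

/* The attribute is applied to the type definition, which both gcc and clang
 * accept; with no argument it requests the largest alignment used for any
 * scalar type on the target ABI (typically 16 bytes on x86_64). */
struct padded {
	char c;
} __attribute__((aligned));

int main(void)
{
	printf("alignof(struct padded) = %zu\n", _Alignof(struct padded));
	return 0;
}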
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Test vectors were taken from existing test for
CBC(DES3_EDE). Associated data has been added to test vectors.
HMAC computed with Crypto++ has been used. Following algos have
been covered.
(a) "authenc(hmac(sha1),cbc(des))"
(b) "authenc(hmac(sha1),cbc(des3_ede))"
(c) "authenc(hmac(sha224),cbc(des))"
(d) "authenc(hmac(sha224),cbc(des3_ede))"
(e) "authenc(hmac(sha256),cbc(des))"
(f) "authenc(hmac(sha256),cbc(des3_ede))"
(g) "authenc(hmac(sha384),cbc(des))"
(h) "authenc(hmac(sha384),cbc(des3_ede))"
(i) "authenc(hmac(sha512),cbc(des))"
(j) "authenc(hmac(sha512),cbc(des3_ede))"
Signed-off-by: Vakul Garg <vakul@freescale.com>
[NiteshNarayanLal@freescale.com: added hooks for the missing algorithms test and tested the patch]
Signed-off-by: Nitesh Lal <NiteshNarayanLal@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With DMA-API debug enabled, testmgr triggers a "DMA-API: device driver maps memory from stack" warning when tested on a crypto HW accelerator.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CRYPTO_USER requires CAP_NET_ADMIN for all operations. Most information
provided by CRYPTO_MSG_GETALG is also accessible through /proc/modules
and AF_ALG. CRYPTO_MSG_GETALG should not require CAP_NET_ADMIN so that
processes without CAP_NET_ADMIN can use CRYPTO_MSG_GETALG to get cipher
details, such as cipher priorities, for AF_ALG.
Signed-off-by: Matthias-Christian Ott <ott@mirix.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a memory leak of the struct aead_request that is allocated via
aead_request_alloc() but not released via aead_request_free().
Reported by Coverity - CID 1163869.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a potential memory leak in the error handling of test_aead_speed(). In case
crypto_alloc_aead() fails, the function returns without going through the
centralized cleanup path. Reported by Coverity - CID 1163870.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a potential memory leak in the error handling of test_aead_speed(). In case
the size check on the associate data length parameter fails, the function goes
through the wrong exit label. Reported by Coverity - CID 1163870.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By passing a netlink socket to a more privileged executable and then
fooling that executable into writing to the socket data that happens to
be a valid netlink message, it is possible to make the privileged
executable do something it did not intend to do.
To keep this from happening, replace bare capable and ns_capable calls
with netlink_capable, netlink_net_capable and netlink_ns_capable calls,
which act the same as the previous calls except that they also verify
that the opener of the socket had the desired permissions.
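A hedged sketch of the conversion pattern (the handler below is
hypothetical, not one of the converted call sites):

#include <linux/capability.h>
#include <linux/netlink.h>
#include <net/sock.h>

static int demo_doit(struct sk_buff *skb, struct nlmsghdr *nlh)
{
	/* before: if (!capable(CAP_NET_ADMIN)) return -EPERM;
	 * after: the check is tied to the credentials of whoever opened the
	 * socket, so tricking a more privileged writer gains nothing */
	if (!netlink_net_capable(skb, CAP_NET_ADMIN))
		return -EPERM;

	/* ... process the message ... */
	return 0;
}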
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Patch adds large test-vectors for SHA algorithms for better code coverage in
optimized assembly implementations. Empty test-vectors are also added, as some
crypto drivers appear to have special case handling for empty input.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds test cases for SHA-1, SHA-224, SHA-256 and AES-CCM with an input size
that is an exact multiple of the block size. The reason is that some
implementations use a different code path for these cases.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an x86_64 AVX2 optimization of the SHA1
transform to the crypto subsystem. The patch has been tested with the
3.14.0-rc1 kernel.
On a Haswell desktop, with turbo disabled and all cpus running
at maximum frequency, tcrypt shows an AVX2 performance improvement
over the AVX implementation ranging from 3% for 256-byte updates to
16% for 1024-byte updates.
This patch adds sha1_avx2_transform() along with the glue, build and
configuration changes needed for the AVX2 optimization of the
SHA1 transform.
sha1-ssse3 is a single module that provides the optimized
(SSSE3/AVX/AVX2) implementations of the low-level SHA1 transform
function; the transform function is overridden with the best available
variant. In the case of AVX2, because performance varies across data
block sizes, either the AVX or the AVX2 transform function is chosen
at run time, whichever suits best. The Makefile change therefore appends
the necessary objects to the linkage; the patch merely adds the AVX2
transform to the existing build mix and Kconfig support and leaves the
rest of the configuration/build support as is.
Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto algorithm modules utilizing the crypto daemon could
be used early when the system starts up. Using module_init
does not guarantee that the daemon's work queue is initialized
when a crypto algorithm depending on crypto_wq starts. It is necessary
to initialize the crypto work queue earlier, at subsystem
init time, to make sure that it is initialized
when used.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add test vectors for aead with null encryption and md5 or sha1
authentication.
Input data is taken from test vectors listed in RFC2410.
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These defines might be needed by crypto drivers.
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ahash_def_finup() can make use of the request save/restore functions,
thus make it so. This simplifies the code a little and unifies the code
paths.
Note that the same remark about free()ing the req->priv applies here, the
req->priv can only be free()'d after the original request was restored.
Finally, squash a bug in the invocation of completion in the ASYNC path.
In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err)
was called with X=areq->base.data. This is incorrect, as X=&areq->base
is the correct value. By analysis of the data structures, we see that areq is
of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request'
and areq->base.complete is of type crypto_completion_t, which is defined in
include/linux/crypto.h as:
typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
This is one lead that X should be &areq->base. Next up, we can inspect
other code which calls the completion callback to get a kind of statistical
idea of how this callback is used. We can try:
$ git grep base\.complete\( drivers/crypto/
Finally, by inspecting the ahash_request_set_callback() implementation defined
in include/crypto/hash.h, we observe that the .data entry of 'struct
crypto_async_request' is intended for arbitrary data, not for the completion
argument.
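Reconstructed from the description above as a minimal sketch (not the exact
driver code), the corrected invocation looks like:

#include <crypto/hash.h>

/* completion handler registered for the second half of finup */
static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
{
	struct ahash_request *areq = req->data;	/* the .data cookie is ours */

	/* ... continue/finish the finup operation ... */

	/* pass the crypto_async_request itself, not the arbitrary .data
	 * cookie, to the completion callback */
	areq->base.complete(&areq->base, err);
}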
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function to save the original request within a newly adjusted request,
and its counterpart to restore the original request, can be re-used by
more code in the crypto/ahash.c file. Pull these functions out from the
code so they're available.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add documentation for the pointer voodoo that is happening in crypto/ahash.c
in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk
of documentation.
Moreover, make sure the mangled request is completely restored after finishing
this unaligned operation. This means restoring all of .result, .base.data
and .base.complete.
Also, remove the crypto_completion_t complete = ... line present in the
ahash_op_unaligned_done() function. This type actually declares a function
pointer, which is very confusing.
Finally, yet very important nonetheless, make sure the req->priv is free()'d
only after the original request is restored in ahash_op_unaligned_done().
The req->priv data must not be free()'d before that in ahash_op_unaligned_finish(),
since we would be accessing previously free()'d data in ahash_op_unaligned_done()
and cause corruption.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We added a soft module dependency on the crc32c module alias
to the generic crc32c module so that other hardware-accelerated crc32c
modules can be loaded and used before the generic version.
We also renamed crypto/crc32c.c, which contains the generic
crc32c crypto computation, to crypto/crc32c_generic.c according
to convention.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When finishing the ahash request, the ahash_op_unaligned_done() will
call complete() on the request. Yet, this will not call the correct
complete callback. The correct complete callback was previously stored
in the requests' private data, as seen in ahash_op_unaligned(). This
patch restores the correct complete callback and .data field of the
request before calling complete() on it.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adding simple speed tests for a range of block sizes for AEAD crypto
algorithms.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit fe8c8a1268 introduced a possible build error for archs
that do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set. :/
Fix this up by bringing else braces outside of the ifdef.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Fixes: fe8c8a1268 ("crypto: more robust crypto_memneq")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-By: Cesar Eduardo Barros <cesarb@cesarb.eti.br>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A kernel with lockdep enabled complains about wrong usage of
rcu_dereference() under an rcu_read_lock_bh() protected region.
===============================
[ INFO: suspicious RCU usage. ]
3.13.0-rc1+ #126 Not tainted
-------------------------------
linux/crypto/pcrypt.c:81 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks = 1
1 lock held by cryptomgr_test/153:
#0: (rcu_read_lock_bh){.+....}, at: [<ffffffff812c8075>] pcrypt_do_parallel.isra.2+0x5/0x200
Fix that by using rcu_dereference_bh() instead.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Disabling compiler optimizations can be fragile, since a new
optimization could be added to -O0 or -Os that breaks the assumptions
the code is making.
Instead of disabling compiler optimizations, use a dummy inline assembly
(based on RELOC_HIDE) to block the problematic kinds of optimization,
while still allowing other optimizations to be applied to the code.
The dummy inline assembly is added after every OR, and has the
accumulator variable as its input and output. The compiler is forced to
assume that the dummy inline assembly could both depend on the
accumulator variable and change the accumulator variable, so it is
forced to compute the value correctly before the inline assembly, and
cannot assume anything about its value after the inline assembly.
This change should be enough to make crypto_memneq work correctly (with
data-independent timing) even if it is inlined at its call sites. That
can be done later in a followup patch.
Compile-tested on x86_64.
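A hedged sketch of the technique (macro and function names illustrative,
modeled on RELOC_HIDE as described above, not the actual kernel helper):

#include <stddef.h>

/* The empty asm names the accumulator as both input and output, so the
 * compiler must treat its value as used and possibly modified at this point
 * and cannot collapse the ORs into an early-return comparison. */
#define HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

static unsigned long demo_memneq(const void *a, const void *b, size_t size)
{
	const unsigned char *pa = a, *pb = b;
	unsigned long neq = 0;

	while (size--) {
		neq |= *pa++ ^ *pb++;
		HIDE_VAR(neq);	/* added after every OR */
	}
	return neq;	/* zero iff equal; data-independent timing */
}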
Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.eti.br>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
"This push fixes a number of crashes triggered by a previous crypto
self-test update. It also fixes a build problem in the caam driver,
as well as a concurrency issue in s390.
Finally there is a pair of fixes to bugs in the crypto scatterwalk
code and authenc that may lead to crashes"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: testmgr - fix sglen in test_aead for case 'dst != src'
crypto: talitos - fix aead sglen for case 'dst != src'
crypto: caam - fix aead sglen for case 'dst != src'
crypto: ccm - Fix handling of zero plaintext when computing mac
crypto: s390 - Fix aes-xts parameter corruption
crypto: talitos - corrrectly handle zero-length assoc data
crypto: scatterwalk - Set the chain pointer indication bit
crypto: authenc - Find proper IV address in ablkcipher callback
crypto: caam - Add missing Job Ring include
Pull networking updates from David Miller:
"Here is a pile of bug fixes that accumulated while I was in Europe"
1) In fixing kernel leaks to userspace during copying of socket
addresses, we broke a case that used to work, namely the user
providing a buffer larger than the in-kernel generic socket address
structure. This broke Ruby amongst other things. Fix from Dan
Carpenter.
2) Fix regression added by byte queue limit support in 8139cp driver,
from Yang Yingliang.
3) The addition of MSG_SENDPAGE_NOTLAST buggered up a few sendpage
implementations, they should just treat it the same as MSG_MORE.
Fix from Richard Weinberger and Shawn Landden.
4) Handle icmpv4 errors received on ipv6 SIT tunnels correctly, from
Oussama Ghorbel. In particular we should send an ICMPv6 unreachable
in such situations.
5) Fix some regressions in the recent genetlink fixes, in particular
get the pmcraid driver to use the new safer interfaces correctly.
From Johannes Berg.
6) macvtap was converted to use a per-cpu set of statistics, but some
code was still bumping tx_dropped elsewhere. From Jason Wang.
7) Fix build failure of xen-netback due to missing include on some
architectures, from Andy Whitecroft.
8) macvtap double counts received packets in statistics, fix from Vlad
Yasevich.
9) Fix various cases of using *_STATS_BH() when *_STATS() is more
appropriate. From Eric Dumazet and Hannes Frederic Sowa.
10) Pktgen ipsec mode doesn't update the ipv4 header length and checksum
properly after encapsulation. Fix from Fan Du.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (61 commits)
net/mlx4_en: Remove selftest TX queues empty condition
{pktgen, xfrm} Update IPv4 header total len and checksum after tranformation
virtio_net: make all RX paths handle erors consistently
virtio_net: fix error handling for mergeable buffers
virtio_net: Fixed a trivial typo (fitler --> filter)
netem: fix gemodel loss generator
netem: fix loss 4 state model
netem: missing break in ge loss generator
net/hsr: Support iproute print_opt ('ip -details ...')
net/hsr: Very small fix of comment style.
MAINTAINERS: Added net/hsr/ maintainer
ipv6: fix possible seqlock deadlock in ip6_finish_output2
ixgbe: Make ixgbe_identify_qsfp_module_generic static
ixgbe: turn NETIF_F_HW_L2FW_DOFFLOAD off by default
ixgbe: ixgbe_fwd_ring_down needs to be static
e1000: fix possible reset_task running after adapter down
e1000: fix lockdep warning in e1000_reset_task
e1000: prevent oops when adapter is being closed and reset simultaneously
igb: Fixed Wake On LAN support
inet: fix possible seqlock deadlocks
...
Commit 35f9c09fe (tcp: tcp_sendpages() should call tcp_push() once)
added an internal flag MSG_SENDPAGE_NOTLAST, similar to
MSG_MORE.
algif_hash, algif_skcipher, and udp used MSG_MORE from tcp_sendpages()
and need to see the new flag as identical to MSG_MORE.
This fixes sendfile() on AF_ALG.
v3: also fix udp
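A minimal sketch of the fix pattern in an affected sendpage handler
(handler name and surrounding context hypothetical):

#include <linux/net.h>
#include <linux/socket.h>

static ssize_t demo_sendpage(struct socket *sock, struct page *page,
			     int offset, size_t size, int flags)
{
	/* an intermediate page of a sendfile() is not the last one, so keep
	 * accumulating exactly as if the caller had passed MSG_MORE */
	if (flags & MSG_SENDPAGE_NOTLAST)
		flags |= MSG_MORE;

	/* ... existing MSG_MORE-aware logic continues unchanged ... */
	return size;
}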
Cc: Tom Herbert <therbert@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: <stable@vger.kernel.org> # 3.4.x + 3.2.x
Reported-and-tested-by: Shawn Landden <shawnlandden@gmail.com>
Original-patch: Richard Weinberger <richard@nod.at>
Signed-off-by: Shawn Landden <shawn@churchofgit.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit d8a32ac256 (crypto: testmgr - make
test_aead also test 'dst != src' code paths) added support for different
source and destination buffers in test_aead.
This patch modifies the source and destination buffer lengths accordingly:
the lengths are not equal since encryption / decryption adds / removes
the ICV.
Cc: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When performing an asynchronous ablkcipher operation the authenc
completion callback routine is invoked, but it does not locate and use
the proper IV.
The callback routine, crypto_authenc_encrypt_done, is updated to use
the same method of calculating the address of the IV as is done in
the crypto_authenc_encrypt function, which sets up the callback.
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 09fbc47373, which
caused the following build errors:
crypto/asymmetric_keys/x509_public_key.c: In function ‘x509_key_preparse’:
crypto/asymmetric_keys/x509_public_key.c:237:35: error: ‘system_trusted_keyring’ undeclared (first use in this function)
ret = x509_validate_trust(cert, system_trusted_keyring);
^
crypto/asymmetric_keys/x509_public_key.c:237:35: note: each undeclared identifier is reported only once for each function it appears in
reported by Jim Davis. Mimi says:
"I made the classic mistake of requesting this patch to be upstreamed
at the last second, rather than waiting until the next open window.
At this point, the best course would probably be to revert the two
commits and fix them for the next open window"
Reported-by: Jim Davis <jim.epost@gmail.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto update from Herbert Xu:
- Made x86 ablk_helper generic for ARM
- Phase out chainiv in favour of eseqiv (affects IPsec)
- Fixed aes-cbc IV corruption on s390
- Added constant-time crypto_memneq which replaces memcmp
- Fixed aes-ctr in omap-aes
- Added OMAP3 ROM RNG support
- Add PRNG support for MSM SoC's
- Add and use Job Ring API in caam
- Misc fixes
[ NOTE! This pull request was sent within the merge window, but Herbert
has some questionable email sending setup that makes him public enemy
#1 as far as gmail is concerned. So most of his emails seem to be
trapped by gmail as spam, resulting in me not seeing them. - Linus ]
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (49 commits)
crypto: s390 - Fix aes-cbc IV corruption
crypto: omap-aes - Fix CTR mode counter length
crypto: omap-sham - Add missing modalias
padata: make the sequence counter an atomic_t
crypto: caam - Modify the interface layers to use JR API's
crypto: caam - Add API's to allocate/free Job Rings
crypto: caam - Add Platform driver for Job Ring
hwrng: msm - Add PRNG support for MSM SoC's
ARM: DT: msm: Add Qualcomm's PRNG driver binding document
crypto: skcipher - Use eseqiv even on UP machines
crypto: talitos - Simplify key parsing
crypto: picoxcell - Simplify and harden key parsing
crypto: ixp4xx - Simplify and harden key parsing
crypto: authencesn - Simplify key parsing
crypto: authenc - Export key parsing helper function
crypto: mv_cesa: remove deprecated IRQF_DISABLED
hwrng: OMAP3 ROM Random Number Generator support
crypto: sha256_ssse3 - also test for BMI2
crypto: mv_cesa - Remove redundant of_match_ptr
crypto: sahara - Remove redundant of_match_ptr
...
Pull networking fixes from David Miller:
1) Fix memory leaks and other issues in mwifiex driver, from Amitkumar
Karwar.
2) skb_segment() can choke on packets using frag lists, fix from
Herbert Xu with help from Eric Dumazet and others.
3) IPv4 output cached route instantiation properly handles races
involving two threads trying to install the same route, but we
forgot to propagate this logic to input routes as well. Fix from
Alexei Starovoitov.
4) Put protections in place to make sure that recvmsg() paths never
accidently copy uninitialized memory back into userspace and also
make sure that we never try to use more that sockaddr_storage for
building the on-kernel-stack copy of a sockaddr. Fixes from Hannes
Frederic Sowa.
5) R8152 driver transmit flow bug fixes from Hayes Wang.
6) Fix some minor fallouts from genetlink changes, from Johannes Berg
and Michael Opdenacker.
7) AF_PACKET sendmsg path can race with netdevice unregister notifier,
fix by using RCU to make sure the network device doesn't go away
from under us. Fix from Daniel Borkmann.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (43 commits)
gso: handle new frag_list of frags GRO packets
genetlink: fix genl_set_err() group ID
genetlink: fix genlmsg_multicast() bug
packet: fix use after free race in send path when dev is released
xen-netback: stop the VIF thread before unbinding IRQs
wimax: remove dead code
net/phy: Add the autocross feature for forced links on VSC82x4
net/phy: Add VSC8662 support
net/phy: Add VSC8574 support
net/phy: Add VSC8234 support
net: add BUG_ON if kernel advertises msg_namelen > sizeof(struct sockaddr_storage)
net: rework recvmsg handler msg_name and msg_namelen logic
bridge: flush br's address entry in fdb when remove the
net: core: Always propagate flag changes to interfaces
ipv4: fix race in concurrent ip_route_input_slow()
r8152: fix incorrect type in assignment
r8152: support stopping/waking tx queue
r8152: modify the tx flow
r8152: fix tx/rx memory overflow
netfilter: ebt_ip6: fix source and destination matching
...
Pull security subsystem updates from James Morris:
"In this patchset, we finally get an SELinux update, with Paul Moore
taking over as maintainer of that code.
Also a significant update for the Keys subsystem, as well as
maintenance updates to Smack, IMA, TPM, and Apparmor"
and since I wanted to know more about the updates to key handling,
here's the explanation from David Howells on that:
"Okay. There are a number of separate bits. I'll go over the big bits
and the odd important other bit, most of the smaller bits are just
fixes and cleanups. If you want the small bits accounting for, I can
do that too.
(1) Keyring capacity expansion.
KEYS: Consolidate the concept of an 'index key' for key access
KEYS: Introduce a search context structure
KEYS: Search for auth-key by name rather than target key ID
Add a generic associative array implementation.
KEYS: Expand the capacity of a keyring
Several of the patches are providing an expansion of the capacity of a
keyring. Currently, the maximum size of a keyring payload is one page.
Subtract a small header and then divide up into pointers, that only gives
you ~500 pointers on an x86_64 box. However, since the NFS idmapper uses
a keyring to store ID mapping data, that has proven to be insufficient to
the cause.
Whatever data structure I use to handle the keyring payload, it can only
store pointers to keys, not the keys themselves because several keyrings
may point to a single key. This precludes inserting, say, an rb_node
struct into the key struct for this purpose.
I could make an rbtree of records such that each record has an rb_node
and a key pointer, but that would use four words of space per key stored
in the keyring. It would, however, be able to use much existing code.
I selected instead a non-rebalancing radix-tree type approach as that
could have a better space-used/key-pointer ratio. I could have used the
radix tree implementation that we already have and insert keys into it by
their serial numbers, but that means any sort of search must iterate over
the whole radix tree. Further, its nodes are a bit on the capacious side
for what I want - especially given that key serial numbers are randomly
allocated, thus leaving a lot of empty space in the tree.
So what I have is an associative array that internally is a radix-tree
with 16 pointers per node where the index key is constructed from the key
type pointer and the key description. This means that an exact lookup by
type+description is very fast as this tells us how to navigate directly to
the target key.
I made the data structure general in lib/assoc_array.c as far as it is
concerned, its index key is just a sequence of bits that leads to a
pointer. It's possible that someone else will be able to make use of it
also. FS-Cache might, for example.
(2) Mark keys as 'trusted' and keyrings as 'trusted only'.
KEYS: verify a certificate is signed by a 'trusted' key
KEYS: Make the system 'trusted' keyring viewable by userspace
KEYS: Add a 'trusted' flag and a 'trusted only' flag
KEYS: Separate the kernel signature checking keyring from module signing
These patches allow keys carrying asymmetric public keys to be marked as
being 'trusted' and allow keyrings to be marked as only permitting the
addition or linkage of trusted keys.
Keys loaded from hardware during kernel boot or compiled into the kernel
during build are marked as being trusted automatically. New keys can be
loaded at runtime with add_key(). They are checked against the system
keyring contents and if their signatures can be validated with keys that
are already marked trusted, then they are marked trusted also and can
thus be added into the master keyring.
Patches from Mimi Zohar make this usable with the IMA keyrings also.
(3) Remove the date checks on the key used to validate a module signature.
X.509: Remove certificate date checks
It's not reasonable to reject a signature just because the key that it was
generated with is no longer valid datewise - especially if the kernel
hasn't yet managed to set the system clock when the first module is
loaded - so just remove those checks.
(4) Make it simpler to deal with additional X.509 being loaded into the kernel.
KEYS: Load *.x509 files into kernel keyring
KEYS: Have make canonicalise the paths of the X.509 certs better to deduplicate
The builder of the kernel now just places files with the extension ".x509"
into the kernel source or build trees and they're concatenated by the
kernel build and stuffed into the appropriate section.
(5) Add support for userspace kerberos to use keyrings.
KEYS: Add per-user_namespace registers for persistent per-UID kerberos caches
KEYS: Implement a big key type that can save to tmpfs
Fedora went to, by default, storing kerberos tickets and tokens in tmpfs.
We looked at storing it in keyrings instead as that confers certain
advantages such as tickets being automatically deleted after a certain
amount of time and the ability for the kernel to get at these tokens more
easily.
To make this work, two things were needed:
(a) A way for the tickets to persist beyond the lifetime of all a user's
sessions so that cron-driven processes can still use them.
The problem is that a user's session keyrings are deleted when the
session that spawned them logs out and the user's user keyring is
deleted when the UID is deleted (typically when the last log out
happens), so neither of these places is suitable.
I've added a system keyring into which a 'persistent' keyring is
created for each UID on request. Each time a user requests their
persistent keyring, the expiry time on it is set anew. If the user
doesn't ask for it for, say, three days, the keyring is automatically
expired and garbage collected using the existing gc. All the kerberos
tokens it held are then also gc'd.
(b) A key type that can hold really big tickets (up to 1MB in size).
The problem is that Active Directory can return huge tickets with lots
of auxiliary data attached. We don't, however, want to eat up huge
tracts of unswappable kernel space for this, so if the ticket is
greater than a certain size, we create a swappable shmem file and dump
the contents in there and just live with the fact we then have an
inode and a dentry overhead. If the ticket is smaller than that, we
slap it in a kmalloc()'d buffer"
* 'for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (121 commits)
KEYS: Fix keyring content gc scanner
KEYS: Fix error handling in big_key instantiation
KEYS: Fix UID check in keyctl_get_persistent()
KEYS: The RSA public key algorithm needs to select MPILIB
ima: define '_ima' as a builtin 'trusted' keyring
ima: extend the measurement list to include the file signature
kernel/system_certificate.S: use real contents instead of macro GLOBAL()
KEYS: fix error return code in big_key_instantiate()
KEYS: Fix keyring quota misaccounting on key replacement and unlink
KEYS: Fix a race between negating a key and reading the error set
KEYS: Make BIG_KEYS boolean
apparmor: remove the "task" arg from may_change_ptraced_domain()
apparmor: remove parent task info from audit logging
apparmor: remove tsk field from the apparmor_audit_struct
apparmor: fix capability to not use the current task, during reporting
Smack: Ptrace access check mode
ima: provide hash algo info in the xattr
ima: enable support for larger default filedata hash algorithms
ima: define kernel parameter 'ima_template=' to change configured default
ima: add Kconfig default measurement list template
...
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
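As a hedged illustration of the new contract (handler name and address
family chosen arbitrarily, prototype simplified), a conforming recvmsg
handler now looks roughly like:

#include <linux/in.h>
#include <linux/net.h>
#include <linux/string.h>

static int demo_recvmsg(struct socket *sock, struct msghdr *msg,
			size_t len, int flags)
{
	/* msg->msg_namelen arrives as 0; msg->msg_name may be NULL when the
	 * caller (e.g. plain read() or recvfrom() with a NULL address) does
	 * not want the source address */
	struct sockaddr_in *sin = msg->msg_name;

	/* ... receive the payload ... */

	if (sin) {
		sin->sin_family = AF_INET;
		sin->sin_port = 0;
		sin->sin_addr.s_addr = htonl(INADDR_ANY);
		memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
		/* must stay <= sizeof(struct sockaddr_storage) */
		msg->msg_namelen = sizeof(*sin);
	}
	return 0;
}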
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull slave-dmaengine changes from Vinod Koul:
"This brings for slave dmaengine:
- Change dma notification flag to DMA_COMPLETE from DMA_SUCCESS as
dmaengine can only transfer and not verify the validity of dma
transfers
- Bunch of fixes across drivers:
- cppi41 driver fixes from Daniel
- 8 channel freescale dma engine support and updated bindings from
Hongbo
- msx-dma fixes and cleanup by Markus
- DMAengine updates from Dan:
- Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
- In the course of testing 1/ a collection of enhancements to
dmatest fell out. Notably basic performance statistics, and
fixed / enhanced test control through new module parameters
'run', 'wait', 'noverify', and 'verbose'. Thanks to Andriy and
Linus [Walleij] for their review.
- Testing the raid related corner cases of 1/ triggered bugs in
the recently added 16-source operation support in the ioatdma
driver.
- Some minor fixes / cleanups to mv_xor and ioatdma"
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (99 commits)
dma: mv_xor: Fix mis-usage of mmio 'base' and 'high_base' registers
dma: mv_xor: Remove unneeded NULL address check
ioat: fix ioat3_irq_reinit
ioat: kill msix_single_vector support
raid6test: add new corner case for ioatdma driver
ioatdma: clean up sed pool kmem_cache
ioatdma: fix selection of 16 vs 8 source path
ioatdma: fix sed pool selection
ioatdma: Fix bug in selftest after removal of DMA_MEMSET.
dmatest: verbose mode
dmatest: convert to dmaengine_unmap_data
dmatest: add a 'wait' parameter
dmatest: add basic performance metrics
dmatest: add support for skipping verification and random data setup
dmatest: use pseudo random numbers
dmatest: support xor-only, or pq-only channels in tests
dmatest: restore ability to start test at module load and init
dmatest: cleanup redundant "dmatest: " prefixes
dmatest: replace stored results mechanism, with uniform messages
Revert "dmatest: append verify result to results"
...
Pull dmaengine changes from Dan
1/ Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
2/ In the course of testing 1/ a collection of enhancements to dmatest
fell out. Notably basic performance statistics, and fixed / enhanced
test control through new module parameters 'run', 'wait', 'noverify',
and 'verbose'. Thanks to Andriy and Linus for their review.
3/ Testing the raid related corner cases of 1/ triggered bugs in the
recently added 16-source operation support in the ioatdma driver.
4/ Some minor fixes / cleanups to mv_xor and ioatdma.
Conflicts:
drivers/dma/dmatest.c
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use this new function to make code more comprehensible, since we are
reinitializing the completion, not initializing it.
[akpm@linux-foundation.org: linux-next resyncs]
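A short sketch of the conversion (the driver structure below is
hypothetical):

#include <linux/completion.h>

struct demo_dev {
	struct completion done;
};

static int demo_do_one(struct demo_dev *dev)
{
	/* was: INIT_COMPLETION(dev->done); the new name says what actually
	 * happens: an already-initialized completion is re-armed for reuse */
	reinit_completion(&dev->done);

	/* ... start the operation that will call complete(&dev->done) ... */

	wait_for_completion(&dev->done);
	return 0;
}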
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Linus Walleij <linus.walleij@linaro.org> (personally at LCE13)
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With 24 disks and an ioatdma instance with 16 source support there is a
corner case where the driver needs to be careful to account for the
number of implied sources in the continuation case.
Also bump the default case to test more than 16 sources now that it
triggers different paths in offload drivers.
Cc: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the generic unmap object to unmap dma buffers.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the generic unmap object to unmap dma buffers.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
[bzolnier: keep temporary dma_dest array in do_async_gen_syndrome()]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the generic unmap object to unmap dma buffers.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
[bzolnier: keep temporary dma_dest array in async_mult()]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the generic unmap object to unmap dma buffers.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
[bzolnier: minor cleanups]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the generic unmap object to unmap dma buffers.
Later we can push this unmap object up to the raid layer and get rid of
the 'scribble' parameter.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
[bzolnier: minor cleanups]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The RSA public key algorithm needs to select MPILIB directly in Kconfig as the
'select' directive is not recursive, and thus MPILIB is not enabled by
selecting MPILIB_EXTRA.
Without this, the following errors can occur:
crypto/built-in.o: In function `RSA_verify_signature':
rsa.c:(.text+0x1d347): undefined reference to `mpi_get_nbits'
rsa.c:(.text+0x1d354): undefined reference to `mpi_get_nbits'
rsa.c:(.text+0x1d36e): undefined reference to `mpi_cmp_ui'
rsa.c:(.text+0x1d382): undefined reference to `mpi_cmp'
rsa.c:(.text+0x1d391): undefined reference to `mpi_alloc'
rsa.c:(.text+0x1d3b0): undefined reference to `mpi_powm'
rsa.c:(.text+0x1d3c3): undefined reference to `mpi_free'
rsa.c:(.text+0x1d3d8): undefined reference to `mpi_get_buffer'
rsa.c:(.text+0x1d4d4): undefined reference to `mpi_free'
rsa.c:(.text+0x1d503): undefined reference to `mpi_get_nbits'
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Previously we would use eseqiv on all async ciphers in all cases,
and sync ciphers if we have more than one CPU. This meant that
chainiv is only used in the case of sync ciphers on a UP machine.
As chainiv may aid attackers by making the IV predictable, even
though this risk itself is small, the above usage pattern causes
it to further leak information about the host.
This patch addresses these issues by using eseqiv even if we're
on a UP machine.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: David S. Miller <davem@davemloft.net>
In preparation of supporting more hash algorithms with larger hash sizes
needed for signature verification, this patch replaces the 20 byte sized
digest, with a more flexible structure. The new structure includes the
hash algorithm, digest size, and digest.
Changelog:
- recalculate filedata hash for the measurement list, if the signature
hash digest size is greater than 20 bytes.
- use generic HASH_ALGO_
- make ima_calc_file_hash static
- scripts lindent and checkpatch fixes
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
This patch makes use of the newly defined common hash algorithm info,
replacing, for example, PKEY_HASH with HASH_ALGO.
Changelog:
- Lindent fixes - Mimi
CC: David Howells <dhowells@redhat.com>
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
This patch provides a single place for information about hash algorithms,
such as hash sizes and kernel driver names, which will be used by IMA
and the public key code.
Changelog:
- Fix sparse and checkpatch warnings
- Move hash algo enums to uapi for userspace signing functions.
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the common helper function crypto_authenc_extractkeys() for key
parsing.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
AEAD key parsing is duplicated to multiple places in the kernel. Add a
common helper function to consolidate that functionality.
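A hedged usage sketch of the exported helper (setkey context abbreviated,
hardware programming elided):

#include <crypto/authenc.h>
#include <linux/errno.h>
#include <linux/types.h>

static int demo_aead_setkey(const u8 *key, unsigned int keylen)
{
	struct crypto_authenc_keys keys;

	if (crypto_authenc_extractkeys(&keys, key, keylen))
		return -EINVAL;

	/* keys.authkey/keys.authkeylen -> authentication (HMAC) key,
	 * keys.enckey/keys.enckeylen   -> encryption key */

	/* ... program the hardware / sub-transforms ... */
	return 0;
}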
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
i.e. when communication happens over a network.
Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g,
on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
and up to ~850 cyc for a full match (cold). This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.
This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). Iow, best and worst case
behaviour take the same amount of time to complete (in contrast to
memcmp).
Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.
We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.
crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.
Using #pragma or __attribute__ optimization annotations of the code
for disabling optimization was avoided as it seems to be considered
broken or unmaintained for a long time in GCC [1]. Therefore, we work
around that by specifying the compile flag for memneq.o directly in
the Makefile. We found that this seems to be most appropriate.
As we use ("Os"), this patch also provides a loop-free "fast-path" for
frequently used 16 byte digests. Similarly to kernel library string
functions, leave an option for future even further optimized architecture
specific assembler implementations.
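A minimal usage sketch of the intended conversion at a call site (the
function and variable names are hypothetical):

#include <crypto/algapi.h>	/* crypto_memneq() */
#include <linux/errno.h>
#include <linux/types.h>

/* before: if (memcmp(calculated, received, authsize)) return -EBADMSG;
 * which returns early on the first differing byte and leaks timing */
static int demo_verify_tag(const u8 *calculated, const u8 *received,
			   unsigned int authsize)
{
	/* crypto_memneq() only answers equal/not-equal, in constant time */
	return crypto_memneq(calculated, received, authsize) ? -EBADMSG : 0;
}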
This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].
[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131
Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Bit sliced AES gives around 45% speedup on Cortex-A15 for encryption
and around 25% for decryption. This implementation of the AES algorithm
does not rely on any lookup tables so it is believed to be invulnerable
to cache timing attacks.
This algorithm processes up to 8 blocks in parallel in constant time. This
means that it is not usable by chaining modes that are strictly sequential
in nature, such as CBC encryption. CBC decryption, however, can benefit from
this implementation and runs about 25% faster. The other chaining modes
implemented in this module, XTS and CTR, can execute fully in parallel in
both directions.
The core code has been adopted from the OpenSSL project (in collaboration
with the original author, on cc). For ease of maintenance, this version is
identical to the upstream OpenSSL code, i.e., all modifications that were
required to make it suitable for inclusion into the kernel have been made
upstream. The original can be found here:
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130
Note to integrators:
While this implementation is significantly faster than the existing table
based ones (generic or ARM asm), especially in CTR mode, the effects on
power efficiency are unclear as of yet. This code does fundamentally more
work, by calculating values that the table based code obtains by a simple
lookup; only by doing all of that work in a SIMD fashion, it manages to
perform better.
Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This patch fixes the lack of a license; otherwise x509_key_parser.ko taints the kernel.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Only public keys, with certificates signed by an existing
'trusted' key on the system trusted keyring, should be added
to a trusted keyring. This patch adds support for verifying
a certificate's signature.
This is derived from David Howells' pkcs7_request_asymmetric_key() patch.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: David Howells <dhowells@redhat.com>
The keyring expansion patches introduce a new search method by which
key_search() attempts to walk directly to the key that has exactly the same
description as the requested one.
However, this causes inexact matching of asymmetric keys to fail. The
solution to this is to select iterative rather than direct search as the
default search type for asymmetric keys.
As an example, the kernel might have a key like this:
Magrathea: Glacier signing key: 6a2a0f82bad7e396665f465e4e3e1f9bd24b1226
and:
keyctl search <keyring-ID> asymmetric id:d24b1226
should find the key, despite that not being its exact description.
Signed-off-by: David Howells <dhowells@redhat.com>
Remove the certificate date checks that are performed when a certificate is
parsed. There are two checks: a valid from and a valid to. The first check is
causing a lot of problems with system clocks that don't keep good time and the
second places an implicit expiry date upon the kernel when used for module
signing, so do we really need them?
Signed-off-by: David Howells <dhowells@redhat.com>
cc: David Woodhouse <dwmw2@infradead.org>
cc: Rusty Russell <rusty@rustcorp.com.au>
cc: Josh Boyer <jwboyer@redhat.com>
cc: Alexander Holler <holler@ahsoftware.de>
cc: stable@vger.kernel.org
Handle certificates that lack an authorityKeyIdentifier field by assuming
they're self-signed and checking their signatures against themselves.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Check that the algorithm IDs obtained from the ASN.1 parse by OID lookup
correspond to algorithms that are available to us.
Reported-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Embed a public_key_signature struct in struct x509_certificate, eliminating
now unnecessary fields, and split x509_check_signature() to create a filler
function for it that attaches a digest of the signed data and an MPI that
represents the signature data. x509_free_certificate() is then modified to
deal with these.
Whilst we're at it, export both x509_check_signature() and the new
x509_get_sig_params().
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Modify public_key_verify_signature() so that it now takes a public_key struct
rather than a key struct and supply a wrapper that takes a key struct. The
wrapper is then used by the asymmetric key subtype and the modified function is
used by X.509 self-signature checking and can be used by other things also.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Store public key algo ID in public_key struct for reference purposes. This
allows it to be removed from the x509_certificate struct and used to find a
default in public_key_verify_signature().
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Move the public-key algorithm pointer array from x509_public_key.c to
public_key.c as it isn't X.509 specific.
Note that to make this configure correctly, the public key part must be
dependent on the RSA module rather than the other way round. This needs a
further patch to make use of the crypto module loading stuff rather than using
a fixed table.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Rename the arrays of public key parameters (public key algorithm names, hash
algorithm names and ID type names) so that the array name ends in "_name".
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Move all users of ablk_helper under x86/ to the generic version
and delete the x86 specific version.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>