This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
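For illustration, a minimal sketch of the pattern (my_req_ctx is a made-up
request context; the old line in the comment is only an example of touching
the internals):

    /* Old style: reach into the transform directly, e.g.
     *     crypto_aead_crt(aead)->reqsize = sizeof(struct my_req_ctx);
     * New style: let the helper do it from the driver's init path. */
    crypto_aead_set_reqsize(aead, sizeof(struct my_req_ctx));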
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new primitive crypto_grab_spawn which is meant
to replace crypto_init_spawn and crypto_init_spawn2. Under the
new scheme the user no longer has to worry about reference counting
the alg object before it is subsumed by the spawn.
It is pretty much an exact copy of crypto_grab_aead.
Prior to calling this function spawn->frontend and spawn->inst
must have been set.
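A hedged usage sketch (all identifiers apart from crypto_grab_spawn() are
illustrative placeholders):

    spawn->frontend = frontend;     /* the crypto_type, set by the caller */
    spawn->inst = inst;             /* the instance being constructed */

    err = crypto_grab_spawn(spawn, target_name, type, mask);
    if (err)
        goto out_free_inst;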
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for changing how struct net is refcounted
on kernel sockets pass the knowledge that we are creating
a kernel socket from sock_create_kern through to sk_alloc.
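For reference, the shape of the plumbing as I understand the change (shown
for illustration only):

    /* sk_alloc() gains a 'kern' argument; sock_create_kern() passes 1,
     * the ordinary user-space socket() path passes 0. */
    struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
                          struct proto *prot, int kern);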
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change the crypto 842 compression alg to use the software 842 compression
and decompression library. Add the crypto driver_name as "842-generic".
Remove the fallback to LZO compression.
Previously, this crypto compression alg attempted 842 compression using
PowerPC hardware, and fell back to LZO compression and decompression if
the 842 PowerPC hardware was unavailable or failed. This should not
fall back to any other compression method, however; users of this crypto
compression alg can fall back if desired, and transparent fallback tricks
callers into thinking they are getting 842 compression when they actually
get LZO compression - the failure of the 842 hardware should not be
transparent to the caller.
The crypto compression alg for a hardware device also should not be located
in crypto/ so this is now a software-only implementation that uses the 842
software compression/decompression library.
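A hedged sketch of what the software-only compress callback boils down to
(the wrapper and context names are assumptions; sw842_compress() and
SW842_MEM_COMPRESS come from the 842 library as I understand it):

    #include <linux/sw842.h>

    struct crypto842_ctx {
        char wmem[SW842_MEM_COMPRESS];  /* scratch for sw842_compress() */
    };

    static int crypto842_compress(struct crypto_tfm *tfm,
                                  const u8 *src, unsigned int slen,
                                  u8 *dst, unsigned int *dlen)
    {
        struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);

        return sw842_compress(src, slen, dst, dlen, ctx->wmem);
    }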
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds a couple of test cases for CRC32 (not CRC32c) to
ensure that the generic and arch specific implementations
are in sync.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the test manager, there are a number of if-statements with expressions of
the form !x == y that incur warnings with gcc-5 of the following form:
../crypto/testmgr.c: In function '__test_aead':
../crypto/testmgr.c:523:12: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses]
if (!ret == template[i].fail) {
^
By converting the 'fail' member of struct aead_testvec and struct
cipher_testvec to a bool, we can get rid of the warnings.
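An illustrative stand-alone example of the pattern (not the kernel structs
themselves):

    #include <stdbool.h>

    struct testvec_demo {       /* hypothetical, mirrors the 'fail' member */
        bool fail;
    };

    static bool check(int ret, const struct testvec_demo *tv)
    {
        /* bool vs bool: well-defined, and gcc-5 no longer warns */
        return !ret == tv->fail;
    }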
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In testmgr, struct pcomp_testvec takes a non-const 'params' field, which
gets pointed at a const deflate_comp_params or deflate_decomp_params object. With
gcc-5 this incurs the following warnings:
In file included from ../crypto/testmgr.c:44:0:
../crypto/testmgr.h:28736:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_comp_params,
^
../crypto/testmgr.h:28748:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_comp_params,
^
../crypto/testmgr.h:28776:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_decomp_params,
^
../crypto/testmgr.h:28800:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_decomp_params,
^
Fix this by making the parameters pointer const and constifying the things
that use it.
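A minimal sketch of the shape of the fix (the remaining members of
struct pcomp_testvec are omitted and assumed unchanged):

    struct pcomp_testvec {
        const void *params;     /* was: void *params */
        unsigned int paramsize;
        /* other members unchanged */
    };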
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the user explicitly states that they don't care whether the
algorithm has been tested (type = CRYPTO_ALG_TESTED and mask = 0),
there is a corner case where we may erroneously return ENOENT.
This patch fixes it by correcting the logic in the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the user explicitly states that they don't care whether the
algorithm has been tested (type = CRYPTO_ALG_TESTED and mask = 0),
there is a corner case where we may erroneously return ENOENT.
This patch fixes it by correcting the logic in the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The commit 59afdc7b32143528524455039e7557a46b60e4c8 ("crypto:
api - Move module sig ifdef into accessor function") broke the
build when modules are completely disabled because we directly
dereference module->name.
This patch fixes this by using the accessor function module_name.
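The pattern, sketched for illustration (the surrounding print statement is
made up, not quoted from the patch):

    /* Instead of alg->cra_module->name, which does not build with
     * CONFIG_MODULES=n, use the accessor: */
    pr_info("%s\n", module_name(alg->cra_module));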
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'md/4.1' of git://neil.brown.name/md
Pull md updates from Neil Brown:
"More updates that usual this time. A few have performance impacts
which hould mostly be positive, but RAID5 (in particular) can be very
work-load ensitive... We'll have to wait and see.
Highlights:
- "experimental" code for managing md/raid1 across a cluster using
DLM. Code is not ready for general use and triggers a WARNING if
used. However it is looking good and mostly done and having in
mainline will help co-ordinate development.
- RAID5/6 can now batch multiple (4K wide) stripe_heads so as to
handle a full (chunk wide) stripe as a single unit.
- RAID6 can now perform read-modify-write cycles which should help
performance on larger arrays: 6 or more devices.
- RAID5/6 stripe cache now grows and shrinks dynamically. The value
set is used as a minimum.
- Resync is now allowed to go a little faster than the 'minimum' when
there is competing IO. How much faster depends on the speed of the
devices, so the effective minimum should scale with device speed to
some extent"
* tag 'md/4.1' of git://neil.brown.name/md: (58 commits)
md/raid5: don't do chunk aligned read on degraded array.
md/raid5: allow the stripe_cache to grow and shrink.
md/raid5: change ->inactive_blocked to a bit-flag.
md/raid5: move max_nr_stripes management into grow_one_stripe and drop_one_stripe
md/raid5: pass gfp_t arg to grow_one_stripe()
md/raid5: introduce configuration option rmw_level
md/raid5: activate raid6 rmw feature
md/raid6 algorithms: xor_syndrome() for SSE2
md/raid6 algorithms: xor_syndrome() for generic int
md/raid6 algorithms: improve test program
md/raid6 algorithms: delta syndrome functions
raid5: handle expansion/resync case with stripe batching
raid5: handle io error of batch list
RAID5: batch adjacent full stripe write
raid5: track overwrite disk count
raid5: add a new flag to track if a stripe can be batched
raid5: use flex_array for scribble data
md raid0: access mddev->queue (request queue member) conditionally because it is not set when accessed from dm-raid
md: allow resync to go faster when there is competing IO.
md: remove 'go_faster' option from ->sync_request()
...
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
Now that all fips_enabled users are including linux/fips.h directly
instead of getting it through internal.h, we can remove the fips.h
inclusions from internal.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h.
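For illustration, the intended include/usage pattern (the policy call is a
hypothetical placeholder):

    #include <linux/fips.h>     /* declares fips_enabled */

    if (fips_enabled)
        reject_non_approved_algorithm();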
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h which is reserved for
internal crypto API implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is currently a large ifdef FIPS code section in proc.c.
Ostensibly it's there because the fips_enabled sysctl sits under
/proc/sys/crypto. However, no other crypto sysctls exist.
In fact, the whole ethos of the crypto API is against such user
interfaces so this patch moves all the FIPS sysctl code over to
fips.c.
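A hedged sketch of what the relocated sysctl registration could look like in
fips.c (the table, header variable, and init hook names are assumptions, not
taken from the patch):

    static struct ctl_table crypto_sysctl_table[] = {
        {
            .procname       = "fips_enabled",
            .data           = &fips_enabled,
            .maxlen         = sizeof(int),
            .mode           = 0444,
            .proc_handler   = proc_dointvec
        },
        {}
    };

    static struct ctl_table_header *crypto_sysctls;

    static int __init fips_sysctl_init(void)
    {
        crypto_sysctls = register_sysctl("crypto", crypto_sysctl_table);
        return crypto_sysctls ? 0 : -ENOMEM;
    }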
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The header file internal.h is only meant for internal crypto API
implementors such as rng.c. So fips has no business in including
it.
This patch removes that inclusions and instead adds inclusions of
the actual features used by fips.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the unnecessary CRYPTO_FIPS ifdef from
drbg_healthcheck_sanity so that the code always gets checked
by the compiler.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
Currently we're hiding mod->sig_ok under an ifdef in open code.
This patch adds a module_sig_ok accessor function and removes that
ifdef.
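Roughly the shape such an accessor takes (illustrative; the authoritative
definition lives in include/linux/module.h):

    #ifdef CONFIG_MODULE_SIG
    static inline bool module_sig_ok(struct module *module)
    {
        return module->sig_ok;
    }
    #else
    static inline bool module_sig_ok(struct module *module)
    {
        return true;
    }
    #endif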
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
The function crypto_ahash_init can also be asynchronous just
like update and final. So all callers must be able to handle
an async return.
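The caller-side pattern, sketched for illustration (the completion-carrying
'tresult' structure is an assumption in the style of the test code):

    err = crypto_ahash_init(req);
    if (err == -EINPROGRESS || err == -EBUSY) {
        wait_for_completion(&tresult.completion);
        err = tresult.err;
    }
    if (err)
        goto out;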
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If we allocate a seed on behalf of the user in crypto_rng_reset,
we must ensure that it is zeroed afterwards or the RNG may be
compromised.
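A sketch of the shape of the fix (local variable names are assumed; the
wipe-before-free is the point):

    u8 *buf = NULL;

    if (!seed && slen) {
        buf = kmalloc(slen, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;
        get_random_bytes(buf, slen);
        seed = buf;
    }

    err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
    kzfree(buf);        /* zeroes the buffer before freeing it */
    return err;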
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that crypto_rng_reset takes a const argument, we no longer
need to cast away the const qualifier.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all rng implementations have switched over to the new
interface, we can remove the old low-level interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the ANSI CPRNG implementation to the new
low-level rng interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
The file internal.h is only meant to be used by internal API
implementations and not algorithm implementations. In fact it
isn't even needed here so this patch removes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
This patch converts the DRBG implementation to the new low-level
rng interface.
This allows us to get rid of struct drbg_gen by using the new RNG
API instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
This patch adds the helpers that allow the registration and removal
of multiple RNG algorithms.
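Expected helper signatures and a typical driver-style call, sketched for
illustration (my_rng_algs and the init function are made-up names):

    int crypto_register_rngs(struct rng_alg *algs, int count);
    void crypto_unregister_rngs(struct rng_alg *algs, int count);

    static int __init my_rng_mod_init(void)
    {
        return crypto_register_rngs(my_rng_algs, ARRAY_SIZE(my_rng_algs));
    }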
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the low-level crypto_rng interface to the
"new" style.
This allows existing implementations to be converted over one-
by-one. Once that is complete we can then remove the old rng
interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no reason why crypto_rng_reset should modify the seed
so this patch marks it as const. Since our algorithms don't
export a const seed function yet we have to go through some
contortions for now.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Glue it all together. The raid6 rmw path should work the same as the
already existing raid5 logic. So emulate the prexor handling/flags
and split functions as needed.
1) Enable xor_syndrome() in the async layer.
2) Split ops_run_prexor() into RAID4/5 and RAID6 logic. Xor the syndrome
at the start of an rmw run, as we did before for the single parity.
3) Take care of rmw run in ops_run_reconstruct6(). Again process only
the changed pages to get syndrome back into sync.
4) Enhance set_syndrome_sources() to fill NULL pages if we are in an rmw
run. The lower layers will calculate start & end pages from that and
call xor_syndrome() accordingly.
5) Adapt the several places where we ignored Q handling up to now.
Performance numbers for a single E5630 system with a mix of 10 7200k
desktop/server disks. 300 seconds of random writes with 8 threads onto a
3.2TB (10*400GB) RAID6 with a 64K chunk and no spare (group_thread_cnt=4):
bsize   rmw_level=1  rmw_level=0  rmw_level=1  rmw_level=0
        skip_copy=1  skip_copy=1  skip_copy=0  skip_copy=0
4K          115 KB/s     141 KB/s     165 KB/s     140 KB/s
8K          225 KB/s     275 KB/s     324 KB/s     274 KB/s
16K         434 KB/s     536 KB/s     640 KB/s     534 KB/s
32K         751 KB/s   1,051 KB/s   1,234 KB/s   1,045 KB/s
64K       1,339 KB/s   1,958 KB/s   2,282 KB/s   1,962 KB/s
128K      2,673 KB/s   3,862 KB/s   4,113 KB/s   3,898 KB/s
256K      7,685 KB/s   7,539 KB/s   7,557 KB/s   7,638 KB/s
512K     19,556 KB/s  19,558 KB/s  19,652 KB/s  19,688 KB/s
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This patch adds the new top-level function crypto_rng_generate
which generates random numbers with additional input. It also
extends the mid-level rng_gen_random function to take additional
data as input.
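A hedged usage sketch ('additional' is arbitrary caller-supplied data):

    err = crypto_rng_generate(rng, additional, addtl_len, out, outlen);

    /* crypto_rng_get_bytes() stays the no-additional-input case, i.e.
     * roughly crypto_rng_generate(rng, NULL, 0, out, outlen). */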
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the top-level crypto_rng to the "new" style.
It was the last algorithm type added before we switched over
to the new way of doing things exemplified by shash.
All users will automatically switch over to the new interface.
Note that this patch does not touch the low-level interface to
rng implementations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a crypto_alg_extsize helper that can be used
by algorithm types such as pcompress and shash.
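A minimal sketch of what such a helper boils down to (the body in the
actual patch may differ, e.g. with respect to alignment handling):

    unsigned int crypto_alg_extsize(struct crypto_alg *alg)
    {
        return alg->cra_ctxsize;
    }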
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initialising the RNG in drbg_kcapi_init is a waste of precious
entropy because all users will immediately seed the RNG after
the allocation.
In fact, all users should seed the RNG before using it. So there
is no point in doing the seeding in drbg_kcapi_init.
This patch removes the initial seeding and the user must seed
the RNG explicitly (as they all currently do).
This patch also changes drbg_kcapi_reset to allow reseeding.
That is, if you call it after a successful initial seeding, then
it will not reset the internal state of the DRBG before mixing
the new input and entropy.
If you still wish to reset the internal state, you can always
free the DRBG and allocate a new one.
Finally this patch removes locking from drbg_uninstantiate because
it's now only called from the destruction path which must not be
executed in parallel with normal operations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
As we moved the mutex init out of drbg_instantiate and into cra_init,
we need to explicitly initialise the mutex in drbg_healthcheck_sanity.
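A one-line sketch, assuming the lock member is named drbg_mutex:

    mutex_init(&drbg->drbg_mutex);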
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
As the DRBG does not operate on shadow copies of the DRBG instance
any more, the cipher handles only need to be allocated once during
initialization time and deallocated during uninstantiate time.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The creation of a shadow copy was intended to keep lock hold times
short. The drawback, however, is that parallel users end up with very
similar DRBG states that differ only by a high-resolution time stamp.
The DRBG will now hold a long term lock. Therefore, the lock is changed
to a mutex which implies that the DRBG can only be used in process
context.
The lock now guards the instantiation as well as the entire DRBG
generation operation. Therefore, multiple callers are fully serialized
when generating a random number.
As the locking is changed to use a long-term lock to avoid such similar
DRBG states, the entire creation and maintenance of a shadow copy can be
removed.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
drbg_generate returns 0 in the success case. That means that
drbg_generate_long will only ever generate drbg_max_request_bytes at
most. Longer requests will be truncated to drbg_max_request_bytes.
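An illustrative sketch of the intended loop shape (variable names are
assumed): advance by the chunk size requested rather than by
drbg_generate()'s return value, which is 0 on success.

    unsigned int len = 0;

    do {
        unsigned int chunk = min_t(unsigned int, buflen - len,
                                   drbg_max_request_bytes(drbg));
        int err = drbg_generate(drbg, buf + len, chunk, addtl);

        if (err)
            return err;
        len += chunk;
    } while (len < buflen);

    return 0;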
Reported-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The buffer used for temporary data must be cleared entirely. In AES192
the buffer used is drbg_statelen(drbg) + drbg_blocklen(drbg) in size, as
documented in the comment above drbg_ctr_df.
This patch ensures that the temp buffer is completely wiped.
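The fix boils down to wiping the full scratch area, e.g. (assuming the
buffer is called tmp):

    memset(tmp, 0, drbg_statelen(drbg) + drbg_blocklen(drbg));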
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 9c521a200bc3 ("crypto: api - remove instance when test failed")
tried to grab a module reference count before the module was even set.
Worse, it then goes on to free the module reference count after it is
set so you quickly end up with a negative module reference count which
prevents people from using any instances belonging to that module.
This patch moves the module initialisation before the reference
count.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The networking updates from David Miller removed the iocb argument from
sendmsg and recvmsg (in commit 1b784140474e: "net: Remove iocb argument
from sendmsg and recvmsg"), but the crypto code had added new instances
of them.
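For reference, the post-1b784140474e hook shapes as I understand them
(shown for illustration; the iocb argument is gone):

    int (*sendmsg)(struct socket *sock, struct msghdr *m, size_t total_len);
    int (*recvmsg)(struct socket *sock, struct msghdr *m, size_t total_len,
                   int flags);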
When I pulled the crypto update, it was a silent semantic mis-merge, and
I overlooked the new warning messages in my test-build. I try to fix
those in the merge itself, but that relies on me noticing. Oh well.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>