Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support",
  "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that
https references are still valid.
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move CPU-specific crypto/Kconfig entries to arch/xxx/crypto/Kconfig
and create a submenu for them under the Crypto API menu.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 's390-5.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Alexander Gordeev:
- Rework copy_oldmem_page() callback to take an iov_iter.
This includes a few prerequisite updates and fixes to the oldmem
reading code.
- Rework cpufeature implementation to allow for various CPU feature
indications, not limited to hardware capabilities but also covering
CPU facilities.
- Use the cpufeature rework to autoload Ultravisor module when CPU
facility 158 is available.
- Add ELF note type for encrypted CPU state of a protected virtual CPU.
The zgetdump tool from s390-tools package will decrypt the CPU state
using a Customer Communication Key and overwrite respective notes to
make the data accessible for crash and other debugging tools.
- Use vzalloc() instead of vmalloc() + memset() in ChaCha20 crypto
test.
- Fix incorrect recovery of kretprobe modified return address in
stacktrace.
- Switch the NMI handler to use generic irqentry_nmi_enter() and
irqentry_nmi_exit() helper functions.
- Rework the cryptographic Adjunct Processors (AP) pass-through design
to support dynamic changes to the AP matrix of a running guest as
well as to implement more of the AP architecture.
- Minor boot code cleanups.
- Grammar and typo fixes to hmcdrv and tape drivers.
* tag 's390-5.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (46 commits)
Revert "s390/smp: enforce lowcore protection on CPU restart"
Revert "s390/smp: rework absolute lowcore access"
Revert "s390/smp,ptdump: add absolute lowcore markers"
s390/unwind: fix fgraph return address recovery
s390/nmi: use irqentry_nmi_enter()/irqentry_nmi_exit()
s390: add ELF note type for encrypted CPU state of a PV VCPU
s390/smp,ptdump: add absolute lowcore markers
s390/smp: rework absolute lowcore access
s390/setup: rearrange absolute lowcore initialization
s390/boot: cleanup adjust_to_uv_max() function
s390/smp: enforce lowcore protection on CPU restart
s390/tape: fix comment typo
s390/hmcdrv: fix Kconfig "its" grammar
s390/docs: fix warnings for vfio_ap driver doc
s390/docs: fix warnings for vfio_ap driver lock usage doc
s390/crash: support multi-segment iterators
s390/crash: use static swap buffer for copy_to_user_real()
s390/crash: move copy_to_user_real() to crash_dump.c
s390/zcore: fix race when reading from hardware system area
s390/crash: fix incorrect number of bytes to copy to user space
...
Rework the cpufeature implementation to allow for various CPU feature
indications, not limited to hwcap bits. This is achieved by adding a
sequential list of CPU feature numbers, where each number is mapped to
an entry which indicates what this number is about.
Each entry contains a type member, which indicates which feature
name space to look into (e.g. hwcap, or CPU facility). If wanted, this
also allows modules to be loaded automatically only in e.g. z/VM
configurations.
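A minimal sketch of the idea, with illustrative names only (the enum,
struct and helper below are not the actual mainline identifiers):

  enum { TYPE_HWCAP, TYPE_FACILITY };

  struct cpu_feature_entry {
          unsigned int type;      /* which name space to consult */
          unsigned int num;       /* hwcap bit mask or facility number */
  };

  /* sequential feature numbers index into this table */
  static const struct cpu_feature_entry cpu_features[] = {
          [0] = { .type = TYPE_HWCAP,    .num = HWCAP_S390_VXRS },
          [1] = { .type = TYPE_FACILITY, .num = 158 }, /* Ultravisor */
  };

  static bool cpu_have_feature_sketch(unsigned int nr)
  {
          const struct cpu_feature_entry *e = &cpu_features[nr];

          if (e->type == TYPE_HWCAP)
                  return !!(elf_hwcap & e->num);
          return test_facility(e->num);
  }

With such a table the generic module_cpu_feature_match() machinery can
autoload a module exactly when its feature number is present.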
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Link: https://lore.kernel.org/r/20220713125644.16121-2-seiden@linux.ibm.com
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
When RDRAND was introduced, there was much discussion on whether it
should be trusted and how the kernel should handle that. Initially, two
mechanisms cropped up, CONFIG_ARCH_RANDOM, a compile time switch, and
"nordrand", a boot-time switch.
Later the thinking evolved. With a properly designed RNG, using RDRAND
values alone won't harm anything, even if the outputs are malicious.
Rather, the issue is whether those values are being *trusted* to be good
or not. And so a new set of options was introduced as the real
ones that people use -- CONFIG_RANDOM_TRUST_CPU and "random.trust_cpu".
With these options, RDRAND is used, but it's not always credited. So in
the worst case, it does nothing, and in the best case, maybe it helps.
Along the way, CONFIG_ARCH_RANDOM's meaning got sort of pulled into the
center and became something certain platforms force-select.
The old options don't really help with much, and it's a bit odd to have
special handling for these instructions when the kernel can deal fine
with the existence or untrusted existence or broken existence or
non-existence of that CPU capability.
Simplify the situation by removing CONFIG_ARCH_RANDOM and using the
ordinary asm-generic fallback pattern instead, keeping the two options
that are actually used. This leaves "nordrand" in place for now, as its
removal will take a different route.
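The asm-generic fallback pattern mentioned above amounts to stub
helpers that simply report "no data available", so callers need no
compile-time guard; a minimal sketch (not the verbatim header):

  /* architectures without a CPU RNG get these trivial stubs */
  static inline bool arch_get_random_long(unsigned long *v)
  {
          return false;
  }

  static inline bool arch_get_random_seed_long(unsigned long *v)
  {
          return false;
  }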
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
s390x appears to present two RNG interfaces:
- a "TRNG" that gathers entropy using some hardware function; and
- a "DRBG" that takes in a seed and expands it.
Previously, the TRNG was wired up to arch_get_random_{long,int}(), but
it was observed that this was being called really frequently, resulting
in high overhead. So it was changed to be wired up to
arch_get_random_seed_{long,int}(), which was a reasonable decision.
Later on, the DRBG was then wired up to arch_get_random_{long,int}(),
with a complicated buffer filling thread, to control overhead and rate.
Fortunately, none of the performance issues matter much now. The RNG
always attempts to use arch_get_random_seed_{long,int}() first, which
means a complicated implementation of arch_get_random_{long,int}() isn't
really valuable or useful to have around. And it's only used when
reseeding, which means it won't hit the high throughput complications
that were faced before.
So this commit returns to an earlier design of just calling the TRNG in
arch_get_random_seed_{long,int}(), and returning false in
arch_get_random_{long,int}().
Part of what makes the simplification possible is that the RNG now seeds
itself using the TRNG at bootup. But this only works if the TRNG is
detected early in boot, before random_init() is called. So this commit
also causes that check to happen in setup_arch().
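The resulting arch hooks are roughly the following (a sketch close to,
but not guaranteed to match, the final
arch/s390/include/asm/archrandom.h):

  static inline bool arch_get_random_long(unsigned long *v)
  {
          return false;   /* no fast path; only seeding uses the TRNG */
  }

  static inline bool arch_get_random_seed_long(unsigned long *v)
  {
          if (static_branch_likely(&s390_arch_random_available)) {
                  cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
                  return true;
          }
          return false;
  }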
Cc: stable@vger.kernel.org
Cc: Harald Freudenberger <freude@linux.ibm.com>
Cc: Ingo Franzki <ifranzki@linux.ibm.com>
Cc: Juergen Christ <jchrist@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20220610222023.378448-1-Jason@zx2c4.com
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Merge tag 's390-5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull more s390 updates from Heiko Carstens:
"Just a couple of small improvements, bug fixes and cleanups:
- Add Eric Farman as maintainer for s390 virtio drivers.
- Improve machine check handling, and avoid incorrectly injecting a
machine check into a kvm guest.
- Add cond_resched() call to gmap page table walker in order to avoid
possible huge latencies. Also use non-quiescing sske instruction to
speed up storage key handling.
- Add __GFP_NORETRY to KEXEC_CONTROL_MEMORY_GFP so s390 behaves
similarly to common code.
- Get sie control block address from correct stack slot in perf event
code. This fixes potential random memory accesses.
- Change uaccess code so that the exception handler sets the result
of get_user() and __get_kernel_nofault() to zero in case of a
fault. Until now this was done via input parameters for inline
assemblies. Doing it via fault handling is what most or even all
other architectures are doing.
- Couple of other small cleanups and fixes"
* tag 's390-5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/stack: add union to reflect kvm stack slot usages
s390/stack: merge empty stack frame slots
s390/uaccess: whitespace cleanup
s390/uaccess: use __noreturn instead of __attribute__((noreturn))
s390/uaccess: use exception handler to zero result on get_user() failure
s390/uaccess: use symbolic names for inline assembler operands
s390/mcck: isolate SIE instruction when setting CIF_MCCK_GUEST flag
s390/mm: use non-quiescing sske for KVM switch to keyed guest
s390/gmap: voluntarily schedule during key setting
MAINTAINERS: Update s390 virtio-ccw
s390/kexec: add __GFP_NORETRY to KEXEC_CONTROL_MEMORY_GFP
s390/Kconfig.debug: fix indentation
s390/Kconfig: fix indentation
s390/perf: obtain sie_block from the right address
s390: generate register offsets into pt_regs automatically
s390: simplify early program check handler
s390/crypto: fix scatterwalk_unmap() callers in AES-GCM
Merge tag 'v5.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Test in-place en/decryption with two sglists in testmgr
- Fix process vs softirq race in cryptd
Algorithms:
- Add arm64 acceleration for sm4
- Add s390 acceleration for chacha20
Drivers:
- Add polarfire soc hwrng support in mpsf
- Add support for TI SoC AM62x in sa2ul
- Add support for ATSHA204 cryptochip in atmel-sha204a
- Add support for PRNG in caam
- Restore support for storage encryption in qat
- Restore support for storage encryption in hisilicon/sec"
* tag 'v5.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
hwrng: omap3-rom - fix using wrong clk_disable() in omap_rom_rng_runtime_resume()
crypto: hisilicon/sec - delete the flag CRYPTO_ALG_ALLOCATES_MEMORY
crypto: qat - add support for 401xx devices
crypto: qat - re-enable registration of algorithms
crypto: qat - honor CRYPTO_TFM_REQ_MAY_SLEEP flag
crypto: qat - add param check for DH
crypto: qat - add param check for RSA
crypto: qat - remove dma_free_coherent() for DH
crypto: qat - remove dma_free_coherent() for RSA
crypto: qat - fix memory leak in RSA
crypto: qat - add backlog mechanism
crypto: qat - refactor submission logic
crypto: qat - use pre-allocated buffers in datapath
crypto: qat - set to zero DH parameters before free
crypto: s390 - add crypto library interface for ChaCha20
crypto: talitos - Uniform coding style with defined variable
crypto: octeontx2 - simplify the return expression of otx2_cpt_aead_cbc_aes_sha_setkey()
crypto: cryptd - Protect per-CPU resource by disabling BH.
crypto: sun8i-ce - do not fallback if cryptlen is less than sg length
crypto: sun8i-ce - rework debugging
...
The argument of scatterwalk_unmap() is supposed to be the void* that was
returned by the previous scatterwalk_map() call.
The s390 AES-GCM implementation was instead passing the pointer to the
struct scatter_walk.
This doesn't actually break anything because scatterwalk_unmap() only uses
its argument under CONFIG_HIGHMEM and ARCH_HAS_FLUSH_ON_KUNMAP.
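For illustration, the intended calling convention looks roughly like
this (a sketch, not the actual aes-gcm glue code):

  #include <crypto/scatterwalk.h>

  static void process_one_chunk(struct scatter_walk *walk, unsigned int n)
  {
          void *vaddr = scatterwalk_map(walk);    /* mapped kernel address */

          /* ... work on vaddr[0..n) ... */

          scatterwalk_unmap(vaddr);       /* the mapped address, not walk */
  }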
Fixes: bf7fa03870 ("s390/crypto: add s390 platform specific aes gcm support.")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Harald Freudenberger <freude@linux.ibm.com>
Link: https://lore.kernel.org/r/20220517143047.3054498-1-jannh@google.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Implement a crypto library interface for the s390-native ChaCha20 cipher
algorithm. This allows us to stop selecting CRYPTO_CHACHA20 and instead
select CRYPTO_ARCH_HAVE_LIB_CHACHA. With this, BIG_KEYS=y no longer pulls
in the whole ChaCha20 crypto infrastructure as a built-in, but builds the
smaller CRYPTO_LIB_CHACHA instead.
Make the CRYPTO_CHACHA_S390 config entry look like similar ones on other
architectures. Remove the CRYPTO_ALGAPI select, as it is selected by
CRYPTO_SKCIPHER anyway.
Add a new test module and a test script for ChaCha20 cipher and its
interfaces. Here are test results on an idle z15 machine:
Data | Generic crypto TFM | s390 crypto TFM | s390 lib
size | enc dec | enc dec | enc dec
-----+--------------------+------------------+----------------
512b | 1545ns 1295ns | 604ns 446ns | 430ns 407ns
4k | 9536ns 9463ns | 2329ns 2174ns | 2170ns 2154ns
64k | 149.6us 149.3us | 34.4us 34.5us | 33.9us 33.1us
6M | 23.61ms 23.11ms | 4223us 4160us | 3951us 4008us
60M | 143.9ms 143.9ms | 33.5ms 33.2ms | 32.2ms 32.1ms
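For reference, callers of the library interface use ChaCha20 directly
instead of going through a crypto TFM; a minimal sketch using the
helpers declared in <crypto/chacha.h> (assumed here, not quoted from
this patch):

  #include <crypto/chacha.h>

  static void chacha20_lib_sketch(u8 *dst, const u8 *src, unsigned int len,
                                  const u32 *key, const u8 *iv)
  {
          u32 state[CHACHA_STATE_WORDS];

          chacha_init(state, key, iv);            /* key: 8 x u32, iv: 16 bytes */
          chacha_crypt(state, dst, src, len, 20); /* 20 rounds = ChaCha20 */
  }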
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With z10 as the minimum supported machine generation, many ".insn" encodings
can now be converted to instruction names. There are a couple of exceptions:
- stfle is used from the als code built for z900 and cannot be converted
- a few ".insn" directives encode unsupported instruction formats
The generated code is identical before/after this change.
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Pull crypto updates from Herbert Xu:
"Algorithms:
- Drop alignment requirement for data in aesni
- Use synchronous seeding from the /dev/random in DRBG
- Reseed nopr DRBGs every 5 minutes from /dev/random
- Add KDF algorithms currently used by security/DH
- Fix lack of entropy on some AMD CPUs with jitter RNG
Drivers:
- Add support for the D1 variant in sun8i-ce
- Add SEV_INIT_EX support in ccp
- PFVF support for GEN4 host driver in qat
- Compression support for GEN4 devices in qat
- Add cn10k random number generator support"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (145 commits)
crypto: af_alg - rewrite NULL pointer check
lib/mpi: Add the return value check of kcalloc()
crypto: qat - fix definition of ring reset results
crypto: hisilicon - cleanup warning in qm_get_qos_value()
crypto: kdf - select SHA-256 required for self-test
crypto: x86/aesni - don't require alignment of data
crypto: ccp - remove unneeded semicolon
crypto: stm32/crc32 - Fix kernel BUG triggered in probe()
crypto: s390/sha512 - Use macros instead of direct IV numbers
crypto: sparc/sha - remove duplicate hash init function
crypto: powerpc/sha - remove duplicate hash init function
crypto: mips/sha - remove duplicate hash init function
crypto: sha256 - remove duplicate generic hash init function
crypto: jitter - add oversampling of noise source
MAINTAINERS: update SEC2 driver maintainers list
crypto: ux500 - Use platform_get_irq() to get the interrupt
crypto: hisilicon/qm - disable qm clock-gating
crypto: omap-aes - Fix broken pm_runtime_and_get() usage
MAINTAINERS: update caam crypto driver maintainers list
crypto: octeontx2 - prevent underflow in get_cores_bmap()
...
In the init functions of sha512 and sha384, use macros for the initial
hash values instead of raw numbers.
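Illustrative shape of the change (not the verbatim s390 code), using
the SHA512_H* constants from <crypto/sha2.h>:

  #include <crypto/sha2.h>

  static void sha512_set_initial_state(u64 *state)
  {
          state[0] = SHA512_H0;
          state[1] = SHA512_H1;
          state[2] = SHA512_H2;
          state[3] = SHA512_H3;
          state[4] = SHA512_H4;
          state[5] = SHA512_H5;
          state[6] = SHA512_H6;
          state[7] = SHA512_H7;
  }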
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The clgfi instruction used within the ChaCha20 assembly is only
available on z9-109 and newer machines, and will therefore generate a
compile error if the code is compiled e.g. with MARCH_Z900.
Given that the assembler code will only be executed on machines with
vector instructions, which became available much later than z9-109,
use ".insn" notation to generate the clgfi instruction and avoid
compile errors due to unknown instructions.
Fixes: b087dfab4d ("s390/crypto: add SIMD implementation for ChaCha20")
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Add an implementation of the ChaCha20 stream cipher (see e.g. RFC 7539)
that makes use of z13's vector instruction set extension.
The original implementation is by Andy Polyakov and has been
adapted for kernel use.
Four to six blocks are processed in parallel, resulting in a performance
gain for inputs >= 256 bytes.
chacha20-generic
1 operation in 622 cycles (256 bytes)
1 operation in 2346 cycles (1024 bytes)
chacha20-s390
1 operation in 218 cycles (256 bytes)
1 operation in 647 cycles (1024 bytes)
Cc: Andy Polyakov <appro@openssl.org>
Reviewed-by: Harald Freudenberger <freude@de.ibm.com>
Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
A review of the code showed that this function, which is exposed
within the whole kernel, should check the number of bytes
requested. If the requested number of bytes is too high, an
unsigned int overflow could happen, causing this function to
try to memcpy a really big memory chunk.
This is not a security issue, as there are only two invocations
of this function from arch/s390/include/asm/archrandom.h and
neither is exposed to userland.
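A minimal sketch of the added guard, with a hypothetical limit and
function name (the real bound and identifiers live in the s390
arch_random code):

  static bool fill_random_bytes(u8 *buf, unsigned int nbytes)
  {
          if (nbytes > MAX_RND_REQUEST)   /* hypothetical sane upper bound */
                  return false;           /* avoid unsigned overflow later */

          /* ... refill buffer as needed and memcpy() at most nbytes ... */
          return true;
  }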
Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Use store_tod_clock_ext() in order to be able to get rid of
get_tod_clock_ext().
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
A master key change on a CCA card may cause an immediately
following request to derive a protected key from a secure
key to fail with error condition 8/2290. The recommendation
from firmware is to retry after a 1 second sleep.
So the low level cca functions now return -EAGAIN when this
error condition is seen, and the paes retry function evaluates
the return value. On EAGAIN, when running in process context,
the code now sleeps for 1 second before retrying.
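A hedged sketch of the retry idea (the struct and function names are
illustrative, not the real pkey/cca helpers):

  static int sec2protkey_with_retry(struct key_blob *kb)
  {
          int rc;

          do {
                  rc = cca_sec2protkey_sketch(kb);   /* hypothetical call */
                  if (rc == -EAGAIN && !in_interrupt())
                          msleep(1000);   /* firmware suggests a 1 s pause */
          } while (rc == -EAGAIN && !in_interrupt());

          return rc;
  }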
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 's390-5.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Heiko Carstens:
- Add support for the hugetlb_cma command line option to allocate
gigantic hugepages using CMA
- Add arch_get_random_long() support.
- Add ap bus userspace notifications.
- Increase default size of vmalloc area to 512GB and otherwise let it
increase dynamically by the size of physical memory. This should fix
all occurrences where the vmalloc area was not large enough.
- Completely get rid of set_fs() (aka select SET_FS) and rework address
space handling while doing that, making address space handling much
simpler.
- Reimplement getcpu vdso syscall in C.
- Add support for extended SCLP responses (> 4k). This allows e.g.
handling of potentially large system configurations.
- Simplify KASAN by removing 3-level page table support and only
supporting 4-levels from now on.
- Improve debug-ability of the kernel decompressor code, which now
also prints stack traces and symbols to the console in case of
problems.
- Remove more power management leftovers.
- Other various fixes and improvements all over the place.
* tag 's390-5.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (62 commits)
s390/mm: add support to allocate gigantic hugepages using CMA
s390/crypto: add arch_get_random_long() support
s390/smp: perform initial CPU reset also for SMT siblings
s390/mm: use invalid asce for user space when switching to init_mm
s390/idle: fix accounting with machine checks
s390/idle: add missing mt_cycles calculation
s390/boot: add build-id to decompressor
s390/kexec_file: fix diag308 subcode when loading crash kernel
s390/cio: fix use-after-free in ccw_device_destroy_console
s390/cio: remove pm support from ccw bus driver
s390/cio: remove pm support from css-bus driver
s390/cio: remove pm support from IO subchannel drivers
s390/cio: remove pm support from chsc subchannel driver
s390/vmur: remove unused pm related functions
s390/tape: remove unsupported PM functions
s390/cio: remove pm support from eadm-sch drivers
s390: remove pm support from console drivers
s390/dasd: remove unused pm related functions
s390/zfcp: remove pm support from zfcp driver
s390/ap: let bus_register() add the AP bus sysfs attributes
...
The random longs to be pulled by arch_get_random_long() are
prepared in a 4K buffer which is filled from the NIST 800-90
compliant s390 drbg. By default the random long buffer is refilled
256 times before the drbg itself needs a reseed. The reseed of the
drbg is done with 32 bytes fetched from the high quality (but slow)
trng, which is assumed to deliver 100% entropy. So the 32 * 8 = 256
bits of entropy are spread over 256 * 4KB = 1MB, serving 131072
arch_get_random_long() invocations before a reseed is needed.
How often the 4K random long buffer is refilled with the drbg
before the drbg is reseeded can be adjusted. There is a module
parameter 's390_arch_rnd_long_drbg_reseed' accessible via
/sys/module/arch_random/parameters/rndlong_drbg_reseed
or as kernel command line parameter
arch_random.rndlong_drbg_reseed=<value>
This parameter tells how often the drbg fills the 4K buffer before
it is re-seeded by fresh entropy from the trng.
A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
KB with 32 bytes of fresh entropy pulled from the trng. So a value
of 16 would result in 256 bits entropy per 64 KB.
A value of 256 results in 1MB of drbg output before a reseed of the
drbg is done. So this would spread the 256 bits of entropy among 1MB.
Setting this parameter to 0 forces the reseed to take place every
time the 4K buffer is depleted, so the entropy rises to 256 bits
per 4K, or 0.5 bits of entropy per arch_get_random_long(). Setting
this parameter to a negative value disables all this effort:
arch_get_random_long() returns false, indicating that the
arch_get_random_long() feature is disabled altogether.
arch_get_random_long() is used by random.c among others to provide
an initial hash value to be mixed with the entropy pool on every
random data pull. For about every 64 bytes read from /dev/urandom
there is one call to arch_get_random_long(). So these additional
random long values affect the performance of /dev/urandom with a
measurable but low penalty.
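A hedged sketch of the refill/reseed policy described above (names are
illustrative, not the actual arch_random module code):

  static void refill_random_long_buffer(u8 *buf)  /* the 4K buffer */
  {
          static unsigned int refill_count;

          if (rndlong_drbg_reseed >= 0 &&
              ++refill_count > rndlong_drbg_reseed) {
                  drbg_reseed_from_trng(32);      /* hypothetical helper */
                  refill_count = 0;
          }
          drbg_generate(buf, PAGE_SIZE);          /* hypothetical helper */
  }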
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Instead of creating the sysfs attributes for the prng devices by hand,
describe them in .groups and let the miscdevice core handle it.
This also ensures that the attributes are available when the KOBJ_ADD
event is raised.
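A minimal sketch of the .groups pattern (the attribute and fops names
are illustrative):

  static struct attribute *prng_attrs[] = {
          &dev_attr_byte_counter.attr,    /* hypothetical attribute */
          NULL,
  };
  ATTRIBUTE_GROUPS(prng);

  static struct miscdevice prng_dev = {
          .name   = "prandom",
          .minor  = MISC_DYNAMIC_MINOR,
          .fops   = &prng_fops,           /* hypothetical fops */
          .groups = prng_groups,          /* created by the misc core */
  };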
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2,
and <crypto/sha3.h> contains declarations for SHA-3.
This organization is inconsistent, but more importantly SHA-1 is no
longer considered to be cryptographically secure. So to the extent
possible, SHA-1 shouldn't be grouped together with any of the other SHA
versions, and usage of it should be phased out.
Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and
<crypto/sha2.h>, and make everyone explicitly specify whether they want
the declarations for SHA-1, SHA-2, or both.
This avoids making the SHA-1 declarations visible to files that don't
want anything to do with SHA-1. It also prepares for potentially moving
sha1.h into a new insecure/ or dangerous/ directory.
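After the split, users state explicitly which family they need, e.g.:

  #include <crypto/sha2.h>   /* SHA-224/256/384/512 declarations only */
  /* #include <crypto/sha1.h>   only where SHA-1 is still unavoidable */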
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As said by Linus:
A symmetric naming is only helpful if it implies symmetries in use.
Otherwise it's actively misleading.
In "kzalloc()", the z is meaningful and an important part of what the
caller wants.
In "kzfree()", the z is actively detrimental, because maybe in the
future we really _might_ want to use that "memfill(0xdeadbeef)" or
something. The "zero" part of the interface isn't even _relevant_.
The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.
Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.
The renaming is done by using the command sequence:
git grep -w --name-only kzfree |\
xargs sed -i 's/kzfree/kfree_sensitive/'
followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.
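The backward compatibility shim mentioned above amounts to a one-line
alias, and callers are expected to migrate to the new name:

  /* transitional alias kept in slab.h */
  #define kzfree(x)       kfree_sensitive(x)

  static void drop_secret(void *secret)
  {
          kfree_sensitive(secret); /* zeroed via memzero_explicit() first */
  }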
[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]
Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
snprintf() returns the number of bytes that would be written,
which may be greater than the actual length to be written.
show() methods should return the number of bytes printed into the
buffer. This is the return value of scnprintf().
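A minimal sketch of the corrected pattern (the device attribute and
value are illustrative):

  static ssize_t value_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
  {
          /* scnprintf() returns the bytes actually written to buf */
          return scnprintf(buf, PAGE_SIZE, "%u\n", some_value);
  }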
Link: https://lkml.kernel.org/r/20200509085608.41061-2-chenzhou10@huawei.com
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Prefix the s390 SHA-1 functions with "s390_sha1_" rather than "sha1_".
This allows us to rename the library function sha_init() to sha1_init()
without causing a naming collision.
Cc: linux-s390@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
aes_s390.c has several functions which allocate space for key material on
the stack and leave the used keys there. It is considered good practice
to clean these locations before the function returns.
Link: https://lkml.kernel.org/r/20200221165511.GB6928@lst.de
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Extend the low level ep11 misc functions implementation by
several functions to support EP11 key objects for paes and pkey:
- EP11 AES secure key generation
- EP11 AES secure key generation from given clear key value
- EP11 AES secure key blob check
- findcard function returns a list of apqns based on given criteria
- EP11 AES secure key derive to CPACF protected key
Extend the pkey module to be able to generate and handle EP11
secure keys and also use them as base for deriving protected
keys for CPACF usage. These ioctls are extended to support
EP11 keys: PKEY_GENSECK2, PKEY_CLR2SECK2, PKEY_VERIFYKEY2,
PKEY_APQNS4K, PKEY_APQNS4KT, PKEY_KBLOB2PROTK2.
Additionally, the 'clear key' token to protected key conversion now
uses an EP11 card if the other ways (via PCKMO, via CCA) fail.
The PAES cipher implementation needed a new upper limit for
the max key size, but is now also working with EP11 keys.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
With this patch the paes ciphers accept AES clear key values of
size 16, 24 or 32 bytes. The key value is internally rearranged to form
a paes clear key token so that the pkey kernel module recognizes and
handles this key material as a source for protected keys.
Using clear key material as a source for protected keys is a security
risk, as the raw key material is kept in memory. However, it allows the
AES selftests provided with the testmgr to be run during registration
of the paes ciphers.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
There have been some findings during Eric Biggers rework of the
paes implementation which this patch tries to address:
A very minor finding within paes ctr: when the cpacf instruction
returns with only part of the data en/decrypted, walk_done() was
mistakenly called with the counter for all the data. Please note this
can only happen when kmctr returns because the protected key became
invalid in the middle of the operation, and this only occurs with
suspend and resume on a system with a different effective wrapping key.
Eric Biggers mentioned that the context struct within the tfm struct
may be shared among multiple kernel threads. So this is now reworked
to use a spinlock per context to protect reads and writes of the
protected key blob value. The en/decrypt functions copy the protected
key(s) at the beginning into a param struct and no longer work with
the protected key within the context. If the protected key in the
param struct becomes invalid, the key material is again converted to
protected key(s) and the context receives this update under the
spinlock. Race conditions are still possible and may result in writing
the very same protected key value more than once, so the spinlock
needs to make sure the protected key(s) within the context are updated
consistently.
The ctr page is now locked by a mutex instead of a spinlock. A similar
patch went into the aes_s390 code as a result of a complaint "sleeping
function called from invalid context at ...algapi.h". See
commit 1c2c7029c0 ("s390/crypto: fix possible sleep during spinlock
aquired") for more.
During testing with instrumented code, another issue with the xts
en/decrypt function was revealed: the retry cleared the running iv
value and thus led to wrongly en/decrypted data.
Tested and verified with additional testcases via AF_ALG interface and
additional selftests within the kernel (which will be made available
as soon as possible).
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
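For illustration, the suggested alternative would look roughly like
this (the key size constant and the policy check are hypothetical):

  static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                            unsigned int keylen)
  {
          if (keylen != EXAMPLE_KEY_SIZE)          /* hypothetical constant */
                  return -EINVAL;                  /* invalid for the algorithm */
          if (key_rejected_by_policy(key, keylen)) /* hypothetical check */
                  return -EKEYREJECTED;            /* e.g. "no weak keys" */

          /* ... install the key ... */
          return 0;
  }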
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
many more in arch/*/crypto/ and drivers/crypto/.
Some algorithms can even set this flag when the key is the correct
length. For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.
So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly. But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
s390_crypto_shash_parmsize()'s return type is int; it
should not be stored in an unsigned variable which is
then compared with zero.
Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: 3c2eb6b76c ("s390/crypto: Support for SHA3 via CPACF (MSA6)")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Joerg Schmidbauer <jschmidb@linux.vnet.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Convert the glue code for the S390 CPACF implementations of DES-ECB,
DES-CBC, DES-CTR, 3DES-ECB, 3DES-CBC, and 3DES-CTR from the deprecated
"blkcipher" API to the "skcipher" API. This is needed in order for the
blkcipher API to be removed.
Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.
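A hedged sketch of the target API shape (illustrative names, not the
actual s390 glue code), showing a skcipher registration where CTR wires
one handler to both directions:

  static struct skcipher_alg ctr_des_alg_sketch = {
          .base.cra_name          = "ctr(des)",
          .base.cra_blocksize     = 1,               /* CTR is a stream mode */
          .min_keysize            = DES_KEY_SIZE,
          .max_keysize            = DES_KEY_SIZE,
          .ivsize                 = DES_BLOCK_SIZE,
          .setkey                 = ctr_des_setkey,  /* hypothetical */
          .encrypt                = ctr_des_crypt,   /* same function ... */
          .decrypt                = ctr_des_crypt,   /* ... both directions */
  };
  /* registered with crypto_register_skcipher(&ctr_des_alg_sketch) */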
Signed-off-by: Eric Biggers <ebiggers@google.com>
reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the glue code for the S390 CPACF protected key implementations
of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated
"blkcipher" API to the "skcipher" API. This is needed in order for the
blkcipher API to be removed.
Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.
Signed-off-by: Eric Biggers <ebiggers@google.com>
reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the glue code for the S390 CPACF implementations of AES-ECB,
AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.
Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto updates from Herbert Xu:
"API:
- Add the ability to abort a skcipher walk.
Algorithms:
- Fix XTS to actually do the stealing.
- Add library helpers for AES and DES for single-block users.
- Add library helpers for SHA256.
- Add new DES key verification helper.
- Add surrounding bits for ESSIV generator.
- Add accelerations for aegis128.
- Add test vectors for lzo-rle.
Drivers:
- Add i.MX8MQ support to caam.
- Add gcm/ccm/cfb/ofb aes support in inside-secure.
- Add ofb/cfb aes support in media-tek.
- Add HiSilicon ZIP accelerator support.
Others:
- Fix potential race condition in padata.
- Use unbound workqueues in padata"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (311 commits)
crypto: caam - Cast to long first before pointer conversion
crypto: ccree - enable CTS support in AES-XTS
crypto: inside-secure - Probe transform record cache RAM sizes
crypto: inside-secure - Base RD fetchcount on actual RD FIFO size
crypto: inside-secure - Base CD fetchcount on actual CD FIFO size
crypto: inside-secure - Enable extended algorithms on newer HW
crypto: inside-secure: Corrected configuration of EIP96_TOKEN_CTRL
crypto: inside-secure - Add EIP97/EIP197 and endianness detection
padata: remove cpu_index from the parallel_queue
padata: unbind parallel jobs from specific CPUs
padata: use separate workqueues for parallel and serial work
padata, pcrypt: take CPU hotplug lock internally in padata_alloc_possible
crypto: pcrypt - remove padata cpumask notifier
padata: make padata_do_parallel find alternate callback CPU
workqueue: require CPU hotplug read exclusion for apply_workqueue_attrs
workqueue: unconfine alloc/apply/free_workqueue_attrs()
padata: allocate workqueue internally
arm64: dts: imx8mq: Add CAAM node
random: Use wait_event_freezable() in add_hwgenerator_randomness()
crypto: ux500 - Fix COMPILE_TEST warnings
...
This patch introduces sha3 support for s390.
- Rework the s390-specific SHA1 and SHA2 related code to
provide the basis for SHA3.
- Provide two new kernel modules sha3_256_s390 and
sha3_512_s390 together with new kernel options.
Signed-off-by: Joerg Schmidbauer <jschmidb@de.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
With 'extra run-time crypto self tests' enabled, the selftest
for s390-xts fails with
alg: skcipher: xts-aes-s390 encryption unexpectedly succeeded on
test vector "random: len=0 klen=64"; expected_error=-22,
cfg="random: inplace use_digest nosimd src_divs=[2.61%@+4006,
84.44%@+21, 1.55%@+13, 4.50%@+344, 4.26%@+21, 2.64%@+27]"
This special case with nbytes=0 is not handled correctly, and this
fix now makes sure that -EINVAL is returned when en/decrypt is
called with 0 bytes to en/decrypt.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Rename static / file-local functions so that they do not conflict with
the functions declared in crypto/sha256.h.
This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For correctness and compliance with the XTS-AES specification, we are
adding support for ciphertext stealing to XTS implementations, even
though no use cases are known that will be enabled by this.
Since the s390 implementation already has a fallback skcipher standby
for other purposes, let's use it for this purpose as well. If ciphertext
stealing use cases ever become a bottleneck, we can always revisit this.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Switch to the refactored DES key verification routines. While at it,
rename the DES encrypt/decrypt routines so they will not conflict with
the DES library later on.
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The context used to store the key blob in a fixed 80-byte buffer,
and the set_key functions did not even check the given key size.
With CCA variable-length AES cipher keys, key blobs of about 136
bytes appear, and in the future there may be a need to store even
bigger key blobs.
This patch reworks the paes set_key functions and the context
buffers to work with small key blobs (<= 128 bytes) directly in the
context buffer and larger blobs by allocating additional memory and
storing the pointer in the context buffer. If there has been memory
allocated for storing a key blob, it also needs to be freed on release
of the tfm. So all the paes ciphers now have an init and exit function
implemented for this job.
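A hedged sketch of the storage scheme described above (the struct and
threshold are illustrative, not the real paes code):

  #define INLINE_KEYBLOB_SIZE     128     /* small blobs stay in the context */

  struct keyblob_sketch {
          u8 inline_buf[INLINE_KEYBLOB_SIZE];
          u8 *big_buf;                    /* kmalloc'd for larger blobs */
          unsigned int len;
  };

  static int keyblob_store(struct keyblob_sketch *kb,
                           const u8 *blob, unsigned int len)
  {
          if (len <= sizeof(kb->inline_buf)) {
                  memcpy(kb->inline_buf, blob, len);
          } else {
                  kb->big_buf = kmemdup(blob, len, GFP_KERNEL);
                  if (!kb->big_buf)
                          return -ENOMEM;
          }
          kb->len = len;
          return 0;
  }

  static void keyblob_free(struct keyblob_sketch *kb)     /* tfm exit */
  {
          kfree_sensitive(kb->big_buf);
          kb->big_buf = NULL;
  }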
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>