Like the AESNI/AVX implementation, this patch adds an accelerated
implementation based on AESNI/AVX2. By reusing the AESNI/AVX
mode-related code, the amount of new code is greatly reduced. The
benchmark data show that at a block size of 1024 bytes, the AVX2
implementation is about 70% faster than the AVX implementation and
about 7.7 times faster than the pure software implementation
sm4-generic.
The main algorithm implementation comes from SM4 AES-NI work by
libgcrypt and Markku-Juhani O. Saarinen at:
https://github.com/mjosaarinen/sm4ni
This optimization supports the four SM4 modes ECB, CBC, CFB,
and CTR. Since CBC and CFB encryption cannot process multiple blocks
in parallel, the gain for those cases is limited.
Benchmark on an Intel i5-6200U at 2.30GHz, comparing three
implementations: pure software sm4-generic, aesni/avx acceleration,
and aesni/avx2 acceleration. The data comes from tcrypt modes 218
and 518. The columns are different block sizes in bytes, and the
figures are in Mb/s:
block-size | 16 64 128 256 1024 1420 4096
sm4-generic
ECB enc | 60.94 70.41 72.27 73.02 73.87 73.58 73.59
ECB dec | 61.87 70.53 72.15 73.09 73.89 73.92 73.86
CBC enc | 56.71 66.31 68.05 69.84 70.02 70.12 70.24
CBC dec | 54.54 65.91 68.22 69.51 70.63 70.79 70.82
CFB enc | 57.21 67.24 69.10 70.25 70.73 70.52 71.42
CFB dec | 57.22 64.74 66.31 67.24 67.40 67.64 67.58
CTR enc | 59.47 68.64 69.91 71.02 71.86 71.61 71.95
CTR dec | 59.94 68.77 69.95 71.00 71.84 71.55 71.95
sm4-aesni-avx
ECB enc | 44.95 177.35 292.06 316.98 339.48 322.27 330.59
ECB dec | 45.28 178.66 292.31 317.52 339.59 322.52 331.16
CBC enc | 57.75 67.68 69.72 70.60 71.48 71.63 71.74
CBC dec | 44.32 176.83 284.32 307.24 328.61 312.61 325.82
CFB enc | 57.81 67.64 69.63 70.55 71.40 71.35 71.70
CFB dec | 43.14 167.78 282.03 307.20 328.35 318.24 325.95
CTR enc | 42.35 163.32 279.11 302.93 320.86 310.56 317.93
CTR dec | 42.39 162.81 278.49 302.37 321.11 310.33 318.37
sm4-aesni-avx2
ECB enc | 45.19 177.41 292.42 316.12 339.90 322.53 330.54
ECB dec | 44.83 178.90 291.45 317.31 339.85 322.55 331.07
CBC enc | 57.66 67.62 69.73 70.55 71.58 71.66 71.77
CBC dec | 44.34 176.86 286.10 501.68 559.58 483.87 527.46
CFB enc | 57.43 67.60 69.61 70.52 71.43 71.28 71.65
CFB dec | 43.12 167.75 268.09 499.33 558.35 490.36 524.73
CTR enc | 42.42 163.39 256.17 493.95 552.45 481.58 517.19
CTR dec | 42.49 163.11 256.36 493.34 552.62 481.49 516.83
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Export the reusable functions in the SM4 AESNI/AVX implementation,
mainly the public functions, so that they can be used to build the
SM4 AESNI/AVX2 implementation and to eliminate unnecessary code
duplication. At the same time, minor fixes were made to make the
public functions more general.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the obsolete and ambiguous macro in_irq() with the new
macro in_hardirq().
Signed-off-by: Changbin Du <changbin.du@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To support runtime PM, use 'pci_set_power_state' to change the
device power state. This requires the platform to provide the ACPI
method _PS0 or _PR0, so check whether one of these methods is
supported and, if not, print an informational message.
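A minimal sketch of such a check, assuming a struct pci_dev *pdev is at hand (the helper name and message text are illustrative, not the actual driver code):

#include <linux/acpi.h>
#include <linux/pci.h>

static void check_power_method(struct pci_dev *pdev)
{
        acpi_handle handle = ACPI_HANDLE(&pdev->dev);

        /* pci_set_power_state() relies on the platform providing _PS0/_PR0 */
        if (!handle || (!acpi_has_method(handle, "_PS0") &&
                        !acpi_has_method(handle, "_PR0")))
                dev_info(&pdev->dev,
                         "_PS0/_PR0 not provided, runtime PM is not supported\n");
}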
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To avoid repeatedly obtaining 'qm' from 'filp', pass 'qm' directly to
the debugfs functions instead of 'filp'.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Accelerator devices support runtime PM to reduce power consumption.
This patch adds the runtime PM suspend/resume callbacks to the
accelerator devices.
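A minimal sketch of how such callbacks are typically wired up through dev_pm_ops (all names are illustrative, not the actual driver symbols):

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int acc_runtime_suspend(struct device *dev)
{
        /* put the accelerator into a low-power state */
        return 0;
}

static int acc_runtime_resume(struct device *dev)
{
        /* bring the accelerator back to full power and restore its state */
        return 0;
}

static const struct dev_pm_ops acc_pm_ops = {
        SET_RUNTIME_PM_OPS(acc_runtime_suspend, acc_runtime_resume, NULL)
};

/* hooked up from the PCI driver as: .driver.pm = &acc_pm_ops */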
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The accelerator devices support runtime PM, and reading registers
while a device is suspended causes an exception. Therefore, this
patch uses 'debugfs_create_file' instead of 'debugfs_create_regset32'
to create the debugfs file, so that the driver can check the device
status before reading the registers.
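A minimal sketch of the pattern with a seq_file based show callback (the context struct, register and names are placeholders):

#include <linux/debugfs.h>
#include <linux/io.h>
#include <linux/pm_runtime.h>
#include <linux/seq_file.h>

struct my_qm {                          /* hypothetical device context */
        struct device *dev;
        void __iomem *io_base;
};

static int acc_regs_show(struct seq_file *s, void *unused)
{
        struct my_qm *qm = s->private;
        int ret;

        /* wake the device up before touching its MMIO registers */
        ret = pm_runtime_resume_and_get(qm->dev);
        if (ret < 0)
                return ret;

        seq_printf(s, "status: 0x%x\n", readl(qm->io_base));

        pm_runtime_put(qm->dev);
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(acc_regs);

/* was: debugfs_create_regset32(...) */
/* now: debugfs_create_file("regs", 0444, parent, qm, &acc_regs_fops); */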
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tcrypt supports GCM/CCM mode, CMAC, CBCMAC, and speed tests of
the SM4 algorithm.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The GCM/CCM modes of the SM4 algorithm are defined in RFC 8998,
and the test case data also comes from RFC 8998.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are several places where the return value checks of crypto_aead_setkey()
and crypto_aead_setauthsize() were missing. It is necessary to add these checks.
At the same time, move the crypto_aead_setauthsize() call out of the loop;
it only needs to be called once after the transform is loaded.
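A minimal sketch of the intended pattern in a tcrypt-style speed test (names and error messages are illustrative):

#include <crypto/aead.h>
#include <linux/printk.h>

static int aead_speed_setup(struct crypto_aead *tfm, const u8 *key,
                            unsigned int keylen, unsigned int authsize)
{
        int ret;

        /* set the tag size once, right after the transform is loaded */
        ret = crypto_aead_setauthsize(tfm, authsize);
        if (ret) {
                pr_err("setauthsize() failed: %d\n", ret);
                return ret;
        }

        /* check the setkey() return value instead of ignoring it */
        ret = crypto_aead_setkey(tfm, key, keylen);
        if (ret) {
                pr_err("setkey() failed: %d\n", ret);
                return ret;
        }

        return 0;
}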
Fixes: 53f52d7aec ("crypto: tcrypt - Added speed tests for AEAD crypto alogrithms in tcrypt test suite")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the endian configuration of the hardware is wrong, the SEC
engine becomes faulty and reports empty messages, which affects the
normal operation of the hardware. The current software configuration
method cannot restore the faulty device. The endianness needs to be
configured according to the system properties, so fix it.
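A minimal sketch of deriving the engine endianness from the kernel configuration; this is only an assumed shape of the fix, and the register offset and control bit are placeholders, not the real SEC definitions:

#include <linux/bits.h>
#include <linux/io.h>

#define SEC_CTRL_REG    0x0             /* placeholder register offset */
#define SEC_ENDIAN_BE   BIT(0)          /* placeholder endian control bit */

static void sec_set_endian(void __iomem *base)
{
        u32 reg = readl(base + SEC_CTRL_REG);

        /* configure the engine according to how the kernel is built */
        if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
                reg |= SEC_ENDIAN_BE;
        else
                reg &= ~SEC_ENDIAN_BE;

        writel(reg, base + SEC_CTRL_REG);
}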
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Because a check was added to the algs registration process, the same
check also needs to be added to the abnormal exit path.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the VF is newer than the PF, it decides whether it is compatible or
not. In case it is compatible, store that information in the
vf.compatible flag in the accel_dev structure.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Suggested-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function adf_iov_putmsg() is only used inside the intel_qat module
and therefore should not be exported.
Remove EXPORT_SYMBOL for the function adf_iov_putmsg().
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is a race condition during shutdown in adf_disable_sriov() where
both the PF and the VF drivers are loaded on the host system.
The PF notifies a VF with a "RESTARTING" message due to which the VF
starts an asynchronous worker to stop and shutdown itself.
At the same time the PF calls pci_disable_sriov() which invokes the
remove() routine on the VF device driver triggering the shutdown flow
again.
This change fixes the problem by ensuring that the VF flushes the worker
that performs stop()/shutdown() before these two functions are called in
remove(). To make sure that no additional PF/VF messages are
processed by the VF, interrupts are disabled before flushing the
workqueue.
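A minimal sketch of the resulting ordering in the VF remove path (the context struct and workqueue are hypothetical):

#include <linux/workqueue.h>

struct my_vf_dev {                      /* hypothetical VF context */
        struct workqueue_struct *stop_wq;
};

static void my_vf_remove(struct my_vf_dev *vf)
{
        /* mask PF/VF interrupts so no new RESTARTING work can be queued */
        /* ... device specific interrupt masking ... */

        /* wait for an already queued stop()/shutdown() worker to finish */
        flush_workqueue(vf->stop_wq);

        /* only now run the regular stop()/shutdown() of remove() */
}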
Signed-off-by: Ahsan Atta <ahsan.atta@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All QAT GEN2 devices share the same register offset for masking interrupts,
so they don't need any complex device specific infrastructure.
Remove this function in favor of a constant in order to simplify the code.
Also, future generations may require a more complex device specific
handling, making the current approach obsolete anyway.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently all the functions related to the activation of the PFVF
protocol, both on PF and VF, include the direction specific "vf2pf"
name.
Replace the existing naming scheme with:
- a direction-agnostic naming, applying to both PF and VF, for the
function pointer ("pfvf")
- a direction-specific naming for the implementations ("pf2vf" or
"vf2pf")
In particular this patch renames:
- adf_pf_enable_vf2pf_comms() to adf_enable_pf2vf_comms()
- enable_vf2pf_comms() to enable_pfvf_comms()
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make sure all the steps in the initialization sequence are complete
before any completion event notification.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the IOV functions to the end of hw_data so that the PFVF related
functions are grouped together.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
At start and shutdown, VFs notify the PF about their state. These
notifications are carried out through a message exchange using the PFVF
protocol.
The current function names suggest that they perform init or shutdown
logic themselves. Fix the naming to better reflect their purpose.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the PF interrupt handler, the interrupt is disabled for a set of VFs
by writing to the interrupt source mask register, ERRMSK.
The interrupt is re-enabled in the bottom half handler by writing to the
same CSR. This is done through the functions enable_vf2pf_interrupts()
and disable_vf2pf_interrupts() which perform a read-modify-write
operation on the ERRMSK registers to mask and unmask the source of
interrupt.
There can be a race condition where the top half handler for one VF
interrupt runs just as the bottom half for another VF is about to
re-enable the interrupt. Depending on whether the top or bottom half
updates the CSR first, this would result either in a spurious interrupt
or in the interrupt not being re-enabled.
This patch protects the access of ERRMSK with a spinlock.
The functions adf_enable_vf2pf_interrupts() and
adf_disable_vf2pf_interrupts() have been changed to acquire a spin lock
before accessing and modifying the ERRMSK registers. These functions use
spin_lock_irqsave() to disable IRQs and avoid potential deadlocks.
In addition, the function adf_disable_vf2pf_interrupts_irq() has been
added. This uses spin_lock() and it is meant to be used in the top half
only.
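A minimal sketch of the spinlock-protected read-modify-write (the context struct and register offset are placeholders):

#include <linux/io.h>
#include <linux/spinlock.h>

struct my_pf_dev {                      /* hypothetical PF context */
        void __iomem *pmisc;
        spinlock_t vf2pf_csr_lock;      /* protects the ERRMSK RMW */
};

#define ERRMSK3_OFFSET  0x0             /* placeholder offset */

/* bottom half: disable IRQs while holding the lock to avoid deadlock
 * with the top half running on the same CPU
 */
static void my_enable_vf2pf_interrupts(struct my_pf_dev *pf, u32 vf_mask)
{
        unsigned long flags;
        u32 reg;

        spin_lock_irqsave(&pf->vf2pf_csr_lock, flags);
        reg = readl(pf->pmisc + ERRMSK3_OFFSET);
        writel(reg & ~vf_mask, pf->pmisc + ERRMSK3_OFFSET);
        spin_unlock_irqrestore(&pf->vf2pf_csr_lock, flags);
}

/* top half only: IRQs are already off, a plain spin_lock() is enough */
static void my_disable_vf2pf_interrupts_irq(struct my_pf_dev *pf, u32 vf_mask)
{
        u32 reg;

        spin_lock(&pf->vf2pf_csr_lock);
        reg = readl(pf->pmisc + ERRMSK3_OFFSET);
        writel(reg | vf_mask, pf->pmisc + ERRMSK3_OFFSET);
        spin_unlock(&pf->vf2pf_csr_lock);
}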
Signed-off-by: Kanchana Velusamy <kanchanax.velusamy@intel.com>
Co-developed-by: Marco Chiappero <marco.chiappero@intel.com>
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Interrupt code to enable interrupts from PF does not belong to the
protocol code, so move it to the interrupt handling specific file for
better code organization.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use reinit_completion() to set to a clean state a completion variable,
used to coordinate the VF to PF request-response flow, before every
new VF request.
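A minimal sketch of the request/response pattern with reinit_completion() (names and the timeout are illustrative):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct my_vf_dev {                      /* hypothetical VF context */
        struct completion msg_received;
};

static int my_vf_send_request(struct my_vf_dev *vf, u32 msg)
{
        /* reset the completion to a clean state before each new request */
        reinit_completion(&vf->msg_received);

        /* ... write msg to the VF2PF CSR ... */

        /* the PF response handler calls complete(&vf->msg_received) */
        if (!wait_for_completion_timeout(&vf->msg_received,
                                         msecs_to_jiffies(100)))
                return -ETIMEDOUT;

        return 0;
}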
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The PF driver uses the tasklet vf2pf_bh_tasklet to schedule a workqueue
to handle the vf2pf protocol (pf2vf_resp_wq).
Since the tasklet is only used to schedule the workqueue, this patch
removes it and schedules the pf2vf_resp_wq workqueue directly for the
top half.
Signed-off-by: Svyatoslav Pankratov <svyatoslav.pankratov@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Marco Chiappero <marco.chiappero@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename ADF_PFVF_COMPATIBILITY_VERSION to ADF_PFVF_COMPAT_THIS_VERSION
since it is used to indicate the current version of the PFVF protocol.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is a chance that the PFVF handler, adf_vf2pf_req_hndl(), runs
twice for the same request when multiple interrupts come simultaneously
from different VFs.
Since the source VF is identified by a positional bit set in the ERRSOU
registers and that is not cleared until the bottom half completes, new
top halves from other VFs may reschedule a second bottom half for
previous interrupts.
This patch solves the problem in the ISR by not considering sources
whose interrupts are already disabled (and whose processing is
therefore still pending), as indicated by the ERRMSK registers.
Also, move some definitions to where they are actually needed.
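A minimal sketch of the ISR-side filtering (register offsets are placeholders):

#include <linux/io.h>

#define ERRSOU3_OFFSET  0x0             /* placeholder: interrupt source CSR */
#define ERRMSK3_OFFSET  0x4             /* placeholder: interrupt mask CSR */

/* return the VF bits that are both pending and currently enabled */
static u32 pending_vf2pf_sources(void __iomem *pmisc)
{
        u32 sources = readl(pmisc + ERRSOU3_OFFSET);
        u32 disabled = readl(pmisc + ERRMSK3_OFFSET);

        /* a set mask bit means the source is already being handled by a
         * previously scheduled bottom half: do not schedule it again
         */
        return sources & ~disabled;
}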
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
QAT GEN2 devices suffer from a defect where the MSI interrupt can be
sent multiple times.
If the second (spurious) interrupt is handled before the bottom half
handler runs, then the extra interrupt is effectively ignored because
the bottom half is only scheduled once.
However, if the top half runs again after the bottom half runs, this
will appear as a spurious PF to VF interrupt.
This can be avoided by checking the interrupt mask register in addition
to the interrupt source register in the interrupt handler.
This patch is based on earlier work done by Conor McLoughlin.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Co-developed-by: Marco Chiappero <marco.chiappero@intel.com>
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The top half of the VF drivers handled only one interrupt source at a time.
If an interrupt for PF2VF and bundle occurred at the same time, the ISR
scheduled only the bottom half for PF2VF.
This patch fixes the VF top half so that if both sources of interrupt
trigger at the same time, both bottom halves are scheduled.
This patch is based on earlier work done by Conor McLoughlin.
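A minimal sketch of a top half that checks both sources and schedules both bottom halves (all names and the pending checks are placeholders):

#include <linux/interrupt.h>

struct my_vf_dev {                      /* hypothetical VF context */
        struct tasklet_struct pf2vf_bh;
        struct tasklet_struct resp_bh;
        bool pf2vf_pending;             /* placeholder for the PF2VF CSR check */
        bool bundle_pending;            /* placeholder for the bundle CSR check */
};

static irqreturn_t my_vf_isr(int irq, void *privdata)
{
        struct my_vf_dev *vf = privdata;
        bool handled = false;

        if (vf->pf2vf_pending) {
                tasklet_hi_schedule(&vf->pf2vf_bh);
                handled = true;
        }

        /* do not return early: the second source must be checked as well */
        if (vf->bundle_pending) {
                tasklet_hi_schedule(&vf->resp_bh);
                handled = true;
        }

        return handled ? IRQ_HANDLED : IRQ_NONE;
}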
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Marco Chiappero <marco.chiappero@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function adf_dev_init() ignores the error code reported by
enable_vf2pf_comms(). If the latter fails, e.g. because the VF is not
compatible with the PF, the load of the VF driver progresses anyway.
This patch changes adf_dev_init() so that the error code from
enable_vf2pf_comms() is returned to the caller.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Marco Chiappero <marco.chiappero@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable device interrupts after the setup of the interrupt handlers.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the empty implementation of sriov_configure() and set the
sriov_configure member of the pci_driver structure to NULL.
This way, if a user tries to enable VFs on a device, when kernel and
driver are built with CONFIG_PCI_IOV=n, the kernel reports an error
message saying that the driver does not support SRIOV configuration via
sysfs.
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the vf_mask type with unsigned long to avoid a stack-out-of-bounds
access. This is to fix the following warning reported by KASAN the first time
adf_msix_isr_ae() gets called.
[ 692.091987] BUG: KASAN: stack-out-of-bounds in find_first_bit+0x28/0x50
[ 692.092017] Read of size 8 at addr ffff88afdf789e60 by task swapper/32/0
[ 692.092076] Call Trace:
[ 692.092089] <IRQ>
[ 692.092101] dump_stack+0x9c/0xcf
[ 692.092132] print_address_description.constprop.0+0x18/0x130
[ 692.092164] ? find_first_bit+0x28/0x50
[ 692.092185] kasan_report.cold+0x7f/0x111
[ 692.092213] ? static_obj+0x10/0x80
[ 692.092234] ? find_first_bit+0x28/0x50
[ 692.092262] find_first_bit+0x28/0x50
[ 692.092288] adf_msix_isr_ae+0x16e/0x230 [intel_qat]
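A minimal sketch of why the type matters: the bitmap helpers scan whole unsigned long words, so a narrower variable on the stack makes them read past its end (illustrative, not the driver code):

#include <linux/bitops.h>
#include <linux/types.h>

static int first_pending_vf(u32 pending)
{
        /* BAD: find_first_bit() reads a full unsigned long (8 bytes on
         * 64-bit), but only 4 bytes of a u32 exist on the stack:
         *
         *     u32 small_mask = pending;
         *     return find_first_bit((unsigned long *)&small_mask, 32);
         */

        /* GOOD: give the helper a real unsigned long to scan */
        unsigned long vf_mask = pending;

        return find_first_bit(&vf_mask, BITS_PER_LONG);
}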
Fixes: ed8ccaef52 ("crypto: qat - Add support for SRIOV")
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Marco Chiappero <marco.chiappero@intel.com>
Reviewed-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
s/Enable/Disable/ when describing 'adf_disable_aer()'
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If an error occurs after an 'adf_enable_aer()' call, it must be undone by a
corresponding 'adf_disable_aer()' call, as already done in the remove
function.
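A minimal sketch of the probe error-path pattern (the stub functions stand in for adf_enable_aer()/adf_disable_aer() and a later init step):

#include <linux/pci.h>

static int enable_aer_stub(struct pci_dev *pdev) { return 0; }
static void disable_aer_stub(struct pci_dev *pdev) { }
static int later_init_stub(struct pci_dev *pdev) { return 0; }

static int my_probe(struct pci_dev *pdev)
{
        int ret;

        ret = enable_aer_stub(pdev);
        if (ret)
                return ret;

        ret = later_init_stub(pdev);
        if (ret)
                goto err_disable_aer;

        return 0;

err_disable_aer:
        disable_aer_stub(pdev);         /* undo what enable_aer did */
        return ret;
}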
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change the DMA mask from 64 to 48 for Gen2 devices as they cannot handle
addresses greater than 48 bits.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The wrappers in include/linux/pci-dma-compat.h should go away.
Replace 'pci_set_dma_mask/pci_set_consistent_dma_mask' by an equivalent
and less verbose 'dma_set_mask_and_coherent()' call.
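A minimal sketch combining the two changes above, i.e. the modern DMA API with a 48-bit mask for GEN2 devices (the error message is illustrative):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int set_dma_mask(struct pci_dev *pdev)
{
        int ret;

        /* GEN2 devices cannot address more than 48 bits */
        ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
        if (ret)
                dev_err(&pdev->dev, "No usable DMA configuration\n");

        return ret;
}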
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lockdep complains that in omap-aes, the list_lock is taken both with
softirqs enabled at probe time, and also in softirq context, which
could lead to a deadlock:
================================
WARNING: inconsistent lock state
5.14.0-rc1-00035-gc836005b01c5-dirty #69 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
ksoftirqd/0/7 [HC0[0]:SC1[3]:HE1:SE0] takes:
bf00e014 (list_lock){+.?.}-{2:2}, at: omap_aes_find_dev+0x18/0x54 [omap_aes_driver]
{SOFTIRQ-ON-W} state was registered at:
_raw_spin_lock+0x40/0x50
omap_aes_probe+0x1d4/0x664 [omap_aes_driver]
platform_probe+0x58/0xb8
really_probe+0xbc/0x314
__driver_probe_device+0x80/0xe4
driver_probe_device+0x30/0xc8
__driver_attach+0x70/0xf4
bus_for_each_dev+0x70/0xb4
bus_add_driver+0xf0/0x1d4
driver_register+0x74/0x108
do_one_initcall+0x84/0x2e4
do_init_module+0x5c/0x240
load_module+0x221c/0x2584
sys_finit_module+0xb0/0xec
ret_fast_syscall+0x0/0x2c
0xbed90b30
irq event stamp: 111800
hardirqs last enabled at (111800): [<c02a21e4>] __kmalloc+0x484/0x5ec
hardirqs last disabled at (111799): [<c02a21f0>] __kmalloc+0x490/0x5ec
softirqs last enabled at (111776): [<c01015f0>] __do_softirq+0x2b8/0x4d0
softirqs last disabled at (111781): [<c0135948>] run_ksoftirqd+0x34/0x50
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(list_lock);
<Interrupt>
lock(list_lock);
*** DEADLOCK ***
2 locks held by ksoftirqd/0/7:
#0: c0f5e8c8 (rcu_read_lock){....}-{1:2}, at: netif_receive_skb+0x6c/0x260
#1: c0f5e8c8 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x2c/0xdc
stack backtrace:
CPU: 0 PID: 7 Comm: ksoftirqd/0 Not tainted 5.14.0-rc1-00035-gc836005b01c5-dirty #69
Hardware name: Generic AM43 (Flattened Device Tree)
[<c010e6e0>] (unwind_backtrace) from [<c010b9d0>] (show_stack+0x10/0x14)
[<c010b9d0>] (show_stack) from [<c017c640>] (mark_lock.part.17+0x5bc/0xd04)
[<c017c640>] (mark_lock.part.17) from [<c017d9e4>] (__lock_acquire+0x960/0x2fa4)
[<c017d9e4>] (__lock_acquire) from [<c0180980>] (lock_acquire+0x10c/0x358)
[<c0180980>] (lock_acquire) from [<c093d324>] (_raw_spin_lock_bh+0x44/0x58)
[<c093d324>] (_raw_spin_lock_bh) from [<bf00b258>] (omap_aes_find_dev+0x18/0x54 [omap_aes_driver])
[<bf00b258>] (omap_aes_find_dev [omap_aes_driver]) from [<bf00b328>] (omap_aes_crypt+0x94/0xd4 [omap_aes_driver])
[<bf00b328>] (omap_aes_crypt [omap_aes_driver]) from [<c08ac6d0>] (esp_input+0x1b0/0x2c8)
[<c08ac6d0>] (esp_input) from [<c08c9e90>] (xfrm_input+0x410/0x1290)
[<c08c9e90>] (xfrm_input) from [<c08b6374>] (xfrm4_esp_rcv+0x54/0x11c)
[<c08b6374>] (xfrm4_esp_rcv) from [<c0838840>] (ip_protocol_deliver_rcu+0x48/0x3bc)
[<c0838840>] (ip_protocol_deliver_rcu) from [<c0838c50>] (ip_local_deliver_finish+0x9c/0xdc)
[<c0838c50>] (ip_local_deliver_finish) from [<c0838dd8>] (ip_local_deliver+0x148/0x1b0)
[<c0838dd8>] (ip_local_deliver) from [<c0838f5c>] (ip_rcv+0x11c/0x180)
[<c0838f5c>] (ip_rcv) from [<c077e3a4>] (__netif_receive_skb_one_core+0x54/0x74)
[<c077e3a4>] (__netif_receive_skb_one_core) from [<c077e588>] (netif_receive_skb+0xa8/0x260)
[<c077e588>] (netif_receive_skb) from [<c068d6d4>] (cpsw_rx_handler+0x224/0x2fc)
[<c068d6d4>] (cpsw_rx_handler) from [<c0688ccc>] (__cpdma_chan_process+0xf4/0x188)
[<c0688ccc>] (__cpdma_chan_process) from [<c068a0c0>] (cpdma_chan_process+0x3c/0x5c)
[<c068a0c0>] (cpdma_chan_process) from [<c0690e14>] (cpsw_rx_mq_poll+0x44/0x98)
[<c0690e14>] (cpsw_rx_mq_poll) from [<c0780810>] (__napi_poll+0x28/0x268)
[<c0780810>] (__napi_poll) from [<c0780c64>] (net_rx_action+0xcc/0x204)
[<c0780c64>] (net_rx_action) from [<c0101478>] (__do_softirq+0x140/0x4d0)
[<c0101478>] (__do_softirq) from [<c0135948>] (run_ksoftirqd+0x34/0x50)
[<c0135948>] (run_ksoftirqd) from [<c01583b8>] (smpboot_thread_fn+0xf4/0x1d8)
[<c01583b8>] (smpboot_thread_fn) from [<c01546dc>] (kthread+0x14c/0x174)
[<c01546dc>] (kthread) from [<c010013c>] (ret_from_fork+0x14/0x38)
...
The omap-des and omap-sham drivers appear to have a similar issue.
Fix this by using spin_{,un}lock_bh() around device list access in all
the probe and remove functions.
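A minimal sketch of the probe/remove side of the fix (the list, lock and struct names are placeholders for the driver's own):

#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(dev_list);             /* stands in for the driver's list */
static DEFINE_SPINLOCK(list_lock);      /* also taken from softirq context */

struct my_dev {
        struct list_head list;
};

static void my_probe_add(struct my_dev *dd)
{
        /* _bh variant: keep out the softirq path that also takes list_lock */
        spin_lock_bh(&list_lock);
        list_add_tail(&dd->list, &dev_list);
        spin_unlock_bh(&list_lock);
}

static void my_remove_del(struct my_dev *dd)
{
        spin_lock_bh(&list_lock);
        list_del(&dd->list);
        spin_unlock_bh(&list_lock);
}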
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
omap_crypto_cleanup() currently copies data from sg to orig if either
copy flag is set. However OMAP_CRYPTO_SG_COPIED means that sg refers
to the same pages as orig, truncated to len bytes. There is no need
to copy in this case.
Only copy data if the OMAP_CRYPTO_DATA_COPIED flag is set.
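A minimal sketch of the intended condition (the flag names follow the commit text, but their bit values and the function shape are placeholders):

#include <linux/bits.h>
#include <linux/scatterlist.h>

#define OMAP_CRYPTO_DATA_COPIED BIT(0)  /* data was copied to a new buffer */
#define OMAP_CRYPTO_SG_COPIED   BIT(1)  /* only the sg table was rebuilt */

static void crypto_cleanup_sketch(struct scatterlist *sg,
                                  struct scatterlist *orig,
                                  int len, unsigned long flags)
{
        /* SG_COPIED points at the same pages as orig: nothing to copy back */
        if (flags & OMAP_CRYPTO_DATA_COPIED)
                sg_copy_from_buffer(orig, sg_nents(orig), sg_virt(sg), len);

        /* the temporary buffer/sg table is freed for either COPIED flag */
}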
Fixes: 74ed87e7e7 ("crypto: omap - add base support library for common ...")
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Don't use "/**" to begin a comment that is not kernel-doc notation.
crypto/wp512.c:779: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
* The core Whirlpool transform.
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kunpeng930 hpre device supports dynamic clock gating. When doing tasks,
the algorithm core is opened, and when idle, the algorithm core is closed.
This patch enables hpre dynamic clock gating by writing hardware registers.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kunpeng930 sec device supports dynamic clock gating. When doing tasks,
the algorithm core is opened, and when idle, the algorithm core is closed.
This patch enables sec dynamic clock gating by writing hardware registers.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kunpeng930 zip device supports dynamic clock gating. When executing tasks,
the algorithm core is opened, and when idle, the algorithm core is closed.
This patch enables zip dynamic clock gating by writing hardware registers.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We should set the additional space to 0 in mpi_resize().
So use kcalloc() instead of kmalloc_array().
In lib/mpi/mpiutil.c:
/****************
* Resize the array of A to NLIMBS. the additional space is cleared
* (set to 0) [done by m_realloc()]
*/
int mpi_resize(MPI a, unsigned nlimbs)
As the comment above the kernel's mpi_resize() says, the additional
space needs to be set to 0, but when a->d is not NULL it is not cleared.
The kernel's mpi lib comes from libgcrypt, where the resize function
_gcry_mpi_resize() does set the additional space to 0.
This bug can cause MPI APIs that use mpi_resize() to return wrong
results whenever the additional space is used without being initialized.
If that condition is not met, the bug is not triggered.
Currently rsa, sm2 and dh use the mpi lib in the kernel and work fine,
so the bug is not triggered in those cases.
add_points_edwards(), however, uses the additional space directly, so it
would get a wrong result.
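A minimal sketch of the change inside mpi_resize() for the case where a->d is already allocated (simplified from the actual function):

#include <linux/errno.h>
#include <linux/mpi.h>
#include <linux/slab.h>
#include <linux/string.h>

static int mpi_resize_sketch(MPI a, unsigned int nlimbs)
{
        mpi_limb_t *p;

        if (a->d) {
                /* kcalloc() zeroes the whole allocation, so the limbs
                 * beyond the copied region start out as 0
                 */
                p = kcalloc(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
                if (!p)
                        return -ENOMEM;
                memcpy(p, a->d, a->alloced * sizeof(mpi_limb_t));
                kfree(a->d);
                a->d = p;
        }
        a->alloced = nlimbs;
        return 0;
}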
Fixes: cdec9cb516 ("crypto: GnuPG based MPI lib - source files (part 1)")
Signed-off-by: Hongbo Li <herberthbli@tencent.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The functions get_online_cpus() and put_online_cpus() have been
deprecated during the CPU hotplug rework. They map directly to
cpus_read_lock() and cpus_read_unlock().
Replace deprecated CPU-hotplug functions with the official version.
The behavior remains unchanged.
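A minimal sketch of the replacement (the protected section is illustrative):

#include <linux/cpu.h>
#include <linux/cpumask.h>

static void walk_online_cpus(void)
{
        int cpu;

        cpus_read_lock();               /* was: get_online_cpus() */
        for_each_online_cpu(cpu) {
                /* the online mask cannot change while the lock is held */
        }
        cpus_read_unlock();             /* was: put_online_cpus() */
}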
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The functions get_online_cpus() and put_online_cpus() have been
deprecated during the CPU hotplug rework. They map directly to
cpus_read_lock() and cpus_read_unlock().
Replace deprecated CPU-hotplug functions with the official version.
The behavior remains unchanged.
Cc: Gonglei <arei.gonglei@huawei.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: virtualization@lists.linux-foundation.org
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kfree_sensitive() is a kernel API that clears sensitive information
which should not be leaked to future users of the same memory object,
and then frees the memory. It is equivalent to the combination of
memzero_explicit() and kfree(). Thus, we can replace the
combination of the two APIs with a single kfree_sensitive() call.
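A minimal sketch of the substitution (the buffer name is illustrative):

#include <linux/slab.h>
#include <linux/types.h>

static void drop_key(u8 *key)
{
        /* before: memzero_explicit(key, key_len); kfree(key); */
        kfree_sensitive(key);   /* zeroes the whole allocation, then frees it */
}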
Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The "Arm True Random Number Generator Firmware Interface"[1] provides
an SMCCC based interface to a true hardware random number generator.
So far we are using it in arch_get_random_seed(), but it might be
useful to expose the entropy through the /dev/hwrng device as well. This
allows the quality of the implementation to be assessed, for example
with "rngtest" from the rng-tools package.
Add a simple platform driver implementing the hw_random interface.
The corresponding platform device is created by the SMCCC core code;
we just match it here by name and provide a module alias.
Since the firmware takes care of serialisation, this can happily
coexist with the arch_get_random_seed() bits.
[1] https://developer.arm.com/documentation/den0098/latest/
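A minimal sketch of such a driver, assuming the platform device is named "smccc_trng" and leaving the actual SMCCC call out of the read path:

#include <linux/hw_random.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int smccc_trng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
        /* issue the TRNG_RND SMCCC call here and copy the returned
         * entropy into buf; return the number of bytes produced
         */
        return 0;
}

static struct hwrng smccc_trng_rng = {
        .name = "smccc_trng",
        .read = smccc_trng_read,
};

static int smccc_trng_probe(struct platform_device *pdev)
{
        /* devm_: unregistered automatically when the device goes away */
        return devm_hwrng_register(&pdev->dev, &smccc_trng_rng);
}

static struct platform_driver smccc_trng_driver = {
        .driver = {
                .name = "smccc_trng",
        },
        .probe = smccc_trng_probe,
};
module_platform_driver(smccc_trng_driver);

MODULE_ALIAS("platform:smccc_trng");
MODULE_LICENSE("GPL");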
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
At the moment we probe for the Random Number Generator SMCCC service
and use it in the core code (arch_get_random). However the hardware
entropy can also be useful to access from userland, if only to assess
its quality.
Register a platform device when the SMCCC TRNG service is detected, to
allow a hw_random driver to hook onto this.
The function registering the device is deliberately written in a way
that allows expansion, so other services that could be exposed via a
platform device (or some other interface) can easily be added here.
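A minimal sketch of the detection-time registration (the availability helper is a placeholder for the SMCCC TRNG version query):

#include <linux/err.h>
#include <linux/init.h>
#include <linux/platform_device.h>

static bool smccc_trng_available(void)
{
        /* placeholder: query TRNG_VERSION over SMCCC and check the result */
        return true;
}

static int __init smccc_devices_init(void)
{
        struct platform_device *pdev;

        if (smccc_trng_available()) {
                pdev = platform_device_register_simple("smccc_trng", -1,
                                                       NULL, 0);
                if (IS_ERR(pdev))
                        return PTR_ERR(pdev);
        }

        /* further SMCCC-backed devices can be registered here later */
        return 0;
}
device_initcall(smccc_devices_init);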
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>