You cannot allocate crypto tfm objects in NORECLAIM or NOFS contexts.
The ecryptfs code currently does exactly that for the MD5 tfm.
This patch fixes it by preallocating the MD5 tfm in a safe context.
The MD5 tfm is also reentrant so this patch removes the superfluous
cs_hash_tfm_mutex.
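A minimal sketch of the approach (the field name and call site are
illustrative, not the exact ecryptfs code):

        /* Allocate once in a safe context, e.g. at mount/key setup time,
         * where reclaim into the FS is allowed: */
        crypt_stat->md5_tfm = crypto_alloc_shash("md5", 0, 0);
        if (IS_ERR(crypt_stat->md5_tfm))
                return PTR_ERR(crypt_stat->md5_tfm);

        /* The NOFS/NORECLAIM paths then use the preallocated tfm and,
         * since it is reentrant, need no lock around it. */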
Reported-by: Nicolas Boichat <drinkcat@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix Section mismatch warning in adf_exit_vf_wq()
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
IRQs need to be enabled when VFs go down in case some VF-to-PF
communication happens.
Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Before a VF sends a signal to the PF it should check whether the PF
is still running.
Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The vf2pf_init and vf2pf_exit routines are exactly the same for all VFs,
so move them to common code and reuse them.
Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
__GFP_REPEAT has rather weak semantics, and since its introduction around
2.6.12 it has been ignored for low order allocations.
lzo_init uses __GFP_REPEAT to allocate LZO1X_MEM_COMPRESS (16K). This is
an order-3 allocation request, and __GFP_REPEAT is ignored for this size,
as it is for all <= PAGE_ALLOC_COSTLY requests.
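Schematically, the allocation then becomes (surrounding flags are
illustrative):

        /* order-3 request: __GFP_REPEAT is a no-op here, so drop it */
        ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
                                    GFP_KERNEL | __GFP_NOWARN);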
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds support for the Hisilicon Random Number Generator (RNG),
which is found in the Hip04 and Hip05 SoCs.
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document the devicetree bindings for the random number generator found
on Hisilicon Hip04 and Hip05 SoCs.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Rob Herring <rob@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to the Freescale GPL driver code, there are two different
Security Controller (SCC) versions: SCC and SCC2.
The SCC is found on older i.MX SoCs, e.g. the i.MX25. This is the
version implemented and tested here.
As there is no publicly available documentation for this IP core,
all information about this unit is gathered from the GPL'ed driver
from Freescale.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the Security Controller (SCC) module to the dtsi.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add documentation for the Freescale Security Controller (SCC)
found on i.MX25 SoCs.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
VFs call adf_dev_stop() from a PF-to-VF interrupt bottom half.
This causes a "scheduling while atomic" oops, because it tries
to acquire a mutex to unregister crypto algorithms.
This patch fixes the issue by calling adf_dev_stop() asynchronously.
Changes in v2:
- change kthread to a work queue.
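A sketch of the v2 approach (struct and workqueue names are illustrative):

        static void adf_dev_stop_async(struct work_struct *work)
        {
                struct adf_vf_stop_data *stop_data =
                        container_of(work, struct adf_vf_stop_data, work);

                adf_dev_stop(stop_data->accel_dev); /* process context now */
                kfree(stop_data);
        }

        /* from the PF-to-VF interrupt bottom half: */
        INIT_WORK(&stop_data->work, adf_dev_stop_async);
        queue_work(adf_vf_stop_wq, &stop_data->work);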
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Direct inclusion of rwlock_types.h breaks RT; use spinlock_types.h instead.
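The change amounts to:

        -#include <linux/rwlock_types.h>
        +#include <linux/spinlock_types.h>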
Fixes: 553d2374db ("crypto: ccp - Support for multiple CCPs")
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Correct smasung.com to samsung.com.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When stopping devices, it is not enough to loop backwards.
We need to explicitly stop all VFs first.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The HMAC implementation allows setting the HMAC key independently from
the hashing operation. Therefore, the key only needs to be set when a
new key is generated.
This patch increases the speed of the HMAC DRBG by at least 35% depending
on the use case.
The patch is fully CAVS tested.
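Schematically (the condition tracking a freshly generated key is
illustrative):

        if (new_key)                    /* key was (re)generated? */
                crypto_shash_setkey(tfm, key, keylen);
        /* hash subsequent data without re-keying */
        crypto_shash_digest(desc, data, datalen, out);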
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current sun4i-ss driver could generate data corruption when ciphering/deciphering.
It occurs randomly at the end of the handled data.
No root cause has been found and the only way to remove it is to replace
all spin_lock_bh by their irq counterparts.
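Schematically, the conversion is (lock and flags names illustrative):

        - spin_lock_bh(&ss->slock);
        + spin_lock_irqsave(&ss->slock, flags);
          ...
        - spin_unlock_bh(&ss->slock);
        + spin_unlock_irqrestore(&ss->slock, flags);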
Fixes: 6298e94821 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Memcpying to a (null pointer + offset) will result
in memory corruption or undefined behaviour.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
FW requires the const_tab to be 1024-byte aligned.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's
byte count gets artificially extended as follows:
if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
        len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);
Within the following byte copying loop, this causes reads beyond that
SGE's allocated buffer:
BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650
at addr ffff8801e168d4d8
Read of size 1 by task systemd-udevd/721
[...]
Call Trace:
[<ffffffff818c4d35>] dump_stack+0xbc/0x117
[<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
[<ffffffff814af5d1>] ? print_section+0x61/0xb0
[<ffffffff814b1109>] print_trailer+0x179/0x2c0
[<ffffffff814bc524>] object_err+0x34/0x40
[<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0
[<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50
[<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70
[<ffffffff814c0ad1>] kasan_report+0x71/0xa0
[<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650
[<ffffffff814bf1a6>] __asan_load1+0x46/0x50
[<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650
[<ffffffff817f41b6>] rsa_verify+0x106/0x260
[<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0
[<ffffffff818edc79>] ? sg_init_table+0x29/0x50
[<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0
[<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0
[<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0
[<ffffffff81830cb0>] ? public_key_describe+0x80/0x80
[<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150
[<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
[<ffffffff814b83f0>] ? kfree+0x220/0x370
[<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50
[<ffffffff81830b5c>] verify_signature+0x7c/0xb0
[<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0
[<ffffffff813c391a>] system_verify_data+0xca/0x170
[<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
[<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
[<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
[...]
The exact purpose of the len extension isn't clear to me, but due to
its form, I suspect that it's a leftover somehow accounting for leading
zero bytes within the most significant output limb.
Note however that without that len adjustment, the total number of bytes
ever processed by the inner loop equals nbytes and thus, the last output
limb gets written at this point. Thus, the net effect of the len adjustment
cited above is just to keep the inner loop running for some more
iterations, namely < BYTES_PER_MPI_LIMB ones, reading some extra bytes from
beyond the last SGE's buffer and discarding them afterwards.
Fix this issue by purging the extension of len beyond the last input SGE's
buffer length.
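That is, the fix drops the quoted extension:

        -if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
        -        len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);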
Fixes: 2d4d1eea54 ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Within the byte reading loop in mpi_read_raw_from_sgl(), there are two
housekeeping indices used, z and x.
At all times, the index z represents the number of output bytes covered
by the input SGEs for which processing has completed so far. This includes
any leading zero bytes within the most significant limb.
The index x changes its meaning after the first outer loop's iteration
though: while processing the first input SGE, it represents
"number of leading zero bytes in most significant output limb" +
"current position within current SGE"
For the remaining SGEs OTOH, x corresponds just to
"current position within current SGE"
After all, it is only the sum of z and x that has any meaning for the
output buffer and thus, the
"number of leading zero bytes in most significant output limb"
part can be moved away from x into z from the beginning, opening up the
opportunity for cleaner code.
Before the outer loop iterating over the SGEs, don't initialize z with
zero, but with the number of leading zero bytes in the most significant
output limb. For the inner loop iterating over a single SGE's bytes,
get rid of the buf_shift offset to x's bounds and let x run from zero to
sg->length - 1.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as
follows:
nbits = nbytes * 8;
Afterwards, the number of leading zero bits of the first byte get
subtracted:
nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros));
However, count_leading_zeros() takes an unsigned long and thus,
the u8 gets promoted to an unsigned long.
Thus, the above doesn't subtract the number of leading zeros in the most
significant nonzero input byte from nbits, but the number of leading
zeros of the most significant nonzero input byte promoted to unsigned long,
i.e. BITS_PER_LONG - 8 too many.
Fix this by subtracting
count_leading_zeros(...) - (BITS_PER_LONG - 8)
from nbits only.
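That is:

        nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros))
                 - (BITS_PER_LONG - 8);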
Fixes: 2d4d1eea54 ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows:
nbits = nbytes * 8;
and redundantly cleared later on if nbytes == 0:
if (nbytes > 0)
        ...
else
        nbits = 0;
Purge this redundant clearing for the sake of clarity.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of
the input scatterlist are counted:
lzeros = 0;
for_each_sg(sgl, sg, ents, i) {
        ...
        if (/* sg contains nonzero bytes */)
                break;

        /* sg contains nothing but zeros here */
        ents--;
        lzeros = 0;
}
Later on, the total number of trailing nonzero bytes is calculated by
subtracting the number of leading zero bytes from the total number of input
bytes:
nbytes -= lzeros;
However, since lzeros gets reset to zero for each completely zero leading
sg in the loop above, it doesn't include those.
Besides wasting resources by allocating an output buffer that is too large,
this mistake propagates into the calculation of x, the number of
leading zeros within the most significant output limb:
x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;
What's more, the low order bytes of the output, equal in number to the
extra bytes in nbytes, are left uninitialized.
Fix this by adjusting nbytes for each completely zero leading scatterlist
entry.
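A sketch of the adjustment within the loop quoted above (at that point,
lzeros holds the fully-zero sg's byte count):

        /* sg contains nothing but zeros here */
        ents--;
        nbytes -= lzeros;
        lzeros = 0;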
Fixes: 2d4d1eea54 ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the nbytes local variable is calculated from the len argument
as follows:
... mpi_read_raw_from_sgl(..., unsigned int len)
{
        unsigned nbytes;
        ...
        if (!ents)
                nbytes = 0;
        else
                nbytes = len - lzeros;
        ...
}
Given that nbytes is derived from len in a trivial way and that the len
argument is shadowed by a local len variable in several loops, this is just
confusing.
Rename the len argument to nbytes and get rid of the nbytes local variable.
Do the nbytes calculation in place.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, mpi_read_buffer() writes full limbs to the output buffer
and moves memory around to purge leading zero limbs afterwards.
However, with
commit 9cbe21d8f8 ("lib/mpi: only require buffers as big as needed for
the integer")
the caller is only required to provide a buffer large enough to hold the
result without the leading zeros.
This might result in a buffer overflow for small MP numbers with leading
zeros.
Fix this by copying the result to its final destination within the output
buffer and not copying the leading zeros at all.
Fixes: 9cbe21d8f8 ("lib/mpi: only require buffers as big as needed for the integer")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the endian conversion from CPU order to BE is open coded in
mpi_read_buffer().
Replace this by the centrally provided cpu_to_be*() macros.
Copy from the temporary storage on the stack to the destination buffer
by means of memcpy().
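For instance, with 8-byte limbs the conversion becomes (illustrative):

        __be64 alimb = cpu_to_be64(limb);
        memcpy(p, &alimb, sizeof(alimb));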
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, if the number of leading zero bytes exceeds a complete limb,
mpi_read_buffer() skips the zero limbs by iterating over them limb-wise.
Instead of skipping the high order zero limbs within the copying loop,
adjust the loop's bounds.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the endian conversion from CPU order to BE is open coded in
mpi_write_sgl().
Replace this by the centrally provided cpu_to_be*() macros.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Within the copying loop in mpi_write_sgl(), we have
if (lzeros) {
        mpi_limb_t *limb1 = (void *)p - sizeof(alimb);
        mpi_limb_t *limb2 = (void *)p - sizeof(alimb)
                            + lzeros;
        *limb1 = *limb2;
        ...
}
where p points past the end of alimb2, which lives on the stack and
contains the current limb in BE order.
The purpose of the above is to shift the non-zero bytes of alimb2 to its
beginning in memory, i.e. to skip its leading zero bytes.
However, limb2 points somewhere into the middle of alimb2 and thus, reading
*limb2 pulls in lzeros bytes from beyond the end of alimb2.
Indeed, KASAN splats:
BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0
at addr ffff8800cb04f601
Read of size 8 by task systemd-udevd/391
page:ffffea00032c13c0 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x3fff8000000000()
page dumped because: kasan: bad access detected
CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G B L
4.5.0-next-20160316+ #12
[...]
Call Trace:
[<ffffffff8194889e>] dump_stack+0xdc/0x15e
[<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2
[<ffffffff814892b5>] ? __dump_page+0x185/0x330
[<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0
[<ffffffff814724cd>] ? kzfree+0x2d/0x40
[<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20
[<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0
[<ffffffff815109f1>] kasan_report+0x71/0xa0
[<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0
[<ffffffff8150ed34>] __asan_load8+0x64/0x70
[<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0
[<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620
[<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180
[<ffffffff8186e282>] rsa_verify+0x202/0x260
What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1,
the above will cause unaligned accesses, which is bad on non-x86 archs.
Fix the issue by preparing the starting point p for the upcoming copy
operation instead of shifting the source memory, i.e. alimb2.
Fixes: 2d4d1eea54 ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Within the copying loop in mpi_write_sgl(), we have
if (lzeros) {
        ...
        p -= lzeros;
        y = lzeros;
}
p = p - (sizeof(alimb) - y);
If lzeros == 0, then y == 0, too. Thus, lzeros gets subtracted and added
back again to p.
Purge this redundancy.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Within the copying loop in mpi_write_sgl(), we have
if (lzeros > 0) {
        ...
        lzeros -= sizeof(alimb);
}
However, at this point, lzeros < sizeof(alimb) holds. Make this fact
explicit by rewriting the above to
if (lzeros) {
        ...
        lzeros = 0;
}
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, if the number of leading zero bytes exceeds a complete limb,
mpi_write_sgl() skips them by iterating over them limb-wise.
However, it fails to adjust its internal leading zeros tracking variable,
lzeros, accordingly: it does a
        p -= sizeof(alimb);
        continue;
which should really have been a
        lzeros -= sizeof(alimb);
        continue;
Since lzeros never decreases if its initial value >= sizeof(alimb), nothing
gets copied by mpi_write_sgl() in that case.
Instead of skipping the high order zero limbs within the loop as shown
above, fix the issue by adjusting the copying loop's bounds.
Fixes: 2d4d1eea54 ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Sort the headers alphabetically to improve readability and to make
duplicates easier to spot.
Suggested-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
During crypto selftests on Odroid XU3 (Exynos5422), some of the
algorithms failed because source and destination buffers were not
aligned to the AES block size:
alg: skcipher: encryption failed on chunk test 1 for ecb-aes-s5p: ret=22
Handle such a case by copying the buffers to a new aligned and contiguous
space.
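A sketch of the workaround (simplified; the real code must also handle the
destination buffer and free the copies afterwards):

        if (!IS_ALIGNED(sg->offset, AES_BLOCK_SIZE)) {
                buf = (void *)__get_free_pages(GFP_ATOMIC, get_order(len));
                if (!buf)
                        return -ENOMEM;
                /* gather the scattered data into one contiguous buffer */
                sg_copy_to_buffer(sg, sg_nents(sg), buf, len);
                /* ... hand the aligned buffer to the engine instead ... */
        }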
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove unneeded inclusion of delay.h and get rid of indentation from
labels.
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gary will be taking over future development of the CCP driver, so add
him as a co-maintainer of the driver.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
creq->cache[] is an array inside the struct; it's not a pointer and it
can't be NULL.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OID_sha224 case is missing a break and it falls through
to the -ENOPKG error default. Since HASH_ALGO_SHA224 seems
to be supported, this looks like an unintentional missing break.
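The fix amounts to (illustrative; the assignment mirrors the neighbouring
cases):

        case OID_sha224:
                ctx->sinfo->sig.hash_algo = "sha224";
                break;  /* previously missing, fell through to -ENOPKG */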
Fixes: 07f081fb50 ("PKCS#7: Add OIDs for sha224, sha284 and sha512 hash algos and use them")
Cc: <stable@vger.kernel.org> # 4.2+
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A return statement at the end of a void function is useless.
The Coccinelle semantic patch used to make this change is as follows:
//<smpl>
@@
identifier f;
expression e;
@@
void f(...) {
<...
- return
  e;
...>
}
//</smpl>
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Get some build coverage of the S5P/Exynos AES H/W acceleration driver.
The driver uses DMA and devm_ioremap_resource(), so add DMA and IOMEM
dependencies for the compile testing.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Get some build coverage of the Exynos H/W random number generator
driver. The driver uses devm_ioremap_resource(), so add an IOMEM
dependency for the compile testing.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The driver enabled runtime PM but did not revert this on removal.
Re-binding the device triggered a warning:
exynos-rng 10830400.rng: Unbalanced pm_runtime_enable!
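A sketch of the fix (illustrative): balance the enable in the remove path.

        static int exynos_rng_remove(struct platform_device *pdev)
        {
                pm_runtime_dont_use_autosuspend(&pdev->dev);
                pm_runtime_disable(&pdev->dev);

                return 0;
        }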
Fixes: b329669ea0 ("hwrng: exynos - Add support for Exynos random number generator")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add proper error path (for disabling runtime PM) when registering of
hwrng fails.
Fixes: b329669ea0 ("hwrng: exynos - Add support for Exynos random number generator")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case of a timeout during a read operation, the exit path lacked a PM
runtime put. This could lead to an unbalanced runtime PM usage counter,
thus leaving the device in an active state.
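Schematically (condition name illustrative), the timeout path needs the
same put as the success path:

        if (timed_out) {
                pm_runtime_put(dev);    /* was missing on this exit path */
                return -ETIMEDOUT;
        }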
Fixes: d7fd6075a2 ("hwrng: exynos - Add timeout for waiting on init done")
Cc: <stable@vger.kernel.org> # v4.4+
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The driver uses pm_runtime_put_noidle() after initialization, so the
device might remain in an active state if the core does not read from it
(the read callback contains a regular runtime put). The put_noidle() was
probably chosen to avoid an unneeded suspend and resume cycle right after
initialization.
However, autosuspend is enabled for exactly this purpose, so it is safe
to do a regular runtime put just after initialization.
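Schematically (delay value illustrative), the init-time sequence then
becomes:

        pm_runtime_set_autosuspend_delay(dev, 100);
        pm_runtime_use_autosuspend(dev);
        pm_runtime_enable(dev);
        ...
        /* after init: a regular put instead of put_noidle(), so the
         * device can autosuspend once the delay expires */
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);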
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>