c626910f3f
Currently, the ahash API checks the alignment of all key and result buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to the message, its effect is much more limited than, e.g., that of the alignmask for "skcipher". Second, the key and result buffers are given as virtual addresses and cannot (in general) be DMA'ed into, so drivers end up having to copy to/from them in software anyway. As a result, it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align the message. But with one exception those are only used for shash algorithms being exposed via the ahash API, not for native ahashes, and aligning the message is not required in this case, especially now that alignmask support has been removed from shash. The exception is the n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from ahash. The benefit is that all the code to handle "misaligned" buffers in the ahash API goes away, reducing the overhead of the ahash API. This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
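To illustrate the point about key and result buffers, here is a minimal, hypothetical driver sketch. It is not taken from this patch, and the "example_*" names are invented; it only shows the pattern the commit message refers to: since the buffers are plain kernel virtual addresses, a driver can copy a possibly unaligned key into its own naturally aligned context with memcpy(), and write the digest to a possibly unaligned result buffer with the unaligned access helpers, with no alignmask handling in the API at all.

```c
/*
 * Hypothetical sketch only -- the "example_*" names do not exist in the
 * kernel.  A driver handles unaligned caller buffers itself: memcpy()
 * for the key, put_unaligned_le32() for the result.
 */
#include <linux/errno.h>
#include <linux/string.h>
#include <asm/unaligned.h>
#include <crypto/internal/hash.h>

#define EXAMPLE_KEY_SIZE	32
#define EXAMPLE_DIGEST_WORDS	8

struct example_tfm_ctx {
	u32 key[EXAMPLE_KEY_SIZE / sizeof(u32)];	/* naturally aligned copy */
};

static int example_setkey(struct crypto_ahash *tfm, const u8 *key,
			  unsigned int keylen)
{
	struct example_tfm_ctx *ctx = crypto_ahash_ctx(tfm);

	if (keylen != EXAMPLE_KEY_SIZE)
		return -EINVAL;

	/* 'key' is a virtual address with no alignment guarantee. */
	memcpy(ctx->key, key, keylen);
	return 0;
}

static void example_write_result(u8 *result,
				 const u32 state[EXAMPLE_DIGEST_WORDS])
{
	int i;

	/* 'result' may be unaligned; put_unaligned_le32() copes with that. */
	for (i = 0; i < EXAMPLE_DIGEST_WORDS; i++)
		put_unaligned_le32(state[i], result + 4 * i);
}
```

Whether a driver prefers a memcpy() into an aligned bounce buffer or the get/put_unaligned helpers is a per-driver choice, which is why the generic alignmask machinery adds no value for these buffers.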
api-aead.rst
api-akcipher.rst
api-digest.rst
api-intro.rst
api-kpp.rst
api-rng.rst
api-samples.rst
api-skcipher.rst
api.rst
architecture.rst
asymmetric-keys.rst
async-tx-api.rst
crypto_engine.rst
descore-readme.rst
devel-algos.rst
index.rst
intro.rst
userspace-if.rst