Herbert Xu [Sun, 9 Mar 2025 02:43:17 +0000 (10:43 +0800)]
crypto: acomp - Move stream management into scomp layer
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp. This relieves
users of having to manage large request objects and set up their
own per-cpu buffers in order to do so.
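A minimal sketch of the per-cpu stream pattern, with hypothetical names
(struct my_stream, my_stream_init); the real scomp internals differ:

	/* One stream per CPU, allocated once per algorithm, not per request. */
	struct my_stream {
		void *workmem;	/* scratch space for (de)compression */
	};

	static struct my_stream __percpu *my_streams;

	static int my_stream_init(void)
	{
		my_streams = alloc_percpu(struct my_stream);
		return my_streams ? 0 : -ENOMEM;
	}

	/* Callers must keep the CPU pinned while the stream is in use. */
	static struct my_stream *my_stream_get(void)
	{
		return this_cpu_ptr(my_streams);
	}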
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 9 Mar 2025 02:43:14 +0000 (10:43 +0800)]
crypto: scomp - Remove tfm argument from alloc/free_ctx
The tfm argument is completely unused and meaningless, as the
stream object is identical across all transforms of a given
algorithm. Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 9 Mar 2025 02:43:12 +0000 (10:43 +0800)]
crypto: api - Add cra_type->destroy hook
Add a cra_type->destroy hook so that resources can be freed after
the last user of a registered algorithm is gone.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ethan Carter Edwards [Sun, 9 Mar 2025 00:30:52 +0000 (19:30 -0500)]
crypto: artpec6 - change from kzalloc to kcalloc in artpec6_crypto_probe()
We are trying to get rid of all multiplications from allocation
functions to prevent potential integer overflows. Here the
multiplication is probably safe, but using kcalloc() is more
appropriate and improves readability. This patch has no effect
on runtime behavior.
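The change, shown generically with a hypothetical variable:

	/* Before: open-coded multiplication in the allocation size. */
	buf = kzalloc(n * sizeof(*buf), GFP_KERNEL);

	/* After: kcalloc() zeroes the memory and overflow-checks n * size. */
	buf = kcalloc(n, sizeof(*buf), GFP_KERNEL);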
Link: https://github.com/KSPP/linux/issues/162
Link: https://www.kernel.org/doc/html/next/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments
Signed-off-by: Ethan Carter Edwards <ethan@ethancedwards.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 8 Mar 2025 12:53:13 +0000 (20:53 +0800)]
crypto: skcipher - Make skcipher_walk src.virt.addr const
Mark the src.virt.addr field in struct skcipher_walk as a pointer
to const data. This guarantees that the user won't modify the data
through it; modifications should go through dst.virt.addr so that
flushing is done when necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 8 Mar 2025 12:45:25 +0000 (20:45 +0800)]
crypto: skcipher - Eliminate duplicate virt.addr field
Reuse the addr field from struct scatter_walk for skcipher_walk.
Keep the existing virt.addr fields, but make them const; users
access the mapped address through them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 8 Mar 2025 12:45:23 +0000 (20:45 +0800)]
crypto: scatterwalk - Add memcpy_sglist
Add memcpy_sglist which copies one SG list to another.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 8 Mar 2025 12:45:21 +0000 (20:45 +0800)]
crypto: scatterwalk - Change scatterwalk_next calling convention
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address. The length is returned directly.
Change the done functions to use this stored address instead of
getting it from the caller.
Split the address into two using a union. The user should only
access the const version so that it is never changed.
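Before and after, sketched from the description above (names abridged):

	/* Old convention: address returned, length via out-parameter. */
	addr = scatterwalk_next(&walk, total, &len);

	/* New convention: length returned, address stored in the walk. */
	len = scatterwalk_next(&walk, total);
	addr = walk.addr;	/* users see only the const view */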
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dionna Glaze [Sat, 8 Mar 2025 01:10:28 +0000 (12:10 +1100)]
crypto: ccp - Fix uAPI definitions of PSP errors
Additions to the error enum after the explicit 0x27 setting for
SEV_RET_INVALID_KEY lead to incorrect value assignments.
Use explicit values to match the manufacturer specifications more
clearly.
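The failure mode, sketched with hypothetical enum names:

	/* Implicit numbering: inserting a value shifts everything below it. */
	enum example_ret {
		EXAMPLE_INVALID_KEY = 0x27,
		EXAMPLE_NEW_ERROR,		/* silently 0x28 - fragile */
	};

	/* Explicit values stay pinned to the manufacturer specification. */
	enum example_ret_fixed {
		EXAMPLE_INVALID_KEY_FIXED = 0x27,
		EXAMPLE_NEW_ERROR_FIXED = 0x28,
	};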
Fixes: 3a45dc2b419e ("crypto: ccp: Define the SEV-SNP commands")
CC: stable@vger.kernel.org
Signed-off-by: Dionna Glaze <dionnaglaze@google.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Krzysztof Kozlowski [Fri, 7 Mar 2025 09:33:09 +0000 (10:33 +0100)]
dt-bindings: rng: rockchip,rk3588-rng: Drop unnecessary status from example
Device nodes are enabled by default, so no need for 'status = "okay"' in
the DTS example.
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lukas Wunner [Wed, 5 Mar 2025 17:14:32 +0000 (18:14 +0100)]
MAINTAINERS: Add Lukas & Ignat & Stefan for asymmetric keys
Herbert asks for long-term maintenance of everything under
crypto/asymmetric_keys/ and associated algorithms (ECDSA, GOST, RSA) [1].
Ignat has kindly agreed to co-maintain this with me going forward.
Stefan has agreed to be added as reviewer for ECDSA. He introduced it
in 2021 and has been meticulously providing reviews for 3rd party
patches anyway.
Retain David Howells' maintainer entry until he explicitly requests to
be removed. He originally introduced asymmetric keys in 2012.
RSA was introduced by Tadeusz Struk as an employee of Intel in 2015,
but he's changed jobs and last contributed to the implementation in 2016.
GOST was introduced by Vitaly Chikunov as an employee of Basealt LLC [2]
(Базальт СПО [3]) in 2019. This company is an OFAC sanctioned entity
[4][5], which makes employees ineligible as maintainers [6]. It's not
clear whether Vitaly is still working for Basealt; he did not immediately
respond to my e-mail. Since knowledge and use of GOST algorithms is
relatively limited outside the Russian Federation, assign "Odd fixes"
status for now.
[1] https://lore.kernel.org/r/Z8QNJqQKhyyft_gz@gondor.apana.org.au/
[2] https://prohoster.info/ru/blog/novosti-interneta/reliz-yadra-linux-5-2
[3] https://www.basealt.ru/
[4] https://ofac.treasury.gov/recent-actions/20240823
[5] https://sanctionssearch.ofac.treas.gov/Details.aspx?id=50178
[6] https://lore.kernel.org/r/7ee74c1b5b589619a13c6318c9fbd0d6ac7c334a.camel@HansenPartnership.com/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shashank Gupta [Wed, 5 Mar 2025 07:57:05 +0000 (13:27 +0530)]
crypto: octeontx2 - suppress auth failure screaming due to negative tests
This patch addresses an issue where authentication failures were being
erroneously reported due to negative test failures in the "ccm(aes)"
selftest.
Use pr_debug to suppress the unnecessary log noise from these
expected failures.
Signed-off-by: Shashank Gupta <shashankg@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
David Sterba [Tue, 4 Mar 2025 12:46:58 +0000 (13:46 +0100)]
MAINTAINERS: add myself to co-maintain ZSTD
Recently the ZSTD tree was removed from linux-next due to a lack of
updates over the last year [1]. As there are several users of ZSTD in
the kernel, we need to keep the code maintained and updated. I'll act
as a backup to get ZSTD upstream into Linux; Nick is OK with that [2].
[1] https://lore.kernel.org/all/20250216224200.50b9dd6a@canb.auug.org.au/
[2] https://github.com/facebook/zstd/issues/4262#issuecomment-2691527952
CC: Nick Terrell <terrelln@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christophe JAILLET [Mon, 3 Mar 2025 19:08:04 +0000 (20:08 +0100)]
crypto: virtio - Erase some sensitive memory when it is freed
virtcrypto_clear_request() does the same as the code here, but uses
kfree_sensitive() for one of the free operations.
So, better safe than sorry, use virtcrypto_clear_request() directly to
save a few lines of code and cleanly free the memory.
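For reference, the distinction kfree_sensitive() makes (field name
hypothetical):

	/* Plain kfree() may leave key material behind in freed memory. */
	kfree(vc_req->key_buf);

	/* kfree_sensitive() zeroizes the buffer before freeing it. */
	kfree_sensitive(vc_req->key_buf);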
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Lei Yang <leiyang@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dr. David Alan Gilbert [Sun, 29 Sep 2024 13:21:48 +0000 (14:21 +0100)]
async_xor: Remove unused 'async_xor_val'
async_xor_val has been unused since commit a7c224a820c3
("md/raid5: convert to new xor compution interface"). Remove it.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Thu, 6 Mar 2025 03:33:05 +0000 (19:33 -0800)]
crypto: skcipher - fix mismatch between mapping and unmapping order
Local kunmaps have to be unmapped in the opposite order from which they
were mapped. My recent change flipped the unmap order in the
SKCIPHER_WALK_DIFF case. Adjust the mapping side to match.
This fixes a WARN_ON_ONCE that was triggered when running the
crypto-self tests on a 32-bit kernel with
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y.
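The constraint, illustrated as a minimal sketch (not the skcipher code
itself):

	src = kmap_local_page(src_page);
	dst = kmap_local_page(dst_page);
	/* ... process the data ... */
	kunmap_local(dst);	/* local kmaps must be released in LIFO order */
	kunmap_local(src);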
Fixes: 95dbd711b1d8 ("crypto: skcipher - use the new scatterwalk functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Mon, 3 Mar 2025 03:09:06 +0000 (11:09 +0800)]
crypto: Kconfig - Select LIB generic option
Select the generic LIB options if the Crypto API algorithm is
enabled. Otherwise this may lead to a build failure as the Crypto
API algorithm always uses the generic implementation.
Fixes: 17ec3e71ba79 ("crypto: lib/Kconfig - Hide arch options from user")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202503022113.79uEtUuy-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202503022115.9OOyDR5A-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Fri, 28 Feb 2025 12:11:38 +0000 (13:11 +0100)]
crypto: lib/chachapoly - Drop dependency on CRYPTO_ALGAPI
The ChaCha20-Poly1305 library code uses the sg_miter API to process
input presented via scatterlists, except for the special case where the
digest buffer is not covered entirely by the same scatterlist entry as
the last byte of input. In that case, it uses scatterwalk_map_and_copy()
to access the memory in the input scatterlist where the digest is stored.
This results in a dependency on crypto/scatterwalk.c and therefore on
CONFIG_CRYPTO_ALGAPI, which is unnecessary, as the sg_miter API already
provides this functionality via sg_copy_to_buffer(). So use that
instead, and drop the dependencies on CONFIG_CRYPTO_ALGAPI and
CONFIG_CRYPTO.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Abhinaba Rakshit [Thu, 27 Feb 2025 20:15:54 +0000 (01:45 +0530)]
dt-bindings: crypto: qcom,prng: document QCS615
Document QCS615 compatible for True Random Number Generator.
Signed-off-by: Abhinaba Rakshit <quic_arakshit@quicinc.com>
Acked-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 27 Feb 2025 10:14:57 +0000 (18:14 +0800)]
crypto: acomp - Remove acomp request flags
The acomp request flags field duplicates the base request flags
and is confusing. Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 27 Feb 2025 10:14:55 +0000 (18:14 +0800)]
crypto: iaa - Test the correct request flag
Test the correct flags for the MAY_SLEEP bit.
Fixes: 2ec6761df889 ("crypto: iaa - Add support for deflate-iaa compression algorithm")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 27 Feb 2025 09:04:46 +0000 (17:04 +0800)]
crypto: lzo - Fix compression buffer overrun
Unlike the decompression code, the compression code in LZO never
checked for output overruns. It instead assumed that the caller
always provides enough buffer space, disregarding the buffer length
provided by the caller.
Add a safe compression interface that checks for the end of buffer
before each write. Use the safe interface in crypto/lzo.
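The shape of the safe interface, sketched (error code assumed):

	/* Unsafe: trusts the caller to have provided enough output space. */
	*out++ = byte;

	/* Safe: check the remaining space before each write. */
	if (out >= out_end)
		return LZO_E_OUTPUT_OVERRUN;
	*out++ = byte;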
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rob Herring (Arm) [Wed, 26 Feb 2025 21:42:43 +0000 (15:42 -0600)]
dt-bindings: crypto: inside-secure,safexcel: Allow dma-coherent
Some platforms like Marvell are cache coherent, so allow the
"dma-coherent" property.
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 25 Feb 2025 05:03:26 +0000 (13:03 +0800)]
crypto: api - Move struct crypto_type into internal.h
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:10 +0000 (14:46 +0530)]
crypto: tegra - Use HMAC fallback when keyslots are full
The intermediate results for HMAC are stored in the allocated keyslot
by the hardware, so dynamic allocation of a keyslot during an operation
is not possible. As the number of keyslots in the hardware is limited,
fall back to the HMAC software implementation if no keyslot is available.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:09 +0000 (14:46 +0530)]
crypto: tegra - Reserve keyslots to allocate dynamically
The HW supports storing only 15 keys at a time. This limits the number
of tfms that can work without failures. Reserve keyslots to solve this
and use the reserved ones during the encryption/decryption operation.
This allows users to have hardware-protected keys and faster operations
when the number of tfms is small, while not halting the operation when
there are more tfms.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:08 +0000 (14:46 +0530)]
crypto: tegra - Set IV to NULL explicitly for AES ECB
The req->iv variable may hold stale values or a zero-sized buffer by
default and may end up getting used during encryption/decryption. This
in turn may corrupt the results or break the operation. Set req->iv
to NULL explicitly for algorithms like AES-ECB where no IV is used.
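The fix amounts to something like the following (flag name hypothetical):

	/* ECB has no IV; clear the pointer so stale data is never used. */
	if (ctx->alg_is_ecb)
		req->iv = NULL;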
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:07 +0000 (14:46 +0530)]
crypto: tegra - Fix CMAC intermediate result handling
Saving and restoring the intermediate results is needed whenever a
context switch is caused by another ongoing request on the same engine,
not only to support import/export functionality. Hence, save and
restore the intermediate result for every non-first task.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:06 +0000 (14:46 +0530)]
crypto: tegra - Fix HASH intermediate result handling
The intermediate hash values generated during an update task were
handled incorrectly in the driver. The values have a defined format for
each algorithm; blindly copying and pasting from the HASH_RESULT
register does not work for all the supported algorithms. This incorrect
handling causes failures when there is a context switch between multiple
operations.
To handle the expected format correctly, add a separate buffer for
storing the intermediate results for each request. Remove the previous
copy/paste functions which read/wrote to the registers directly. Instead
configure the hardware to get the intermediate result copied to the
buffer and use host1x path to restore the intermediate hash results.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:05 +0000 (14:46 +0530)]
crypto: tegra - Transfer HASH init function to crypto engine
The ahash init() function was called asynchronously to the crypto engine
queue. This could corrupt the request context if there is an ongoing
operation on the same request. Queue the init function on the crypto
engine as well so that this scenario is avoided.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:04 +0000 (14:46 +0530)]
crypto: tegra - check return value for hash do_one_req
Initialize and check the return value in hash *do_one_req() functions
and exit the function if there is an error. This fixes the
'uninitialized variable' warnings reported by testbots.
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/202412071747.flPux4oB-lkp@intel.com/
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:03 +0000 (14:46 +0530)]
crypto: tegra - finalize crypto req on error
Call the crypto finalize function before exiting *do_one_req() functions.
This allows the driver to take up further requests even if the previous
one fails.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:02 +0000 (14:46 +0530)]
crypto: tegra - Do not use fixed size buffers
Allocate the buffer based on the request size instead of using a fixed
buffer length. Operations that require a larger buffer would otherwise
fail.
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Akhil R [Mon, 24 Feb 2025 09:16:01 +0000 (14:46 +0530)]
crypto: tegra - Use separate buffer for setkey
The buffer which sends the commands to host1x was shared by all tasks
in the engine. This causes a problem with the setkey() function, as it
gets called asynchronously to the crypto engine queue. Modifying the same
cmdbuf in setkey() will corrupt the ongoing host1x task and in turn
break the encryption/decryption operation. Hence use a separate cmdbuf
for setkey().
Fixes: 0880bb3b00c8 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Sven Schwermer [Mon, 24 Feb 2025 07:42:25 +0000 (08:42 +0100)]
crypto: mxs-dcp - Only set OTP_KEY bit for OTP key
While MXS_DCP_CONTROL0_OTP_KEY is set, the CRYPTO_KEY (DCP_PAES_KEY_OTP)
is used even if the UNIQUE_KEY (DCP_PAES_KEY_UNIQUE) is selected. This
is not clearly documented, but this implementation is consistent with
NXP's downstream kernel fork and optee_os.
Signed-off-by: Sven Schwermer <sven@svenschwermer.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 8 Mar 2025 07:03:20 +0000 (15:03 +0800)]
Merge tag 'crypto-krb5-
20250303' of git://git./linux/kernel/git/dhowells/linux-fs.git
Pull Kerberos crypto library from David Howells:
crypto: Add Kerberos crypto lib
It does a couple of things:
(1) Provide an AEAD crypto driver, krb5enc, that mirrors the authenc
driver, but that hashes the plaintext, not the ciphertext. This was
made a separate module rather than just being a part of the authenc
driver because it has to do all of the constituent operations in the
opposite order - which impacts the async op handling.
Testmgr data is provided for AES+SHA2 and Camellia combinations of
authenc and krb5enc used by the krb5 library. AES+SHA1 is not
provided as the RFCs don't contain usable test vectors.
(2) Provide a Kerberos 5 crypto library. This is an extract from the
sunrpc driver as that code can be shared between sunrpc/nfs and
rxrpc/afs. This provides encryption, decryption, get MIC and verify
MIC routines that use and wrap the crypto functions, along with some
functions to provide layout management.
This supports AES+SHA1, AES+SHA2 and Camellia encryption types.
Self-testing is provided that goes further than is possible with
testmgr, doing subkey derivation as well.
The patches were previously posted here:
https://lore.kernel.org/r/20250203142343.248839-1-dhowells@redhat.com/
as part of a larger series, but the networking guys would prefer these to
go through the crypto tree. If you want them reposting independently, I
can do that.
David Howells [Mon, 3 Feb 2025 13:44:37 +0000 (13:44 +0000)]
crypto/krb5: Implement crypto self-testing
Implement self-testing infrastructure to test the pseudo-random function,
key derivation, encryption and checksumming.
Add the testing data from rfc8009 to test AES + HMAC-SHA2.
Add the testing data from rfc6803 to test Camellia. Note some encryption
test vectors here are incomplete, lacking the key usage number needed to
derive Ke and Ki, and there are errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Fri, 25 Sep 2020 11:24:50 +0000 (12:24 +0100)]
crypto/krb5: Implement the Camellia enctypes from rfc6803
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from
rfc6803.
Note that the test vectors in rfc6803 for encryption are incomplete,
lacking the key usage number needed to derive Ke and Ki, and there are
errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Mon, 3 Feb 2025 13:42:41 +0000 (13:42 +0000)]
crypto/krb5: Implement the AES enctypes from rfc8009
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192
enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto
scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 3 Sep 2020 11:05:04 +0000 (12:05 +0100)]
crypto/krb5: Implement the AES enctypes from rfc3962
Implement the aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96 enctypes
from rfc3962, using the rfc3961 kerberos 5 simplified crypto scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 24 Sep 2020 09:23:48 +0000 (10:23 +0100)]
crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic
Add functions that sign and verify a message according to rfc3961 sec 5.4,
using Kc to generate a checksum and insert it into the MIC field in the
skbuff in the sign phase, then checksumming the data and comparing it to
the MIC in the verify phase.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 24 Sep 2020 07:31:06 +0000 (08:31 +0100)]
crypto/krb5: Implement the Kerberos5 rfc3961 encrypt and decrypt functions
Add functions that encrypt and decrypt a message according to rfc3961 sec
5.3, using Ki to checksum the data to be secured and Ke to encrypt it
during the encryption phase, then decrypting with Ke and verifying the
checksum with Ki in the decryption phase.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Fri, 17 Jan 2025 13:20:34 +0000 (13:20 +0000)]
crypto/krb5: Provide RFC3961 setkey packaging functions
Provide functions to derive keys according to RFC3961 (or load the derived
keys for the selftester where only derived keys are available) and to
package them up appropriately for passing to a krb5enc AEAD setkey or a
hash setkey function.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 3 Sep 2020 11:05:04 +0000 (12:05 +0100)]
crypto/krb5: Implement the Kerberos5 rfc3961 key derivation
Implement the simplified crypto profile for Kerberos 5 rfc3961 with the
pseudo-random function, PRF(), from section 5.3 and the key derivation
function, DK() from section 5.1.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 3 Sep 2020 11:05:04 +0000 (12:05 +0100)]
crypto/krb5: Provide infrastructure and key derivation
Provide key derivation interface functions and a helper to implement the
PRF+ function from rfc4402.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 9 Jan 2025 09:03:35 +0000 (09:03 +0000)]
crypto/krb5: Add an API to perform requests
Add an API by which users of the krb5 crypto library can perform crypto
requests, such as encrypt, decrypt, get_mic and verify_mic. These
functions take the previously prepared crypto objects to work on.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Fri, 17 Jan 2025 13:29:24 +0000 (13:29 +0000)]
crypto/krb5: Add an API to alloc and prepare a crypto object
Add an API by which users of the krb5 crypto library can get an allocated
and keyed crypto object.
For encryption-mode operation, an AEAD object is returned; for
checksum-mode operation, a synchronous hash object is returned.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 19 Nov 2020 09:46:48 +0000 (09:46 +0000)]
crypto/krb5: Add an API to query the layout of the crypto section
Provide some functions to allow the caller to find out about the layout of
the crypto section:
(1) Calculate, for a given size of data, how big a buffer will be
required to hold it and where the data will be within it.
(2) Calculate, for a given size of buffer, the maximum size of data
that will fit therein, and where it will start.
(3) Determine where the data will be in a received message.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Tue, 10 Nov 2020 17:00:54 +0000 (17:00 +0000)]
crypto/krb5: Implement Kerberos crypto core
Provide core structures, an encoding-type registry and basic module and
config bits for a generic Kerberos crypto library.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Thu, 9 Jan 2025 20:13:55 +0000 (20:13 +0000)]
crypto/krb5: Test manager data
Add Kerberos crypto tests to the test manager database. This covers:
camellia128-cts-cmac samples from RFC6803
camellia256-cts-cmac samples from RFC6803
aes128-cts-hmac-sha256-128 samples from RFC8009
aes256-cts-hmac-sha384-192 samples from RFC8009
but not:
aes128-cts-hmac-sha1-96
aes256-cts-hmac-sha1-96
as the test samples in RFC3962 don't seem to be suitable.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Fri, 17 Jan 2025 11:46:23 +0000 (11:46 +0000)]
crypto: Add 'krb5enc' hash and cipher AEAD algorithm
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash). This is required for a number of Kerberos 5 encoding
types.
[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned. It may be necessary to do this here too for performance reasons -
but there are considerations both ways:
(1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
x86_64 that should be used instead of having two ciphers;
(2) secondly, none of the hardware offload drivers seem to offer CTS
support (Intel QAT does not, for instance).
However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Tue, 10 Nov 2020 23:57:35 +0000 (23:57 +0000)]
crypto/krb5: Add some constants out of sunrpc headers
Add some constants from the sunrpc headers.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
David Howells [Fri, 14 Apr 2023 10:35:12 +0000 (11:35 +0100)]
crypto/krb5: Add API Documentation
Add API documentation for the kerberos crypto library.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Herbert Xu [Thu, 27 Feb 2025 07:48:39 +0000 (15:48 +0800)]
crypto: lib/Kconfig - Hide arch options from user
The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.
As wireguard, the primary user of all these options, is selecting
the arch options anyway, make the same selections at the lib/crypto
option level and hide the arch options from the user.
Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.
Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers). Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.
Fixes: 1047e21aecdf ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 23 Feb 2025 06:27:51 +0000 (14:27 +0800)]
crypto: skcipher - Use restrict rather than hand-rolling accesses
Rather than accessing 'alg' directly to avoid the aliasing issue
which leads to unnecessary reloads, use the __restrict keyword
to explicitly tell the compiler that there is no aliasing.
This generates equivalent if not superior code on x86 with gcc 12.
Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
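A minimal illustration of the technique, outside the skcipher code:

	struct alg { unsigned int blocksize; };
	struct walk { unsigned int total; };

	static unsigned int step(struct walk *w, const struct alg *__restrict alg)
	{
		/*
		 * __restrict tells the compiler that *alg is not aliased by
		 * *w, so alg->blocksize need not be reloaded after the store
		 * below.
		 */
		w->total -= alg->blocksize;
		return alg->blocksize;
	}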
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dr. David Alan Gilbert [Sun, 23 Feb 2025 00:56:46 +0000 (00:56 +0000)]
crypto: octeontx - Remove unused function otx_cpt_eng_grp_has_eng_type
otx_cpt_eng_grp_has_eng_type() was added in 2020 by commit d9110b0b01ff
("crypto: marvell - add support for OCTEON TX CPT engine") but has
remained unused. Remove it.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dr. David Alan Gilbert [Sun, 23 Feb 2025 00:53:54 +0000 (00:53 +0000)]
crypto: octeontx2 - Remove unused otx2_cpt_print_uc_dbg_info
otx2_cpt_print_uc_dbg_info() has been unused since 2023's commit
82f89f1aa6ca ("crypto: octeontx2 - add devlink option to set t106 mode").
Remove it, along with the get_engs_info() helper of which it is the
only user.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
J. Neuschäfer [Thu, 20 Feb 2025 11:57:35 +0000 (12:57 +0100)]
dt-bindings: crypto: Convert fsl,sec-2.0 to YAML
Convert the Freescale security engine (crypto accelerator) binding from
text form to YAML. The list of compatible strings reflects what was
previously described in prose; not all combinations occur in existing
devicetrees.
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Signed-off-by: J. Neuschäfer <j.ne@posteo.net>
Reviewed-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:41 +0000 (10:23 -0800)]
crypto: scatterwalk - don't split at page boundaries when !HIGHMEM
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous. To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.
For now, still limit segments to PAGE_SIZE. This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.
Even so, this change still helps a lot in cases where messages cross a
page boundary. For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary. These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer. That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code. This change solves the problem; all these
messages now get passed to the assembly code in one part.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:40 +0000 (10:23 -0800)]
crypto: scatterwalk - remove obsolete functions
Remove various functions that are no longer used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:39 +0000 (10:23 -0800)]
crypto: skcipher - use the new scatterwalk functions
Convert skcipher_walk to use the new scatterwalk functions.
This includes a few changes to exactly where the different parts of the
iteration happen. For example the dcache flush that previously happened
in scatterwalk_done() now happens in scatterwalk_dst_done() or in
memcpy_to_scatterwalk(). Advancing to the next sg entry now happens
just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:38 +0000 (10:23 -0800)]
net/tls: use the new scatterwalk functions
Replace calls to the deprecated function scatterwalk_copychunks() with
memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or
scatterwalk_skip() as appropriate. The new functions generally behave
more as expected and eliminate the need to call scatterwalk_done() or
scatterwalk_pagedone().
However, the new functions intentionally do not advance to the next sg
entry right away, which would have broken chain_to_walk() which is
accessing the fields of struct scatter_walk directly. To avoid this,
replace chain_to_walk() with scatterwalk_get_sglist() which supports the
needed functionality.
Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:37 +0000 (10:23 -0800)]
crypto: x86/aegis - use the new scatterwalk functions
In crypto_aegis128_aesni_process_ad(), use scatterwalk_next() which
consolidates scatterwalk_clamp() and scatterwalk_map(). Use
scatterwalk_done_src() which consolidates scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:36 +0000 (10:23 -0800)]
crypto: x86/aes-gcm - use the new scatterwalk functions
In gcm_process_assoc(), use scatterwalk_next() which consolidates
scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src()
which consolidates scatterwalk_unmap(), scatterwalk_advance(), and
scatterwalk_done().
Also rename some variables to avoid implying that anything is actually
mapped (it's not), or that the loop is going page by page (it is for
now, but nothing actually requires that to be the case).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:35 +0000 (10:23 -0800)]
crypto: stm32 - use the new scatterwalk functions
Replace calls to the deprecated function scatterwalk_copychunks() with
memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), scatterwalk_skip(),
or scatterwalk_start_at_pos() as appropriate.
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Maxime Méré <maxime.mere@foss.st.com>
Cc: Thomas Bourgoin <thomas.bourgoin@foss.st.com>
Cc: linux-stm32@st-md-mailman.stormreply.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:34 +0000 (10:23 -0800)]
crypto: s5p-sss - use the new scatterwalk functions
s5p_sg_copy_buf() open-coded a copy from/to a scatterlist using
scatterwalk_* functions that are planned for removal. Replace it with
the new functions memcpy_from_sglist() and memcpy_to_sglist() instead.
Also take the opportunity to replace calls to scatterwalk_map_and_copy()
in the same file; this eliminates the confusing 'out' argument.
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Vladimir Zapolskiy <vz@mleia.com>
Cc: linux-samsung-soc@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:33 +0000 (10:23 -0800)]
crypto: s390/aes-gcm - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(). Use scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done().
Besides the new functions being a bit easier to use, this is necessary
because scatterwalk_done() is planned to be removed.
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Tested-by: Harald Freudenberger <freude@linux.ibm.com>
Cc: Holger Dengler <dengler@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:32 +0000 (10:23 -0800)]
crypto: nx - use the new scatterwalk functions
- In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a
more complex way to achieve the same result.
- Also in nx_walk_and_build(), use the new functions scatterwalk_next()
which consolidates scatterwalk_clamp() and scatterwalk_map(), and use
scatterwalk_done_src() which consolidates scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary
code that seemed to be intended to advance to the next sg entry, which
is already handled by the scatterwalk functions.
Note that nx_walk_and_build() does not actually read or write the
mapped virtual address, and thus it is misusing the scatter_walk API.
It really should just access the scatterlist directly. This patch
does not try to address this existing issue.
- In nx_gca(), use memcpy_from_sglist() instead of a more complex way to
achieve the same result.
- In various functions, replace calls to scatterwalk_map_and_copy() with
memcpy_from_sglist() or memcpy_to_sglist() as appropriate. Note that
this eliminates the confusing 'out' argument (which this driver had
tried to work around by defining the missing constants for it...)
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:31 +0000 (10:23 -0800)]
crypto: arm64 - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.
Adjust variable naming slightly to keep things consistent.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:30 +0000 (10:23 -0800)]
crypto: arm/ghash - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:29 +0000 (10:23 -0800)]
crypto: aegis - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:28 +0000 (10:23 -0800)]
crypto: skcipher - use scatterwalk_start_at_pos()
In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done(). This is simpler and faster.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:27 +0000 (10:23 -0800)]
crypto: scatterwalk - add scatterwalk_get_sglist()
Add a function that creates a scatterlist that represents the remaining
data in a walk. This will be used to replace chain_to_walk() in
net/tls/tls_device_fallback.c so that it will no longer need to reach
into the internals of struct scatter_walk.
Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:26 +0000 (10:23 -0800)]
crypto: scatterwalk - add new functions for copying data
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively. They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>. Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.
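A hedged usage sketch, following the argument order described above:

	/* Copy nbytes out of the SG list, starting at byte offset skip. */
	memcpy_from_sglist(buf, sg, skip, nbytes);

	/* The reverse direction, mirroring memcpy_to_page(). */
	memcpy_to_sglist(sg, skip, buf, nbytes);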
The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist. E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.
Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.
Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist. These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively. Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.
scatterwalk_copychunks() itself is left unchanged for now. It will be
removed after its callers are updated to use other functions instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:25 +0000 (10:23 -0800)]
crypto: scatterwalk - add new functions for iterating through data
Add scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(). Also add scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone().
A later patch will remove scatterwalk_done() and scatterwalk_pagedone().
The new code eliminates the error-prone 'more' parameter. Advancing to
the next sg entry now only happens just-in-time in scatterwalk_next().
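The resulting loop shape, sketched (signatures roughly as introduced
here; process() is a stand-in):

	while (nbytes) {
		unsigned int len;
		const void *src = scatterwalk_next(&walk, nbytes, &len);

		process(src, len);
		scatterwalk_done_src(&walk, src, len);
		nbytes -= len;
	}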
The new code also pairs the dcache flush more closely with the actual
write, similar to memcpy_to_page(). Previously it was paired with
advancing to the next page. This is currently causing bugs where the
dcache flush is incorrectly being skipped, usually due to
scatterwalk_copychunks() being called without a following
scatterwalk_done(). The dcache flush may have been placed where it was
in order to not call flush_dcache_page() redundantly when visiting a
page more than once. However, that case is rare in practice, and most
architectures either do not implement flush_dcache_page() anyway or
implement it lazily where it just clears a page flag.
Another limitation of the old code was that by the time the flush
happened, there was no way to tell if more than one page needed to be
flushed. That has been sufficient because the code goes page by page,
but I would like to optimize that on !HIGHMEM platforms. The new code
makes this possible, and a later patch will implement this optimization.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:24 +0000 (10:23 -0800)]
crypto: scatterwalk - add new functions for skipping data
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk. Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.
Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.
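The equivalence, sketched:

	/* These two sequences start the walk pos bytes into the SG list: */
	scatterwalk_start(&walk, sg);
	scatterwalk_skip(&walk, pos);

	scatterwalk_start_at_pos(&walk, sg, pos);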
Later patches will convert various users to use these functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 19 Feb 2025 18:23:23 +0000 (10:23 -0800)]
crypto: scatterwalk - move to next sg entry just in time
The scatterwalk_* functions are designed to advance to the next sg entry
only when there is more data from the request to process. Compared to
the alternative of advancing after each step if !sg_is_last(sg), this
has the advantage that it doesn't cause problems if users accidentally
don't terminate their scatterlist with the end marker (which is an easy
mistake to make, and there are examples of this).
Currently, the advance to the next sg entry happens in
scatterwalk_done(), which is called after each "step" of the walk. It
requires the caller to pass in a boolean 'more' that indicates whether
there is more data. This works when the caller immediately knows
whether there is more data, though it adds some complexity. However in
the case of scatterwalk_copychunks() it's not immediately known whether
there is more data, so the call to scatterwalk_done() has to happen
higher up the stack. This is error-prone, and indeed the needed call to
scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is
sometimes called multiple times in a row. This causes a zero-length
step to get added in some cases, which is unexpected and seems to work
only by accident.
This patch begins the switch to a less error-prone approach where the
advance to the next sg entry happens just in time instead. For now,
that means just doing the advance in scatterwalk_clamp() if it's needed
there. Initially this is redundant, but it's needed to keep the tree in
a working state as later patches change things to the final state.
Later patches will similarly move the dcache flushing logic out of
scatterwalk_done() and then remove scatterwalk_done() entirely.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Geert Uytterhoeven [Wed, 19 Feb 2025 15:03:32 +0000 (16:03 +0100)]
hwrng: Kconfig - Fix indentation of HW_RANDOM_CN10K help text
Change the indentation of the help text of the HW_RANDOM_CN10K symbol
from one TAB plus one space to one TAB plus two spaces, as is customary.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Dragan Simic <dsimic@manjaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arnd Bergmann [Mon, 17 Feb 2025 12:55:55 +0000 (13:55 +0100)]
crypto: bpf - Add MODULE_DESCRIPTION for skcipher
All modules should have a description; building with extra warnings
enabled prints this out for the bpf_crypto_skcipher module:
WARNING: modpost: missing MODULE_DESCRIPTION() in crypto/bpf_crypto_skcipher.o
Add a description line.
Fixes: fda4f71282b2 ("bpf: crypto: add skcipher to bpf crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:24 +0000 (11:07 +0800)]
crypto: ahash - Set default reqsize from ahash_alg
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.
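A hedged sketch of a driver using the new field (my_req_ctx is
hypothetical):

	struct my_req_ctx { u8 buf[64]; };

	static struct ahash_alg my_alg = {
		/* ... .init, .update, .final, .halg ... */
		.reqsize = sizeof(struct my_req_ctx),
		/* replaces an init_tfm that only called crypto_ahash_set_reqsize() */
	};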
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:22 +0000 (11:07 +0800)]
crypto: ahash - Add virtual address support
This patch adds virtual address support to ahash. Virtual addresses
were previously only supported through shash. The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.
The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining. Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
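A hedged usage sketch, assuming ahash_request_set_virt() mirrors
ahash_request_set_crypt() but takes a virtual source address instead of
an SG list:

    /* was: ahash_request_set_crypt(req, sg, digest, len); */
    ahash_request_set_virt(req, buf, digest, len);
    err = crypto_ahash_digest(req);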
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:19 +0000 (11:07 +0800)]
crypto: tcrypt - Restore multibuffer ahash tests
This patch is a revert of commit 388ac25efc8ce3bf9768ce7bf24268d6fac285d5.
As multibuffer ahash is coming back in the form of request chaining,
restore the multibuffer ahash tests using the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:17 +0000 (11:07 +0800)]
crypto: hash - Add request chaining API
This adds request chaining to the ahash interface. Request chaining
allows multiple requests to be submitted in one shot. An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break
up chained requests and submit them one-by-one.
A new err field is added to struct crypto_async_request to record
the return value for each individual request.
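A hedged sketch of chained submission from the caller's side (helper
name as introduced in this series; treat as illustrative):

    ahash_request_chain(req2, req1);    /* queue req2 behind head req1 */
    err = crypto_ahash_digest(req1);    /* one submission for the chain */
    /* each request records its own status in the new field: */
    err1 = req1->base.err;
    err2 = req2->base.err;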
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:15 +0000 (11:07 +0800)]
crypto: x86/ghash - Use proper helpers to clone request
Rather than copying a request by hand with memcpy, use the correct
API helpers to setup the new request. This will matter once the
API helpers start setting up chained requests as a simple memcpy
will break chaining.
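The shape of the change, roughly (fields simplified):

    /* was: memcpy(dst, src, sizeof(*src) + reqsize); */
    ahash_request_set_tfm(dst, crypto_ahash_reqtfm(src));
    ahash_request_set_callback(dst, flags, compl, data);
    ahash_request_set_crypt(dst, src->src, src->result, src->nbytes);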
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sun, 16 Feb 2025 03:07:12 +0000 (11:07 +0800)]
crypto: ahash - Only save callback and data in ahash_save_req
Now that unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christian Marangi [Sat, 15 Feb 2025 23:36:18 +0000 (00:36 +0100)]
crypto: inside-secure/eip93 - Correctly handle return value of sg_nents_for_len
Fix smatch warning for sg_nents_for_len return value in Inside Secure
EIP93 driver.
The return value of sg_nents_for_len() was assigned to a u32, so a
negative error code was silently converted into a large positive value
and ignored.
Rework the code to correctly handle the error from sg_nents_for_len to
mute smatch warning.
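The shape of the fix, simplified:

    int nents;

    nents = sg_nents_for_len(sg, len);
    if (nents < 0)
        return nents;    /* propagate the error instead of truncating to u32 */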
Fixes: 9739f5f93b78 ("crypto: eip93 - Add Inside Secure SafeXcel EIP-93 crypto engine support")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Sat, 15 Feb 2025 00:57:51 +0000 (08:57 +0800)]
crypto: skcipher - Zap type in crypto_alloc_sync_skcipher
The type needs to be zeroed, as otherwise the user could pass a type
value that permits an asynchronous algorithm even though a sync
skcipher was requested.
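A sketch of the idea (not the literal diff):

    /* Only synchronous algorithms may back a sync skcipher. */
    mask |= CRYPTO_ALG_ASYNC;
    type &= ~CRYPTO_ALG_ASYNC;    /* zap the caller-supplied async bit */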
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Małgorzata Mielnik [Fri, 14 Feb 2025 16:40:43 +0000 (16:40 +0000)]
crypto: qat - refactor service parsing logic
The service parsing logic is used to parse the configuration string
provided by the user using the attribute qat/cfg_services in sysfs.
The logic relies on hard-coded strings. For example, the service
"sym;asym" is also replicated as "asym;sym".
This makes the addition of new services or service combinations
complex as it requires the addition of new hard-coded strings for all
possible combinations.
This commit addresses this issue by:
* reducing the number of internal service strings to only the basic
service representations.
* modifying the service parsing logic to analyze the service string
token by token instead of comparing a whole string with patterns.
* introducing the concept of a service mask where each service is
represented by a single bit (see the sketch after this list).
* dividing the parsing logic into several functions to allow for code
reuse (e.g. by sysfs-related functions).
* introducing a new, device generation-specific function to verify
whether the requested service combination is supported by the
currently used device.
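A sketch of the token-by-token parsing and service-mask idea (names are
illustrative, not the actual qat code):

    enum { SVC_SYM = BIT(0), SVC_ASYM = BIT(1), SVC_DC = BIT(2) };

    static int svc_parse_mask(char *cfg, unsigned long *mask)
    {
        char *tok;

        *mask = 0;
        while ((tok = strsep(&cfg, ";"))) {
            if (!strcmp(tok, "sym"))
                *mask |= SVC_SYM;
            else if (!strcmp(tok, "asym"))
                *mask |= SVC_ASYM;
            else if (!strcmp(tok, "dc"))
                *mask |= SVC_DC;
            else
                return -EINVAL;
        }
        return 0;
    }

With this, "sym;asym" and "asym;sym" parse to the same mask, so no
per-permutation strings are needed.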
Signed-off-by: Małgorzata Mielnik <malgorzata.mielnik@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Giovanni Cabiddu [Fri, 14 Feb 2025 16:40:42 +0000 (16:40 +0000)]
crypto: qat - do not export adf_cfg_services
The symbol `adf_cfg_services` is only used within the intel_qat module.
There is no need to export it.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 14 Feb 2025 06:02:08 +0000 (14:02 +0800)]
crypto: skcipher - Set tfm in SYNC_SKCIPHER_REQUEST_ON_STACK
Set the request tfm directly in SYNC_SKCIPHER_REQUEST_ON_STACK since
the tfm is already available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 14 Feb 2025 02:31:25 +0000 (10:31 +0800)]
crypto: api - Fix larval relookup type and mask
When the lookup is retried after instance construction, it uses
the type and mask from the larval, which may not match the values
used by the caller. For example, if the caller requests
a !NEEDS_FALLBACK algorithm, it may end up getting an algorithm
that needs fallbacks.
Fix this by making the caller supply the type/mask and using that
for the lookup.
Reported-by: Coiby Xu <coxu@redhat.com>
Fixes: 96ad59552059 ("crypto: api - Remove instance larval fulfilment")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Abel Vesa [Thu, 13 Feb 2025 12:37:05 +0000 (14:37 +0200)]
dt-bindings: crypto: qcom-qce: Document the X1E80100 crypto engine
Document the crypto engine on the X1E80100 Platform.
Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 12 Feb 2025 06:10:07 +0000 (14:10 +0800)]
crypto: null - Use spin lock instead of mutex
As the null algorithm may be freed in softirq context through
af_alg, use spin locks instead of mutexes to protect the default
null algorithm.
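A hedged sketch of the locking change (symbol name and the _bh variant
are illustrative):

    static DEFINE_SPINLOCK(crypto_default_null_skcipher_lock);

    /* was: mutex_lock(); mutexes cannot be taken in softirq context */
    spin_lock_bh(&crypto_default_null_skcipher_lock);
    /* get or put the default null algorithm */
    spin_unlock_bh(&crypto_default_null_skcipher_lock);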
Reported-by: syzbot+b3e02953598f447d4d2a@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 12 Feb 2025 04:48:55 +0000 (12:48 +0800)]
crypto: lib/Kconfig - Fix lib built-in failure when arch is modular
The HAVE_ARCH Kconfig options in lib/crypto try to solve the
modular versus built-in problem, but it still fails when the
LIB option (e.g., CRYPTO_LIB_CURVE25519) is selected externally.
Fix this by introducing a level of indirection with ARCH_MAY_HAVE
Kconfig options; these then go on to select the ARCH_HAVE options
if the ARCH Kconfig option matches that of the LIB option.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202501230223.ikroNDr1-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Giovanni Cabiddu [Tue, 11 Feb 2025 09:58:53 +0000 (09:58 +0000)]
crypto: qat - reorder objects in qat_common Makefile
The objects in the qat_common Makefile are currently listed in a random
order.
Reorder the objects alphabetically to make it easier to find where to
add a new object.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Ahsan Atta <ahsan.atta@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Giovanni Cabiddu [Tue, 11 Feb 2025 09:58:52 +0000 (09:58 +0000)]
crypto: qat - fix object goals in Makefiles
Align with kbuild documentation by using <module_name>-y instead of
<module_name>-objs, following the kernel convention for building modules
from multiple object files.
Link: https://docs.kernel.org/kbuild/makefiles.html#loadable-module-goals-obj-m
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Thorsten Blum [Tue, 11 Feb 2025 09:52:54 +0000 (10:52 +0100)]
crypto: aead - use str_yes_no() helper in crypto_aead_show()
Remove hard-coded strings by using the str_yes_no() helper function.
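The pattern of the change (the condition shown is illustrative):

    /* was: seq_printf(m, "async : %s\n", async ? "yes" : "no"); */
    seq_printf(m, "async : %s\n", str_yes_no(async));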
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Thorsten Blum [Mon, 10 Feb 2025 22:36:44 +0000 (23:36 +0100)]
crypto: bcm - set memory to zero only once
Use kmalloc_array() instead of kcalloc() because sg_init_table() already
sets the memory to zero. This avoids zeroing the memory twice.
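The pattern, roughly:

    /* before: zeroed twice (once by kcalloc, once by sg_init_table) */
    sg = kcalloc(n, sizeof(*sg), flags);

    /* after: sg_init_table() already memsets the table */
    sg = kmalloc_array(n, sizeof(*sg), flags);
    if (sg)
        sg_init_table(sg, n);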
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Mon, 10 Feb 2025 17:17:40 +0000 (09:17 -0800)]
crypto: x86/aes-xts - change license to Apache-2.0 OR BSD-2-Clause
As with the other AES modes I've implemented, I've received interest in
my AES-XTS assembly code being reused in other projects. Therefore,
change the license to Apache-2.0 OR BSD-2-Clause like what I used for
AES-GCM. Apache-2.0 is the license of OpenSSL and BoringSSL.
Note that it is difficult to *directly* share code between the kernel,
OpenSSL, and BoringSSL for various reasons such as perlasm vs. plain
asm, Windows ABI support, different divisions of responsibility between
C and asm in each project, etc. So whether that will happen instead of
just doing ports is still TBD. But this dual license should at least
make it possible to port changes between the projects.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Mon, 10 Feb 2025 16:50:20 +0000 (08:50 -0800)]
crypto: x86/aes-ctr - rewrite AESNI+AVX optimized CTR and add VAES support
Delete aes_ctrby8_avx-x86_64.S and add a new assembly file
aes-ctr-avx-x86_64.S which follows a similar approach to
aes-xts-avx-x86_64.S in that it uses a "template" to provide AESNI+AVX,
VAES+AVX2, VAES+AVX10/256, and VAES+AVX10/512 code, instead of just
AESNI+AVX. Wire it up to the crypto API accordingly.
This greatly improves the performance of AES-CTR and AES-XCTR on
VAES-capable CPUs, with the best case being AMD Zen 5, where throughput
increases by over 230% on long messages. Performance on
non-VAES-capable CPUs remains about the same, and the non-AVX AES-CTR
code (aesni_ctr_enc) is also kept as-is for now. There are some slight
regressions (less than 10%) on some short message lengths on some CPUs;
these are difficult to avoid, given how the previous code was so heavily
unrolled by message length, and they are not particularly important.
Detailed performance results are given in the tables below.
Both CTR and XCTR support is retained. The main loop remains
8-vector-wide, which differs from the 4-vector-wide main loops that are
used in the XTS and GCM code. A wider loop is appropriate for CTR and
XCTR since they have fewer other instructions (such as vpclmulqdq) to
interleave with the AES instructions.
Similar to what was the case for AES-GCM, the new assembly code also has
a much smaller binary size, as it fixes the excessive unrolling by data
length and key length present in the old code. Specifically, the new
assembly file compiles to about 9 KB of text vs. 28 KB for the old file.
This is despite 4x as many implementations being included.
The tables below show the detailed performance results. The tables show
percentage improvement in single-threaded throughput for repeated
encryption of the given message length; an increase from 6000 MB/s to
12000 MB/s would be listed as 100%. They were collected by directly
measuring the Linux crypto API performance using a custom kernel module.
The tested CPUs were all server processors from Google Compute Engine
except for Zen 5 which was a Ryzen 9 9950X desktop processor.
Table 1: AES-256-CTR throughput improvement,
CPU microarchitecture vs. message length in bytes:
                     | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
---------------------+-------+-------+-------+-------+-------+-------+
AMD Zen 5            |  232% |  203% |  212% |  143% |   71% |   95% |
Intel Emerald Rapids |  116% |  116% |  117% |   91% |   78% |   79% |
Intel Ice Lake       |  109% |  103% |  107% |   81% |   54% |   56% |
AMD Zen 4            |  109% |   91% |  100% |   70% |   43% |   59% |
AMD Zen 3            |   92% |   78% |   87% |   57% |   32% |   43% |
AMD Zen 2            |    9% |    8% |   14% |   12% |    8% |   21% |
Intel Skylake        |    7% |    7% |    8% |    5% |    3% |    8% |
                     |   300 |   200 |    64 |    63 |    16 |
---------------------+-------+-------+-------+-------+-------+
AMD Zen 5            |   57% |   39% |   -9% |    7% |   -7% |
Intel Emerald Rapids |   37% |   42% |   -0% |   13% |   -8% |
Intel Ice Lake       |   39% |   30% |   -1% |   14% |   -9% |
AMD Zen 4            |   42% |   38% |   -0% |   18% |   -3% |
AMD Zen 3            |   38% |   35% |    6% |   31% |    5% |
AMD Zen 2            |   24% |   23% |    5% |   30% |    3% |
Intel Skylake        |    9% |    1% |   -4% |   10% |   -7% |
Table 2: AES-256-XCTR throughput improvement,
CPU microarchitecture vs. message length in bytes:
                     | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
---------------------+-------+-------+-------+-------+-------+-------+
AMD Zen 5            |  240% |  201% |  216% |  151% |   75% |  108% |
Intel Emerald Rapids |  100% |   99% |  102% |   91% |   94% |  104% |
Intel Ice Lake       |   93% |   89% |   92% |   74% |   50% |   64% |
AMD Zen 4            |   86% |   75% |   83% |   60% |   41% |   52% |
AMD Zen 3            |   73% |   63% |   69% |   45% |   21% |   33% |
AMD Zen 2            |   -2% |   -2% |    2% |    3% |   -1% |   11% |
Intel Skylake        |   -1% |   -1% |    1% |    2% |   -1% |    9% |
                     |   300 |   200 |    64 |    63 |    16 |
---------------------+-------+-------+-------+-------+-------+
AMD Zen 5            |   78% |   56% |   -4% |   38% |   -2% |
Intel Emerald Rapids |   61% |   55% |    4% |   32% |   -5% |
Intel Ice Lake       |   57% |   42% |    3% |   44% |   -4% |
AMD Zen 4            |   35% |   28% |   -1% |   17% |   -3% |
AMD Zen 3            |   26% |   23% |   -3% |   11% |   -6% |
AMD Zen 2            |   13% |   24% |   -1% |   14% |   -3% |
Intel Skylake        |   16% |    8% |   -4% |   35% |   -3% |
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>