lib/crypto: x86/sha256: Move static_call above kernel-mode FPU section
author	Eric Biggers <ebiggers@kernel.org>
Fri, 4 Jul 2025 02:39:57 +0000 (19:39 -0700)
committer	Eric Biggers <ebiggers@kernel.org>
Fri, 4 Jul 2025 17:23:55 +0000 (10:23 -0700)
commit	a8c60a9aca778d7fd22d6c9b1af702d6f952b87f
tree	c55a23eaaf7d639ac37e141cdce01707171c0460
parent	b34c9803aabd85189ffacc0d3cdb9ce4515c2b4d
lib/crypto: x86/sha256: Move static_call above kernel-mode FPU section

As I did for sha512_blocks(), reorganize x86's sha256_blocks() to be
just a static_call.  To achieve that, for each assembly function add a C
function that handles the kernel-mode FPU section and fallback.  While
this increases total code size slightly, the amount of code actually
executed on a given system does not increase, and it is slightly more
efficient since it eliminates the extra static_key.  It also means the
assembly functions are reached via standard direct calls instead of
static calls, eliminating the need for ANNOTATE_NOENDBR.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250704023958.73274-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
lib/crypto/x86/sha256-avx-asm.S
lib/crypto/x86/sha256-avx2-asm.S
lib/crypto/x86/sha256-ni-asm.S
lib/crypto/x86/sha256-ssse3-asm.S
lib/crypto/x86/sha256.h