bpf: Use architecture provided res_smp_cond_load_acquire
author	Kumar Kartikeya Dwivedi <memxor@gmail.com>
	Thu, 10 Apr 2025 14:55:12 +0000 (07:55 -0700)
committer	Alexei Starovoitov <ast@kernel.org>
	Thu, 10 Apr 2025 19:47:07 +0000 (12:47 -0700)
In v2 of rqspinlock [0], we fixed potential problems with WFE usage on
arm64 by falling back to a version copied from Ankur's series [1]. This
logic was moved into arch-specific headers in v3 [2].

However, we missed using the arch-provided res_smp_cond_load_acquire
in commit ebababcd0372 ("rqspinlock: Hardcode cond_acquire loops for arm64")
due to a rebasing mistake between v2 and v3 of the rqspinlock series.
Fix the typo so we fall back to the arm64 definition as we did in v2.

  [0]: https://lore.kernel.org/bpf/20250206105435.2159977-18-memxor@gmail.com
  [1]: https://lore.kernel.org/lkml/20250203214911.898276-1-ankur.a.arora@oracle.com
  [2]: https://lore.kernel.org/bpf/20250303152305.3195648-9-memxor@gmail.com

Fixes: ebababcd0372 ("rqspinlock: Hardcode cond_acquire loops for arm64")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20250410145512.1876745-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
arch/arm64/include/asm/rqspinlock.h
kernel/bpf/rqspinlock.c

index 5b80785324b6c3bee46972468cfd22270248607b..9ea0a74e5892734dea95e7b47c5033e68a9eecc2 100644 (file)
@@ -86,7 +86,7 @@
 
 #endif
 
-#define res_smp_cond_load_acquire_timewait(v, c) smp_cond_load_acquire_timewait(v, c, 0, 1)
+#define res_smp_cond_load_acquire(v, c) smp_cond_load_acquire_timewait(v, c, 0, 1)
 
 #include <asm-generic/rqspinlock.h>
 
index b896c4a75a5c9bde1adc9a184b49532ee1916065..338305c8852cf625e0ae84519bb5f3a4aad9584a 100644 (file)
@@ -253,7 +253,7 @@ static noinline int check_timeout(rqspinlock_t *lock, u32 mask,
        })
 #else
 #define RES_CHECK_TIMEOUT(ts, ret, mask)                             \
-       ({ (ret) = check_timeout(&(ts)); })
+       ({ (ret) = check_timeout((lock), (mask), &(ts)); })
 #endif
 
 /*