x86/asm: Simplify __smp_mb() definition
author		Borislav Petkov <bp@suse.de>
		Wed, 12 May 2021 09:33:10 +0000 (11:33 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Wed, 12 May 2021 10:22:57 +0000 (12:22 +0200)
Drop the bitness ifdeffery in favor of using _ASM_SP, the helper
macro which expands to the proper stack-pointer register name
(esp or rsp) for 32-bit and 64-bit builds respectively.

No functional changes.
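
For readers unfamiliar with the helper: _ASM_SP is defined in
arch/x86/include/asm/asm.h and expands to the stack-pointer register
name as a string literal, so the preprocessor pastes it straight into
the asm template. A minimal standalone sketch of the idea (simplified,
not the kernel's exact macro chain; my_smp_mb is a made-up name):

  #include <stdio.h>

  /* Simplified stand-in for the kernel's _ASM_SP: select the
   * stack-pointer register name as a string at preprocessing time.
   */
  #ifdef __i386__
  # define _ASM_SP "esp"	/* 32-bit build */
  #else
  # define _ASM_SP "rsp"	/* 64-bit build */
  #endif

  /* String-literal concatenation yields e.g.
   * "lock; addl $0,-4(%%rsp)" before compilation proper.
   */
  #define my_smp_mb() \
	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")

  int main(void)
  {
	my_smp_mb();	/* a locked RMW on the stack acts as a full barrier */
	puts("full barrier executed");
	return 0;
  }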

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512093310.5635-1-bp@alien8.de
arch/x86/include/asm/barrier.h

index 4819d5e5a3353d7db2fa314f9665a6cd07db741a..3ba772a69cc8baf5ab6b0065527d353c3ac04bd0 100644
@@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 #define dma_rmb()      barrier()
 #define dma_wmb()      barrier()
 
-#ifdef CONFIG_X86_32
-#define __smp_mb()     asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
-#else
-#define __smp_mb()     asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
-#endif
+#define __smp_mb()     asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
+
 #define __smp_rmb()    dma_rmb()
 #define __smp_wmb()    barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
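
As background for why a locked add works here (my note, not part of
the commit): on x86, any LOCK-prefixed read-modify-write instruction
is a full memory barrier, and "lock addl $0" to a dummy stack slot is
generally cheaper than MFENCE. A hedged sketch of the classic
store->load reordering such a barrier exists to prevent, assuming a
64-bit build and hypothetical flag variables:

  /* Dekker-style sketch: without the full barrier, the load of
   * `other` may be satisfied before the store to `mine` becomes
   * globally visible, so both threads could see 0 and enter.
   */
  static volatile int mine, other;	/* hypothetical shared flags */

  #define my_smp_mb() \
	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")

  static int try_enter(void)
  {
	mine = 1;
	my_smp_mb();		/* order the store above before the load below */
	return other == 0;	/* enter only if the peer has not flagged */
  }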