mips/atomic: Fix smp_mb__{before,after}_atomic()
author Peter Zijlstra <peterz@infradead.org>
Thu, 13 Jun 2019 13:43:20 +0000 (15:43 +0200)
committer Paul Burton <paul.burton@mips.com>
Sat, 31 Aug 2019 10:06:02 +0000 (11:06 +0100)
commit 42344113ba7a1ed7b5654cd5270af0d5698d8521
tree 7ac40f4f47e97e27ff2909c0a58023ba6fb41b63
parent 1c6c1ca318585f1096d4d04bc722297c85e9fb8a

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;
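
Here smp_mb__after_atomic() is nothing more than barrier(), the
generic compiler barrier. As a sketch of the definitions in play
(barrier() is the form from include/linux/compiler.h; the
smp_mb__{before,after}_atomic() mapping is the one described above,
not the literal MIPS source):

/*
 * barrier(): an empty asm with a "memory" clobber. It stops the
 * compiler, but not the CPU, from moving memory accesses across it.
 */
#define barrier()	__asm__ __volatile__("" : : : "memory")

/* On strongly ordered configurations these were assumed sufficient. */
#define smp_mb__before_atomic()	barrier()
#define smp_mb__after_atomic()	barrier()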

Because, while the atomic_inc() implies memory ordering, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules, which
permit a later load to pass an earlier store) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.
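
As a minimal sketch of that strengthening (a simplified ll/sc loop
for illustration, not the exact MIPS patch, which also covers the
bitops and cmpxchg variants listed below), the added "memory" clobber
is what makes the asm double as a compiler barrier:

static __inline__ void atomic_add(int i, atomic_t *v)
{
	int temp;

	__asm__ __volatile__(
	"1:	ll	%0, %1		# load-linked v->counter	\n"
	"	addu	%0, %2		# temp += i			\n"
	"	sc	%0, %1		# store-conditional		\n"
	"	beqz	%0, 1b		# retry if the sc failed	\n"
	: "=&r" (temp), "+m" (v->counter)
	: "Ir" (i)
	: "memory");	/* new: compiler barrier, like barrier() */
}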

Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
arch/mips/include/asm/atomic.h
arch/mips/include/asm/barrier.h
arch/mips/include/asm/bitops.h
arch/mips/include/asm/cmpxchg.h