locking/osq_lock: Annotate a data race in osq_lock
author     Qian Cai <cai@lca.pw>
           Tue, 11 Feb 2020 13:54:15 +0000 (08:54 -0500)
committer  Paul E. McKenney <paulmck@kernel.org>
           Mon, 29 Jun 2020 19:04:48 +0000 (12:04 -0700)
The prev->next pointer can be accessed concurrently as noticed by KCSAN:

 write (marked) to 0xffff9d3370dbbe40 of 8 bytes by task 3294 on cpu 107:
  osq_lock+0x25f/0x350
  osq_wait_next at kernel/locking/osq_lock.c:79
  (inlined by) osq_lock at kernel/locking/osq_lock.c:185
  rwsem_optimistic_spin
  <snip>

 read to 0xffff9d3370dbbe40 of 8 bytes by task 3398 on cpu 100:
  osq_lock+0x196/0x350
  osq_lock at kernel/locking/osq_lock.c:157
  rwsem_optimistic_spin
  <snip>

The write only stores NULL to prev->next, and the read merely tests
whether prev->next equals this CPU's node (this_cpu_ptr(&osq_node)).
Even if the loaded value is torn or stale, the code still behaves
correctly, because the cmpxchg() that follows is what actually decides
the outcome. Thus, mark the read as an intentional data race using the
data_race() macro.
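
For illustration only, here is a standalone C11 sketch of the pattern
being annotated. None of this is kernel code: the names qnode and
try_unhook are invented for the sketch, and a relaxed atomic load
stands in for the plain load that the kernel wraps in data_race().
The point is that the racy read is just a cheap peek used to decide
whether to attempt the compare-and-swap; a stale or torn peek costs at
most another pass over the loop and can never produce a wrong result,
because the compare-and-swap is the authoritative step.

  #include <stdatomic.h>
  #include <stddef.h>
  #include <stdio.h>

  struct qnode {
  	_Atomic(struct qnode *) next;
  };

  /*
   * Racy peek + authoritative CAS: the relaxed load plays the role of
   * data_race(prev->next) in osq_lock(); the compare-and-swap is what
   * actually decides whether @node was detached from @prev.
   */
  static int try_unhook(struct qnode *prev, struct qnode *node)
  {
  	struct qnode *expected = node;

  	/* Cheap peek: a stale value only means we skip the CAS this time. */
  	if (atomic_load_explicit(&prev->next, memory_order_relaxed) != node)
  		return 0;

  	/* The CAS either succeeds atomically or reports the current value. */
  	return atomic_compare_exchange_strong(&prev->next, &expected,
  					      (struct qnode *)NULL);
  }

  int main(void)
  {
  	struct qnode prev, node;

  	atomic_store(&prev.next, &node);
  	atomic_store(&node.next, (struct qnode *)NULL);
  	printf("unhooked: %d\n", try_unhook(&prev, &node));
  	return 0;
  }
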

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 1f7734949ac883792569a53524b7ae6237a8a1ea..1de006ed3aa8c794465961b9a5cef3c36301f6f9 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -154,7 +154,11 @@ bool osq_lock(struct optimistic_spin_queue *lock)
         */
 
        for (;;) {
-               if (prev->next == node &&
+               /*
+                * cpu_relax() below implies a compiler barrier which would
+                * prevent this comparison being optimized away.
+                */
+               if (data_race(prev->next) == node &&
                    cmpxchg(&prev->next, node, NULL) == node)
                        break;
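
As a rough mental model of what the annotation does (a sketch only;
the real data_race() lives in include/linux/compiler.h and is more
careful about qualified types and side effects), data_race() evaluates
its argument as an ordinary plain access while suppressing KCSAN
reporting around it, so the access itself stays unmarked and only the
tooling's view of it changes. kcsan_disable_current() and
kcsan_enable_current() are existing KCSAN interfaces; the macro name
below is invented to keep it distinct from the real one.

  /*
   * Illustrative only: roughly the effect data_race() arranges for.
   * The real macro handles types and side effects more carefully;
   * this sketch just evaluates the expression with KCSAN reporting
   * suppressed for the current task.
   */
  #define data_race_sketch(expr)					\
  ({									\
  	typeof(expr) __val;						\
  									\
  	kcsan_disable_current();  /* stop reporting data races */	\
  	__val = (expr);           /* still a plain, unmarked access */	\
  	kcsan_enable_current();   /* resume reporting */		\
  	__val;								\
  })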