locking/rwsem: Disable preemption in all down_read*() and up_read() code paths
author    Waiman Long <longman@redhat.com>
          Thu, 26 Jan 2023 00:36:26 +0000 (19:36 -0500)
committer Ingo Molnar <mingo@kernel.org>
          Thu, 26 Jan 2023 10:46:46 +0000 (11:46 +0100)
commit    3f5245538a1964ae186ab7e1636020a41aa63143
tree      76d26d7d9a1678ef515f9f0c944cf0aeb82184fa
parent    b613c7f31476c44316bfac1af7cac714b7d6bef9
locking/rwsem: Disable preemption in all down_read*() and up_read() code paths

Commit:

  91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically spin on owner")

... assumes that when the owner field is changed to NULL, the lock will
become free soon. But commit:

  48dfb5d2560d ("locking/rwsem: Disable preemption while trying for rwsem lock")

... disabled preemption when acquiring an rwsem for write.

However, preemption has not yet been disabled when acquiring a read lock
on an rwsem. So a reader can add RWSEM_READER_BIAS to the count without
having set the owner field to signal a reader, and then get preempted by
an RT task. That RT task then spins in the writer slowpath because owner
remains NULL, leading to a livelock.
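
To illustrate the window, here is a simplified sketch of the reader
fastpath (modeled on rwsem_read_trylock() in kernel/locking/rwsem.c, with
debug checks and other details elided, so not the exact kernel code): the
count is bumped first and only afterwards is the owner marked as a reader.

  /* Simplified sketch, for illustration only. */
  static inline bool rwsem_read_trylock(struct rw_semaphore *sem, long *cntp)
  {
          *cntp = atomic_long_add_return_acquire(RWSEM_READER_BIAS, &sem->count);

          /*
           * Preemption window: RWSEM_READER_BIAS is already in ->count,
           * but owner has not been set yet.  If an RT task preempts the
           * reader here and enters the writer slowpath, it keeps seeing
           * owner == NULL, assumes the lock is about to be released, and
           * spins forever while the preempted reader never runs -> livelock.
           */

          if (!(*cntp & RWSEM_READ_FAILED_MASK)) {
                  rwsem_set_reader_owned(sem);    /* owner now signals a reader */
                  return true;
          }
          return false;
  }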

One straightforward way to fix this problem is to disable preemption in
all the down_read*() and up_read() code paths, as implemented in this patch.
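
As a rough sketch of that approach (simplified from the patched
kernel/locking/rwsem.c; debug checks are elided, and the actual patch
also covers up_read() and the reader slowpath), the common down_read
path ends up bracketed by preempt_disable()/preempt_enable():

  /* Sketch only -- the real function carries additional debug checks. */
  static __always_inline int __down_read_common(struct rw_semaphore *sem, int state)
  {
          int ret = 0;
          long count;

          preempt_disable();
          if (!rwsem_read_trylock(sem, &count)) {
                  if (IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
                          ret = -EINTR;
          }
          preempt_enable();
          return ret;
  }

With preemption disabled across the whole sequence, a reader can no longer
be preempted between adding RWSEM_READER_BIAS and setting the owner field,
so a handoff writer spinning on a NULL owner can always make progress.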

Fixes: 91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically spin on owner")
Reported-by: Mukesh Ojha <quic_mojha@quicinc.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230126003628.365092-3-longman@redhat.com
kernel/locking/rwsem.c