btrfs: Relax memory barrier in btrfs_tree_unlock
author Nikolay Borisov <nborisov@suse.com>
Wed, 14 Feb 2018 12:37:26 +0000 (14:37 +0200)
committer David Sterba <dsterba@suse.com>
Fri, 30 Mar 2018 23:26:51 +0000 (01:26 +0200)
When performing an unlock on an extent buffer we'd like to order the
decrement of extent_buffer::blocking_writers with waking up any
waiters. In such situations it's sufficient to use smp_mb__after_atomic
rather than the heavyweight smp_mb. On architectures where atomic
operations are fully ordered (such as x86 or s390), unconditionally
executing a full smp_mb instruction causes an unnecessary hit to
performance while bringing no improvement in correctness.

The better approach is to use the appropriate smp_mb__after_atomic
routine, which does the right thing: it invokes a full smp_mb where
needed, or, on architectures whose atomics are already fully ordered,
inserts only a compiler barrier. Put another way, an RMW atomic op
followed by smp_mb__after_atomic is semantically equivalent to a full
smp_mb. This ensures that none of the problems described in the
comment accompanying waitqueue_active can occur, as sketched below.
No functional changes.
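
For illustration, the waker/waiter pairing this relies on looks
roughly like the following simplified sketch (the waiter side and the
exact calls are illustrative, not the verbatim btrfs code):

	/* waker side, e.g. btrfs_tree_unlock() */
	atomic_dec(&eb->blocking_writers);	/* RMW atomic op ...             */
	smp_mb__after_atomic();			/* ... plus this = full smp_mb() */
	if (waitqueue_active(&eb->write_lock_wq))
		wake_up(&eb->write_lock_wq);

	/*
	 * waiter side (illustrative); the pairing barrier comes from
	 * set_current_state() inside wait_event()
	 */
	wait_event(eb->write_lock_wq,
		   atomic_read(&eb->blocking_writers) == 0);

The barrier guarantees the decrement is visible before the unlocked
waitqueue_active() check, so the waker cannot miss a waiter that is
about to go to sleep on a stale counter value.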

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
fs/btrfs/locking.c

index d13128c70dddc2b440f29f167efe0862d8792e2e..621083f8932c7e5b61d5d9734bd187e73abd1e69 100644
@@ -290,7 +290,7 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
                /*
                 * Make sure counter is updated before we wake up waiters.
                 */
-               smp_mb();
+               smp_mb__after_atomic();
                if (waitqueue_active(&eb->write_lock_wq))
                        wake_up(&eb->write_lock_wq);
        } else {