mm: shmem: save one radix tree lookup when truncating swapped pages
author     Johannes Weiner <hannes@cmpxchg.org>
           Thu, 3 Apr 2014 21:47:41 +0000 (14:47 -0700)
committer  Linus Torvalds <torvalds@linux-foundation.org>
           Thu, 3 Apr 2014 23:21:00 +0000 (16:21 -0700)
Page cache radix tree slots are usually stabilized by the page lock, but
shmem's swap cookies have no such thing.  Because the overall truncation
loop is lockless, the swap entry is currently confirmed by a tree lookup
and then deleted by another tree lookup under the same tree lock region.

Use radix_tree_delete_item() instead, which does the verification and
deletion with only one lookup.  This also allows removing the
delete-only special case from shmem_radix_tree_replace().
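For reference, radix_tree_delete_item() looks up @index under the caller's
tree lock, deletes the slot only if it still holds the expected @item, and
returns whatever entry was actually found, so the caller can detect a lost
race without a second lookup.  The snippet below is only a semantics sketch
(the real lib/radix-tree.c code does this in a single tree walk), not the
actual implementation:

	/* Semantics sketch; the caller is assumed to hold the tree lock. */
	static void *radix_tree_delete_item_sketch(struct radix_tree_root *root,
						   unsigned long index, void *item)
	{
		/* The real function finds the slot once and reuses it. */
		void *entry = radix_tree_lookup(root, index);

		if (entry && entry == item)
			radix_tree_delete(root, index);
		return entry;	/* may differ from @item if we lost a race */
	}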

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Metin Doslu <metin@citusdata.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ozgun Erdogan <ozgun@citusdata.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Roman Gushchin <klamm@yandex-team.ru>
Cc: Ryan Mallon <rmallon@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/shmem.c b/mm/shmem.c
index 1f18c9d0d93ea270ab01054b2febbdd6a7eb6f56..e470997010cda6e350ce86313345e56dec75a987 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -242,19 +242,17 @@ static int shmem_radix_tree_replace(struct address_space *mapping,
                        pgoff_t index, void *expected, void *replacement)
 {
        void **pslot;
-       void *item = NULL;
+       void *item;
 
        VM_BUG_ON(!expected);
+       VM_BUG_ON(!replacement);
        pslot = radix_tree_lookup_slot(&mapping->page_tree, index);
-       if (pslot)
-               item = radix_tree_deref_slot_protected(pslot,
-                                                       &mapping->tree_lock);
+       if (!pslot)
+               return -ENOENT;
+       item = radix_tree_deref_slot_protected(pslot, &mapping->tree_lock);
        if (item != expected)
                return -ENOENT;
-       if (replacement)
-               radix_tree_replace_slot(pslot, replacement);
-       else
-               radix_tree_delete(&mapping->page_tree, index);
+       radix_tree_replace_slot(pslot, replacement);
        return 0;
 }
 
@@ -386,14 +384,15 @@ export:
 static int shmem_free_swap(struct address_space *mapping,
                           pgoff_t index, void *radswap)
 {
-       int error;
+       void *old;
 
        spin_lock_irq(&mapping->tree_lock);
-       error = shmem_radix_tree_replace(mapping, index, radswap, NULL);
+       old = radix_tree_delete_item(&mapping->page_tree, index, radswap);
        spin_unlock_irq(&mapping->tree_lock);
-       if (!error)
-               free_swap_and_cache(radix_to_swp_entry(radswap));
-       return error;
+       if (old != radswap)
+               return -ENOENT;
+       free_swap_and_cache(radix_to_swp_entry(radswap));
+       return 0;
 }
 
 /*