btrfs: do not use GFP_ATOMIC in the read endio
Author:     Josef Bacik <josef@toxicpanda.com>
AuthorDate: Fri, 14 Oct 2022 14:00:39 +0000 (10:00 -0400)
Committer:  David Sterba <dsterba@suse.com>
CommitDate: Mon, 5 Dec 2022 17:00:40 +0000 (18:00 +0100)
We have done read endio in an async thread for a very, very long time,
which makes the use of GFP_ATOMIC and unlock_extent_atomic() unneeded in
our read endio path.  We've noticed under heavy memory pressure in our
fleet that we can fail these allocations, and then often trip a
BUG_ON(!allocation), which isn't an ideal outcome.  Begin to address
this by simply not using GFP_ATOMIC, which will allow us to do things
like actually allocate an extent state when doing
set_extent_bits(UPTODATE) in the endio handler.

End io handlers are not called in atomic context.  Besides, we have been
allocating the failrec with GFP_NOFS, so we would have noticed if there
were a problem.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
fs/btrfs/extent_io.c

index 4e4f28387aceae11e66649e360b9e9a537f18f58..78d7ea10621d2d583023f8ba4581d643f691d485 100644
@@ -897,9 +897,9 @@ static void end_sector_io(struct page *page, u64 offset, bool uptodate)
        end_page_read(page, uptodate, offset, sectorsize);
        if (uptodate)
                set_extent_uptodate(&inode->io_tree, offset,
-                                   offset + sectorsize - 1, &cached, GFP_ATOMIC);
-       unlock_extent_atomic(&inode->io_tree, offset, offset + sectorsize - 1,
-                            &cached);
+                                   offset + sectorsize - 1, &cached, GFP_NOFS);
+       unlock_extent(&inode->io_tree, offset, offset + sectorsize - 1,
+                     &cached);
 }
 
 static void submit_data_read_repair(struct inode *inode,
@@ -1103,7 +1103,7 @@ static void endio_readpage_release_extent(struct processed_extent *processed,
         * Now we don't have range contiguous to the processed range, release
         * the processed range now.
         */
-       unlock_extent_atomic(tree, processed->start, processed->end, &cached);
+       unlock_extent(tree, processed->start, processed->end, &cached);
 
 update:
        /* Update processed to current range */