fs/writeback: bail out only if there are no more inodes for IO and we have queued once
author    Kemeng Shi <shikemeng@huaweicloud.com>    Wed, 28 Feb 2024 09:19:54 +0000 (17:19 +0800)
committer Christian Brauner <brauner@kernel.org>    Fri, 5 Apr 2024 13:52:17 +0000 (15:52 +0200)
If the io list still holds inodes left over from the last wb_writeback, we
skip queue_io() and may bail out early even though the dirty list contains
inodes that should be written back. Only bail out once we have queued inodes
for IO at least once, to avoid missing dirtied inodes.

This is from code reading...

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Link: https://lore.kernel.org/r/20240228091958.288260-3-shikemeng@huaweicloud.com
Reviewed-by: Jan Kara <jack@suse.cz>
[brauner@kernel.org: fold in memory corruption fix from Jan in [1]]
Link: https://lore.kernel.org/r/20240405132346.bid7gibby3lxxhez@quack3
Signed-off-by: Christian Brauner <brauner@kernel.org>
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index fe634f00f4d9bc7fe01bde79bb89413253ef5f51..f864c7d6ef921799595c10c0251f359ad0ebd94b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2076,6 +2076,7 @@ static long wb_writeback(struct bdi_writeback *wb,
        struct inode *inode;
        long progress;
        struct blk_plug plug;
+       bool queued = false;
 
        blk_start_plug(&plug);
        for (;;) {
@@ -2118,8 +2119,10 @@ static long wb_writeback(struct bdi_writeback *wb,
                        dirtied_before = jiffies;
 
                trace_writeback_start(wb, work);
-               if (list_empty(&wb->b_io))
+               if (list_empty(&wb->b_io)) {
                        queue_io(wb, work, dirtied_before);
+                       queued = true;
+               }
                if (work->sb)
                        progress = writeback_sb_inodes(work->sb, wb, work);
                else
@@ -2134,7 +2137,7 @@ static long wb_writeback(struct bdi_writeback *wb,
                 * mean the overall work is done. So we keep looping as long
                 * as made some progress on cleaning pages or inodes.
                 */
-               if (progress) {
+               if (progress || !queued) {
                        spin_unlock(&wb->list_lock);
                        continue;
                }
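
For illustration, below is a minimal standalone userspace sketch of the patched
retry logic. This is a hypothetical model, not kernel code: the counters b_io
and b_dirty stand in for the wb->b_io and wb->b_dirty lists, write_chunk()
stands in for writeback_sb_inodes()/__writeback_inodes_wb(), and the queue step
stands in for queue_io(); locking, b_more_io handling, and the work->nr_pages
accounting are all omitted.

    /* Sketch of the patched wb_writeback() bail-out decision (model only). */
    #include <stdbool.h>
    #include <stdio.h>

    static int b_io = 1;    /* inode left on b_io by the previous wb_writeback */
    static int b_dirty = 3; /* freshly dirtied inodes, not yet queued for IO */
    static bool leftover_is_clean = true; /* someone else wrote it meanwhile */

    /* Process one inode from b_io; return the progress made (0 or 1). */
    static long write_chunk(void)
    {
            if (b_io == 0)
                    return 0;
            b_io--;
            if (leftover_is_clean) {
                    leftover_is_clean = false;
                    return 0;   /* dropped an already-clean inode: no progress */
            }
            return 1;           /* actually wrote one inode back */
    }

    int main(void)
    {
            bool queued = false;
            int loops = 0;

            for (;;) {
                    if (b_io == 0 && b_dirty > 0) {
                            /* models queue_io(): move dirty inodes onto b_io */
                            b_io = b_dirty;
                            b_dirty = 0;
                            queued = true;
                    }

                    long progress = write_chunk();
                    loops++;

                    /*
                     * Patched condition: keep looping while we make progress
                     * OR while we have not yet queued the dirty list even
                     * once.  The old "if (progress)" check would bail on the
                     * first pass here (stale b_io inode, zero progress),
                     * leaving three dirty inodes unwritten.
                     */
                    if (progress || !queued)
                            continue;
                    break;
            }

            printf("loops=%d b_io=%d b_dirty=%d\n", loops, b_io, b_dirty);
            return 0;
    }

In this model the first pass makes no progress (the leftover b_io inode was
cleaned elsewhere) and queued is still false, so the patched loop continues,
queues the three dirty inodes, and writes them out; the pre-patch condition
would have bailed out after the first pass with b_dirty still populated.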