f2fs: avoid splitting bio when reading multiple pages
author Jianan Huang <huangjianan@xiaomi.com>
Mon, 30 Jun 2025 12:57:53 +0000 (20:57 +0800)
committer Jaegeuk Kim <jaegeuk@kernel.org>
Tue, 1 Jul 2025 16:22:07 +0000 (16:22 +0000)
When fewer pages are requested, nr_pages may be smaller than nr_cpages. Due
to the nr_vecs limit, the compressed pages are then split across multiple
bios and merged again at the block layer. In this case, nr_cpages should
be used to pre-allocate the bvecs.
To handle this case, align max_nr_pages to cluster_size, which should be
enough for all compressed pages.

Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
fs/f2fs/data.c

index 31e892842625928d5731a02670f7d137cb902a4a..40292e4ad3419496ce3b7ea47225e5be631518ef 100644 (file)
@@ -2303,7 +2303,7 @@ submit_and_realloc:
                }
 
                if (!bio) {
-                       bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+                       bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
                                        f2fs_ra_op_flags(rac),
                                        folio->index, for_write);
                        if (IS_ERR(bio)) {
@@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
        unsigned max_nr_pages = nr_pages;
        int ret = 0;
 
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+       if (f2fs_compressed_file(inode)) {
+               index = rac ? readahead_index(rac) : folio->index;
+               max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
+                               round_down(index, cc.cluster_size);
+       }
+#endif
+
        map.m_pblk = 0;
        map.m_lblk = 0;
        map.m_len = 0;