From: Jianan Huang
Date: Mon, 30 Jun 2025 12:57:53 +0000 (+0800)
Subject: f2fs: avoid splitting bio when reading multiple pages
X-Git-Tag: io_uring-6.17-20250815~47^2~89
X-Git-Url: https://git.kernel.dk/?a=commitdiff_plain;h=185f203a6991f7dd7f8070d6638415215da35d7e;p=linux-block.git

f2fs: avoid splitting bio when reading multiple pages

When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
to the nr_vecs limit, the compressed pages will be split into multiple
bios and then merged at the block level. In this case, nr_cpages should
be used to pre-allocate bvecs. To handle this case, align max_nr_pages
to cluster_size, which should be enough for all compressed pages.

Signed-off-by: Jianan Huang
Signed-off-by: Sheng Yong
Reviewed-by: Chao Yu
Signed-off-by: Jaegeuk Kim
---

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 31e892842625..40292e4ad341 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2303,7 +2303,7 @@ submit_and_realloc:
 		}
 
 		if (!bio) {
-			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
 					f2fs_ra_op_flags(rac),
 					folio->index, for_write);
 			if (IS_ERR(bio)) {
@@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
 	unsigned max_nr_pages = nr_pages;
 	int ret = 0;
 
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	if (f2fs_compressed_file(inode)) {
+		index = rac ? readahead_index(rac) : folio->index;
+		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
+				round_down(index, cc.cluster_size);
+	}
+#endif
+
 	map.m_pblk = 0;
 	map.m_lblk = 0;
 	map.m_len = 0;