xfs: cap the length of deduplication requests
author		Darrick J. Wong <darrick.wong@oracle.com>
		Tue, 17 Apr 2018 06:07:36 +0000 (23:07 -0700)
committer	Darrick J. Wong <darrick.wong@oracle.com>
		Wed, 2 May 2018 16:21:33 +0000 (09:21 -0700)

Since deduplication potentially has to read in all the pages in both
files in order to compare the contents, cap the deduplication request
length at MAX_RW_COUNT/2 (roughly 1GB) so that we have /some/ upper bound
on the request length and can't just lock up the kernel forever.  Found
by running generic/304 after commit 1ddae54555b62 ("common/rc: add
missing 'local' keywords").
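
For reference, the clamp is just "halve MAX_RW_COUNT, then round the result
down to a block boundary".  A minimal user-space sketch of the same
arithmetic, assuming the kernel's definition of MAX_RW_COUNT
(INT_MAX & PAGE_MASK) and an illustrative 4096-byte page and block size:

	#include <limits.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative values; the kernel defines MAX_RW_COUNT as
	 * INT_MAX & PAGE_MASK, and the block size would come from
	 * i_blocksize() on the source inode. */
	#define PAGE_SIZE	4096UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))
	#define MAX_RW_COUNT	((unsigned long)INT_MAX & PAGE_MASK)

	int main(void)
	{
		uint64_t blocksize = 4096;	/* stand-in for i_blocksize(srci) */

		/* Halve the I/O limit, then mask off the sub-block remainder. */
		uint64_t max_dedupe = (MAX_RW_COUNT >> 1) & ~(blocksize - 1);

		printf("MAX_RW_COUNT = %lu bytes (~2GiB)\n", MAX_RW_COUNT);
		printf("max_dedupe   = %llu bytes (~1GiB)\n",
		       (unsigned long long)max_dedupe);
		return 0;
	}

Since a dedupe request reads both the source and destination ranges, a
request capped at MAX_RW_COUNT/2 does at most MAX_RW_COUNT of read I/O.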

Reported-by: matorola@gmail.com
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index eed073cc47783411b09d9efcc8e23f6d272bb666..e70fb8cceceaa5d2333573e49460beba75629815 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -880,8 +880,18 @@ xfs_file_dedupe_range(
        struct file     *dst_file,
        u64             dst_loff)
 {
+       struct inode    *srci = file_inode(src_file);
+       u64             max_dedupe;
        int             error;
 
+       /*
+        * Since we have to read all these pages in to compare them, cut
+        * it off at MAX_RW_COUNT/2 rounded down to the nearest block.
+        * That means we won't do more than MAX_RW_COUNT IO per request.
+        */
+       max_dedupe = (MAX_RW_COUNT >> 1) & ~(i_blocksize(srci) - 1);
+       if (len > max_dedupe)
+               len = max_dedupe;
        error = xfs_reflink_remap_range(src_file, loff, dst_file, dst_loff,
                                     len, true);
        if (error)
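
A side effect visible to user space: since xfs_file_dedupe_range() returns
the clamped length on success, FIDEDUPERANGE callers asking for more than
~1GiB now get a short bytes_deduped back and should loop.  A hedged sketch
of such a caller (dedupe_all and its error handling are illustrative, not
part of this patch):

	#include <fcntl.h>
	#include <linux/fs.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Dedupe [0, len) of src_fd into dst_fd.  Loops because the
	 * kernel may process less than requested per call -- e.g. the
	 * ~1GiB cap added above -- and reports per-call progress via
	 * bytes_deduped. */
	static int dedupe_all(int src_fd, int dst_fd, uint64_t len)
	{
		/* FIDEDUPERANGE takes a header plus one info slot per
		 * destination. */
		char buf[sizeof(struct file_dedupe_range) +
			 sizeof(struct file_dedupe_range_info)];
		struct file_dedupe_range *arg = (void *)buf;
		uint64_t off = 0;

		while (off < len) {
			memset(buf, 0, sizeof(buf));
			arg->src_offset = off;
			arg->src_length = len - off;
			arg->dest_count = 1;
			arg->info[0].dest_fd = dst_fd;
			arg->info[0].dest_offset = off;

			if (ioctl(src_fd, FIDEDUPERANGE, arg) < 0)
				return -1;	/* errno set by the ioctl */
			if (arg->info[0].status != FILE_DEDUPE_RANGE_SAME)
				return -1;	/* differs, or negative errno */
			if (arg->info[0].bytes_deduped == 0)
				break;		/* no forward progress */
			off += arg->info[0].bytes_deduped;
		}
		return 0;
	}

Before this patch a single call could hold both files locked for the whole
range; after it, each iteration of such a loop is bounded.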