From: Sitsofe Wheeler
Date: Thu, 27 Apr 2017 06:22:26 +0000 (+0100)
Subject: iolog: remove random layout verification optimisation
X-Git-Tag: fio-3.0~13^2~1
X-Git-Url: https://git.kernel.dk/?p=fio.git;a=commitdiff_plain;h=8d4564e9884cbc6082b798cd828eb43da1bb35ff

iolog: remove random layout verification optimisation

Running the following fio jobs unexpectedly reports verification
failures:

rm /tmp/tmp.fio; ./fio --iodepth=1 \
 --verify=pattern --verify_fatal=1 --size=100M --bsrange=512-128k \
 --rw=randwrite --verify_backlog=128 --filename=/tmp/tmp.fio \
 --verify_pattern="%o" --name=spuriousmismatch1

rm /tmp/tmp.fio; ./fio --iodepth=1 \
 --verify=crc32c --verify_fatal=1 --size=100M --bs=4k \
 --rw=randwrite --verify_backlog=20 --filename=/tmp/tmp.fio \
 --percentage_random=50 --randseed=86 --name=spuriousmismatch2

In the case of the first job, using a bsrange whose start and end
differ can cause random write I/O to overlap an already written
region, making the original data unverifiable. For the second job,
when percentage_random is between 1 and 99 the same offset can be
generated multiple times, but only the last write to that offset
should be verified.

Rather than special casing the growing number of random jobs that
might generate overlaps while still having a randommap, and given that
preallocation during layout is the default where possible, just remove
the overwrite=0 optimisation, thus forcing all random jobs to be
checked for overlaps. It is still possible to force the old behaviour
by setting verifysort=0.

Fixes https://github.com/axboe/fio/issues/335 and
https://github.com/axboe/fio/issues/344.

Tested-by: Jeff Furlong
Signed-off-by: Sitsofe Wheeler
---

diff --git a/iolog.c b/iolog.c
index 18ae4369..b041eff4 100644
--- a/iolog.c
+++ b/iolog.c
@@ -227,20 +227,16 @@ void log_io_piece(struct thread_data *td, struct io_u *io_u)
 	}
 
 	/*
-	 * We don't need to sort the entries, if:
-	 *
-	 * Sequential writes, or
-	 * Random writes that lay out the file as it goes along
-	 *
-	 * For both these cases, just reading back data in the order we
-	 * wrote it out is the fastest.
+	 * We don't need to sort the entries if we only performed sequential
+	 * writes. In this case, just reading back data in the order we wrote
+	 * it out is faster but still safe.
 	 *
 	 * One exception is if we don't have a random map AND we are doing
 	 * verifies, in that case we need to check for duplicate blocks and
 	 * drop the old one, which we rely on the rb insert/lookup for
 	 * handling.
 	 */
-	if (((!td->o.verifysort) || !td_random(td) || !td->o.overwrite) &&
+	if (((!td->o.verifysort) || !td_random(td)) &&
 	    (file_randommap(td, ipo->file) || td->o.verify == VERIFY_NONE)) {
 		INIT_FLIST_HEAD(&ipo->list);
 		flist_add_tail(&ipo->list, &td->io_hist_list);
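
For illustration, the following is a minimal standalone sketch (not part of
fio; names and constants are invented for the example) of the overlap problem
described above: with variable block sizes, as produced by --bsrange, two
random write extents can cover the same bytes, and only the later write is
still verifiable, so the earlier logged entry must be detected and dropped,
which is what the rb-tree insert/lookup path in log_io_piece() handles.

/*
 * overlap_demo.c - standalone sketch, not fio code.
 *
 * Generate random write extents with variable block sizes and report
 * when a new extent overlaps an earlier one. In fio terms, the earlier
 * io_piece can no longer be verified and has to be dropped.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct extent {
	uint64_t off;
	uint64_t len;
};

static int overlaps(const struct extent *a, const struct extent *b)
{
	return a->off < b->off + b->len && b->off < a->off + a->len;
}

int main(void)
{
	const uint64_t file_size = 1024 * 1024;		/* 1 MiB "file" */
	const uint64_t bs_min = 512, bs_max = 128 * 1024;
	struct extent log[64];
	int logged = 0, i, j;

	srand(86);	/* fixed seed, in the spirit of --randseed */

	for (i = 0; i < 64; i++) {
		struct extent e;

		/* 512-aligned variable block size, like --bsrange=512-128k */
		e.len = bs_min +
			((uint64_t)rand() % ((bs_max - bs_min) / 512 + 1)) * 512;
		e.off = ((uint64_t)rand() % ((file_size - e.len) / 512 + 1)) * 512;

		for (j = 0; j < logged; j++) {
			if (overlaps(&e, &log[j]))
				printf("write %d [%llu,+%llu) overlaps write %d "
				       "[%llu,+%llu): old data unverifiable\n",
				       i, (unsigned long long)e.off,
				       (unsigned long long)e.len, j,
				       (unsigned long long)log[j].off,
				       (unsigned long long)log[j].len);
		}
		log[logged++] = e;
	}
	return 0;
}

Since any random verify job can end up in this situation, the change above
simply checks every random job for overlaps rather than enumerating the
option combinations that can produce them.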