randread Random reads
rw,readwrite Sequential mixed reads and writes
randrw Random mixed reads and writes
+ trimwrite Mixed trims and writes. Blocks will be
+ trimmed first, then written to.
For the mixed io types, the default is to split them 50/50.
For certain types of io the result may still be skewed a bit,
If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
IO and POSIX_FADV_RANDOM for random IO.
+fadvise_stream=int Notify the kernel what write stream ID to place these
+ writes under. Only supported on Linux. Note, this option
+ may change going forward.
+
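As a sketch, a job section using this option might look like the following (the stream ID value and sizes are illustrative, not from the fio documentation):

```ini
; Illustrative job fragment: place these writes under write stream ID 1.
; Linux-only; note the option may change going forward.
[stream-writer]
rw=write
bs=128k
size=1G
fadvise_stream=1
```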
size=int The total size of file io for this job. Fio will run until
this many bytes have been transferred, unless runtime is
limited by other options (such as 'runtime', for instance,
necessary environment variables to work with
hdfs/libhdfs properly.
+ mtd Read, write and erase an MTD character device
+ (e.g., /dev/mtd0). Discards are treated as
+ erases. Depending on the underlying device
+ type, the I/O may have to go in a certain
+ pattern, e.g., on NAND, writing sequentially
+ to erase blocks and discarding before
+ overwriting. The trimwrite mode works well
+ for this constraint.
+
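Putting the above together, a minimal job for an MTD character device might look like this sketch (the block size is an assumption and should match the device's erase geometry):

```ini
; Sketch: trim (erase) blocks first, then write them, on an MTD device.
[mtd-job]
ioengine=mtd
filename=/dev/mtd0
rw=trimwrite
bs=512k        ; assumed value; align to the device's erase block size
skip_bad=1     ; skip operations against known bad blocks
```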
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, e.g. ioengine=external:/tmp/foo.o
after fio has filled the queue of 16 requests, it will let
the depth drain down to 4 before starting to fill it again.
+io_submit_mode=str This option controls how fio submits the IO to
+ the IO engine. The default is 'inline', which means that the
+ fio job threads submit and reap IO directly. If set to
+ 'offload', the job threads will offload IO submission to a
+ dedicated pool of IO threads. This requires some coordination
+ and thus has a bit of extra overhead, especially for lower
+ queue depth IO where it can increase latencies. The benefit
+ is that fio can manage submission rates independently of
+ the device completion rates. This avoids skewed latency
+ reporting if IO gets backed up on the device side (the
+ coordinated omission problem).
+
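A hedged example of offloading submission to the dedicated IO thread pool; the engine, depth, and rate values here are illustrative:

```ini
; Sketch: submission handled by fio's IO thread pool rather than inline.
[offload-example]
ioengine=libaio
rw=randread
bs=4k
iodepth=32
io_submit_mode=offload   ; decouples submission rate from completion rate
rate_iops=10000          ; illustrative fixed submission rate
```

Decoupling submission from completion this way is what lets fio keep issuing at the intended rate even when the device falls behind, so reported latencies include the queueing delay instead of hiding it.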
direct=bool If value is true, use non-buffered io. This is usually
O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
On Windows the synchronous ioengines don't support direct io.
independent fio invocations. Unfortunately this also breaks
the strict time ordering between multiple device accesses.
+replay_align=int Force alignment of IO offsets and lengths in a trace
+ to this power of 2 value.
+
+replay_scale=int Scale sector offsets down by this factor when
+ replaying traces.
+
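For instance, a replay job might combine these two options with read_iolog; the trace filename below is hypothetical:

```ini
; Sketch: replay a captured trace with aligned, scaled-down offsets.
[replay-job]
read_iolog=device.trace   ; hypothetical trace file name
replay_align=4096         ; round IO offsets/lengths to a 4k power-of-2 boundary
replay_scale=10           ; divide sector offsets by 10 when replaying
```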
write_bw_log=str If given, write a bandwidth log of the jobs in this job
file. Can be used to store data of the bandwidth of the
jobs in their lifetime. The included fio_generate_plots
command line parameter. The files will be stored with a
.fz suffix.
+block_error_percentiles=bool If set, record errors in trim block-sized
+ units from writes and trims and output a histogram of
+ how many trims it took to get to errors, and what kind
+ of error was encountered.
+
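A sketch of a job that would populate the trim block error histogram; the target device name is an assumption:

```ini
; Sketch: mixed trims and writes with per-block error recording.
[trim-errors]
filename=/dev/nvme0n1     ; assumed target device
rw=trimwrite
bs=4k
block_error_percentiles=1
```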
lockmem=int Pin down the specified amount of memory with mlock(2). Can
potentially be used instead of removing memory or booting
with less memory to simulate a smaller amount of memory.
completion latencies.
percentile_list=float_list Overwrite the default list of percentiles
- for completion latencies. Each number is a floating
- number in the range (0,100], and the maximum length of
- the list is 20. Use ':' to separate the numbers, and
- list the numbers in ascending order. For example,
- --percentile_list=99.5:99.9 will cause fio to report
- the values of completion latency below which 99.5% and
- 99.9% of the observed latencies fell, respectively.
+ for completion latencies and the block error histogram.
+ Each number is a floating number in the range (0,100],
+ and the maximum length of the list is 20. Use ':'
+ to separate the numbers, and list the numbers in ascending
+ order. For example, --percentile_list=99.5:99.9 will cause
+ fio to report the values of completion latency below which
+ 99.5% and 99.9% of the observed latencies fell, respectively.
clocksource=str Use the given clocksource as the base of timing. The
supported options are:
1 : allocate space immediately inside defragment event,
and free right after event
+[mtd] skip_bad=bool Skip operations against known bad blocks.
6.0 Interpreting the output
bytes. The action can be one of these:
wait Wait for 'offset' microseconds. Everything below 100 is discarded.
+ The time is relative to the previous wait statement.
read Read 'length' bytes beginning from 'offset'
write Write 'length' bytes beginning from 'offset'
sync fsync() the file
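As a sketch, a version 2 iolog file exercising these actions could look like the following; the device name, offsets, and lengths are illustrative:

```text
fio version 2 iolog
/dev/sda add
/dev/sda open
/dev/sda wait 1000
/dev/sda read 0 4096
/dev/sda write 4096 4096
/dev/sda sync 0 0
/dev/sda close
```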