zero_buffers If this option is given, fio will init the IO buffers to
all zeroes. The default is to fill them with random data.
+refill_buffers	If this option is given, fio will refill the IO buffers
+		on every submit. The default is to fill them only at init
+		time and reuse that data. Only makes sense if zero_buffers
+		isn't specified, naturally. If data verification is enabled,
+		refill_buffers is also automatically enabled.
+
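A minimal job-file sketch (job names and sizes are illustrative, not from the source) contrasting the two buffer-fill modes:

```ini
; Job 1: buffers are zero-filled at init (compresses trivially).
[zeroed-writes]
rw=write
size=128m
zero_buffers

; Job 2: random buffer contents are regenerated on every submit,
; defeating dedup/compression on the target device.
[refilled-writes]
rw=write
size=128m
refill_buffers
```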
nrfiles=int Number of files to use for this job. Defaults to 1.
openfiles=int	Number of files to keep open at the same time. Defaults to
		the same as nrfiles, can be set smaller to limit the number
		of simultaneous opens.
posixaio glibc posix asynchronous io.
+ solarisaio Solaris native asynchronous io.
+
mmap File is memory mapped and data copied
to/from using memcpy(3).
job, can be overridden with a larger value for higher
concurrency.
+iodepth_batch_submit=int
iodepth_batch=int	This defines how many pieces of IO to submit at once.
		It defaults to 1, which means that we submit each IO
		as soon as it is available, but it can be raised to
		submit bigger batches of IO at a time.
+iodepth_batch_complete=int This defines how many pieces of IO to retrieve
+ at once. It defaults to 1 which means that we'll ask
+ for a minimum of 1 IO in the retrieval process from
+ the kernel. The IO retrieval will go on until we
+ hit the limit set by iodepth_low. If this variable is
+ set to 0, then fio will always check for completed
+ events before queuing more IO. This helps reduce
+ IO latency, at the cost of more retrieval system calls.
+
iodepth_low=int The low water mark indicating when to start filling
the queue again. Defaults to the same as iodepth, meaning
that fio will attempt to keep the queue full at all times.
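Taken together, these settings control batching on both the submit and reap side. A sketch of how they might be combined (device name, size, and values are illustrative, assuming the libaio engine is available):

```ini
; Keep up to 16 IOs in flight; submit in batches of 8; reap at least
; 1 completion per retrieval call; refill once the queue drains to 4.
[batched-randread]
ioengine=libaio
direct=1
rw=randread
filename=/dev/sdX
iodepth=16
iodepth_batch_submit=8
iodepth_batch_complete=1
iodepth_low=4
```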
not sync the file. The exception is the sg io engine, which
synchronizes the disk cache anyway.
-overwrite=bool If writing to a file, setup the file first and do overwrites.
+overwrite=bool If true, writes to a file will always overwrite existing
+ data. If the file doesn't already exist, it will be
+ created before the write phase begins. If the file exists
+ and is large enough for the specified write phase, nothing
+ will be done.
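For example (file name and size are illustrative, not from the source), a job that lays the file out first and then overwrites it in place might look like:

```ini
; The file is created and sized before the write phase, then
; randomly overwritten in place.
[overwrite-test]
rw=randwrite
size=1g
filename=overwrite.dat
overwrite=1
```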
end_fsync=bool If true, fsync file contents when the job exits.
bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
+ submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
+ complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/32768, short=0/0
lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%,
lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0%
		16= entry covers depths from that value up to, but not
		including, the next entry. In other words, it covers the
		range from 16 to 31.
+IO submit=	How many pieces of IO were submitted in a single submit
+		call. Each entry denotes that amount and below, down to
+		the previous entry - e.g., 8=100% means that we submitted
+		anywhere between 5 and 8 IOs per submit call.
+IO complete= Like the above submit number, but for completions instead.
IO issued= The number of read/write requests issued, and how many
of them were short.
IO latencies= The distribution of IO completion latencies. This is the