files between threads in a job or several jobs, specify
a filename for each of them to override the default. If
the ioengine used is 'net', the filename is the host, port,
- and protocol to use in the format of =host/port/protocol.
+ and protocol to use in the format of =host,port,protocol.
See ioengine=net for more. If the ioengine is file based, you
can specify a number of files by separating the names with a
':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
isn't specified, naturally. If data verification is enabled,
refill_buffers is also automatically enabled.
+scramble_buffers=bool If refill_buffers is too costly and the target is
+ using data deduplication, then setting this option will
+ slightly modify the IO buffer contents to defeat normal
+ de-dupe attempts. This is not enough to defeat more clever
+ block compression attempts, but it will stop naive dedupe of
+ blocks. Default: true.
+
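For example, a job that skips full buffer refills but still scrambles contents
might look like this (a sketch; the job name and device path are placeholders):

```
; Hypothetical job: avoid the cost of refill_buffers but still
; defeat naive block dedupe on the target.
[scramble-example]
filename=/dev/sdX	; placeholder device
rw=write
bs=4k
refill_buffers=0
scramble_buffers=1
```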
nrfiles=int Number of files to use for this job. Defaults to 1.
openfiles=int Number of files to keep open at the same time. Defaults to
bwavgtime=int Average the calculated bandwidth over the given time. Value
is specified in milliseconds.
+iopsavgtime=int Average the calculated IOPS over the given time. Value
+ is specified in milliseconds.
+
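Both averaging windows take milliseconds, so a job that smooths its reported
rates over half-second windows might look like this (a sketch; names are
arbitrary):

```
[avg-window-example]
rw=randread
bs=4k
bwavgtime=500	; average bandwidth over 500 msec windows
iopsavgtime=500	; average IOPS over the same 500 msec window
```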
create_serialize=bool If true, serialize file creation for the jobs.
This may be handy to avoid interleaving of data
files, which may greatly depend on the filesystem
verify_dump=bool If set, dump the contents of both the original data
block and the data block we read off disk to files. This
allows later analysis to inspect just what kind of data
- corruption occurred. On by default.
+ corruption occurred. Off by default.
verify_async=int Fio will normally verify IO inline from the submitting
thread. This option takes an integer describing how many
and foo_lat.log. This helps fio_generate_plot find the logs
automatically.
+write_iops_log=str If given, write an IOPS log of the jobs in this job
+		file. See write_bw_log.
+
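Both log options take a base name from which fio derives the actual log file
names, so a job logging bandwidth and IOPS together might look like this (a
sketch; 'mylog' is an arbitrary base name):

```
[log-example]
rw=randwrite
bs=4k
write_bw_log=mylog
write_iops_log=mylog
```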
lockmem=int Pin down the specified amount of memory with mlock(2). Can
potentially be used instead of removing memory or booting
with less memory to simulate a smaller amount of memory.
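As a sketch, pinning memory to mimic a smaller machine (the size and job name
are arbitrary):

```
[lockmem-example]
lockmem=1g	; pin 1 GiB with mlock(2), so the system effectively
		; has that much less memory available
rw=read
bs=128k
```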
Split up, the format is as follows:
- version, jobname, groupid, error
+ terse version, fio version, jobname, groupid, error
READ status:
- Total IO (KB), bandwidth (KB/sec), runtime (msec)
+ Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
+ Completion latency percentiles: 20 fields (see below)
Total latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation
WRITE status:
- Total IO (KB), bandwidth (KB/sec), runtime (msec)
+ Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
+ Completion latency percentiles: 20 fields (see below)
Total latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation
CPU usage: user, system, context switches, major faults, minor faults
IO depths: <=1, 2, 4, 8, 16, 32, >=64
IO latencies microseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
IO latencies milliseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, 2000, >=2000
+		Disk utilization: disk name, read ios, write ios,
+		read merges, write merges,
+		read ticks, write ticks,
+		time spent in queue, disk utilization percentage
Additional Info (dependent on continue_on_error, default off): total # errors, first error code
Additional Info (dependent on description being set): Text description
+Completion latency percentiles can be a grouping of up to 20 sets, so
+for the terse output fio writes all of them. Each field will look like this:
+
+ 1.00%=6112
+
+which is the Xth percentile, and the usec latency associated with it.
+
+For disk utilization, all disks used by fio are shown. So for each disk
+there will be a disk utilization section.
+
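Since the terse fields are emitted in the fixed order listed above, separated
by semicolons, standard text tools can pull them apart. A sketch (the sample
line is fabricated for illustration, and only the first five fields are shown):

```shell
# Extract jobname, groupid and error from a terse status line.
# Fields 1-5: terse version, fio version, jobname, groupid, error
line="3;fio-x.y;myjob;0;0"   # fabricated sample line
echo "$line" | awk -F';' '{print "jobname=" $3, "groupid=" $4, "error=" $5}'
# prints: jobname=myjob groupid=0 error=0
```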
8.0 Trace file format
---------------------