write Sequential writes
randwrite Random writes
randread Random reads
- rw Sequential mixed reads and writes
+ rw,readwrite Sequential mixed reads and writes
randrw Random mixed reads and writes
For the mixed io types, the default is to split them 50/50.
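+
+	For instance, a minimal sketch of a job using that default
+	50/50 split (the file name is only illustrative):
+
+	; -- start job file --
+	[mixed]
+	filename=/tmp/fio.mixed
+	rw=randrw
+	bs=4k
+	size=128m
+	; -- end job file --
+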
block compression attempts, but it will stop naive dedupe of
blocks. Default: true.
+buffer_compress_percentage=int If this is set, then fio will attempt to
+	provide IO buffer content (on WRITEs) that compresses to
+	the specified level. Fio does this by providing a mix of
+	random data and zeroes. Note that this is per block size
+	unit; for a file/disk wide compression level that matches
+	this setting, you'll also want to set refill_buffers.
+
+buffer_compress_chunk=int	See buffer_compress_percentage. This
+	setting allows fio to manage how big the ranges of random
+	data and zeroed data are. Without this set, fio will
+	provide buffer_compress_percentage of blocksize random
+	data, followed by zeroes for the remainder. With this set
+	to some chunk size smaller than the block size, fio can
+	alternate random and zeroed data throughout the IO
+	buffer.
+
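+	For example, a sketch (file name and sizes only illustrative)
+	that aims for data that compresses to roughly 50%, alternating
+	4k ranges of random and zeroed data within each 64k block:
+
+	; -- start job file --
+	[compressible-writes]
+	filename=/tmp/fio.compress
+	rw=write
+	bs=64k
+	size=256m
+	refill_buffers
+	buffer_compress_percentage=50
+	buffer_compress_chunk=4k
+	; -- end job file --
+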
nrfiles=int Number of files to use for this job. Defaults to 1.
openfiles=int Number of files to keep open at the same time. Defaults to
	the same as nrfiles, can be smaller if limited.
direct=bool If value is true, use non-buffered io. This is usually
O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
+ On Windows the synchronous ioengines don't support direct io.
buffered=bool If value is true, use buffered io. This is the opposite
of the 'direct' option. Defaults to true.
the given offset will not be touched. This effectively
caps the file size at real_size - offset.
+offset_increment=int	If this is provided, then the real offset becomes
+	offset + offset_increment * thread_number, where the
+ thread number is a counter that starts at 0 and is incremented
+ for each job. This option is useful if there are several jobs
+ which are intended to operate on a file in parallel in disjoint
+ segments, with even spacing between the starting points.
+
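+	For example, a sketch (device name only illustrative) where
+	four jobs write disjoint 1G segments, starting at 0, 1g, 2g
+	and 3g respectively:
+
+	; -- start job file --
+	[disjoint-segments]
+	filename=/dev/sdx
+	rw=write
+	bs=1M
+	size=1g
+	offset_increment=1g
+	numjobs=4
+	; -- end job file --
+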
fsync=int If writing to a file, issue a sync of the dirty data
for every number of blocks given. For example, if you give
32 as a parameter, fio will sync the file for every 32
	writes issued.
fdatasync=int Like fsync= but uses fdatasync() to only sync data and not
metadata blocks.
- In FreeBSD there is no fdatasync(), this falls back to
+	In FreeBSD and Windows there is no fdatasync(); this falls back to
using fsync()
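+
+	As a short sketch (file name only illustrative), the following
+	syncs dirty data every 32 writes; fdatasync=32 would do the
+	same via fdatasync(), skipping the metadata:
+
+	; -- start job file --
+	[periodic-sync]
+	filename=/tmp/fio.sync
+	rw=write
+	bs=4k
+	size=64m
+	fsync=32
+	; -- end job file --
+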
sync_file_range=str:val Use sync_file_range() for every 'val' number of
	write operations.
ioscheduler=str Attempt to switch the device hosting the file to the specified
io scheduler before running.
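+
+	For example, a sketch (Linux only; the device name is
+	illustrative, and 'noop' must be one of the schedulers the
+	kernel offers for that device):
+
+	; -- start job file --
+	[sched-switch]
+	filename=/dev/sdx
+	direct=1
+	rw=randread
+	bs=4k
+	runtime=30
+	ioscheduler=noop
+	; -- end job file --
+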
-cpuload=int If the job is a CPU cycle eater, attempt to use the specified
- percentage of CPU cycles.
-
-cpuchunks=int If the job is a CPU cycle eater, split the load into
- cycles of the given time. In microseconds.
-
disk_util=bool Generate disk utilization statistics, if the platform
supports it. Defaults to on.
enabled when polling for a minimum of 0 events (eg when
iodepth_batch_complete=0).
+[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
+
+[cpu] cpuchunks=int Split the load into cycles of the given time. In
+ microseconds.
+
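+	For example, a sketch of a cycle eater burning roughly half a
+	CPU in 50 msec slices (assuming the cpuio ioengine):
+
+	; -- start job file --
+	[cpu-eater]
+	ioengine=cpuio
+	cpuload=50
+	cpuchunks=50000
+	time_based
+	runtime=30
+	; -- end job file --
+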
[netsplice] hostname=str
[net] hostname=str The host name or IP address to use for TCP or UDP based IO.
If the job is a TCP listener or UDP reader, the hostname is not
	used and must be omitted.
F Running, currently waiting for fsync()
V Running, doing verification of written data.
E Thread exited, not reaped by main thread yet.
_ Thread reaped.
+X Thread reaped, exited with an error.
+K Thread reaped, exited due to signal.
The other values are fairly self explanatory - number of threads
currently running and doing io, rate of io since last check (read speed
listed first, then write speed), and the estimated completion percentage
and time for the running group. It's impossible to estimate runtime of
-the following groups (if any).
+the following groups (if any). Note that the string is displayed in order,
+so it's possible to tell which of the jobs are currently doing what. The
+first character is the first job defined in the job file, and so forth.
When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
latency, since queue/complete is one operation there. This
value can be in milliseconds or microseconds, fio will choose
the most appropriate base and print that. In the example
- above, milliseconds is the best scale.
+ above, milliseconds is the best scale. Note: in --minimal mode
+ latencies are always expressed in microseconds.
clat= Completion latency. Same names as slat, this denotes the
time from submission to completion of the io pieces. For
sync io, clat will usually be equal (or very close) to 0,
terse version, fio version, jobname, groupid, error
READ status:
Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
- Submission latency: min, max, mean, deviation
- Completion latency: min, max, mean, deviation
+ Submission latency: min, max, mean, deviation (usec)
+ Completion latency: min, max, mean, deviation (usec)
Completion latency percentiles: 20 fields (see below)
- Total latency: min, max, mean, deviation
- Bw: min, max, aggregate percentage of total, mean, deviation
+ Total latency: min, max, mean, deviation (usec)
+ Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
WRITE status:
Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
- Submission latency: min, max, mean, deviation
- Completion latency: min, max, mean, deviation
+ Submission latency: min, max, mean, deviation (usec)
+ Completion latency: min, max, mean, deviation (usec)
Completion latency percentiles: 20 fields (see below)
- Total latency: min, max, mean, deviation
- Bw: min, max, aggregate percentage of total, mean, deviation
+ Total latency: min, max, mean, deviation (usec)
+ Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
CPU usage: user, system, context switches, major faults, minor faults
IO depths: <=1, 2, 4, 8, 16, 32, >=64
IO latencies microseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
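+
+	As a sketch, counting fields in the order listed above (read
+	bandwidth is field 7, write bandwidth field 48; the job
+	parameters are illustrative), the two bandwidths can be pulled
+	out of a minimal run with e.g.:
+
+	$ fio --minimal --name=test --filename=/tmp/fio.test --size=16m \
+		--rw=randrw | awk -F';' '{print "read KB/s:", $7, "write KB/s:", $48}'
+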