7. Terse output
8. Trace file format
9. CPU idleness profiling
+10. Verification and triggers
+11. Log File Formats
+
1.0 Overview and history
------------------------
-------------------------
fio also supports environment variable expansion in job files. Any
-substring of the form "${VARNAME}" as part of an option value (in other
+sub-string of the form "${VARNAME}" as part of an option value (in other
words, on the right of the `='), will be expanded to the value of the
environment variable called VARNAME. If no such environment variable
is defined, or VARNAME is the empty string, the empty string will be
special purpose of also signaling the start of a new
job.
+wait_for=str Specifies the name of the already defined job to wait
+ for. Only a single waitee name may be specified. If set, the job
+ won't be started until all workers of the waitee job are done.
+
+ Wait_for operates on a job-name basis, so there are a few
+ limitations. First, the waitee must be defined prior to the
+ waiter job (meaning no forward references). Second, if a job
+ is being referenced as a waitee, it must have a unique name
+ (no duplicate waitees).
+
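+ As an illustration (job names and option values here are
+ made up), a verify job can be held back until a write job
+ has finished:
+
+ [writer]
+ rw=write
+ verify=md5
+
+ [verifier]
+ wait_for=writer
+ rw=read
+ verify=md5
+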
description=str Text description of the job. Doesn't do anything except
dump this text description when this job is run. It's
not parsed.
default of $jobname.$jobnum.$filenum will be used if
no other format specifier is given.
+unique_filename=bool To avoid collisions between networked clients, fio
+ defaults to prefixing any generated filenames (with a directory
+ specified) with the source of the client connecting. To disable
+ this behavior, set this option to 0.
+
opendir=str Tell fio to recursively add any file it can find in this
directory and down the file system tree.
the other options related to buffer contents. The setting can
be any pattern of bytes, and can be prefixed with 0x for hex
values. It may also be a string, where the string must then
- be wrapped with "".
+ be wrapped with "", e.g.:
+
+ buffer_pattern="abcd"
+ or
+ buffer_pattern=-12
+ or
+ buffer_pattern=0xdeadface
+
+ You can also combine everything together in any order:
+ buffer_pattern=0xdeadface"abcd"-12
dedupe_percentage=int If set, fio will generate this percentage of
identical buffers when writing. These buffers will be
the next. Multiple files can still be
open depending on 'openfiles'.
- The string can have a number appended, indicating how
- often to switch to a new file. So if option random:4 is
- given, fio will switch to a new random file after 4 ios
- have been issued.
+ zipf Use a zipfian distribution to decide what file
+ to access.
+
+ pareto Use a pareto distribution to decide what file
+ to access.
+
+ gauss Use a gaussian (normal) distribution to decide
+ what file to access.
+
+ For random, roundrobin, and sequential, a postfix can be
+ appended to tell fio how many I/Os to issue before switching
+ to a new file. For example, specifying
+ 'file_service_type=random:8' would cause fio to issue 8 I/Os
+ before selecting a new file at random. For the non-uniform
+ distributions, a floating point postfix can be given to
+ influence how the distribution is skewed. See
+ 'random_distribution' for a description of how that would work.
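+ For example, to use a zipf distribution with a theta of 1.2
+ (an illustrative value) when picking which file to access:
+
+ file_service_type=zipf:1.2
+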
ioengine=str Defines how the job issues io to the file. The following
types are defined:
defines engine specific options.
libhdfs Read and write through Hadoop (HDFS).
- The 'filename' option is used to specify host,
- port of the hdfs name-node to connect. This
- engine interprets offsets a little
+ This engine interprets offsets a little
differently. In HDFS, files once created
cannot be modified. So random writes are not
possible. To imitate this, libhdfs engine
- expects bunch of small files to be created
- over HDFS, and engine will randomly pick a
- file out of those files based on the offset
- generated by fio backend. (see the example
- job file to create such files, use rw=write
- option). Please note, you might want to set
- necessary environment variables to work with
- hdfs/libhdfs properly.
+ creates a bunch of small files, and the engine
+ will pick a file out of those files based on
+ the offset generated by the fio backend. Each
+ job uses its own connection to HDFS.
mtd Read, write and erase an MTD character device
(e.g., /dev/mtd0). Discards are treated as
overwriting. The writetrim mode works well
for this constraint.
+ pmemblk Read and write through the NVML libpmemblk
+ interface.
+
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, eg ioengine=external:/tmp/foo.o
iodepth_batch=int This defines how many pieces of IO to submit at once.
It defaults to 1 which means that we submit each IO
as soon as it is available, but can be raised to submit
- bigger batches of IO at the time.
+ bigger batches of IO at a time. If it is set to 0 the iodepth
+ value will be used.
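+ For example (values chosen for illustration only), the
+ following submits IO in batches of 8 against a queue depth
+ of 32:
+
+ iodepth=32
+ iodepth_batch=8
+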
+iodepth_batch_complete_min=int
iodepth_batch_complete=int This defines how many pieces of IO to retrieve
at once. It defaults to 1 which means that we'll ask
for a minimum of 1 IO in the retrieval process from
events before queuing more IO. This helps reduce
IO latency, at the cost of more retrieval system calls.
+iodepth_batch_complete_max=int This defines the maximum number of pieces
+ of IO to retrieve at once. This variable should be used along
+ with the iodepth_batch_complete_min=int variable, specifying
+ the range of min and max amount of IO which should be
+ retrieved. By default it is equal to the
+ iodepth_batch_complete_min value.
+
+ Example #1:
+
+ iodepth_batch_complete_min=1
+ iodepth_batch_complete_max=<iodepth>
+
+ which means that we will retrieve at least 1 IO and up to the
+ whole submitted queue depth. If no IO has been completed
+ yet, we will wait.
+
+ Example #2:
+
+ iodepth_batch_complete_min=0
+ iodepth_batch_complete_max=<iodepth>
+
+ which means that we can retrieve up to the whole submitted
+ queue depth, but if no IO has been completed yet, we will
+ NOT wait and immediately exit the system call. In this example
+ we simply do polling.
+
iodepth_low=int The low water mark indicating when to start filling
the queue again. Defaults to the same as iodepth, meaning
that fio will attempt to keep the queue full at all times.
fdatasync=int Like fsync= but uses fdatasync() to only sync data and not
metadata blocks.
- In FreeBSD and Windows there is no fdatasync(), this falls back to
- using fsync()
+ In FreeBSD and Windows there is no fdatasync(), this falls back
+ to using fsync()
sync_file_range=str:val Use sync_file_range() for every 'val' number of
write operations. Fio will track range of writes that
random Uniform random distribution
zipf Zipf distribution
pareto Pareto distribution
+ gauss Normal (gaussian) distribution
+ zoned Zoned random distribution
When using a zipf or pareto distribution, an input value
is also needed to define the access pattern. For zipf, this
what the given input values will yield in terms of hit rates.
If you wanted to use zipf with a theta of 1.2, you would use
random_distribution=zipf:1.2 as the option. If a non-uniform
- model is used, fio will disable use of the random map.
+ model is used, fio will disable use of the random map. For
+ the gauss distribution, a normal deviation is supplied as
+ a value between 0 and 100.
+
+ For a zoned distribution, fio supports specifying percentages
+ of IO access that should fall within what range of the file or
+ device. For example, given a criteria of:
+
+ 60% of accesses should be to the first 10%
+ 30% of accesses should be to the next 20%
+ 8% of accesses should be to the next 30%
+ 2% of accesses should be to the next 40%
+
+ we can define that through zoning of the random accesses. For
+ the above example, the user would do:
+
+ random_distribution=zoned:60/10:30/20:8/30:2/40
+
+ similarly to how bssplit works for setting ranges and
+ percentages of block sizes. Like bssplit, it's possible to
+ specify separate zones for reads, writes, and trims. If just
+ one set is given, it'll apply to all of them.
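+ As a sketch (the split values here are illustrative), reads
+ and writes can be zoned separately using the same bssplit-like
+ comma separation:
+
+ random_distribution=zoned:60/10:40/90,zoned:50/50:50/50
+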
percentage_random=int For a random workload, set how big a percentage should
be random. This defaults to 100%, in which case the workload
tausworthe Strong 2^88 cycle random number generator
lfsr Linear feedback shift register generator
+ tausworthe64 Strong 64-bit 2^258 cycle random number
+ generator
Tausworthe is a strong random number generator, but it
requires tracking on the side if we want to ensure that
typically good enough. LFSR only works with single
block sizes, not with workloads that use multiple block
sizes. If used with such a workload, fio may read or write
- some blocks multiple times.
+ some blocks multiple times. The default value is tausworthe,
+ unless the required space exceeds 2^32 blocks. If it does,
+ then tausworthe64 is selected automatically.
nice=int Run the job with the given nice value. See man nice(2).
will only limit writes (to 500KB/sec), the latter will only
limit reads.
-ratemin=int Tell fio to do whatever it can to maintain at least this
+rate_min=int Tell fio to do whatever it can to maintain at least this
bandwidth. Failing to meet this requirement, will cause
the job to exit. The same format as rate is used for
read vs write separation.
the job to exit. The same format as rate is used for read vs
write separation.
+rate_process=str This option controls how fio manages rated IO
+ submissions. The default is 'linear', which submits IO in a
+ linear fashion with fixed delays between IOs that get
+ adjusted based on IO completion rates. If this is set to
+ 'poisson', fio will submit IO based on a more real world
+ random request flow, known as the Poisson process
+ (https://en.wikipedia.org/wiki/Poisson_process). The lambda
+ will be 10^6 / IOPS for the given workload.
+
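+ As a sketch (the rate is an illustrative value), a job that
+ issues roughly 1000 IOPS with Poisson-distributed request
+ arrivals could set:
+
+ rate_iops=1000
+ rate_process=poisson
+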
latency_target=int If set, fio will attempt to find the max performance
point that the given workload will run at while maintaining a
latency below this target. The value is given in microseconds.
max_latency=int If set, fio will exit the job if it exceeds this maximum
latency. It will exit with an ETIME error.
-ratecycle=int Average bandwidth for 'rate' and 'ratemin' over this number
+rate_cycle=int Average bandwidth for 'rate' and 'rate_min' over this number
of milliseconds.
cpumask=int Set the CPU affinity of this job. The parameter given is a
backing. Append filename after mmaphuge, ala
mem=mmaphuge:/hugetlbfs/file
+ mmapshared Same as mmap, but use a MMAP_SHARED
+ mapping.
+
The area allocated is a function of the maximum allowed
bs size for the job, multiplied by the io depth given. Note
that for shmhuge and mmaphuge to work, the system must have
to wait for each job to finish, sometimes that is not the
desired action.
+exitall_on_error When one job finishes in error, terminate the rest. The
+ default is to wait for each job to finish.
+
bwavgtime=int Average the calculated bandwidth over the given time. Value
- is specified in milliseconds.
+ is specified in milliseconds. If the job also does bandwidth
+ logging through 'write_bw_log', then the minimum of this option
+ and 'log_avg_msec' will be used. Default: 500ms.
iopsavgtime=int Average the calculated IOPS over the given time. Value
- is specified in milliseconds.
+ is specified in milliseconds. If the job also does IOPS logging
+ through 'write_iops_log', then the minimum of this option and
+ 'log_avg_msec' will be used. Default: 500ms.
create_serialize=bool If true, serialize the file creating for the jobs.
This may be handy to avoid interleaving of data
option is false, then fio will error out if the files it
needs to use don't already exist. Default: true.
+allow_mounted_write=bool If this isn't set, fio will abort jobs that
+ are destructive (e.g. that write) to what appears to be a
+ mounted device or partition. This should help catch cases
+ where a destructive test is created without realizing that it
+ will destroy data on the mounted file system. Default: false.
+
pre_read=bool If this is given, files will be pre-read into memory before
starting the given IO operation. This will also clear
the 'invalidate' flag, since it is pointless to pre-read
verify is set. Defaults to 1.
verify=str If writing to a file, fio can verify the file contents
- after each iteration of the job. The allowed values are:
+ after each iteration of the job. Each verification method also
+ implies verification of a special header, which is written to
+ the beginning of each block. This header also includes meta
+ information, like the offset of the block, the block number,
+ a timestamp when the block was written, etc. verify=str can
+ be combined with the verify_pattern=str option.
+ The allowed values are:
md5 Use an md5 sum of the data area and store
it in the header of each block.
sha1 Use optimized sha1 as the checksum function.
- meta Write extra information about each io
- (timestamp, block number etc.). The block
- number is verified. The io sequence number is
- verified for workloads that write data.
- See also verify_pattern.
+ meta This option is deprecated, since meta information
+ is now included in the generic verification header
+ and meta verification happens by default. For
+ detailed information see the description of the
+ verify=str setting. This option is kept for
+ compatibility with old configurations. Do not use it.
+
+ pattern Verify a strict pattern. Normally fio includes
+ a header with some basic information and
+ checksumming, but if this option is set, only
+ the specific pattern set with 'verify_pattern'
+ is verified.
null Only pretend to verify. Useful for testing
internals with ioengine=null, not for much
buffer at the time (it can be either a decimal or a hex number).
If the verify_pattern is larger than a 32-bit quantity, it has
to be a hex number that starts with either "0x" or "0X". Use
- with verify=meta.
+ with verify=str. Also, verify_pattern supports the %o format,
+ which means that for each block the offset will be written
+ and then verified back, e.g.:
+
+ verify_pattern=%o
+
+ Or use combination of everything:
+ verify_pattern=0xff%o"abcd"-12
verify_fatal=bool Normally fio will keep checking the entire contents
before quitting on a block verification failure. If this
filename. For this option, the suffix is _bw.x.log, where
x is the index of the job (1..N, where N is the number of
jobs). If 'per_job_logs' is false, then the filename will not
- include the job index.
+ include the job index. See 'Log File Formats'.
write_lat_log=str Same as write_bw_log, except that this option stores io
submission, completion, and total latencies instead. If no
and foo_lat.x.log, where x is the index of the job (1..N,
where N is the number of jobs). This helps fio_generate_plot
fine the logs automatically. If 'per_job_logs' is false, then
- the filename will not include the job index.
-
+ the filename will not include the job index. See 'Log File
+ Formats'.
write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
given with this option, the default filename of
(1..N, where N is the number of jobs). Even if the filename
is given, fio will still append the type of log. If
'per_job_logs' is false, then the filename will not include
- the job index.
+ the job index. See 'Log File Formats'.
log_avg_msec=int By default, fio will log an entry in the iops, latency,
or bw log for every IO that completes. When writing to the
disk log, that can quickly grow to a very large size. Setting
this option makes fio average each log entry over the
specified period of time, reducing the resolution of the log.
- Defaults to 0.
+ See log_max_value as well. Defaults to 0, logging all entries.
+
+log_max_value=bool If log_avg_msec is set, fio logs the average over that
+ window. If you instead want to log the maximum value, set this
+ option to 1. Defaults to 0, meaning that averaged values are
+ logged.
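+ For example (the log filename prefix is illustrative), to log
+ the peak bandwidth seen in each 1000 msec window instead of
+ every completed IO:
+
+ write_bw_log=bw
+ log_avg_msec=1000
+ log_max_value=1
+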
log_offset=int If this is set, the iolog options will include the byte
offset for the IO entry as well as the other data values.
in the specified log file. This feature depends on the
availability of zlib.
-log_store_compressed=bool If set, and log_compression is also set,
- fio will store the log files in a compressed format. They
- can be decompressed with fio, using the --inflate-log
- command line parameter. The files will be stored with a
- .fz suffix.
+log_compression_cpus=str Define the set of CPUs that are allowed to
+ handle online log compression for the IO jobs. This can
+ provide better isolation between performance sensitive jobs,
+ and background compression work.
+
+log_store_compressed=bool If set, fio will store the log files in a
+ compressed format. They can be decompressed with fio, using
+ the --inflate-log command line parameter. The files will be
+ stored with a .fz suffix.
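+ For example, a compressed latency log stored as
+ 'foo_lat.1.log.fz' (an illustrative name) can be turned back
+ into plain text with:
+
+ fio --inflate-log=foo_lat.1.log.fz
+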
block_error_percentiles=bool If set, record errors in trim block-sized
units from writes and trims and output a histogram of
enabled when polling for a minimum of 0 events (eg when
iodepth_batch_complete=0).
+[pvsync2] hipri Set RWF_HIPRI on IO, indicating to the kernel that
+ it's of higher priority than normal.
+
[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
[cpu] cpuchunks=int Split the load into cycles of the given time. In
If the job is a TCP listener or UDP reader, the hostname is not
used and must be omitted unless it is a valid UDP multicast
address.
+[libhdfs] namenode=str The hostname or IP address of an HDFS cluster namenode to contact.
[netsplice] port=int
[net] port=int The TCP or UDP port to bind to or connect to. If this is used
with numjobs to spawn multiple instances of the same job type, then this will
be the starting port number since fio will use a range of ports.
+[libhdfs] port=int The listening port of the HDFS cluster namenode.
[netsplice] interface=str
[net] interface=str The IP address of the network interface used to send or
1 : allocate space immediately inside defragment event,
and free right after event
+[rbd] clustername=str Specifies the name of the Ceph cluster.
+[rbd] rbdname=str Specifies the name of the RBD.
+[rbd] pool=str Specifies the name of the Ceph pool containing RBD.
+[rbd] clientname=str Specifies the username (without the 'client.' prefix)
+ used to access the Ceph cluster. If the clustername is
+ specified, the clientname shall be the full type.id
+ string. If no type. prefix is given, fio will add
+ 'client.' by default.
+
[mtd] skip_bad=bool Skip operations against known bad blocks.
+[libhdfs] hdfsdirectory libhdfs will create chunks in this HDFS directory
+[libhdfs] chunck_size the size of the chunk to use for each file.
+
6.0 Interpreting the output
---------------------------
cpu= CPU usage. User and system time, along with the number
of context switches this thread went through, usage of
system and user time, and finally the number of major
- and minor page faults.
+ and minor page faults. The CPU utilization numbers are
+ averages for the jobs in that reporting group, while the
+ context and fault counters are summed.
IO depths= The distribution of io depths over the job life time. The
numbers are divided into powers of 2, so for example the
16= entries includes depths up to that value but higher
terse version, fio version, jobname, groupid, error
READ status:
Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
- Submission latency: min, max, mean, deviation (usec)
- Completion latency: min, max, mean, deviation (usec)
+ Submission latency: min, max, mean, stdev (usec)
+ Completion latency: min, max, mean, stdev (usec)
Completion latency percentiles: 20 fields (see below)
- Total latency: min, max, mean, deviation (usec)
- Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
+ Total latency: min, max, mean, stdev (usec)
+ Bw (KB/s): min, max, aggregate percentage of total, mean, stdev
WRITE status:
Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
- Submission latency: min, max, mean, deviation (usec)
- Completion latency: min, max, mean, deviation (usec)
+ Submission latency: min, max, mean, stdev (usec)
+ Completion latency: min, max, mean, stdev (usec)
Completion latency percentiles: 20 fields (see below)
- Total latency: min, max, mean, deviation (usec)
- Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
+ Total latency: min, max, mean, stdev (usec)
+ Bw (KB/s): min, max, aggregate percentage of total, mean, stdev
CPU usage: user, system, context switches, major faults, minor faults
IO depths: <=1, 2, 4, 8, 16, 32, >=64
IO latencies microseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
For this case, fio would wait for the server to send us the write state,
then execute 'ipmi-reboot server' when that happened.
-10.1 Loading verify state
+10.2 Loading verify state
-------------------------
To load the stored write state, the read verification job file must contain
the verify_state_load option. If that is set, fio will load the previously
stored state. For a local fio run this is done by loading the files directly,
and on a client/server run, the server backend will ask the client to send
the files over and load them from there.
+
+
+11.0 Log File Formats
+---------------------
+
+Fio supports a variety of log file formats, for logging latencies, bandwidth,
+and IOPS. The logs share a common format, which looks like this:
+
+time (msec), value, data direction, offset
+
+Time for the log entry is always in milliseconds. The value logged depends
+on the type of log; it will be one of the following:
+
+ Latency log Value is latency in usecs
+ Bandwidth log Value is in KB/sec
+ IOPS log Value is IOPS
+
+Data direction is one of the following:
+
+ 0 IO is a READ
+ 1 IO is a WRITE
+ 2 IO is a TRIM
+
+The offset is the offset, in bytes, from the start of the file, for that
+particular IO. The logging of the offset can be toggled with 'log_offset'.
+
+If windowed logging is enabled through 'log_avg_msec', then fio doesn't log
+individual IOs. Instead it logs the average values over the specified
+period of time. Since 'data direction' and 'offset' are per-IO values,
+they aren't applicable if windowed logging is enabled. If windowed logging
+is enabled and 'log_max_value' is set, then fio logs maximum values in
+that window instead of averages.
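+
+As an illustration, a few entries from a per-IO bandwidth log (the numbers
+here are made up) might look like:
+
+	1000, 45120, 0, 65536
+	2000, 43008, 1, 131072
+
+i.e. at 1000 msec a read at offset 65536 bytes was logged with a bandwidth
+value of 45120 KB/sec, and at 2000 msec a write at offset 131072 bytes was
+logged at 43008 KB/sec.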
+