7. Terse output
8. Trace file format
9. CPU idleness profiling
+10. Verification and triggers
1.0 Overview and history
------------------------
special purpose of also signaling the start of a new
job.
+wait_for=str Specifies the name of the already defined job to wait
+	for. Only a single waitee name may be specified. If set, the
+	job won't be started until all workers of the waitee job are
+	done.
+
+	Wait_for operates on a job name basis, so there are a few
+ limitations. First, the waitee must be defined prior to the
+ waiter job (meaning no forward references). Second, if a job
+ is being referenced as a waitee, it must have a unique name
+ (no duplicate waitees).
+
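+	A minimal sketch of wait_for; the job names, path, and
+	sizes below are illustrative only:
+
+	; -- start job file --
+	[prep]
+	rw=write
+	filename=/tmp/fio.wait_for.tmp
+	size=64m
+
+	[reader]
+	; don't start until all workers of "prep" have finished
+	wait_for=prep
+	rw=randread
+	filename=/tmp/fio.wait_for.tmp
+	size=64m
+	; -- end job file --
+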
description=str Text description of the job. Doesn't do anything except
dump this text description when this job is run. It's
not parsed.
defines engine specific options.
libhdfs Read and write through Hadoop (HDFS).
- The 'filename' option is used to specify host,
- port of the hdfs name-node to connect. This
- engine interprets offsets a little
+ This engine interprets offsets a little
differently. In HDFS, files once created
cannot be modified. So random writes are not
possible. To imitate this, libhdfs engine
- expects bunch of small files to be created
- over HDFS, and engine will randomly pick a
- file out of those files based on the offset
- generated by fio backend. (see the example
- job file to create such files, use rw=write
- option). Please note, you might want to set
- necessary environment variables to work with
- hdfs/libhdfs properly.
+				creates a bunch of small files, and the
+				engine will pick a file out of those files
+				based on the offset generated by the fio
+				backend. Each job uses its own connection
+				to HDFS.
mtd Read, write and erase an MTD character device
(e.g., /dev/mtd0). Discards are treated as
random Uniform random distribution
zipf Zipf distribution
pareto Pareto distribution
+		gauss		Normal (Gaussian) distribution
When using a zipf or pareto distribution, an input value
is also needed to define the access pattern. For zipf, this
what the given input values will yield in terms of hit rates.
If you wanted to use zipf with a theta of 1.2, you would use
random_distribution=zipf:1.2 as the option. If a non-uniform
- model is used, fio will disable use of the random map.
+ model is used, fio will disable use of the random map. For
+ the gauss distribution, a normal deviation is supplied as
+ a value between 0 and 100.
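+
+	For example, a hypothetical job using the gauss distribution
+	with a deviation of 20 (the job name and value are
+	illustrative):
+
+	[gauss_read]
+	rw=randread
+	random_distribution=gauss:20
+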
percentage_random=int For a random workload, set how big a percentage should
be random. This defaults to 100%, in which case the workload
typically good enough. LFSR only works with single
block sizes, not with workloads that use multiple block
sizes. If used with such a workload, fio may read or write
- some blocks multiple times.
+ some blocks multiple times. The default value is tausworthe,
+ unless the required space exceeds 2^32 blocks. If it does,
+ then tausworthe64 is selected automatically.
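+
+	As a sketch, forcing the LFSR generator for a single block
+	size workload (names and values are illustrative):
+
+	[lfsr_write]
+	rw=randwrite
+	bs=4k
+	random_generator=lfsr
+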
nice=int Run the job with the given nice value. See man nice(2).
will only limit writes (to 500KB/sec), the latter will only
limit reads.
-ratemin=int Tell fio to do whatever it can to maintain at least this
+rate_min=int Tell fio to do whatever it can to maintain at least this
		bandwidth. Failing to meet this requirement will cause
the job to exit. The same format as rate is used for
read vs write separation.
the job to exit. The same format as rate is used for read vs
write separation.
+rate_process=str This option controls how fio manages rated IO
+ submissions. The default is 'linear', which submits IO in a
+	linear fashion with fixed delays between IOs that get
+ adjusted based on IO completion rates. If this is set to
+ 'poisson', fio will submit IO based on a more real world
+ random request flow, known as the Poisson process
+ (https://en.wikipedia.org/wiki/Poisson_process). The lambda
+ will be 10^6 / IOPS for the given workload.
+
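+	For example, a hypothetical rated job issuing IO as a
+	Poisson process (the standard rate_iops option caps the
+	rate; its value here is illustrative):
+
+	[poisson_read]
+	rw=randread
+	rate_iops=1000
+	rate_process=poisson
+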
latency_target=int If set, fio will attempt to find the max performance
point that the given workload will run at while maintaining a
		latency below this target. The value is given in microseconds.
max_latency=int If set, fio will exit the job if it exceeds this maximum
latency. It will exit with an ETIME error.
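+	As a sketch, probing for the highest performance point below
+	a 500 usec latency target while aborting past 10 msec (the
+	values are illustrative; latency_window is a standard
+	companion option documented elsewhere in this file):
+
+	[lat_bound]
+	rw=randread
+	latency_target=500
+	latency_window=1s
+	max_latency=10000
+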
-ratecycle=int Average bandwidth for 'rate' and 'ratemin' over this number
+rate_cycle=int Average bandwidth for 'rate' and 'rate_min' over this number
of milliseconds.
cpumask=int Set the CPU affinity of this job. The parameter given is a
		to wait for each job to finish; sometimes that is not the
desired action.
+exitall_on_error When one job finishes in error, terminate the rest. The
+ default is to wait for each job to finish.
+
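+	For example, stopping a whole run as soon as either of two
+	jobs fails (the device names are hypothetical):
+
+	[global]
+	exitall_on_error
+	rw=read
+	size=64m
+
+	[disk-a]
+	filename=/dev/sdx
+
+	[disk-b]
+	filename=/dev/sdy
+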
bwavgtime=int Average the calculated bandwidth over the given time. Value
is specified in milliseconds.
disk log, that can quickly grow to a very large size. Setting
		this option makes fio average each log entry over the
specified period of time, reducing the resolution of the log.
- Defaults to 0.
+ See log_max as well. Defaults to 0, logging all entries.
+log_max=bool If log_avg_msec is set, fio logs the average over that window.
+ If you instead want to log the maximum value, set this option
+ to 1. Defaults to 0, meaning that averaged values are logged.
+
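+	For example, a hypothetical job that logs one bandwidth
+	sample per second and keeps the per-window maximum rather
+	than the average (names and values are illustrative):
+
+	[bw_logged]
+	rw=write
+	write_bw_log=bw_logged
+	log_avg_msec=1000
+	log_max=1
+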
log_offset=int If this is set, the iolog options will include the byte
offset for the IO entry as well as the other data values.
in the specified log file. This feature depends on the
availability of zlib.
-log_store_compressed=bool If set, and log_compression is also set,
- fio will store the log files in a compressed format. They
- can be decompressed with fio, using the --inflate-log
- command line parameter. The files will be stored with a
- .fz suffix.
+log_compression_cpus=str Define the set of CPUs that are allowed to
+ handle online log compression for the IO jobs. This can
+	provide better isolation between performance-sensitive jobs
+	and background compression work.
+
+log_store_compressed=bool If set, fio will store the log files in a
+ compressed format. They can be decompressed with fio, using
+ the --inflate-log command line parameter. The files will be
+ stored with a .fz suffix.
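+
+	A sketch of storing compressed latency logs and decompressing
+	them later (the exact log file name depends on the job name
+	and log type):
+
+	[logged]
+	rw=randread
+	write_lat_log=logged
+	log_store_compressed=1
+
+	; decompress later with something like:
+	;   fio --inflate-log=logged_lat.1.log.fz
+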
block_error_percentiles=bool If set, record errors in trim block-sized
units from writes and trims and output a histogram of
enabled when polling for a minimum of 0 events (eg when
iodepth_batch_complete=0).
+[psyncv2] hipri Set RWF_HIPRI on IO, indicating to the kernel that
+ it's of higher priority than normal.
+
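+	For example, a hypothetical job issuing high priority
+	synchronous IO (hipri is written bare, as a flag taking
+	no value):
+
+	[hipri_read]
+	ioengine=psyncv2
+	hipri
+	rw=randread
+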
[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
[cpu] cpuchunks=int Split the load into cycles of the given time. In
If the job is a TCP listener or UDP reader, the hostname is not
used and must be omitted unless it is a valid UDP multicast
address.
+[libhdfs] namenode=str The host name or IP address of an HDFS cluster
+	namenode to contact.
[netsplice] port=int
[net] port=int The TCP or UDP port to bind to or connect to. If this is used
with numjobs to spawn multiple instances of the same job type, then this will
be the starting port number since fio will use a range of ports.
+[libhdfs] port=int The listening port of the HDFS cluster namenode.
[netsplice] interface=str
[net] interface=str The IP address of the network interface used to send or
[mtd] skip_bad=bool Skip operations against known bad blocks.
+[libhdfs] hdfsdirectory libhdfs will create chunks in this HDFS directory.
+[libhdfs] chunck_size The size of the chunk to use for each file.
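+
+	A hypothetical libhdfs job sketch using the option names
+	documented above (host, port, directory, and sizes are
+	illustrative):
+
+	[hdfs_read]
+	ioengine=libhdfs
+	namenode=localhost
+	port=9000
+	hdfsdirectory=/fio
+	chunck_size=1m
+	rw=randread
+	size=256m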
+
6.0 Interpreting the output
---------------------------
For this case, fio would wait for the server to send us the write state,
then execute 'ipmi-reboot server' when that happened.
-10.1 Loading verify state
+10.2 Loading verify state
-------------------------
To load stored write state, the read verification job file must contain
the verify_state_load option. If that is set, fio will load the previously