the next. Multiple files can still be
open depending on 'openfiles'.
- The string can have a number appended, indicating how
- often to switch to a new file. So if option random:4 is
- given, fio will switch to a new random file after 4 ios
- have been issued.
+ zipf Use a zipfian distribution to decide what file
+ to access.
+
+ pareto Use a pareto distribution to decide what file
+ to access.
+
+ gauss Use a gaussian (normal) distribution to decide
+ what file to access.
+
+ For random, roundrobin, and sequential, a postfix can be
+ appended to tell fio how many I/Os to issue before switching
+ to a new file. For example, specifying
+ 'file_service_type=random:8' would cause fio to issue 8 I/Os
+ before selecting a new file at random. For the non-uniform
+ distributions, a floating point postfix can be given to
+ influence how the distribution is skewed. See
+ 'random_distribution' for a description of how that would work.
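As a sketch, the two postfix forms described above could look like this in a job file (the section names, engine, and the 1.2 theta value are illustrative, not mandated):

```ini
; Hypothetical fragment: pick among 16 files with a skewed
; zipf distribution (theta 1.2 chosen arbitrarily here).
[zipf-files]
rw=randread
nrfiles=16
file_service_type=zipf:1.2

; Hypothetical fragment: switch to a new random file after
; every 8 I/Os, as in the file_service_type=random:8 example.
[random-switch]
rw=randread
nrfiles=16
file_service_type=random:8
```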
ioengine=str Defines how the job issues io to the file. The following
types are defined:
overwriting. The writetrim mode works well
for this constraint.
+ pmemblk Read and write through the NVML libpmemblk
+ interface.
+
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, eg ioengine=external:/tmp/foo.o
default is to wait for each job to finish.
bwavgtime=int Average the calculated bandwidth over the given time. Value
- is specified in milliseconds.
+ is specified in milliseconds. If the job also does bandwidth
+ logging through 'write_bw_log', then the minimum of this option
+ and 'log_avg_msec' will be used. Default: 500ms.
iopsavgtime=int Average the calculated IOPS over the given time. Value
- is specified in milliseconds.
+ is specified in milliseconds. If the job also does IOPS logging
+ through 'write_iops_log', then the minimum of this option and
+ 'log_avg_msec' will be used. Default: 500ms.
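A minimal sketch of how the averaging windows interact with bandwidth logging (the job name and 1000ms values are illustrative): with write_bw_log set, the effective bandwidth averaging window is the minimum of bwavgtime and log_avg_msec, here 500ms.

```ini
; Hypothetical fragment: average bandwidth and IOPS over 1000ms,
; but log_avg_msec=500 lowers the effective bw window to 500ms.
[avg-windows]
rw=write
bwavgtime=1000
iopsavgtime=1000
write_bw_log=job1
log_avg_msec=500
```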
create_serialize=bool If true, serialize the file creating for the jobs.
This may be handy to avoid interleaving of data
disk log, which can quickly grow to a very large size. Setting
		this option makes fio average each log entry over the
		specified period of time, reducing the resolution of the log.
- See log_max as well. Defaults to 0, logging all entries.
+ See log_max_value as well. Defaults to 0, logging all entries.
+
+log_max_value=bool If log_avg_msec is set, fio logs the average over that
+ window. If you instead want to log the maximum value, set this
+ option to 1. Defaults to 0, meaning that averaged values are
+ logged.
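A sketch of logging peaks rather than means (job name and the 100ms window are illustrative): each logged latency entry becomes the maximum seen in its window instead of the average.

```ini
; Hypothetical fragment: 100ms log windows, recording the
; window maximum (log_max_value=1) rather than the average.
[latency-peaks]
rw=randwrite
write_lat_log=job1
log_avg_msec=100
log_max_value=1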
-log_max=bool If log_avg_msec is set, fio logs the average over that window.
- If you instead want to log the maximum value, set this option
- to 1. Defaults to 0, meaning that averaged values are logged.
-.
log_offset=int If this is set, the iolog options will include the byte
offset for the IO entry as well as the other data values.
		1 : allocate space immediately inside defragment event,
		    and free right after event
+[rbd] clustername=str Specifies the name of the Ceph cluster.
+[rbd] rbdname=str Specifies the name of the RBD.
+[rbd] pool=str		Specifies the name of the Ceph pool containing the RBD.
+[rbd] clientname=str	Specifies the username (without the 'client.' prefix)
+		used to access the Ceph cluster. If the clustername is
+		specified, the clientname shall be the full type.id
+		string. If no type. prefix is given, fio will add
+		'client.' by default.
+
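+A minimal sketch tying the rbd options together (cluster, pool, and
+image names below are placeholders, not defaults):
+
+```ini
+; Hypothetical fragment: random writes against an RBD image via
+; librbd. 'admin' maps to the cephx user 'client.admin'.
+[rbd-test]
+ioengine=rbd
+clientname=admin
+pool=rbd
+rbdname=testimg
+rw=randwrite
+bs=4k
+```
+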
[mtd] skip_bad=bool Skip operations against known bad blocks.
[libhdfs] hdfsdirectory	libhdfs will create chunks in this HDFS directory
cpu= CPU usage. User and system time, along with the number
of context switches this thread went through, usage of
system and user time, and finally the number of major
- and minor page faults.
+ and minor page faults. The CPU utilization numbers are
+ averages for the jobs in that reporting group, while the
+ context and fault counters are summed.
IO depths= The distribution of io depths over the job life time. The
numbers are divided into powers of 2, so for example the
16= entries includes depths up to that value but higher