1k:4k. If the option allows two sets of ranges, they can be
specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
int.
-float_list A list of floating numbers, separated by a ':' character.
+float_list A list of floating point numbers, separated by a ':' character.
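+
+As an illustrative example of these value formats, a range-valued option
+such as bsrange (option name used purely for illustration) could be given
+one range, or one range per data direction:
+
+    bsrange=1k-4k
+    bsrange=1k-4k/8k-32k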
With the above in mind, here follows the complete list of fio job
parameters.
on what IO patterns it is likely to issue. Sometimes you
want to test specific IO patterns without telling the
kernel about it, in which case you can disable this option.
- If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
- IO and POSIX_FADV_RANDOM for random IO.
+ The following options are supported:
+
+ sequential Use FADV_SEQUENTIAL
+ random Use FADV_RANDOM
+ 1 Backwards-compatible hint for basing
+ the hint on the fio workload. Will use
+ FADV_SEQUENTIAL for a sequential
+ workload, and FADV_RANDOM for a random
+ workload.
+ 0 Backwards-compatible setting for not
+ issuing a fadvise hint.
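+
+ For example, to always hint random access regardless of the IO
+ pattern the job actually generates, one could set (illustrative):
+
+     fadvise_hint=random
+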
fadvise_stream=int Notify the kernel what write stream ID to place these
writes under. Only supported on Linux. Note, this option
cannot be modified. So random writes are not
possible. To imitate this, libhdfs engine
creates a bunch of small files, and the engine will
- pick a file out of those files based on the
- offset enerated by fio backend. Each jobs uses
+ pick a file out of those files based on the
+ offset generated by the fio backend. Each job uses
its own connection to HDFS.
mtd Read, write and erase an MTD character device
pmemblk Read and write through the NVML libpmemblk
interface.
+ dev-dax Read and write through a DAX device exposed
+ from persistent memory.
+
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, eg ioengine=external:/tmp/foo.o
iodepth_batch_complete_min=1
iodepth_batch_complete_max=<iodepth>
- which means that we will retrieve at leat 1 IO and up to the
+ which means that we will retrieve at least 1 IO and up to the
whole submitted queue depth. If no IO has been completed
yet, we will wait.
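+
+ For example, to reap between 4 and 16 completions per retrieval
+ (illustrative values):
+
+     iodepth_batch_complete_min=4
+     iodepth_batch_complete_max=16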
random Uniform random distribution
zipf Zipf distribution
pareto Pareto distribution
- gauss Normal (guassian) distribution
+ gauss Normal (gaussian) distribution
zoned Zoned random distribution
When using a zipf or pareto distribution, an input value
and random IO, at the given percentages. It is possible to
set different values for reads, writes, and trim. To do so,
simply use a comma separated list. See blocksize.
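+
+ For example (illustrative values, shown for the percentage_random
+ option described above):
+
+     percentage_random=75
+     percentage_random=80,100,0
+
+ The first form makes roughly 75% of the IO random and the rest
+ sequential; the second sets separate values for reads, writes and
+ trims respectively.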
-
+
norandommap Normally fio will cover every block of the file when doing
random IO. If this option is given, fio will just get a
new random offset without looking at past io history. This
nice=int Run the job with the given nice value. See man nice(2).
+ On Windows, values less than -15 set the process class to "High";
+ -1 through -15 set "Above Normal"; 1 through 15 "Below Normal";
+ and above 15 "Idle" priority class.
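+
+ For example (illustrative value), nice=-15 runs the job at a raised
+ priority on Linux and maps to the "Above Normal" priority class on
+ Windows per the mapping above.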
+
prio=int Set the io priority value of this job. Linux limits us to
a value from 0 to 7, with 0 being the highest.
See man ionice(1). Refer to an appropriate manpage for
fio must be built on a system with libnuma-dev(el) installed.
numa_mem_policy=str Set this job's memory policy and corresponding NUMA
- nodes. Format of the argements:
+ nodes. Format of the arguments:
<mode>[:<nodelist>]
`mode' is one of the following memory policies:
default, prefer, bind, interleave, local
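+
+ For example (illustrative nodelist), to interleave this job's memory
+ allocations across NUMA nodes 0 and 1:
+
+     numa_mem_policy=interleave:0-1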
thus it will increase the total runtime if a special timeout
or runtime is specified.
+steadystate=str:float
+ss=str:float Define the criterion and limit for assessing steady state
+ performance. The first parameter designates the criterion
+ whereas the second parameter sets the threshold. When the
+ criterion falls below the threshold for the specified duration,
+ the job will stop. For example, iops_slope:0.1% will direct fio
+ to terminate the job when the least squares regression slope
+ falls below 0.1% of the mean IOPS. If group_reporting is
+ enabled this will apply to all jobs in the group. Below is the
+ list of available steady state assessment criteria. All
+ assessments are carried out using only data from the rolling
+ collection window. Threshold limits can be expressed as a fixed
+ value or as a percentage of the mean in the collection window.
+ iops Collect IOPS data. Stop the job if all
+ individual IOPS measurements are within the
+ specified limit of the mean IOPS (e.g., iops:2
+ means that all individual IOPS values must be
+ within 2 of the mean, whereas iops:0.2% means
+ that all individual IOPS values must be within
+ 0.2% of the mean IOPS to terminate the job).
+ iops_slope
+ Collect IOPS data and calculate the least
+ squares regression slope. Stop the job if the
+ slope falls below the specified limit.
+ bw Collect bandwidth data. Stop the job if all
+ individual bandwidth measurements are within
+ the specified limit of the mean bandwidth.
+ bw_slope
+ Collect bandwidth data and calculate the least
+ squares regression slope. Stop the job if the
+ slope falls below the specified limit.
+
+steadystate_duration=time
+ss_dur=time A rolling window of this duration will be used to judge whether
+ steady state has been reached. Data will be collected once per
+ second. The default is 0 which disables steady state detection.
+
+steadystate_ramp_time=time
+ss_ramp=time Allow the job to run for the specified duration before
+ beginning data collection for checking the steady state job
+ termination criterion. The default is 0.
+
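+ As an illustrative example, the following settings stop the job once
+ the least squares IOPS slope stays below 0.1% of the mean IOPS over a
+ rolling 30 second window, after a 10 second warm-up (values are
+ arbitrary):
+
+     steadystate=iops_slope:0.1%
+     steadystate_duration=30s
+     steadystate_ramp_time=10s
+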
invalidate=bool Invalidate the buffer/page cache parts for this file prior
to starting io. Defaults to true.
location should point there. So if it's mounted in /huge,
you would use mem=mmaphuge:/huge/somefile.
-iomem_align=int This indiciates the memory alignment of the IO memory buffers.
+iomem_align=int This indicates the memory alignment of the IO memory buffers.
Note that the given alignment is applied to the first IO unit
buffer; if using iodepth, the alignment of the following buffers
is given by the bs used. In other words, if using a bs that is
through 'write_iops_log', then the minimum of this option and
'log_avg_msec' will be used. Default: 500ms.
-create_serialize=bool If true, serialize the file creating for the jobs.
+create_serialize=bool If true, serialize the file creation for the jobs.
This may be handy to avoid interleaving of data
files, which may greatly depend on the filesystem
used and even the number of processors in the system.
starting the given IO operation. This will also clear
the 'invalidate' flag, since it is pointless to pre-read
and then drop the cache. This will only work for IO engines
- that are seekable, since they allow you to read the same data
+ that are seek-able, since they allow you to read the same data
multiple times. Thus it will not work on eg network or splice
IO.
runs of that job would then waste time recreating the file
set again and again.
+unlink_each_loop=bool Unlink job files after each iteration or loop.
+
loops=int Run the specified number of iterations of this job. Used
to repeat the same workload a given number of times. Defaults
to 1.
crc32c Use a crc32c sum of the data area and store
it in the header of each block.
- crc32c-intel Use hardware assisted crc32c calcuation
+ crc32c-intel Use hardware assisted crc32c calculation
provided on SSE4.2 enabled processors. Falls
back to regular software crc32c, if not
supported by the system.
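+
+ For example, to checksum and later verify each block with crc32c,
+ using the hardware assisted version where available (illustrative):
+
+     verify=crc32c-intel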
be a hex number that starts with either "0x" or "0X". Use
with verify=str. Also, verify_pattern supports %o format,
which means that for each block the offset will be written and
- then verifyied back, e.g.:
+ then verified back, e.g.:
verify_pattern=%o
replay_no_stall=int When replaying I/O with read_iolog the default behavior
is to attempt to respect the time stamps within the log and
- replay them with the appropriate delay between IOPS. By
+ replay them with the appropriate delay between IOPS. By
setting this variable fio will not respect the timestamps and
attempt to replay them as fast as possible while still
- respecting ordering. The result is the same I/O pattern to a
+ respecting ordering. The result is the same I/O pattern to a
given device, but different timings.
replay_redirect=str While replaying I/O patterns using read_iolog the
mapping. Replay_redirect causes all IOPS to be replayed onto
the single specified device regardless of the device it was
recorded from. i.e. replay_redirect=/dev/sdc would cause all
- IO in the blktrace to be replayed onto /dev/sdc. This means
- multiple devices will be replayed onto a single, if the trace
- contains multiple devices. If you want multiple devices to be
- replayed concurrently to multiple redirected devices you must
- blkparse your trace into separate traces and replay them with
- independent fio invocations. Unfortuantely this also breaks
- the strict time ordering between multiple device accesses.
+ IO in the blktrace or iolog to be replayed onto /dev/sdc.
+ This means multiple devices will be replayed onto a single
+ device, if the trace contains multiple devices. If you want
+ multiple devices to be replayed concurrently to multiple
+ redirected devices you must blkparse your trace into separate
+ traces and replay them with independent fio invocations.
+ Unfortunately this also breaks the strict time ordering
+ between multiple device accesses.
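+
+ An illustrative combination (the trace filename is hypothetical):
+
+     read_iolog=trace.blktrace
+     replay_redirect=/dev/sdc
+
+ This replays every IO recorded in the trace against /dev/sdc only.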
replay_align=int Force alignment of IO offsets and lengths in a trace
to this power of 2 value.
the filename will not include the job index. See 'Log File
Formats'.
+write_hist_log=str Same as write_lat_log, but writes I/O completion
+ latency histograms. If no filename is given with this option, the
+ default filename of "jobname_clat_hist.x.log" is used, where x is
+ the index of the job (1..N, where N is the number of jobs). Even
+ if the filename is given, fio will still append the type of log.
+ If per_job_logs is false, then the filename will not include the
+ job index. See 'Log File Formats'.
+
write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
given with this option, the default filename of
"jobname_type.x.log" is used,where x is the index of the job
specified period of time, reducing the resolution of the log.
See log_max_value as well. Defaults to 0, logging all entries.
+log_hist_msec=int Same as log_avg_msec, but logs entries for completion
+ latency histograms. Computing latency percentiles from averages of
+ intervals using log_avg_msec is inaccurate. Setting this option makes
+ fio log histogram entries over the specified period of time, reducing
+ log sizes for high IOPS devices while retaining percentile accuracy.
+ See log_hist_coarseness as well. Defaults to 0, meaning histogram
+ logging is disabled.
+
+log_hist_coarseness=int Integer ranging from 0 to 6, defining the coarseness
+ of the resolution of the histogram logs enabled with log_hist_msec. For
+ each increment in coarseness, fio outputs half as many bins. Defaults to
+ 0, for which histogram logs contain 1216 latency bins. See
+ 'Log File Formats'.
+
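+ As an illustrative example, the following options produce per-second
+ completion latency histogram logs at full resolution (the filename
+ prefix is arbitrary):
+
+     write_hist_log=myjob
+     log_hist_msec=1000
+     log_hist_coarseness=0
+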
log_max_value=bool If log_avg_msec is set, fio logs the average over that
window. If you instead want to log the maximum value, set this
option to 1. Defaults to 0, meaning that averaged values are
the --inflate-log command line parameter. The files will be
stored with a .fz suffix.
+log_unix_epoch=bool If set, fio will log Unix timestamps to the log
+ files produced by enabling write_type_log for each log type, instead
+ of the default zero-based timestamps.
+
block_error_percentiles=bool If set, record errors in trim block-sized
units from writes and trims and output a histogram of
how many trims it took to get to errors, and what kind
connections rather than initiating an outgoing connection. The
hostname must be omitted if this option is used.
-[net] pingpong Normaly a network writer will just continue writing data, and
+[net] pingpong Normally a network writer will just continue writing data, and
a network reader will just consume packets. If pingpong=1
is set, a writer will send its normal payload to the reader,
then wait for the reader to send the same payload back. This
[e4defrag] inplace=int
Configure donor file blocks allocation strategy
0 (default): Preallocate donor's file on init
- 1 : allocate space immidietly inside defragment event,
+ 1 : allocate space immediately inside defragment event,
and free right after event
[rbd] clustername=str Specifies the name of the Ceph cluster.
[rbd] rbdname=str Specifies the name of the RBD.
-[rbd] pool=str Specifies the naem of the Ceph pool containing RBD.
+[rbd] pool=str Specifies the name of the Ceph pool containing RBD.
[rbd] clientname=str Specifies the username (without the 'client.' prefix)
used to access the Ceph cluster. If the clustername is
- specified, the clientmae shall be the full type.id
+ specified, the clientname shall be the full type.id
string. If no type. prefix is given, fio will add
'client.' by default.
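+
+ An illustrative rbd job section (all names below are placeholders):
+
+     ioengine=rbd
+     clustername=ceph
+     clientname=admin
+     pool=rbd
+     rbdname=fio_test
+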
[mtd] skip_bad=bool Skip operations against known bad blocks.
[libhdfs] hdfsdirectory libhdfs will create chunk in this HDFS directory
-[libhdfs] chunck_size the size of the chunck to use for each file.
+[libhdfs] chunk_size the size of the chunk to use for each file.
6.0 Interpreting the output
The offset is the offset, in bytes, from the start of the file, for that
particular IO. The logging of the offset can be toggled with 'log_offset'.
-If windowed logging is enabled though 'log_avg_msec', then fio doesn't log
+If windowed logging is enabled through 'log_avg_msec', then fio doesn't log
individual IOs. Instead it logs the average values over the specified
period of time. Since 'data direction' and 'offset' are per-IO values,
they aren't applicable if windowed logging is enabled. If windowed logging
is enabled and 'log_max_value' is set, then fio logs maximum values in
that window instead of averages.
-