.. option:: runtime=time
- Tell fio to terminate processing after the specified period of time. It
- can be quite hard to determine for how long a specified job will run, so
- this parameter is handy to cap the total runtime to a given time. When
- the unit is omitted, the value is interpreted in seconds.
+ Limit runtime. The test will run until it completes the configured I/O
+ workload or until it has run for this specified amount of time, whichever
+ occurs first. It can be quite hard to determine for how long a specified
+ job will run, so this parameter is handy to cap the total runtime to a
+ given time. When the unit is omitted, the value is interpreted in
+ seconds.
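+
+ For example, a minimal sketch (values are illustrative) that caps a
+ time-based random read job at five minutes::
+
+ rw=randread
+ time_based
+ runtime=5m
+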
.. option:: time_based
.. option:: max_open_zones=int
- When running a random write test across an entire drive many more
- zones will be open than in a typical application workload. Hence this
- command line option that allows one to limit the number of open zones. The
- number of open zones is defined as the number of zones to which write
- commands are issued.
+ A zone of a zoned block device is in the open state when it is partially
+ written (i.e. not all sectors of the zone have been written). Zoned
+ block devices may have a limit on the total number of zones that can
+ be simultaneously in the open state, that is, the number of zones that
+ can be written to simultaneously. The :option:`max_open_zones` parameter
+ limits the number of zones to which write commands are issued by all fio
+ jobs, that is, limits the number of zones that will be in the open
+ state. This parameter is relevant only if the :option:`zonemode` =zbd is
+ used. The default value is always equal to the maximum number of open
+ zones of the target zoned block device and a value higher than this
+ limit cannot be specified by users unless the option
+ :option:`ignore_zone_limits` is specified. When
+ :option:`ignore_zone_limits` is specified or the target device has no
+ limit on the number of zones that can be in an open state,
+ :option:`max_open_zones` can be set to 0 to disable any limit on the
+ number of zones that can be simultaneously written to by all jobs.
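+
+ As an illustrative sketch (the device path and limit are hypothetical),
+ a zoned random write job could be capped at 8 open zones like this::
+
+ filename=/dev/nvme0n1
+ direct=1
+ zonemode=zbd
+ rw=randwrite
+ max_open_zones=8
+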
.. option:: job_max_open_zones=int
- Limit on the number of simultaneously opened zones per single
- thread/process.
+ In the same manner as :option:`max_open_zones`, limit the number of open
+ zones per fio job, that is, the number of zones that a single job can
+ simultaneously write to. A value of zero indicates no limit.
+ Default: zero.
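+
+ For instance, in this sketch (values are illustrative) four jobs are
+ each limited to 4 open zones, so up to 16 zones may be open in total::
+
+ zonemode=zbd
+ rw=randwrite
+ numjobs=4
+ job_max_open_zones=4
+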
.. option:: ignore_zone_limits=bool
.. option:: zone_reset_threshold=float
- A number between zero and one that indicates the ratio of logical
- blocks with data to the total number of logical blocks in the test
- above which zones should be reset periodically.
+ A number between zero and one that indicates the ratio of written bytes
+ in the zones with write pointers in the IO range to the size of the IO
+ range. When the current ratio exceeds this threshold, zones are reset
+ periodically as :option:`zone_reset_frequency` specifies. When this
+ option is used by multiple jobs, the IO range of all write jobs has to
+ be the same.
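+
+ As a sketch (ratios are illustrative), the following resets zones once
+ the written bytes in write-pointer zones exceed 25% of the IO range,
+ issuing a reset roughly every tenth write thereafter::
+
+ zonemode=zbd
+ rw=randwrite
+ zone_reset_threshold=0.25
+ zone_reset_frequency=0.1
+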
.. option:: zone_reset_frequency=float
OpenBSD and ZFS on Solaris don't support direct I/O. On Windows the synchronous
ioengines don't support direct I/O. Default: false.
-.. option:: atomic=bool
-
- If value is true, attempt to use atomic direct I/O. Atomic writes are
- guaranteed to be stable once acknowledged by the operating system. Only
- Linux supports O_ATOMIC right now.
-
.. option:: buffered=bool
If value is true, use buffered I/O. This is the opposite of the
Generate the same offset.
``sequential`` is only useful for random I/O, where fio would normally
- generate a new random offset for every I/O. If you append e.g. 8 to randread,
- you would get a new random offset for every 8 I/Os. The result would be a
- seek for only every 8 I/Os, instead of for every I/O. Use ``rw=randread:8``
- to specify that. As sequential I/O is already sequential, setting
- ``sequential`` for that would not result in any differences. ``identical``
- behaves in a similar fashion, except it sends the same offset 8 number of
- times before generating a new offset.
+ generate a new random offset for every I/O. If you append e.g. 8 to
+ randread, i.e. ``rw=randread:8``, you would get a new random offset for
+ every 8 I/Os. The result would be a sequence of 8 sequential offsets
+ with a random starting point. However, this behavior may change if a
+ sequential I/O reaches the end of the file. As sequential I/O is already
+ sequential, setting ``sequential`` for that would not result in any
+ difference. ``identical`` behaves in a similar fashion, except it sends
+ the same offset 8 times before generating a new offset.
+
+ Example #1::
+
+ rw=randread:8
+ rw_sequencer=sequential
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 8k, 12k, 16k, 20k, 24k, 28k, 32k, 92k, 96k, 100k, 104k, 108k,
+ 112k, 116k, 120k, 48k, 52k ...
+
+ Example #2::
+
+ rw=randread:8
+ rw_sequencer=identical
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 4k, 4k, 4k, 4k, 4k, 4k, 4k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 92k,
+ 48k, 48k, 48k ...
.. option:: unified_rw_reporting=str
.. option:: randrepeat=bool
- Seed the random number generator used for random I/O patterns in a
- predictable way so the pattern is repeatable across runs. Default: true.
+ Seed all random number generators in a predictable way so the pattern
+ is repeatable across runs. Default: true.
.. option:: allrandrepeat=bool
- Seed all random number generators in a predictable way so results are
- repeatable across runs. Default: false.
+ Alias for :option:`randrepeat`. Default: true.
.. option:: randseed=int
**random**
Advise using **FADV_RANDOM**.
+ **noreuse**
+ Advise using **FADV_NOREUSE**. This may be a no-op on older Linux
+ kernels. Since Linux 6.3, it provides a hint to the LRU algorithm.
+ See the :manpage:`posix_fadvise(2)` man page.
+
.. option:: write_hint=str
Use :manpage:`fcntl(2)` to advise the kernel what life time to expect
before overwriting. The `trimwrite` mode works well for this
constraint.
- **pmemblk**
- Read and write using filesystem DAX to a file on a filesystem
- mounted with DAX on a persistent memory device through the PMDK
- libpmemblk library.
-
**dev-dax**
Read and write using device DAX to a persistent memory device (e.g.,
/dev/dax0.0) through the PMDK libpmem library.
:option:`libblkio_driver`. If
:option:`mem`/:option:`iomem` is not specified, memory
allocation is delegated to libblkio (and so is
- guaranteed to work with the selected *driver*).
+ guaranteed to work with the selected *driver*). One
+ libblkio instance is used per process, so all jobs
+ setting the :option:`thread` option will share a single
+ instance (with one queue per thread) and must specify
+ compatible options. Note that some drivers don't allow
+ several instances to access the same device or file
+ simultaneously, but do allow it for threads.
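+
+ A minimal sketch (driver name and path are illustrative) where four
+ threads share one libblkio instance::
+
+ ioengine=libblkio
+ libblkio_driver=io_uring
+ libblkio_path=/dev/nvme0n1
+ thread
+ numjobs=4
+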
I/O engine specific parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For direct I/O, requests will only succeed if cache invalidation isn't required,
file blocks are fully allocated and the disk request could be issued immediately.
+.. option:: fdp=bool : [io_uring_cmd]
+
+ Enable Flexible Data Placement mode for write commands.
+
+.. option:: fdp_pli=str : [io_uring_cmd]
+
+ Select which Placement ID Index/Indices this job is allowed to use for
+ writes. By default, the job will cycle through all available Placement
+ IDs, so use this to isolate these identifiers to specific jobs. If you
+ want fio to use placement identifiers only at indices 0, 2 and 5,
+ specify ``fdp_pli=0,2,5``.
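+
+ A minimal sketch (the character device path is hypothetical) enabling
+ FDP writes restricted to placement ID indices 0 and 2::
+
+ ioengine=io_uring_cmd
+ filename=/dev/ng0n1
+ rw=randwrite
+ fdp=1
+ fdp_pli=0,2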
+
.. option:: cpuload=int : [cpuio]
Attempt to use the specified percentage of CPU cycles. This is a mandatory
**posix**
Use the posix asynchronous I/O interface to perform one or
more I/O operations asynchronously.
+ **vfio**
+ Use the user-space VFIO-based backend, implemented using
+ libvfn instead of SPDK.
**nil**
Do not transfer any data; just pretend to. This is mainly used
for introspective performance evaluation.
.. option:: xnvme_dev_nsid=int : [xnvme]
- xnvme namespace identifier for userspace NVMe driver, such as SPDK.
+ xnvme namespace identifier for userspace NVMe drivers, such as SPDK or
+ vfio.
+
+.. option:: xnvme_dev_subnqn=str : [xnvme]
+
+ Sets the subsystem NQN for fabrics. This is for xNVMe to utilize a
+ fabrics target with multiple systems.
+
+.. option:: xnvme_mem=str : [xnvme]
+
+ Select the xnvme memory backend. This can take one of these values.
+
+ **posix**
+ This is the default posix memory backend for the Linux NVMe
+ driver.
+ **hugepage**
+ Use hugepages, instead of the existing posix memory backend. The
+ memory backend uses hugetlbfs. This requires users to allocate
+ hugepages, mount hugetlbfs and set the XNVME_HUGETLB_PATH
+ environment variable.
+ **spdk**
+ Uses SPDK's memory allocator.
+ **vfio**
+ Uses libvfn's memory allocator. This also specifies the use
+ of the libvfn backend instead of SPDK.
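+
+ As a sketch (the device path is hypothetical), selecting the hugepage
+ backend; this assumes hugepages are allocated, hugetlbfs is mounted and
+ XNVME_HUGETLB_PATH points at it::
+
+ ioengine=xnvme
+ filename=/dev/nvme0n1
+ xnvme_mem=hugepage
+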
.. option:: xnvme_iovec=int : [xnvme]
performance. The default is to enable it only if
:option:`libblkio_wait_mode=eventfd <libblkio_wait_mode>`.
+.. option:: no_completion_thread : [windowsaio]
+
+ Avoid using a separate thread for completion polling.
+
I/O depth
~~~~~~~~~
fio will ignore the thinktime and continue doing IO at the specified
rate, instead of entering a catch-up mode after thinktime is done.
+.. option:: rate_cycle=int
+
+ Average bandwidth for :option:`rate` and :option:`rate_min` over this number
+ of milliseconds. Defaults to 1000.
+
I/O latency
~~~~~~~~~~~
microseconds. Comma-separated values may be specified for reads, writes,
and trims as described in :option:`blocksize`.
-.. option:: rate_cycle=int
-
- Average bandwidth for :option:`rate` and :option:`rate_min` over this number
- of milliseconds. Defaults to 1000.
-
I/O replay
~~~~~~~~~~
verification pass, according to the settings in the job file used. Default
false.
+.. option:: experimental_verify=bool
+
+ Enable experimental verification. Standard verify records I/O metadata
+ for later use during the verification phase. Experimental verify
+ instead resets the file after the write phase and then replays I/Os for
+ the verification phase.
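+
+ A minimal sketch (values are illustrative) of a write-and-verify job
+ using experimental verification::
+
+ rw=write
+ verify=crc32c
+ experimental_verify=1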
+
.. option:: trim_percentage=int
Number of verify blocks to discard/trim.
Trim this number of I/O blocks.
-.. option:: experimental_verify=bool
-
- Enable experimental verification. Standard verify records I/O metadata
- for later use during the verification phase. Experimental verify
- instead resets the file after the write phase and then replays I/Os for
- the verification phase.
-
Steady state
~~~~~~~~~~~~
.. option:: steadystate_duration=time, ss_dur=time
- A rolling window of this duration will be used to judge whether steady state
- has been reached. Data will be collected once per second. The default is 0
- which disables steady state detection. When the unit is omitted, the
- value is interpreted in seconds.
+ A rolling window of this duration will be used to judge whether steady
+ state has been reached. Data will be collected every
+ :option:`ss_interval`. The default is 0 which disables steady state
+ detection. When the unit is omitted, the value is interpreted in
+ seconds.
.. option:: steadystate_ramp_time=time, ss_ramp=time
collection for checking the steady state job termination criterion. The
default is 0. When the unit is omitted, the value is interpreted in seconds.
+.. option:: steadystate_check_interval=time, ss_interval=time
+
+ The values during the rolling window will be collected at an interval
+ of this duration. If :option:`ss_interval` is 30s and :option:`ss_dur`
+ is 300s, 10 measurements will be taken. Default is 1s but that might
+ not converge, especially for slower devices, so set this accordingly.
+ When the unit is omitted, the value is interpreted in seconds.
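+
+ For example, this sketch (criterion and values are illustrative)
+ matches the arithmetic above, taking 10 measurements per window::
+
+ steadystate=iops_slope:0.1%
+ steadystate_duration=300s
+ steadystate_check_interval=30s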
+
Measurements and reporting
~~~~~~~~~~~~~~~~~~~~~~~~~~
It is the sum of submission and completion latency.
**bw**
- Bandwidth statistics based on samples. Same names as the xlat stats,
- but also includes the number of samples taken (**samples**) and an
- approximate percentage of total aggregate bandwidth this thread
- received in its group (**per**). This last value is only really
- useful if the threads in this group are on the same disk, since they
- are then competing for disk access.
+ Bandwidth statistics based on measurements from discrete
+ intervals. Fio continuously monitors bytes transferred and I/O
+ operations completed. By default fio calculates bandwidth in
+ each half-second interval (see :option:`bwavgtime`) and reports
+ descriptive statistics for the measurements here. Same names as
+ the xlat stats, but also includes the number of samples taken
+ (**samples**) and an approximate percentage of total aggregate
+ bandwidth this thread received in its group (**per**). This
+ last value is only really useful if the threads in this group
+ are on the same disk, since they are then competing for disk
+ access.
**iops**
- IOPS statistics based on samples. Same names as bw.
+ IOPS statistics based on measurements from discrete intervals.
+ For details see the description for bw above. See
+ :option:`iopsavgtime` to control the duration of the intervals.
+ Same values reported here as for bw except for percentage.
**lat (nsec/usec/msec)**
The distribution of I/O completion latencies. This is the time from when