.. option:: runtime=time
- Tell fio to terminate processing after the specified period of time. It
- can be quite hard to determine for how long a specified job will run, so
- this parameter is handy to cap the total runtime to a given time. When
- the unit is omitted, the value is interpreted in seconds.
+ Limit runtime. The test will run until it completes the configured I/O
+ workload or until it has run for this specified amount of time, whichever
+ occurs first. It can be quite hard to determine for how long a specified
+ job will run, so this parameter is handy to cap the total runtime to a
+ given time. When the unit is omitted, the value is interpreted in
+ seconds.
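+
+ For example, to cap a job at one minute, looping the workload if it
+ completes early (the values are illustrative)::
+
+ runtime=1m
+ time_based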
.. option:: time_based
.. option:: max_open_zones=int
- When running a random write test across an entire drive many more
- zones will be open than in a typical application workload. Hence this
- command line option that allows to limit the number of open zones. The
- number of open zones is defined as the number of zones to which write
- commands are issued.
+ A zone of a zoned block device is in the open state when it is partially
+ written (i.e. not all sectors of the zone have been written). Zoned
+ block devices may have a limit on the total number of zones that can
+ be simultaneously in the open state, that is, the number of zones that
+ can be written to simultaneously. The :option:`max_open_zones` parameter
+ limits the number of zones to which write commands are issued by all fio
+ jobs, that is, limits the number of zones that will be in the open
+ state. This parameter is relevant only if the :option:`zonemode` =zbd is
+ used. The default value is always equal to the maximum number of open
+ zones of the target zoned block device, and a value higher than this limit
+ cannot be specified by users unless the option
+ :option:`ignore_zone_limits` is specified. When
+ :option:`ignore_zone_limits` is specified or the target device has no
+ limit on the number of zones that can be in an open state,
+ :option:`max_open_zones` can specify 0 to disable any limit on the
+ number of zones that can be simultaneously written to by all jobs.
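+
+ Example (the limit value is illustrative)::
+
+ zonemode=zbd
+ max_open_zones=12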
.. option:: job_max_open_zones=int
- Limit on the number of simultaneously opened zones per single
- thread/process.
+ In the same manner as :option:`max_open_zones`, limit the number of open
+ zones per fio job, that is, the number of zones that a single job can
+ simultaneously write to. A value of zero indicates no limit.
+ Default: zero.
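+
+ Example (each job is limited to 4 open zones; the value is
+ illustrative)::
+
+ zonemode=zbd
+ job_max_open_zones=4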
.. option:: ignore_zone_limits=bool
.. option:: zone_reset_threshold=float
- A number between zero and one that indicates the ratio of logical
- blocks with data to the total number of logical blocks in the test
- above which zones should be reset periodically.
+ A number between zero and one that indicates the ratio of written bytes
+ in the zones with write pointers in the IO range to the size of the IO
+ range. When the current ratio is above this threshold, zones are reset
+ periodically as :option:`zone_reset_frequency` specifies. If there are
+ multiple jobs when using this option, the IO range for all write jobs
+ has to be the same.
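+
+ Example (reset zones at every opportunity once 40% of the I/O range
+ has been written; the values are illustrative)::
+
+ zonemode=zbd
+ zone_reset_threshold=0.4
+ zone_reset_frequency=1.0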
.. option:: zone_reset_frequency=float
OpenBSD and ZFS on Solaris don't support direct I/O. On Windows the synchronous
ioengines don't support direct I/O. Default: false.
-.. option:: atomic=bool
-
- If value is true, attempt to use atomic direct I/O. Atomic writes are
- guaranteed to be stable once acknowledged by the operating system. Only
- Linux supports O_ATOMIC right now.
-
.. option:: buffered=bool
If value is true, use buffered I/O. This is the opposite of the
Random mixed reads and writes.
**trimwrite**
Sequential trim+write sequences. Blocks will be trimmed first,
- then the same blocks will be written to.
+ then the same blocks will be written to. So if ``io_size=64K``
+ is specified, Fio will trim a total of 64K bytes and also
+ write 64K bytes on the same trimmed blocks. This behaviour
+ will be consistent with ``number_ios`` or other Fio options
+ limiting the total bytes or number of I/Os.
+ **randtrimwrite**
+ Like trimwrite, but uses random offsets rather
+ than sequential ones.
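+
+ Example (trims 64K in total and writes 64K to the same blocks, at
+ random offsets; the values are illustrative)::
+
+ rw=randtrimwrite
+ bs=4k
+ io_size=64K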
Fio defaults to read if the option is not specified. For the mixed I/O
types, the default is to split them 50/50. For certain types of I/O the
Generate the same offset.
``sequential`` is only useful for random I/O, where fio would normally
- generate a new random offset for every I/O. If you append e.g. 8 to randread,
- you would get a new random offset for every 8 I/Os. The result would be a
- seek for only every 8 I/Os, instead of for every I/O. Use ``rw=randread:8``
- to specify that. As sequential I/O is already sequential, setting
- ``sequential`` for that would not result in any differences. ``identical``
- behaves in a similar fashion, except it sends the same offset 8 number of
- times before generating a new offset.
+ generate a new random offset for every I/O. If you append e.g. 8 to
+ randread, i.e. ``rw=randread:8``, you would get a new random offset for
+ every 8 I/Os. The result would be a sequence of 8 sequential offsets
+ with a random starting point. However, this behavior may change if a
+ sequential I/O reaches the end of the file. As sequential I/O is already
+ sequential, setting ``sequential`` for that would not result in any
+ difference. ``identical`` behaves in a similar fashion, except it sends
+ the same offset 8 number of times before generating a new offset.
+
+ Example #1::
+
+ rw=randread:8
+ rw_sequencer=sequential
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 8k, 12k, 16k, 20k, 24k, 28k, 32k, 92k, 96k, 100k, 104k, 108k,
+ 112k, 116k, 120k, 48k, 52k ...
+
+ Example #2::
+
+ rw=randread:8
+ rw_sequencer=identical
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 4k, 4k, 4k, 4k, 4k, 4k, 4k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 92k,
+ 48k, 48k, 48k ...
.. option:: unified_rw_reporting=str
supplied as a value between 0 and 100.
The second, optional float is allowed for **pareto**, **zipf** and **normal** distributions.
- It allows to set base of distribution in non-default place, giving more control
+ It allows one to set the base of the distribution in a non-default place, giving more control
over most probable outcome. This value is in range [0-1] which maps linearly to
range of possible random values.
Defaults are: random for **pareto** and **zipf**, and 0.5 for **normal**.
.. option:: size=int
The total size of file I/O for each thread of this job. Fio will run until
- this many bytes has been transferred, unless runtime is limited by other options
- (such as :option:`runtime`, for instance, or increased/decreased by :option:`io_size`).
+ this many bytes has been transferred, unless runtime is altered by other means
+ such as (1) :option:`runtime`, (2) :option:`io_size`, (3) :option:`number_ios`,
+ (4) gaps/holes while doing I/O's such as ``rw=read:16K``, or (5) sequential
+ I/O reaching end of the file which is possible when :option:`percentage_random`
+ is less than 100.
Fio will divide this size between the available files determined by options
such as :option:`nrfiles`, :option:`filename`, unless :option:`filesize` is
specified by the job. If the result of division happens to be 0, the size is
before overwriting. The `trimwrite` mode works well for this
constraint.
- **pmemblk**
- Read and write using filesystem DAX to a file on a filesystem
- mounted with DAX on a persistent memory device through the PMDK
- libpmemblk library.
-
**dev-dax**
Read and write using device DAX to a persistent memory device (e.g.,
/dev/dax0.0) through the PMDK libpmem library.
the SPDK NVMe driver, or your own custom NVMe driver. The xnvme engine includes
engine specific options. (See https://xnvme.io).
+ **libblkio**
+ Use the libblkio library
+ (https://gitlab.com/libblkio/libblkio). The specific
+ *driver* to use must be set using
+ :option:`libblkio_driver`. If
+ :option:`mem`/:option:`iomem` is not specified, memory
+ allocation is delegated to libblkio (and so is
+ guaranteed to work with the selected *driver*). One
+ libblkio instance is used per process, so all jobs
+ setting option :option:`thread` will share a single
+ instance (with one queue per thread) and must specify
+ compatible options. Note that some drivers don't allow
+ several instances to access the same device or file
+ simultaneously, but allow it for threads.
+
I/O engine specific parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
map and release for each IO. This is more efficient, and reduces the
IO latency as well.
-.. option:: nonvectored : [io_uring] [io_uring_cmd]
+.. option:: nonvectored=int : [io_uring] [io_uring_cmd]
With this option, fio will use non-vectored read/write commands, where
address must contain the address directly. Default is -1.
kernel of available items in the SQ ring. If this option is set, the
act of submitting IO will be done by a polling thread in the kernel.
This frees up cycles for fio, at the cost of using more CPU in the
- system.
+ system. As submission is just the time it takes to fill in the sqe
+ entries and any syscall required to wake up the idle kernel thread,
+ fio will not report submission latencies.
-.. option:: sqthread_poll_cpu : [io_uring] [io_uring_cmd]
+.. option:: sqthread_poll_cpu=int : [io_uring] [io_uring_cmd]
When :option:`sqthread_poll` is set, this option provides a way to
define which CPU should be used for the polling thread.
by the application. The benefits are more efficient IO for high IOPS
scenarios, and lower latencies for low queue depth IO.
+ [libblkio]
+
+ Use poll queues. This is incompatible with
+ :option:`libblkio_wait_mode=eventfd <libblkio_wait_mode>` and
+ :option:`libblkio_force_enable_completion_eventfd`.
+
[pvsync2]
Set RWF_HIPRI on I/O, indicating to the kernel that it's of higher priority
When hipri is set this determines the probability of a pvsync2 I/O being high
priority. The default is 100%.
-.. option:: nowait : [pvsync2] [libaio] [io_uring]
+.. option:: nowait=bool : [pvsync2] [libaio] [io_uring] [io_uring_cmd]
By default if a request cannot be executed immediately (e.g. resource starvation,
waiting on locks) it is queued and the initiating process will be blocked until
For direct I/O, requests will only succeed if cache invalidation isn't required,
file blocks are fully allocated and the disk request could be issued immediately.
+.. option:: fdp=bool : [io_uring_cmd]
+
+ Enable Flexible Data Placement mode for write commands.
+
+.. option:: fdp_pli=str : [io_uring_cmd]
+
+ Select which Placement ID Index/Indices this job is allowed to use for
+ writes. By default, the job will cycle through all available Placement
+ IDs, so use this to isolate these identifiers to specific jobs. If you
+ want fio to use placement identifiers only at indices 0, 2 and 5, specify
+ ``fdp_pli=0,2,5``.
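+
+ Example (restricts writes to placement ID indices 0, 2 and 5; the
+ device path is illustrative)::
+
+ ioengine=io_uring_cmd
+ cmd_type=nvme
+ filename=/dev/ng0n1
+ fdp=1
+ fdp_pli=0,2,5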
+
.. option:: cpuload=int : [cpuio]
Attempt to use the specified percentage of CPU cycles. This is a mandatory
[dfs]
- Specificy a different chunk size (in bytes) for the dfs file.
+ Specify a different chunk size (in bytes) for the dfs file.
Use DAOS container's chunk size by default.
[libhdfs]
.. option:: object_class=str : [dfs]
- Specificy a different object class for the dfs file.
+ Specify a different object class for the dfs file.
Use DAOS container's object class by default.
.. option:: skip_bad=bool : [mtd]
**posix**
Use the posix asynchronous I/O interface to perform one or
more I/O operations asynchronously.
+ **vfio**
+ Use the user-space VFIO-based backend, implemented using
+ libvfn instead of SPDK.
**nil**
Do not transfer any data; just pretend to. This is mainly used
for introspective performance evaluation.
.. option:: xnvme_dev_nsid=int : [xnvme]
- xnvme namespace identifier for userspace NVMe driver, such as SPDK.
+ xnvme namespace identifier for userspace NVMe driver, SPDK or vfio.
+
+.. option:: xnvme_dev_subnqn=str : [xnvme]
+
+ Sets the subsystem NQN for fabrics. This is for xNVMe to utilize a
+ fabrics target with multiple systems.
+
+.. option:: xnvme_mem=str : [xnvme]
+
+ Select the xnvme memory backend. This can take these values.
+
+ **posix**
+ This is the default posix memory backend for the Linux NVMe driver.
+ **hugepage**
+ Use hugepages, instead of the existing posix memory backend. The
+ memory backend uses hugetlbfs. This requires users to allocate
+ hugepages, mount hugetlbfs and set the XNVME_HUGETLB_PATH
+ environment variable.
+ **spdk**
+ Uses SPDK's memory allocator.
+ **vfio**
+ Uses libvfn's memory allocator. This also specifies the use
+ of libvfn backend instead of SPDK.
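+
+ Example (assumes hugepages are allocated, hugetlbfs is mounted and
+ XNVME_HUGETLB_PATH points at the mount)::
+
+ ioengine=xnvme
+ xnvme_mem=hugepage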
.. option:: xnvme_iovec=int : [xnvme]
If this option is set, xnvme will use vectored read/write commands.
+.. option:: libblkio_driver=str : [libblkio]
+
+ The libblkio *driver* to use. Different drivers access devices through
+ different underlying interfaces. Available drivers depend on the
+ libblkio version in use and are listed at
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
+.. option:: libblkio_path=str : [libblkio]
+
+ Sets the value of the driver-specific "path" property before connecting
+ the libblkio instance, which identifies the target device or file on
+ which to perform I/O. Its exact semantics are driver-dependent and not
+ all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
+.. option:: libblkio_pre_connect_props=str : [libblkio]
+
+ A colon-separated list of additional libblkio properties to be set after
+ creating but before connecting the libblkio instance. Each property must
+ have the format ``<name>=<value>``. Colons can be escaped as ``\:``.
+ These are set after the engine sets any other properties, so those can
+ be overridden. Available properties depend on the libblkio version in use
+ and are listed at
+ https://libblkio.gitlab.io/libblkio/blkio.html#properties
+
+.. option:: libblkio_num_entries=int : [libblkio]
+
+ Sets the value of the driver-specific "num-entries" property before
+ starting the libblkio instance. Its exact semantics are driver-dependent
+ and not all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
+.. option:: libblkio_queue_size=int : [libblkio]
+
+ Sets the value of the driver-specific "queue-size" property before
+ starting the libblkio instance. Its exact semantics are driver-dependent
+ and not all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
+.. option:: libblkio_pre_start_props=str : [libblkio]
+
+ A colon-separated list of additional libblkio properties to be set after
+ connecting but before starting the libblkio instance. Each property must
+ have the format ``<name>=<value>``. Colons can be escaped as ``\:``.
+ These are set after the engine sets any other properties, so those can
+ be overridden. Available properties depend on the libblkio version in use
+ and are listed at
+ https://libblkio.gitlab.io/libblkio/blkio.html#properties
+
+.. option:: libblkio_vectored : [libblkio]
+
+ Submit vectored read and write requests.
+
+.. option:: libblkio_write_zeroes_on_trim : [libblkio]
+
+ Submit trims as "write zeroes" requests instead of discard requests.
+
+.. option:: libblkio_wait_mode=str : [libblkio]
+
+ How to wait for completions:
+
+ **block** (default)
+ Use a blocking call to ``blkioq_do_io()``.
+ **eventfd**
+ Use a blocking call to ``read()`` on the completion eventfd.
+ **loop**
+ Use a busy loop with a non-blocking call to ``blkioq_do_io()``.
+
+.. option:: libblkio_force_enable_completion_eventfd : [libblkio]
+
+ Enable the queue's completion eventfd even when unused. This may impact
+ performance. The default is to enable it only if
+ :option:`libblkio_wait_mode=eventfd <libblkio_wait_mode>`.
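+
+ Putting the libblkio options together, a minimal job sketch (the
+ driver name and device path are illustrative)::
+
+ ioengine=libblkio
+ libblkio_driver=io_uring
+ libblkio_path=/dev/nvme0n1
+ libblkio_wait_mode=block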
+
I/O depth
~~~~~~~~~
.. option:: flow=int
- Weight in token-based flow control. If this value is used, then there is a
- 'flow counter' which is used to regulate the proportion of activity between
- two or more jobs. Fio attempts to keep this flow counter near zero. The
- ``flow`` parameter stands for how much should be added or subtracted to the
- flow counter on each iteration of the main I/O loop. That is, if one job has
- ``flow=8`` and another job has ``flow=-1``, then there will be a roughly 1:8
- ratio in how much one runs vs the other.
+ Weight in token-based flow control. If this value is used, then fio
+ regulates the activity between two or more jobs sharing the same
+ flow_id. Fio attempts to keep each job activity proportional to other
+ jobs' activities in the same flow_id group, with respect to requested
+ weight per job. That is, if one job has ``flow=3``, another job has
+ ``flow=2`` and another with ``flow=1``, then there will be a roughly 3:2:1
+ ratio in how much one runs vs the others.
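+
+ For example, two jobs weighted roughly 3:1 (the job names are
+ illustrative)::
+
+ [fast]
+ flow=3
+
+ [slow]
+ flow=1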
.. option:: flow_sleep=int
make fio terminate all jobs in the same group, as soon as one job of that
group finishes.
-.. option:: exit_what
+.. option:: exit_what=str
By default, fio will continue running all other jobs when one job finishes.
- Sometimes this is not the desired action. Setting ``exit_all`` will
+ Sometimes this is not the desired action. Setting ``exitall`` will
instead make fio terminate all jobs in the same group. The option
``exit_what`` allows one to control which jobs get terminated when ``exitall`` is
enabled. The default is ``group`` and does not change the behaviour of
.. option:: experimental_verify=bool
- Enable experimental verification.
+ Enable experimental verification. Standard verify records I/O metadata
+ for later use during the verification phase. Experimental verify
+ instead resets the file after the write phase and then replays I/Os for
+ the verification phase.
Steady state
~~~~~~~~~~~~
.. option:: steadystate_duration=time, ss_dur=time
- A rolling window of this duration will be used to judge whether steady state
- has been reached. Data will be collected once per second. The default is 0
- which disables steady state detection. When the unit is omitted, the
- value is interpreted in seconds.
+ A rolling window of this duration will be used to judge whether steady
+ state has been reached. Data will be collected every
+ :option:`ss_interval`. The default is 0 which disables steady state
+ detection. When the unit is omitted, the value is interpreted in
+ seconds.
.. option:: steadystate_ramp_time=time, ss_ramp=time
collection for checking the steady state job termination criterion. The
default is 0. When the unit is omitted, the value is interpreted in seconds.
+.. option:: steadystate_check_interval=time, ss_interval=time
+
+ The values during the rolling window will be collected with a period of
+ this value. If :option:`ss_interval` is 30s and :option:`ss_dur` is
+ 300s, 10 measurements will be taken. Default is 1s but that might not
+ converge, especially for slower devices, so set this accordingly. When
+ the unit is omitted, the value is interpreted in seconds.
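+
+ Example (ten measurements taken over a five minute window; the
+ values are illustrative)::
+
+ ss=bw_slope:0.1%
+ ss_dur=300s
+ ss_interval=30s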
+
Measurements and reporting
~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~
The second version of the trace file format was added in fio version 1.17. It
-allows to access more then one file per trace and has a bigger set of possible
+allows one to access more than one file per trace and has a bigger set of possible
file actions.
The first line of the trace file has to be::