.. option:: runtime=time
- Tell fio to terminate processing after the specified period of time. It
- can be quite hard to determine for how long a specified job will run, so
- this parameter is handy to cap the total runtime to a given time. When
- the unit is omitted, the value is interpreted in seconds.
+ Limit runtime. The test will run until it completes the configured I/O
+ workload or until it has run for this specified amount of time, whichever
+ occurs first. It can be quite hard to determine for how long a specified
+ job will run, so this parameter is handy to cap the total runtime to a
+ given time. When the unit is omitted, the value is interpreted in
+ seconds.
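+
+ For example, a sketch of a job that stops after ten minutes even if the
+ configured 100G read has not completed (sizes illustrative)::
+
+     rw=read
+     size=100G
+     runtime=10m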
.. option:: time_based
.. option:: max_open_zones=int
- When running a random write test across an entire drive many more
- zones will be open than in a typical application workload. Hence this
- command line option that allows one to limit the number of open zones. The
- number of open zones is defined as the number of zones to which write
- commands are issued.
+ A zone of a zoned block device is in the open state when it is partially
+ written (i.e. not all sectors of the zone have been written). Zoned
+ block devices may have a limit on the total number of zones that can
+ be simultaneously in the open state, that is, the number of zones that
+ can be written to simultaneously. The :option:`max_open_zones` parameter
+ limits the number of zones to which write commands are issued by all fio
+ jobs, that is, limits the number of zones that will be in the open
+ state. This parameter is relevant only if the :option:`zonemode` =zbd is
+ used. The default value is always equal to the maximum number of open
+ zones of the target zoned block device, and a value higher than this limit
+ cannot be specified by users unless the option
+ :option:`ignore_zone_limits` is specified. When
+ :option:`ignore_zone_limits` is specified or the target device has no
+ limit on the number of zones that can be in an open state,
+ :option:`max_open_zones` can specify 0 to disable any limit on the
+ number of zones that can be simultaneously written to by all jobs.
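+
+ For example, a sketch of a zoned random write job capped at four
+ simultaneously open zones (the device path is illustrative; any zoned
+ block device works)::
+
+     zonemode=zbd
+     filename=/dev/nullb0
+     direct=1
+     rw=randwrite
+     max_open_zones=4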
.. option:: job_max_open_zones=int
- Limit on the number of simultaneously opened zones per single
- thread/process.
+ In the same manner as :option:`max_open_zones`, limit the number of open
+ zones per fio job, that is, the number of zones that a single job can
+ simultaneously write to. A value of zero indicates no limit.
+ Default: zero.
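+
+ For example, a sketch limiting each of two cloned jobs to two open
+ zones, so at most four zones are open in total (device path
+ illustrative)::
+
+     zonemode=zbd
+     filename=/dev/nullb0
+     direct=1
+     rw=randwrite
+     numjobs=2
+     job_max_open_zones=2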
.. option:: ignore_zone_limits=bool
.. option:: zone_reset_threshold=float
- A number between zero and one that indicates the ratio of logical
- blocks with data to the total number of logical blocks in the test
- above which zones should be reset periodically.
+ A number between zero and one that indicates the ratio of written bytes
+ in the zones with write pointers in the IO range to the size of the IO
+ range. When the current ratio exceeds this threshold, zones are reset
+ periodically as :option:`zone_reset_frequency` specifies. When this
+ option is used with multiple jobs, the IO range must be the same for
+ all write jobs.
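+
+ For example, a sketch that starts issuing zone resets (roughly one per
+ write request, per the frequency of 1.0) once 40% of the IO range has
+ been written::
+
+     zonemode=zbd
+     rw=randwrite
+     zone_reset_threshold=0.4
+     zone_reset_frequency=1.0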
.. option:: zone_reset_frequency=float
OpenBSD and ZFS on Solaris don't support direct I/O. On Windows the synchronous
ioengines don't support direct I/O. Default: false.
-.. option:: atomic=bool
-
- If value is true, attempt to use atomic direct I/O. Atomic writes are
- guaranteed to be stable once acknowledged by the operating system. Only
- Linux supports O_ATOMIC right now.
-
.. option:: buffered=bool
If value is true, use buffered I/O. This is the opposite of the
Generate the same offset.
``sequential`` is only useful for random I/O, where fio would normally
- generate a new random offset for every I/O. If you append e.g. 8 to randread,
- you would get a new random offset for every 8 I/Os. The result would be a
- seek for only every 8 I/Os, instead of for every I/O. Use ``rw=randread:8``
- to specify that. As sequential I/O is already sequential, setting
- ``sequential`` for that would not result in any differences. ``identical``
- behaves in a similar fashion, except it sends the same offset 8 number of
- times before generating a new offset.
+ generate a new random offset for every I/O. If you append e.g. 8 to
+ randread, i.e. ``rw=randread:8``, you would get a new random offset for
+ every 8 I/Os. The result would be a sequence of 8 sequential offsets
+ with a random starting point. However, this behavior may change if a
+ sequential I/O reaches the end of the file. As sequential I/O is
+ already sequential, setting ``sequential`` for that would not result in
+ any difference. ``identical`` behaves in a similar fashion, except it
+ sends the same offset 8 times before generating a new offset.
+
+ Example #1::
+
+ rw=randread:8
+ rw_sequencer=sequential
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 8k, 12k, 16k, 20k, 24k, 28k, 32k, 92k, 96k, 100k, 104k, 108k,
+ 112k, 116k, 120k, 48k, 52k ...
+
+ Example #2::
+
+ rw=randread:8
+ rw_sequencer=identical
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 4k, 4k, 4k, 4k, 4k, 4k, 4k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 92k,
+ 48k, 48k, 48k ...
.. option:: unified_rw_reporting=str
before overwriting. The `trimwrite` mode works well for this
constraint.
- **pmemblk**
- Read and write using filesystem DAX to a file on a filesystem
- mounted with DAX on a persistent memory device through the PMDK
- libpmemblk library.
-
**dev-dax**
Read and write using device DAX to a persistent memory device (e.g.,
/dev/dax0.0) through the PMDK libpmem library.
:option:`libblkio_driver`. If
:option:`mem`/:option:`iomem` is not specified, memory
allocation is delegated to libblkio (and so is
- guaranteed to work with the selected *driver*).
+ guaranteed to work with the selected *driver*). One
+ libblkio instance is used per process, so all jobs
+ setting option :option:`thread` will share a single
+ instance (with one queue per thread) and must specify
+ compatible options. Note that some drivers don't allow
+ several instances to access the same device or file
+ simultaneously, but allow it for threads.
I/O engine specific parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[libblkio]
- Use poll queues.
+ Use poll queues. This is incompatible with
+ :option:`libblkio_wait_mode=eventfd <libblkio_wait_mode>` and
+ :option:`libblkio_force_enable_completion_eventfd`.
[pvsync2]
For direct I/O, requests will only succeed if cache invalidation isn't required,
file blocks are fully allocated and the disk request could be issued immediately.
+.. option:: fdp=bool : [io_uring_cmd]
+
+ Enable Flexible Data Placement mode for write commands.
+
+.. option:: fdp_pli=str : [io_uring_cmd]
+
+ Select which Placement ID Index/Indices this job is allowed to use for
+ writes. By default, the job will cycle through all available Placement
+ IDs, so use this to isolate these identifiers to specific jobs. If you
+ want fio to use placement identifiers only at indices 0, 2 and 5,
+ specify ``fdp_pli=0,2,5``.
+
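+ For example, a sketch of an FDP write job restricted to placement ID
+ indices 0, 2 and 5 (the NVMe character device path is illustrative)::
+
+     ioengine=io_uring_cmd
+     cmd_type=nvme
+     filename=/dev/ng0n1
+     rw=randwrite
+     fdp=1
+     fdp_pli=0,2,5
+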
.. option:: cpuload=int : [cpuio]
Attempt to use the specified percentage of CPU cycles. This is a mandatory
**posix**
Use the posix asynchronous I/O interface to perform one or
more I/O operations asynchronously.
+ **vfio**
+ Use the user-space VFIO-based backend, implemented using
+ libvfn instead of SPDK.
**nil**
Do not transfer any data; just pretend to. This is mainly used
for introspective performance evaluation.
.. option:: xnvme_dev_nsid=int : [xnvme]
- xnvme namespace identifier for userspace NVMe driver, such as SPDK.
+ xnvme namespace identifier for userspace NVMe drivers such as SPDK or
+ vfio.
+
+.. option:: xnvme_dev_subnqn=str : [xnvme]
+
+ Sets the subsystem NQN for fabrics. This is for xNVMe to utilize a
+ fabrics target with multiple systems.
+
+.. option:: xnvme_mem=str : [xnvme]
+
+ Select the xnvme memory backend. It can take one of these values.
+
+ **posix**
+ This is the default posix memory backend for the Linux NVMe driver.
+ **hugepage**
+ Use hugepages, instead of the existing posix memory backend. The
+ memory backend uses hugetlbfs. This requires users to allocate
+ hugepages, mount hugetlbfs and set the XNVME_HUGETLB_PATH
+ environment variable.
+ **spdk**
+ Uses SPDK's memory allocator.
+ **vfio**
+ Uses libvfn's memory allocator. This also specifies the use
+ of the libvfn backend instead of SPDK.
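+
+ For example, a sketch selecting the hugepage backend (assumes hugepages
+ are allocated, hugetlbfs is mounted and the XNVME_HUGETLB_PATH
+ environment variable is set; device path illustrative)::
+
+     ioengine=xnvme
+     xnvme_mem=hugepage
+     filename=/dev/nvme0n1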
.. option:: xnvme_iovec=int : [xnvme]
libblkio version in use and are listed at
https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+.. option:: libblkio_path=str : [libblkio]
+
+ Sets the value of the driver-specific "path" property before connecting
+ the libblkio instance, which identifies the target device or file on
+ which to perform I/O. Its exact semantics are driver-dependent and not
+ all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
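+ For example, a sketch using the io_uring libblkio driver against a raw
+ block device (the path is illustrative)::
+
+     ioengine=libblkio
+     libblkio_driver=io_uring
+     libblkio_path=/dev/nvme0n1
+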
.. option:: libblkio_pre_connect_props=str : [libblkio]
- A colon-separated list of libblkio properties to be set after creating
- but before connecting the libblkio instance. Each property must have the
- format ``<name>=<value>``. Colons can be escaped as ``\:``. These are
- set after the engine sets any other properties, so those can be
- overriden. Available properties depend on the libblkio version in use
+ A colon-separated list of additional libblkio properties to be set after
+ creating but before connecting the libblkio instance. Each property must
+ have the format ``<name>=<value>``. Colons can be escaped as ``\:``.
+ These are set after the engine sets any other properties, so those can
+ be overridden. Available properties depend on the libblkio version in use
and are listed at
https://libblkio.gitlab.io/libblkio/blkio.html#properties
+.. option:: libblkio_num_entries=int : [libblkio]
+
+ Sets the value of the driver-specific "num-entries" property before
+ starting the libblkio instance. Its exact semantics are driver-dependent
+ and not all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
+.. option:: libblkio_queue_size=int : [libblkio]
+
+ Sets the value of the driver-specific "queue-size" property before
+ starting the libblkio instance. Its exact semantics are driver-dependent
+ and not all drivers may support it; see
+ https://libblkio.gitlab.io/libblkio/blkio.html#drivers
+
.. option:: libblkio_pre_start_props=str : [libblkio]
- A colon-separated list of libblkio properties to be set after connecting
- but before starting the libblkio instance. Each property must have the
- format ``<name>=<value>``. Colons can be escaped as ``\:``. These are
- set after the engine sets any other properties, so those can be
- overriden. Available properties depend on the libblkio version in use
+ A colon-separated list of additional libblkio properties to be set after
+ connecting but before starting the libblkio instance. Each property must
+ have the format ``<name>=<value>``. Colons can be escaped as ``\:``.
+ These are set after the engine sets any other properties, so those can
+ be overridden. Available properties depend on the libblkio version in use
and are listed at
https://libblkio.gitlab.io/libblkio/blkio.html#properties
Submit trims as "write zeroes" requests instead of discard requests.
+.. option:: libblkio_wait_mode=str : [libblkio]
+
+ How to wait for completions:
+
+ **block** (default)
+ Use a blocking call to ``blkioq_do_io()``.
+ **eventfd**
+ Use a blocking call to ``read()`` on the completion eventfd.
+ **loop**
+ Use a busy loop with a non-blocking call to ``blkioq_do_io()``.
+
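+ For example, a sketch that busy-polls for completions instead of
+ blocking (driver and path illustrative)::
+
+     ioengine=libblkio
+     libblkio_driver=io_uring
+     libblkio_path=/dev/nvme0n1
+     libblkio_wait_mode=loop
+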
+.. option:: libblkio_force_enable_completion_eventfd : [libblkio]
+
+ Enable the queue's completion eventfd even when unused. This may impact
+ performance. The default is to enable it only if
+ :option:`libblkio_wait_mode=eventfd <libblkio_wait_mode>`.
+
I/O depth
~~~~~~~~~