.. option:: zone_reset_threshold=float
- A number between zero and one that indicates the ratio of logical
- blocks with data to the total number of logical blocks in the test
- above which zones should be reset periodically.
+	A number between zero and one that indicates the ratio of written bytes
+	in the zones with write pointers in the IO range to the size of the IO
+	range. When the current ratio is above this value, zones are reset
+	periodically as :option:`zone_reset_frequency` specifies. If this
+	option is used with multiple jobs, all write jobs must use the same IO
+	range.
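+
+	As a sketch, a hypothetical zoned-write job (device path and values are
+	illustrative only) could combine this option with
+	:option:`zone_reset_frequency` like this::
+
+		[zbd-write]
+		filename=/dev/nullb0
+		zonemode=zbd
+		rw=randwrite
+		zone_reset_threshold=0.4
+		zone_reset_frequency=0.5
+
+	Here zone resets begin once written data in the targeted zones exceeds
+	40% of the IO range, at the rate :option:`zone_reset_frequency` sets.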
.. option:: zone_reset_frequency=float
OpenBSD and ZFS on Solaris don't support direct I/O. On Windows the synchronous
ioengines don't support direct I/O. Default: false.
-.. option:: atomic=bool
-
- If value is true, attempt to use atomic direct I/O. Atomic writes are
- guaranteed to be stable once acknowledged by the operating system. Only
- Linux supports O_ATOMIC right now.
-
.. option:: buffered=bool
If value is true, use buffered I/O. This is the opposite of the
Generate the same offset.
``sequential`` is only useful for random I/O, where fio would normally
- generate a new random offset for every I/O. If you append e.g. 8 to randread,
- you would get a new random offset for every 8 I/Os. The result would be a
- seek for only every 8 I/Os, instead of for every I/O. Use ``rw=randread:8``
- to specify that. As sequential I/O is already sequential, setting
- ``sequential`` for that would not result in any differences. ``identical``
- behaves in a similar fashion, except it sends the same offset 8 number of
- times before generating a new offset.
+	generate a new random offset for every I/O. If you append e.g. 8 to
+	randread, i.e. ``rw=randread:8``, you would get a new random offset for
+	every 8 I/Os. The result would be a sequence of 8 sequential offsets
+	with a random starting point. However, this behavior may change if
+	sequential I/O reaches the end of the file. As sequential I/O is
+	already sequential, setting ``sequential`` for it would not make any
+	difference. ``identical`` behaves in a similar fashion, except it sends
+	the same offset 8 times before generating a new offset.
+
+ Example #1::
+
+ rw=randread:8
+ rw_sequencer=sequential
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 8k, 12k, 16k, 20k, 24k, 28k, 32k, 92k, 96k, 100k, 104k, 108k,
+ 112k, 116k, 120k, 48k, 52k ...
+
+ Example #2::
+
+ rw=randread:8
+ rw_sequencer=identical
+ bs=4k
+
+ The generated sequence of offsets will look like this:
+ 4k, 4k, 4k, 4k, 4k, 4k, 4k, 4k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 92k,
+ 48k, 48k, 48k ...
.. option:: unified_rw_reporting=str
before overwriting. The `trimwrite` mode works well for this
constraint.
- **pmemblk**
- Read and write using filesystem DAX to a file on a filesystem
- mounted with DAX on a persistent memory device through the PMDK
- libpmemblk library.
-
**dev-dax**
Read and write using device DAX to a persistent memory device (e.g.,
/dev/dax0.0) through the PMDK libpmem library.
For direct I/O, requests will only succeed if cache invalidation isn't required,
file blocks are fully allocated and the disk request could be issued immediately.
+.. option:: fdp=bool : [io_uring_cmd]
+
+ Enable Flexible Data Placement mode for write commands.
+
+.. option:: fdp_pli=str : [io_uring_cmd]
+
+	Select which Placement ID Indices this job is allowed to use for
+	writes. By default, the job will cycle through all available Placement
+	IDs, so use this option to restrict these identifiers to specific
+	jobs. If you want fio to use placement identifiers only at indices 0,
+	2 and 5, specify ``fdp_pli=0,2,5``.
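+
+	A minimal sketch of a job using both options (the character device path
+	is illustrative only)::
+
+		[fdp-write]
+		ioengine=io_uring_cmd
+		cmd_type=nvme
+		filename=/dev/ng0n1
+		rw=randwrite
+		fdp=1
+		fdp_pli=0,2,5
+
+	This restricts the job's writes to the placement IDs at indices 0, 2
+	and 5.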
+
.. option:: cpuload=int : [cpuio]
Attempt to use the specified percentage of CPU cycles. This is a mandatory
**posix**
Use the posix asynchronous I/O interface to perform one or
more I/O operations asynchronously.
+ **vfio**
+ Use the user-space VFIO-based backend, implemented using
+ libvfn instead of SPDK.
**nil**
Do not transfer any data; just pretend to. This is mainly used
for introspective performance evaluation.
.. option:: xnvme_dev_nsid=int : [xnvme]
- xnvme namespace identifier for userspace NVMe driver, such as SPDK.
+	xnvme namespace identifier for userspace NVMe drivers, such as SPDK or
+	vfio.
+
+.. option:: xnvme_dev_subnqn=str : [xnvme]
+
+ Sets the subsystem NQN for fabrics. This is for xNVMe to utilize a
+ fabrics target with multiple systems.
+
+.. option:: xnvme_mem=str : [xnvme]
+
+	Select the xnvme memory backend. It can take one of these values:
+
+	**posix**
+		This is the default posix memory backend for the Linux NVMe
+		driver.
+	**hugepage**
+		Use hugepages instead of the default posix memory backend. This
+		memory backend uses hugetlbfs and requires users to allocate
+		hugepages, mount hugetlbfs and set the ``XNVME_HUGETLB_PATH``
+		environment variable.
+	**spdk**
+		Uses SPDK's memory allocator.
+	**vfio**
+		Uses libvfn's memory allocator. This also selects the libvfn
+		backend instead of SPDK.
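+
+	For instance, a sketch of a job using the hugepage backend (device
+	path and hugetlbfs mount point are illustrative only; hugepages must
+	already be allocated and hugetlbfs mounted as described above)::
+
+		[xnvme-hugepage]
+		ioengine=xnvme
+		filename=/dev/nvme0n1
+		xnvme_mem=hugepage
+
+	with the ``XNVME_HUGETLB_PATH`` environment variable pointing at the
+	hugetlbfs mount, e.g. ``XNVME_HUGETLB_PATH=/mnt/huge``.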
.. option:: xnvme_iovec=int : [xnvme]