diff --git a/fio.1 b/fio.1
index 6f7a608d..e94fad0a 100644
--- a/fio.1
+++ b/fio.1
@@ -828,14 +828,25 @@ numbers fio only reads beyond the write pointer if explicitly told to do so.
 Default: false.
 .TP
 .BI max_open_zones \fR=\fPint
-When running a random write test across an entire drive many more zones will be
-open than in a typical application workload. Hence this command line option
-that allows one to limit the number of open zones. The number of open zones is
-defined as the number of zones to which write commands are issued by all
-threads/processes.
+A zone of a zoned block device is in the open state when it is partially
+written (i.e. not all sectors of the zone have been written). Zoned block
+devices may have a limit on the total number of zones that can be
+simultaneously in the open state, that is, the number of zones that can be
+written to simultaneously. The \fBmax_open_zones\fR parameter limits the
+number of zones to which write commands are issued by all fio jobs, that is,
+the number of zones that will be in the open state. This parameter is
+relevant only if \fBzonemode\fR=zbd is used. The default value is equal to
+the maximum number of open zones of the target zoned block device, and a
+value higher than this limit cannot be specified unless the option
+\fBignore_zone_limits\fR is given. When \fBignore_zone_limits\fR is given, or
+the target device has no limit on the number of zones that can be in the open
+state, \fBmax_open_zones\fR may be set to 0 to disable any limit on the
+number of zones that can be simultaneously written to by all jobs.
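+.P
+As an illustration (a minimal sketch, not a tuning recommendation; the limit
+of 8 is an arbitrary example value), a random write job on a zoned device
+could cap the number of open zones like this:
+.RS
+.P
+.PD 0
+zonemode=zbd
+.P
+rw=randwrite
+.P
+max_open_zones=8
+.PD
+.RE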
 .TP
 .BI job_max_open_zones \fR=\fPint
-Limit on the number of simultaneously opened zones per single thread/process.
+In the same manner as \fBmax_open_zones\fR, limits the number of open zones
+per fio job, that is, the number of zones that a single job can
+simultaneously write to. A value of zero indicates no limit. Default: zero.
 .TP
 .BI ignore_zone_limits \fR=\fPbool
 If this option is used, fio will ignore the maximum number of open zones limit
@@ -843,9 +854,11 @@ of the zoned block device in use, thus allowing the option \fBmax_open_zones\fR
 value to be larger than the device reported limit. Default: false.
 .TP
 .BI zone_reset_threshold \fR=\fPfloat
-A number between zero and one that indicates the ratio of logical blocks with
-data to the total number of logical blocks in the test above which zones
-should be reset periodically.
+A number between zero and one that indicates the ratio of written bytes in
+the zones with write pointers in the IO range to the size of the IO range.
+When the current ratio exceeds this threshold, zones are reset periodically,
+as often as \fBzone_reset_frequency\fR specifies. When multiple jobs use this
+option, all write jobs must have the same IO range.
 .TP
 .BI zone_reset_frequency \fR=\fPfloat
 A number between zero and one that indicates how often a zone reset should be
@@ -860,11 +873,6 @@ If value is true, use non-buffered I/O. This is usually O_DIRECT. Note that
 OpenBSD and ZFS on Solaris don't support direct I/O. On Windows the synchronous
 ioengines don't support direct I/O. Default: false.
 .TP
-.BI atomic \fR=\fPbool
-If value is true, attempt to use atomic direct I/O. Atomic writes are
-guaranteed to be stable once acknowledged by the operating system. Only
-Linux supports O_ATOMIC right now.
-.TP
 .BI buffered \fR=\fPbool
 If value is true, use buffered I/O. This is the opposite of the \fBdirect\fR
 option. Defaults to true.
@@ -941,12 +949,47 @@ Generate the same offset.
 .P
 \fBsequential\fR is only useful for random I/O, where fio would normally
 generate a new random offset for every I/O. If you append e.g. 8 to randread,
-you would get a new random offset for every 8 I/Os. The result would be a
-seek for only every 8 I/Os, instead of for every I/O. Use `rw=randread:8'
-to specify that. As sequential I/O is already sequential, setting
-\fBsequential\fR for that would not result in any differences. \fBidentical\fR
-behaves in a similar fashion, except it sends the same offset 8 number of
-times before generating a new offset.
+i.e. `rw=randread:8', you would get a new random offset for every 8 I/Os.
+The result would be a sequence of 8 sequential offsets with a random starting
+point. However, this behavior may change if sequential I/O reaches the end of
+the file. As sequential I/O is already sequential, setting \fBsequential\fR
+for that would not result in any difference. \fBidentical\fR behaves in a
+similar fashion, except it sends the same offset 8 times before generating a
+new offset.
+.P
+Example #1:
+.RS
+.P
+.PD 0
+rw=randread:8
+.P
+rw_sequencer=sequential
+.P
+bs=4k
+.PD
+.RE
+.P
+The generated sequence of offsets will look like this:
+4k, 8k, 12k, 16k, 20k, 24k, 28k, 32k, 92k, 96k, 100k, 104k, 108k, 112k, 116k,
+120k, 48k, 52k ...
+.P
+Example #2:
+.RS
+.P
+.PD 0
+rw=randread:8
+.P
+rw_sequencer=identical
+.P
+bs=4k
+.PD
+.RE
+.P
+The generated sequence of offsets will look like this:
+4k, 4k, 4k, 4k, 4k, 4k, 4k, 4k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 92k, 48k,
+48k, 48k ...
 .RE
 .TP
 .BI unified_rw_reporting \fR=\fPstr
@@ -1911,11 +1954,6 @@ e.g., on NAND, writing sequentially to erase blocks and discarding before
 overwriting. The \fBtrimwrite\fR mode works well for this constraint.
 .TP
-.B pmemblk
-Read and write using filesystem DAX to a file on a filesystem
-mounted with DAX on a persistent memory device through the PMDK
-libpmemblk library.
-.TP
 .B dev\-dax
 Read and write using device DAX to a persistent memory device (e.g.,
 /dev/dax0.0) through the PMDK libpmem library.
@@ -1997,7 +2035,11 @@ engine specific options. (See \fIhttps://xnvme.io/\fR).
 Use the libblkio library (\fIhttps://gitlab.com/libblkio/libblkio\fR). The
 specific driver to use must be set using \fBlibblkio_driver\fR. If
 \fBmem\fR/\fBiomem\fR is not specified, memory allocation is delegated to
-libblkio (and so is guaranteed to work with the selected driver).
+libblkio (and so is guaranteed to work with the selected driver). One libblkio
+instance is used per process, so all jobs that set the \fBthread\fR option
+share a single instance (with one queue per thread) and must specify
+compatible options. Note that some drivers don't allow several instances to
+access the same device or file simultaneously, but do allow it for threads.
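+.P
+For example (a sketch only; the io_uring driver name and the /dev/sdb path
+are illustrative and must match the actual setup), a job could select the
+engine and driver like this:
+.RS
+.P
+.PD 0
+ioengine=libblkio
+.P
+libblkio_driver=io_uring
+.P
+filename=/dev/sdb
+.PD
+.RE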
 .SS "I/O engine specific parameters"
 In addition, there are some parameters which are only valid when a specific
 \fBioengine\fR is in use. These are used identically to normal parameters,
@@ -2540,7 +2582,7 @@ replaced by the name of the job
 .BI (exec)grace_time\fR=\fPint
 Defines the time between the SIGTERM and SIGKILL signals. Default is 1 second.
 .TP
-.BI (exec)std_redirect\fR=\fbool
+.BI (exec)std_redirect\fR=\fPbool
 If set, stdout and stderr streams are redirected to files named from the job name. Default is true.
 .TP
 .BI (xnvme)xnvme_async\fR=\fPstr
@@ -2569,6 +2611,10 @@ Use Linux aio for Asynchronous I/O
 Use the posix asynchronous I/O interface to perform one or more I/O operations
 asynchronously.
 .TP
+.BI vfio
+Use the user-space VFIO-based backend, implemented using libvfn instead of
+SPDK.
+.TP
 .BI nil
 Do not transfer any data; just pretend to. This is mainly used for
 introspective performance evaluation.
@@ -2606,7 +2652,33 @@ Use Linux Block Layer ioctl() and sysfs for admin commands.
 .RE
 .TP
 .BI (xnvme)xnvme_dev_nsid\fR=\fPint
-xnvme namespace identifier for userspace NVMe driver such as SPDK.
+xnvme namespace identifier for the userspace NVMe drivers SPDK and vfio.
+.TP
+.BI (xnvme)xnvme_dev_subnqn\fR=\fPstr
+Sets the subsystem NQN for fabrics. This allows xNVMe to utilize a fabrics
+target that exposes multiple subsystems.
+.TP
+.BI (xnvme)xnvme_mem\fR=\fPstr
+Select the xnvme memory backend. It can take one of the following values.
+.RS
+.RS
+.TP
+.B posix
+This is the default posix memory backend for the Linux NVMe driver.
+.TP
+.BI hugepage
+Use hugepages instead of the default posix memory backend. This backend uses
+hugetlbfs and requires users to allocate hugepages, mount hugetlbfs and set
+the XNVME_HUGETLB_PATH environment variable (see the example at the end of
+this section).
+.TP
+.BI spdk
+Uses SPDK's memory allocator.
+.TP
+.BI vfio
+Uses libvfn's memory allocator. This also selects the use of the libvfn
+backend instead of SPDK.
+.RE
+.RE
 .TP
 .BI (xnvme)xnvme_iovec
 If this option is set, xnvme will use vectored read/write commands.
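+.P
+As a rough illustration of the hugepage memory backend (the hugetlbfs mount
+point and the device path are hypothetical, and hugepages must already be
+allocated), an invocation could look like this:
+.RS
+.P
+XNVME_HUGETLB_PATH=/mnt/huge fio --name=job1 --ioengine=xnvme
+--xnvme_mem=hugepage --filename=/dev/nvme0n1 --rw=randread --bs=4k
+.RE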