limit reads or writes to a certain rate. If that is the case, then the
distribution may be skewed. Default: 50.
-.. option:: random_distribution=str:float[,str:float][,str:float]
+.. option:: random_distribution=str:float[:float][,str:float][,str:float]
By default, fio will use a completely uniform random distribution when asked
to perform random I/O. Sometimes it is useful to skew the distribution in
map. For the **normal** distribution, a normal (Gaussian) deviation is
supplied as a value between 0 and 100.
+ The second, optional float is allowed for **pareto**, **zipf** and **normal** distributions.
+ It allows one to place the center of the distribution at a non-default position, giving
+ more control over the most probable outcome. This value is in the range [0-1] and maps
+ linearly onto the range of possible random values.
+ Defaults are: random for **pareto** and **zipf**, and 0.5 for **normal**.
+ If you wanted to use **zipf** with a `theta` of 1.2 centered on 1/4 of the allowed value
+ range, you would use ``random_distribution=zipf:1.2:0.25``.
+
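+ For example, a minimal job sketch exercising this (the file name and size are
+ purely illustrative) could be::
+
+     [zipf-centered]
+     filename=/tmp/fio.data
+     size=64M
+     rw=randread
+     random_distribution=zipf:1.2:0.25
+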
For a **zoned** distribution, fio supports specifying percentages of I/O
access that should fall within what range of the file or device. For
example, given a criteria of:
**cpuio**
Doesn't transfer any data, but burns CPU cycles according to the
- :option:`cpuload` and :option:`cpuchunks` options. Setting
- :option:`cpuload`\=85 will cause that job to do nothing but burn 85%
+ :option:`cpuload`, :option:`cpuchunks` and :option:`cpumode` options.
+ Setting :option:`cpuload`\=85 will cause that job to do nothing but burn 85%
of the CPU. In case of SMP machines, use :option:`numjobs`\=<nr_of_cpu>
to get desired CPU usage, as the cpuload only loads a
single CPU at the desired rate. A job never finishes unless there is
at least one non-cpuio job.
+ Setting :option:`cpumode`\=qsort replaces the default noop instruction loop
+ with a qsort algorithm in order to consume more energy.
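+
+ For example, a job file sketch that burns roughly 85% of four CPUs using the
+ qsort mode while a companion job performs the actual I/O (job names and the
+ companion's size are illustrative) could be::
+
+     [burn-cpu]
+     ioengine=cpuio
+     cpuload=85
+     cpumode=qsort
+     numjobs=4
+
+     [real-io]
+     rw=randread
+     size=64M
+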
**rdma**
The RDMA I/O engine supports both RDMA memory semantics
**nbd**
Read and write a Network Block Device (NBD).
+ **libcufile**
+ I/O engine supporting libcufile synchronous access to nvidia-fs and a
+ GPUDirect Storage-supported filesystem. This engine performs
+ I/O without transferring buffers between user-space and the kernel,
+ unless :option:`verify` is set or :option:`cuda_io` is `posix`.
+ :option:`iomem` must not be `cudamalloc`. This ioengine defines
+ engine specific options.
+
I/O engine specific parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
transferred to the device. The writefua option is ignored with this
selection.
+.. option:: hipri : [sg]
+
+ If this option is set, fio will attempt to use polled IO completions.
+ This will have a similar effect to (io_uring)hipri. Only SCSI READ and
+ WRITE commands will have the SGV4_FLAG_HIPRI set (not UNMAP (trim) nor
+ VERIFY). Older versions of the Linux sg driver that do not support
+ hipri will simply ignore this flag and do normal IO. The Linux SCSI
+ Low Level Driver (LLD) that "owns" the device also needs to support
+ hipri (also known as iopoll and mq_poll). The MegaRAID driver is an
+ example of a SCSI LLD. Default: clear (0), which does normal
+ (interrupt-based) IO.
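+
+ For example, a sketch issuing polled reads through the sg engine (the device
+ node is illustrative and must be a SCSI generic device on your system) could
+ be::
+
+     [sg-polled-read]
+     ioengine=sg
+     filename=/dev/sg0
+     rw=read
+     hipri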
+
.. option:: http_host=str : [http]
Hostname to connect to. For S3, this could be the bucket hostname.
nbd+unix:///?socket=/tmp/socket
nbds://tlshost/exportname
+.. option:: gpu_dev_ids=str : [libcufile]
+
+ Specify the GPU IDs to use with CUDA. This is a colon-separated list of
+ ints. GPUs are assigned to workers round-robin. Default is 0.
+
+.. option:: cuda_io=str : [libcufile]
+
+ Specify the type of I/O to use with CUDA. Default is **cufile**.
+
+ **cufile**
+ Use libcufile and nvidia-fs. This option performs I/O directly
+ between a GPUDirect Storage filesystem and GPU buffers,
+ avoiding use of a bounce buffer. If :option:`verify` is set,
+ cudaMemcpy is used to copy verification data between RAM and GPU.
+ Verification data is copied from RAM to GPU before a write
+ and from GPU to RAM after a read. :option:`direct` must be 1.
+ **posix**
+ Use POSIX to perform I/O with a RAM buffer, and use cudaMemcpy
+ to transfer data between RAM and the GPUs. Data is copied from
+ GPU to RAM before a write and copied from RAM to GPU after a
+ read. :option:`verify` does not affect use of cudaMemcpy.
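+
+ For example, a sketch combining these libcufile options (the directory, file
+ size and GPU IDs are illustrative) could be::
+
+     [gds-read]
+     ioengine=libcufile
+     directory=/mnt/gds
+     size=1G
+     rw=randread
+     direct=1
+     gpu_dev_ids=0:1
+     cuda_io=cufile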
+
I/O depth
~~~~~~~~~
before we have to complete it and do our :option:`thinktime`. In other words, this
setting effectively caps the queue depth if the latter is larger.
+.. option:: thinktime_blocks_type=str
+
+ Only valid if :option:`thinktime` is set - this controls how :option:`thinktime_blocks`
+ triggers. The default is `complete`, which triggers thinktime when fio completes
+ :option:`thinktime_blocks` blocks. If this is set to `issue`, then the trigger happens
+ on the issue side.
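+
+ For example, a sketch that pauses for 1ms after every 16 issued blocks (the
+ block count, file size and job name are illustrative) could be::
+
+     [paced-writes]
+     rw=write
+     size=256M
+     thinktime=1ms
+     thinktime_blocks=16
+     thinktime_blocks_type=issue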
+
.. option:: rate=int[,int][,int]
Cap the bandwidth used by this job. The number is in bytes/sec, the normal
Below is a single line containing short names for each of the fields in the
minimal output v3, separated by semicolons::
- terse_version_3;fio_version;jobname;groupid;error;read_kb;read_bandwidth;read_iops;read_runtime_ms;read_slat_min;read_slat_max;read_slat_mean;read_slat_dev;read_clat_min;read_clat_max;read_clat_mean;read_clat_dev;read_clat_pct01;read_clat_pct02;read_clat_pct03;read_clat_pct04;read_clat_pct05;read_clat_pct06;read_clat_pct07;read_clat_pct08;read_clat_pct09;read_clat_pct10;read_clat_pct11;read_clat_pct12;read_clat_pct13;read_clat_pct14;read_clat_pct15;read_clat_pct16;read_clat_pct17;read_clat_pct18;read_clat_pct19;read_clat_pct20;read_tlat_min;read_lat_max;read_lat_mean;read_lat_dev;read_bw_min;read_bw_max;read_bw_agg_pct;read_bw_mean;read_bw_dev;write_kb;write_bandwidth;write_iops;write_runtime_ms;write_slat_min;write_slat_max;write_slat_mean;write_slat_dev;write_clat_min;write_clat_max;write_clat_mean;write_clat_dev;write_clat_pct01;write_clat_pct02;write_clat_pct03;write_clat_pct04;write_clat_pct05;write_clat_pct06;write_clat_pct07;write_clat_pct08;write_clat_pct09;write_clat_pct10;write_clat_pct11;write_clat_pct12;write_clat_pct13;write_clat_pct14;write_clat_pct15;write_clat_pct16;write_clat_pct17;write_clat_pct18;write_clat_pct19;write_clat_pct20;write_tlat_min;write_lat_max;write_lat_mean;write_lat_dev;write_bw_min;write_bw_max;write_bw_agg_pct;write_bw_mean;write_bw_dev;cpu_user;cpu_sys;cpu_csw;cpu_mjf;cpu_minf;iodepth_1;iodepth_2;iodepth_4;iodepth_8;iodepth_16;iodepth_32;iodepth_64;lat_2us;lat_4us;lat_10us;lat_20us;lat_50us;lat_100us;lat_250us;lat_500us;lat_750us;lat_1000us;lat_2ms;lat_4ms;lat_10ms;lat_20ms;lat_50ms;lat_100ms;lat_250ms;lat_500ms;lat_750ms;lat_1000ms;lat_2000ms;lat_over_2000ms;disk_name;disk_read_iops;disk_write_iops;disk_read_merges;disk_write_merges;disk_read_ticks;write_ticks;disk_queue_time;disk_util
+ terse_version_3;fio_version;jobname;groupid;error;read_kb;read_bandwidth_kb;read_iops;read_runtime_ms;read_slat_min_us;read_slat_max_us;read_slat_mean_us;read_slat_dev_us;read_clat_min_us;read_clat_max_us;read_clat_mean_us;read_clat_dev_us;read_clat_pct01;read_clat_pct02;read_clat_pct03;read_clat_pct04;read_clat_pct05;read_clat_pct06;read_clat_pct07;read_clat_pct08;read_clat_pct09;read_clat_pct10;read_clat_pct11;read_clat_pct12;read_clat_pct13;read_clat_pct14;read_clat_pct15;read_clat_pct16;read_clat_pct17;read_clat_pct18;read_clat_pct19;read_clat_pct20;read_tlat_min_us;read_lat_max_us;read_lat_mean_us;read_lat_dev_us;read_bw_min_kb;read_bw_max_kb;read_bw_agg_pct;read_bw_mean_kb;read_bw_dev_kb;write_kb;write_bandwidth_kb;write_iops;write_runtime_ms;write_slat_min_us;write_slat_max_us;write_slat_mean_us;write_slat_dev_us;write_clat_min_us;write_clat_max_us;write_clat_mean_us;write_clat_dev_us;write_clat_pct01;write_clat_pct02;write_clat_pct03;write_clat_pct04;write_clat_pct05;write_clat_pct06;write_clat_pct07;write_clat_pct08;write_clat_pct09;write_clat_pct10;write_clat_pct11;write_clat_pct12;write_clat_pct13;write_clat_pct14;write_clat_pct15;write_clat_pct16;write_clat_pct17;write_clat_pct18;write_clat_pct19;write_clat_pct20;write_tlat_min_us;write_lat_max_us;write_lat_mean_us;write_lat_dev_us;write_bw_min_kb;write_bw_max_kb;write_bw_agg_pct;write_bw_mean_kb;write_bw_dev_kb;cpu_user;cpu_sys;cpu_csw;cpu_mjf;cpu_minf;iodepth_1;iodepth_2;iodepth_4;iodepth_8;iodepth_16;iodepth_32;iodepth_64;lat_2us;lat_4us;lat_10us;lat_20us;lat_50us;lat_100us;lat_250us;lat_500us;lat_750us;lat_1000us;lat_2ms;lat_4ms;lat_10ms;lat_20ms;lat_50ms;lat_100ms;lat_250ms;lat_500ms;lat_750ms;lat_1000ms;lat_2000ms;lat_over_2000ms;disk_name;disk_read_iops;disk_write_iops;disk_read_merges;disk_write_merges;disk_read_ticks;write_ticks;disk_queue_time;disk_util
In client/server mode terse output differs from what appears when jobs are run
locally. Disk utilization data is omitted from the standard terse output and