X-Git-Url: https://git.kernel.dk/?p=fio.git;a=blobdiff_plain;f=HOWTO;h=1c9b2c1020f7e7d53b43926fcb3c3369a1db07f8;hp=acb9e97fc35bf6ca790d45bb93b7be0260509a8f;hb=a02ec45af6b1f9ca1d8b578b4264bcd053886801;hpb=804c08390492023293f7febb71172e5fe0aefbf5 diff --git a/HOWTO b/HOWTO index acb9e97f..1c9b2c10 100644 --- a/HOWTO +++ b/HOWTO @@ -93,6 +93,12 @@ Command line options Dump info related to I/O rate switching. *compress* Dump info related to log compress/decompress. + *steadystate* + Dump info related to steadystate detection. + *helperthread* + Dump info related to the helper thread. + *zbd* + Dump info related to support for zoned block devices. *?* or *help* Show available debug options. @@ -100,6 +106,10 @@ Command line options Parse options only, don't start any I/O. +.. option:: --merge-blktrace-only + + Merge blktraces only, don't start any I/O. + .. option:: --output=filename Write output to file `filename`. @@ -163,12 +173,12 @@ Command line options .. option:: --readonly - Turn on safety read-only checks, preventing writes. The ``--readonly`` - option is an extra safety guard to prevent users from accidentally starting - a write workload when that is not desired. Fio will only write if - `rw=write/randwrite/rw/randrw` is given. This extra safety net can be used - as an extra precaution as ``--readonly`` will also enable a write check in - the I/O engine core to prevent writes due to unknown user space bug(s). + Turn on safety read-only checks, preventing writes and trims. The + ``--readonly`` option is an extra safety guard to prevent users from + accidentally starting a write or trim workload when that is not desired. + Fio will only modify the device under test if + `rw=write/randwrite/rw/randrw/trim/randtrim/trimwrite` is given. This + safety net can be used as an extra precaution. .. option:: --eta=when @@ -194,7 +204,10 @@ Command line options Force a full status dump of cumulative (from job start) values at `time` intervals. This option does *not* provide per-period measurements. So values such as bandwidth are running averages. When the time unit is omitted, - `time` is interpreted in seconds. + `time` is interpreted in seconds. Note that using this option with + ``--output-format=json`` will yield output that technically isn't valid + json, since the output will be collated sets of valid json. It will need + to be split into valid sets of json after the run. .. option:: --section=name @@ -283,7 +296,8 @@ Command line options .. option:: --aux-path=path - Use this `path` for fio state generated files. + Use the directory specified by `path` for generated state files instead + of the current working directory. Any parameters following the options will be assumed to be job files, unless they match a job file parameter. Multiple job files can be listed and each job @@ -748,12 +762,15 @@ Target file/device assigned equally distributed to job clones created by :option:`numjobs` as long as they are using generated filenames. If specific `filename(s)` are set fio will use the first listed directory, and thereby matching the - `filename` semantic which generates a file each clone if not specified, but - let all clones use the same if set. + `filename` semantic (which generates a file for each clone if not + specified, but lets all clones use the same file if set). See the :option:`filename` option for information on how to escape "``:``" and "``\``" characters within the directory path itself. 
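    As a minimal sketch (the paths here are hypothetical), the following
    spreads the clones created by :option:`numjobs` across two directories
    when filenames are generated, and shows a Windows-style path with the
    drive-letter "``:``" escaped so it is not parsed as a list separator::

        [global]
        rw=write
        size=256M
        numjobs=4

        [spread]
        ; clones alternate between the two listed directories
        directory=/data1:/data2

        [single-win]
        ; the "\:" keeps the drive-letter colon out of list parsing
        directory=C\:\fio-data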
+ Note: To control the directory fio will use for internal state files + use :option:`--aux-path`. + .. option:: filename=str Fio normally makes up a `filename` based on the job name, thread number, and @@ -948,18 +965,92 @@ Target file/device Unlink job files after each iteration or loop. Default: false. -.. option:: zonesize=int +.. option:: zonemode=str - Divide a file into zones of the specified size. See :option:`zoneskip`. + Accepted values are: + + **none** + The :option:`zonerange`, :option:`zonesize` and + :option:`zoneskip` parameters are ignored. + **strided** + I/O happens in a single zone until + :option:`zonesize` bytes have been transferred. + After that number of bytes has been + transferred processing of the next zone + starts. + **zbd** + Zoned block device mode. I/O happens + sequentially in each zone, even if random I/O + has been selected. Random I/O happens across + all zones instead of being restricted to a + single zone. The :option:`zoneskip` parameter + is ignored. :option:`zonerange` and + :option:`zonesize` must be identical. .. option:: zonerange=int - Give size of an I/O zone. See :option:`zoneskip`. + Size of a single zone. See also :option:`zonesize` and + :option:`zoneskip`. + +.. option:: zonesize=int + + For :option:`zonemode` =strided, this is the number of bytes to + transfer before skipping :option:`zoneskip` bytes. If this parameter + is smaller than :option:`zonerange` then only a fraction of each zone + with :option:`zonerange` bytes will be accessed. If this parameter is + larger than :option:`zonerange` then each zone will be accessed + multiple times before skipping to the next zone. + + For :option:`zonemode` =zbd, this is the size of a single zone. The + :option:`zonerange` parameter is ignored in this mode. .. option:: zoneskip=int - Skip the specified number of bytes when :option:`zonesize` data has been - read. The two zone options can be used to only do I/O on zones of a file. + For :option:`zonemode` =strided, the number of bytes to skip after + :option:`zonesize` bytes of data have been transferred. This parameter + must be zero for :option:`zonemode` =zbd. + +.. option:: read_beyond_wp=bool + + This parameter applies to :option:`zonemode` =zbd only. + + Zoned block devices are block devices that consist of multiple zones. + Each zone has a type, e.g. conventional or sequential. A conventional + zone can be written at any offset that is a multiple of the block + size. Sequential zones must be written sequentially. The position at + which a write must occur is called the write pointer. A zoned block + device can be either drive managed, host managed or host aware. For + host managed devices the host must ensure that writes happen + sequentially. Fio recognizes host managed devices and serializes + writes to sequential zones for these devices. + + If a read occurs in a sequential zone beyond the write pointer then + the zoned block device will complete the read without reading any data + from the storage medium. Since such reads lead to unrealistically high + bandwidth and IOPS numbers fio only reads beyond the write pointer if + explicitly told to do so. Default: false. + +.. option:: max_open_zones=int + + When running a random write test across an entire drive many more + zones will be open than in a typical application workload. Hence this + command line option that allows to limit the number of open zones. The + number of open zones is defined as the number of zones to which write + commands are issued. + +.. 
option:: zone_reset_threshold=float + + A number between zero and one that indicates the ratio of logical + blocks with data to the total number of logical blocks in the test + above which zones should be reset periodically. + +.. option:: zone_reset_frequency=float + + A number between zero and one that indicates how often a zone reset + should be issued if the zone reset threshold has been exceeded. A zone + reset is submitted after each (1 / zone_reset_frequency) write + requests. This and the previous parameter can be used to simulate + garbage collection activity. I/O type @@ -991,13 +1082,15 @@ I/O type **write** Sequential writes. **trim** - Sequential trims (Linux block devices only). + Sequential trims (Linux block devices and SCSI + character devices only). **randread** Random reads. **randwrite** Random writes. **randtrim** - Random trims (Linux block devices only). + Random trims (Linux block devices and SCSI + character devices only). **rw,readwrite** Sequential mixed reads and writes. **randrw** @@ -1329,7 +1422,9 @@ I/O type and that some blocks may be read/written more than once. If this option is used with :option:`verify` and multiple blocksizes (via :option:`bsrange`), only intact blocks are verified, i.e., partially-overwritten blocks are - ignored. + ignored. With an async I/O engine and an I/O depth > 1, it is possible for + the same block to be overwritten, which can cause verification errors. Either + do not use norandommap in this case, or also use the lfsr random generator. .. option:: softrandommap=bool @@ -1429,7 +1524,7 @@ Block size If you want a workload that has 50% 2k reads and 50% 4k reads, while having 90% 4k writes and 10% 8k writes, you would specify:: - bssplit=2k/50:4k/50,4k/90,8k/10 + bssplit=2k/50:4k/50,4k/90:8k/10 Fio supports defining up to 64 different weights for each data direction. @@ -1716,6 +1811,11 @@ I/O engine **pvsync2** Basic :manpage:`preadv2(2)` or :manpage:`pwritev2(2)` I/O. + **io_uring** + Fast Linux native asynchronous I/O. Supports async IO + for both direct and buffered IO. + This engine defines engine specific options. + **libaio** Linux native asynchronous I/O. Note that Linux may only support queued behavior with non-buffered I/O (set ``direct=1`` or @@ -1746,7 +1846,8 @@ I/O engine ioctl, or if the target is an sg character device we use :manpage:`read(2)` and :manpage:`write(2)` for asynchronous I/O. Requires :option:`filename` option to specify either block or - character devices. + character devices. This engine supports trim operations. + The sg engine includes engine specific options. **null** Doesn't transfer any data, just pretends to. This is mainly used to @@ -1820,6 +1921,15 @@ I/O engine (RBD) via librbd without the need to use the kernel rbd driver. This ioengine defines engine specific options. + **http** + I/O engine supporting GET/PUT requests over HTTP(S) with libcurl to + a WebDAV or S3 endpoint. This ioengine defines engine specific options. + + This engine only supports direct IO of iodepth=1; you need to scale this + via numjobs. blocksize defines the size of the objects to be created. + + TRIM is translated to object deletion. + **gfapi** Using GlusterFS libgfapi sync interface to direct access to GlusterFS volumes without having to go through FUSE. 
This ioengine @@ -1853,12 +1963,12 @@ I/O engine **pmemblk** Read and write using filesystem DAX to a file on a filesystem - mounted with DAX on a persistent memory device through the NVML + mounted with DAX on a persistent memory device through the PMDK libpmemblk library. **dev-dax** Read and write using device DAX to a persistent memory device (e.g., - /dev/dax0.0) through the NVML libpmem library. + /dev/dax0.0) through the PMDK libpmem library. **external** Prefix to specify loading an external I/O engine object file. Append @@ -1874,9 +1984,29 @@ I/O engine **libpmem** Read and write using mmap I/O to a file on a filesystem - mounted with DAX on a persistent memory device through the NVML + mounted with DAX on a persistent memory device through the PMDK libpmem library. + **ime_psync** + Synchronous read and write using DDN's Infinite Memory Engine (IME). + This engine is very basic and issues calls to IME whenever an IO is + queued. + + **ime_psyncv** + Synchronous read and write using DDN's Infinite Memory Engine (IME). + This engine uses iovecs and will try to stack as much IOs as possible + (if the IOs are "contiguous" and the IO depth is not exceeded) + before issuing a call to IME. + + **ime_aio** + Asynchronous read and write using DDN's Infinite Memory Engine (IME). + This engine will try to stack as much IOs as possible by creating + requests for IME. FIO will then decide when to commit these requests. + **libiscsi** + Read and write iscsi lun with libiscsi. + **nbd** + Read and write a Network Block Device (NBD). + I/O engine specific parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -1885,6 +2015,35 @@ In addition, there are some parameters which are only valid when a specific with the caveat that when used on the command line, they must come after the :option:`ioengine` that defines them is selected. +.. option:: hipri : [io_uring] + + If this option is set, fio will attempt to use polled IO completions. + Normal IO completions generate interrupts to signal the completion of + IO, polled completions do not. Hence they are require active reaping + by the application. The benefits are more efficient IO for high IOPS + scenarios, and lower latencies for low queue depth IO. + +.. option:: fixedbufs : [io_uring] + + If fio is asked to do direct IO, then Linux will map pages for each + IO call, and release them when IO is done. If this option is set, the + pages are pre-mapped before IO is started. This eliminates the need to + map and release for each IO. This is more efficient, and reduces the + IO latency as well. + +.. option:: sqthread_poll : [io_uring] + + Normally fio will submit IO by issuing a system call to notify the + kernel of available items in the SQ ring. If this option is set, the + act of submitting IO will be done by a polling thread in the kernel. + This frees up cycles for fio, at the cost of using more CPU in the + system. + +.. option:: sqthread_poll_cpu : [io_uring] + + When :option:`sqthread_poll` is set, this option provides a way to + define which CPU should be used for the polling thread. + .. option:: userspace_reap : [libaio] Normally, with the libaio engine in use, fio will use the @@ -2068,6 +2227,95 @@ with the caveat that when used on the command line, they must come after the multiple paths exist between the client and the server or in certain loopback configurations. +.. option:: readfua=bool : [sg] + + With readfua option set to 1, read operations include + the force unit access (fua) flag. Default is 0. + +.. 
option:: writefua=bool : [sg] + + With writefua option set to 1, write operations include + the force unit access (fua) flag. Default is 0. + +.. option:: sg_write_mode=str : [sg] + + Specify the type of write commands to issue. This option can take three values: + + **write** + This is the default where write opcodes are issued as usual. + **verify** + Issue WRITE AND VERIFY commands. The BYTCHK bit is set to 0. This + directs the device to carry out a medium verification with no data + comparison. The writefua option is ignored with this selection. + **same** + Issue WRITE SAME commands. This transfers a single block to the device + and writes this same block of data to a contiguous sequence of LBAs + beginning at the specified offset. fio's block size parameter specifies + the amount of data written with each command. However, the amount of data + actually transferred to the device is equal to the device's block + (sector) size. For a device with 512 byte sectors, blocksize=8k will + write 16 sectors with each command. fio will still generate 8k of data + for each command but only the first 512 bytes will be used and + transferred to the device. The writefua option is ignored with this + selection. + +.. option:: http_host=str : [http] + + Hostname to connect to. For S3, this could be the bucket hostname. + Default is **localhost** + +.. option:: http_user=str : [http] + + Username for HTTP authentication. + +.. option:: http_pass=str : [http] + + Password for HTTP authentication. + +.. option:: https=str : [http] + + Enable HTTPS instead of http. *on* enables HTTPS; *insecure* + will enable HTTPS, but disable SSL peer verification (use with + caution!). Default is **off** + +.. option:: http_mode=str : [http] + + Which HTTP access mode to use: *webdav*, *swift*, or *s3*. + Default is **webdav** + +.. option:: http_s3_region=str : [http] + + The S3 region/zone string. + Default is **us-east-1** + +.. option:: http_s3_key=str : [http] + + The S3 secret key. + +.. option:: http_s3_keyid=str : [http] + + The S3 key/access id. + +.. option:: http_swift_auth_token=str : [http] + + The Swift auth token. See the example configuration file on how + to retrieve this. + +.. option:: http_verbose=int : [http] + + Enable verbose requests from libcurl. Useful for debugging. 1 + turns on verbose logging from libcurl, 2 additionally enables + HTTP IO tracing. Default is **0** + +.. option:: uri=str : [nbd] + + Specify the NBD URI of the server to test. The string + is a standard NBD URI + (see https://github.com/NetworkBlockDevice/nbd/tree/master/doc). + Example URIs: nbd://localhost:10809 + nbd+unix:///?socket=/tmp/socket + nbds://tlshost/exportname + I/O depth ~~~~~~~~~ @@ -2144,8 +2392,13 @@ I/O depth ``serialize_overlap`` tells fio to avoid provoking this behavior by explicitly serializing in-flight I/Os that have a non-zero overlap. Note that setting this option can reduce both performance and the :option:`iodepth` achieved. - Additionally this option does not work when :option:`io_submit_mode` is set to - offload. Default: false. + + This option only applies to I/Os issued for a single job except when it is + enabled along with :option:`io_submit_mode`\=offload. In offload mode, fio + will check for overlap among all I/Os submitted by offload jobs with :option:`serialize_overlap` + enabled. + + Default: false. .. option:: io_submit_mode=str @@ -2289,6 +2542,43 @@ I/O replay :manpage:`blktrace(8)` for how to capture such logging data. 
For blktrace replay, the file needs to be turned into a blkparse binary data file first (``blkparse -o /dev/null -d file_for_fio.bin``). + You can specify a number of files by separating the names with a ':' + character. See the :option:`filename` option for information on how to + escape ':' and '\' characters within the file names. These files will + be sequentially assigned to job clones created by :option:`numjobs`. + +.. option:: read_iolog_chunked=bool + + Determines how iolog is read. If false(default) entire :option:`read_iolog` + will be read at once. If selected true, input from iolog will be read + gradually. Useful when iolog is very large, or it is generated. + +.. option:: merge_blktrace_file=str + + When specified, rather than replaying the logs passed to :option:`read_iolog`, + the logs go through a merge phase which aggregates them into a single + blktrace. The resulting file is then passed on as the :option:`read_iolog` + parameter. The intention here is to make the order of events consistent. + This limits the influence of the scheduler compared to replaying multiple + blktraces via concurrent jobs. + +.. option:: merge_blktrace_scalars=float_list + + This is a percentage based option that is index paired with the list of + files passed to :option:`read_iolog`. When merging is performed, scale + the time of each event by the corresponding amount. For example, + ``--merge_blktrace_scalars="50:100"`` runs the first trace in halftime + and the second trace in realtime. This knob is separately tunable from + :option:`replay_time_scale` which scales the trace during runtime and + does not change the output of the merge unlike this option. + +.. option:: merge_blktrace_iters=float_list + + This is a whole number option that is index paired with the list of files + passed to :option:`read_iolog`. When merging is performed, run each trace + for the specified number of iterations. For example, + ``--merge_blktrace_iters="2:1"`` runs the first trace for two iterations + and the second trace for one iteration. .. option:: replay_no_stall=bool @@ -2299,6 +2589,14 @@ I/O replay still respecting ordering. The result is the same I/O pattern to a given device, but different timings. +.. option:: replay_time_scale=int + + When replaying I/O with :option:`read_iolog`, fio will honor the + original timing in the trace. With this option, it's possible to scale + the time. It's a percentage option, if set to 50 it means run at 50% + the original IO rate in the trace. If set to 200, run at twice the + original IO rate. Defaults to 100. + .. option:: replay_redirect=str While replaying I/O patterns using :option:`read_iolog` the default behavior @@ -2319,12 +2617,21 @@ I/O replay .. option:: replay_align=int - Force alignment of I/O offsets and lengths in a trace to this power of 2 - value. + Force alignment of the byte offsets in a trace to this value. The value + must be a power of 2. .. option:: replay_scale=int - Scale sector offsets down by this factor when replaying traces. + Scale byte offsets down by this factor when replaying traces. Should most + likely use :option:`replay_align` as well. + +.. option:: replay_skip=str + + Sometimes it's useful to skip certain IO types in a replay trace. + This could be, for instance, eliminating the writes in the trace. + Or not replaying the trims/discards, if you are redirecting to + a device that doesn't support them. This option takes a comma + separated list of read, write, trim, sync. 
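As a quick sketch pulling several of the replay options above together (the
trace file and target device names are hypothetical), the following job
replays a converted blktrace against a different device, skips the trims the
trace contains, and ignores the recorded inter-I/O delays::

    [replay]
    ; binary trace produced with blkparse as described under read_iolog
    read_iolog=file_for_fio.bin
    ; send the replayed I/O to this device instead of the traced one
    replay_redirect=/dev/sdb
    ; do not replay discards; the target may not support them
    replay_skip=trim
    ; replay as fast as possible while preserving ordering
    replay_no_stall=1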
Threads, processes and job synchronization @@ -2365,24 +2672,27 @@ Threads, processes and job synchronization Set the I/O priority class. See man :manpage:`ionice(1)`. -.. option:: cpumask=int - - Set the CPU affinity of this job. The parameter given is a bit mask of - allowed CPUs the job may run on. So if you want the allowed CPUs to be 1 - and 5, you would pass the decimal value of (1 << 1 | 1 << 5), or 34. See man - :manpage:`sched_setaffinity(2)`. This may not work on all supported - operating systems or kernel versions. This option doesn't work well for a - higher CPU count than what you can store in an integer mask, so it can only - control cpus 1-32. For boxes with larger CPU counts, use - :option:`cpus_allowed`. - .. option:: cpus_allowed=str Controls the same options as :option:`cpumask`, but accepts a textual - specification of the permitted CPUs instead. So to use CPUs 1 and 5 you - would specify ``cpus_allowed=1,5``. This option also allows a range of CPUs - to be specified -- say you wanted a binding to CPUs 1, 5, and 8 to 15, you - would set ``cpus_allowed=1,5,8-15``. + specification of the permitted CPUs instead and CPUs are indexed from 0. So + to use CPUs 0 and 5 you would specify ``cpus_allowed=0,5``. This option also + allows a range of CPUs to be specified -- say you wanted a binding to CPUs + 0, 5, and 8 to 15, you would set ``cpus_allowed=0,5,8-15``. + + On Windows, when ``cpus_allowed`` is unset only CPUs from fio's current + processor group will be used and affinity settings are inherited from the + system. An fio build configured to target Windows 7 makes options that set + CPUs processor group aware and values will set both the processor group + and a CPU from within that group. For example, on a system where processor + group 0 has 40 CPUs and processor group 1 has 32 CPUs, ``cpus_allowed`` + values between 0 and 39 will bind CPUs from processor group 0 and + ``cpus_allowed`` values between 40 and 71 will bind CPUs from processor + group 1. When using ``cpus_allowed_policy=shared`` all CPUs specified by a + single ``cpus_allowed`` option must be from the same processor group. For + Windows fio builds not built for Windows 7, CPUs will only be selected from + (and be relative to) whatever processor group fio happens to be running in + and CPUs from other processor groups cannot be used. .. option:: cpus_allowed_policy=str @@ -2399,6 +2709,17 @@ Threads, processes and job synchronization enough CPUs are given for the jobs listed, then fio will roundrobin the CPUs in the set. +.. option:: cpumask=int + + Set the CPU affinity of this job. The parameter given is a bit mask of + allowed CPUs the job may run on. So if you want the allowed CPUs to be 1 + and 5, you would pass the decimal value of (1 << 1 | 1 << 5), or 34. See man + :manpage:`sched_setaffinity(2)`. This may not work on all supported + operating systems or kernel versions. This option doesn't work well for a + higher CPU count than what you can store in an integer mask, so it can only + control cpus 1-32. For boxes with larger CPU counts, use + :option:`cpus_allowed`. + .. option:: numa_cpu_nodes=str Set this job running on specified NUMA nodes' CPUs. The arguments allow @@ -2601,17 +2922,10 @@ Verification previously written file. If the data direction includes any form of write, the verify will be of the newly written data. -.. option:: verifysort=bool - - If true, fio will sort written verify blocks when it deems it faster to read - them back in a sorted manner. 
This is often the case when overwriting an - existing file, since the blocks are already laid out in the file system. You - can ignore this option unless doing huge amounts of really fast I/O where - the red-black tree sorting CPU time becomes significant. Default: true. - -.. option:: verifysort_nr=int - - Pre-load and sort verify blocks for a read workload. + To avoid false verification errors, do not use the norandommap option when + verifying data with async I/O engines and I/O depths > 1. Or use the + norandommap and the lfsr random generator together to avoid writing to the + same offset with muliple outstanding I/Os. .. option:: verify_offset=int @@ -2745,6 +3059,10 @@ Steady state data from the rolling collection window. Threshold limits can be expressed as a fixed value or as a percentage of the mean in the collection window. + When using this feature, most jobs should include the :option:`time_based` + and :option:`runtime` options or the :option:`loops` option so that fio does not + stop running after it has covered the full size of the specified file(s) or device(s). + **iops** Collect IOPS data. Stop the job if all individual IOPS measurements are within the specified limit of the mean IOPS (e.g., ``iops:2`` @@ -2849,9 +3167,11 @@ Measurements and reporting .. option:: write_iops_log=str Same as :option:`write_bw_log`, but writes an IOPS file (e.g. - :file:`name_iops.x.log`) instead. See :option:`write_bw_log` for - details about the filename format and `Log File Formats`_ for how data - is structured within the file. + :file:`name_iops.x.log`) instead. Because fio defaults to individual + I/O logging, the value entry in the IOPS log will be 1 unless windowed + logging (see :option:`log_avg_msec`) has been enabled. See + :option:`write_bw_log` for details about the filename format and `Log + File Formats`_ for how data is structured within the file. .. option:: log_avg_msec=int @@ -2909,7 +3229,8 @@ Measurements and reporting Define the set of CPUs that are allowed to handle online log compression for the I/O jobs. This can provide better isolation between performance - sensitive jobs, and background compression work. + sensitive jobs, and background compression work. See + :option:`cpus_allowed` for the format used. .. option:: log_store_compressed=bool @@ -3423,7 +3744,8 @@ is one long line of values, such as:: 2;card0;0;0;7139336;121836;60004;1;10109;27.932460;116.933948;220;126861;3495.446807;1085.368601;226;126864;3523.635629;1089.012448;24063;99944;50.275485%;59818.274627;5540.657370;7155060;122104;60004;1;8338;29.086342;117.839068;388;128077;5032.488518;1234.785715;391;128085;5061.839412;1236.909129;23436;100928;50.287926%;59964.832030;5644.844189;14.595833%;19.394167%;123706;0;7313;0.1%;0.1%;0.1%;0.1%;0.1%;0.1%;100.0%;0.00%;0.00%;0.00%;0.00%;0.00%;0.00%;0.01%;0.02%;0.05%;0.16%;6.04%;40.40%;52.68%;0.64%;0.01%;0.00%;0.01%;0.00%;0.00%;0.00%;0.00%;0.00% A description of this job goes here. -The job description (if provided) follows on a second line. +The job description (if provided) follows on a second line for terse v2. +It appears on the same line for other terse versions. To enable terse output, use the :option:`--minimal` or :option:`--output-format`\=terse command line options. 
The @@ -3508,6 +3830,11 @@ minimal output v3, separated by semicolons:: terse_version_3;fio_version;jobname;groupid;error;read_kb;read_bandwidth;read_iops;read_runtime_ms;read_slat_min;read_slat_max;read_slat_mean;read_slat_dev;read_clat_min;read_clat_max;read_clat_mean;read_clat_dev;read_clat_pct01;read_clat_pct02;read_clat_pct03;read_clat_pct04;read_clat_pct05;read_clat_pct06;read_clat_pct07;read_clat_pct08;read_clat_pct09;read_clat_pct10;read_clat_pct11;read_clat_pct12;read_clat_pct13;read_clat_pct14;read_clat_pct15;read_clat_pct16;read_clat_pct17;read_clat_pct18;read_clat_pct19;read_clat_pct20;read_tlat_min;read_lat_max;read_lat_mean;read_lat_dev;read_bw_min;read_bw_max;read_bw_agg_pct;read_bw_mean;read_bw_dev;write_kb;write_bandwidth;write_iops;write_runtime_ms;write_slat_min;write_slat_max;write_slat_mean;write_slat_dev;write_clat_min;write_clat_max;write_clat_mean;write_clat_dev;write_clat_pct01;write_clat_pct02;write_clat_pct03;write_clat_pct04;write_clat_pct05;write_clat_pct06;write_clat_pct07;write_clat_pct08;write_clat_pct09;write_clat_pct10;write_clat_pct11;write_clat_pct12;write_clat_pct13;write_clat_pct14;write_clat_pct15;write_clat_pct16;write_clat_pct17;write_clat_pct18;write_clat_pct19;write_clat_pct20;write_tlat_min;write_lat_max;write_lat_mean;write_lat_dev;write_bw_min;write_bw_max;write_bw_agg_pct;write_bw_mean;write_bw_dev;cpu_user;cpu_sys;cpu_csw;cpu_mjf;cpu_minf;iodepth_1;iodepth_2;iodepth_4;iodepth_8;iodepth_16;iodepth_32;iodepth_64;lat_2us;lat_4us;lat_10us;lat_20us;lat_50us;lat_100us;lat_250us;lat_500us;lat_750us;lat_1000us;lat_2ms;lat_4ms;lat_10ms;lat_20ms;lat_50ms;lat_100ms;lat_250ms;lat_500ms;lat_750ms;lat_1000ms;lat_2000ms;lat_over_2000ms;disk_name;disk_read_iops;disk_write_iops;disk_read_merges;disk_write_merges;disk_read_ticks;write_ticks;disk_queue_time;disk_util +In client/server mode terse output differs from what appears when jobs are run +locally. Disk utilization data is omitted from the standard terse output and +for v3 and later appears on its own separate line at the end of each terse +reporting cycle. + JSON output ------------ @@ -3612,6 +3939,46 @@ given in bytes. The `action` can be one of these: **trim** Trim the given file from the given `offset` for `length` bytes. + +I/O Replay - Merging Traces +--------------------------- + +Colocation is a common practice used to get the most out of a machine. +Knowing which workloads play nicely with each other and which ones don't is +a much harder task. While fio can replay workloads concurrently via multiple +jobs, it leaves some variability up to the scheduler making results harder to +reproduce. Merging is a way to make the order of events consistent. + +Merging is integrated into I/O replay and done when a +:option:`merge_blktrace_file` is specified. The list of files passed to +:option:`read_iolog` go through the merge process and output a single file +stored to the specified file. The output file is passed on as if it were the +only file passed to :option:`read_iolog`. An example would look like:: + + $ fio --read_iolog=":" --merge_blktrace_file="" + +Creating only the merged file can be done by passing the command line argument +:option:`merge-blktrace-only`. + +Scaling traces can be done to see the relative impact of any particular trace +being slowed down or sped up. :option:`merge_blktrace_scalars` takes in a colon +separated list of percentage scalars. It is index paired with the files passed +to :option:`read_iolog`. 
+
+With scaling, it may be desirable to match the running time of all traces.
+This can be done with :option:`merge_blktrace_iters`. It is index paired with
+:option:`read_iolog` just like :option:`merge_blktrace_scalars`.
+
+As an example, take two traces, A and B, each 60s long. If we want to see
+the impact of trace A issuing IOs twice as fast and repeat trace A over the
+runtime of trace B, the following can be done::
+
+  $ fio --read_iolog="<file1>:<file2>" --merge_blktrace_file="<output_file>" --merge_blktrace_scalars="50:100" --merge_blktrace_iters="2:1"
+
+This runs trace A at 2x the speed twice for approximately the same runtime as
+a single run of trace B.
+
+
 CPU idleness profiling
 ----------------------
@@ -3735,17 +4102,17 @@ on the type of log, it will be one of the following:
	**2**
		I/O is a TRIM

-The entry's *block size* is always in bytes. The *offset* is the offset, in bytes,
-from the start of the file, for that particular I/O. The logging of the offset can be
+The entry's *block size* is always in bytes. The *offset* is the position in bytes
+from the start of the file for that particular I/O. The logging of the offset can be
 toggled with :option:`log_offset`.

-Fio defaults to logging every individual I/O. When IOPS are logged for individual
-I/Os the *value* entry will always be 1. If windowed logging is enabled through
-:option:`log_avg_msec`, fio logs the average values over the specified period of time.
-If windowed logging is enabled and :option:`log_max_value` is set, then fio logs
-maximum values in that window instead of averages. Since *data direction*, *block
-size* and *offset* are per-I/O values, if windowed logging is enabled they
-aren't applicable and will be 0.
+Fio defaults to logging every individual I/O but when windowed logging is set
+through :option:`log_avg_msec`, either the average (by default) or the maximum
+(when :option:`log_max_value` is set) *value* seen over the specified period of
+time is recorded. Each *data direction* seen within the window period will
+aggregate its values in a separate row. Further, when using windowed logging
+the *block size* and *offset* entries will always contain 0.
+
 Client/Server
 -------------
@@ -3834,3 +4201,6 @@ containing two hostnames ``h1`` and ``h2`` with IP addresses 192.168.10.120 and

   /mnt/nfs/fio/192.168.10.120.fileio.tmp
   /mnt/nfs/fio/192.168.10.121.fileio.tmp
+
+Terse output in client/server mode will differ slightly from what is produced
+when fio is run in stand-alone mode. See the terse output section for details.
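
As a minimal sketch of the client/server invocations behind the shared
filesystem example above (the job file and host list file names are
hypothetical), each backend machine runs a server process and a single
front-end then drives both of them from one host list::

    # on each of h1 and h2
    $ fio --server

    # on the controlling host; hosts.list contains "h1" and "h2", one per line
    $ fio --client=hosts.list fileio.fio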