+How fio works
+-------------
+
+The first step in getting fio to simulate a desired I/O workload is writing a
+job file describing that specific setup. A job file may contain any number of
+threads and/or files -- the typical contents of a job file are a *global*
+section defining shared parameters, and one or more job sections describing the
+jobs involved. When run, fio parses this file and sets everything up as
+described. If we break down a job from top to bottom, it contains the following
+basic parameters:
+
+`I/O type`_
+
+ Defines the I/O pattern issued to the file(s). We may only be reading
+ sequentially from these files, or we may be writing randomly. Or even
+ mixing reads and writes, sequentially or randomly.
+ Should we be doing buffered I/O, or direct/raw I/O?
+
+`Block size`_
+
+ How large are the chunks in which we issue I/O? This may be a single value,
+ or it may describe a range of block sizes.
+
+`I/O size`_
+
+ How much data are we going to be reading/writing?
+
+`I/O engine`_
+
+ How do we issue I/O? We could be memory mapping the file, we could be
+ using regular read/write, we could be using splice, async I/O, or even
+ SG (SCSI generic sg).
+
+`I/O depth`_
+
+ If the I/O engine is async, how large a queuing depth do we want to
+ maintain?
+
+`Target file/device`_
+
+ How many files are we spreading the workload over?
+
+`Threads, processes and job synchronization`_
+
+ How many threads or processes should we spread this workload over?
+
+The above are the basic parameters defined for a workload. In addition, there
+is a multitude of parameters that modify other aspects of how the job behaves.
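+
+A hypothetical job file touching each of these basic parameters might look
+like the following sketch (the target file and sizes are made up for
+illustration):
+
+.. code-block:: ini
+
+    [sketch]
+    ; I/O type: random, direct reads
+    rw=randread
+    direct=1
+    ; block size and total I/O size
+    bs=4k
+    size=256m
+    ; async engine with a queue depth of 16
+    ioengine=libaio
+    iodepth=16
+    ; target file, and two processes to spread the workload over
+    filename=/tmp/fio-testfile
+    numjobs=2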
+
+
+Command line options
+--------------------
+
+.. option:: --debug=type
+
+ Enable verbose tracing of various fio actions. May be ``all`` for all types
+ or individual types separated by a comma (e.g. ``--debug=file,mem`` will
+ enable file and memory debugging). Currently, additional logging is
+ available for:
+
+ *process*
+ Dump info related to processes.
+ *file*
+ Dump info related to file actions.
+ *io*
+ Dump info related to I/O queuing.
+ *mem*
+ Dump info related to memory allocations.
+ *blktrace*
+ Dump info related to blktrace setup.
+ *verify*
+ Dump info related to I/O verification.
+ *all*
+ Enable all debug options.
+ *random*
+ Dump info related to random offset generation.
+ *parse*
+ Dump info related to option matching and parsing.
+ *diskutil*
+ Dump info related to disk utilization updates.
+ *job:x*
+ Dump info only related to job number x.
+ *mutex*
+ Dump info only related to mutex up/down ops.
+ *profile*
+ Dump info related to profile extensions.
+ *time*
+ Dump info related to internal time keeping.
+ *net*
+ Dump info related to networking connections.
+ *rate*
+ Dump info related to I/O rate switching.
+ *compress*
+ Dump info related to log compress/decompress.
+ *?* or *help*
+ Show available debug options.
+
+.. option:: --parse-only
+
+ Parse options only, don't start any I/O.
+
+.. option:: --output=filename
+
+ Write output to file `filename`.
+
+.. option:: --bandwidth-log
+
+ Generate aggregate bandwidth logs.
+
+.. option:: --minimal
+
+ Print statistics in a terse, semicolon-delimited format.
+
+.. option:: --append-terse
+
+ Print statistics in selected mode AND terse, semicolon-delimited format.
+ **deprecated**, use :option:`--output-format` instead to select multiple
+ formats.
+
+.. option:: --output-format=type
+
+ Set the reporting format to `normal`, `terse`, `json`, or `json+`. Multiple
+ formats can be selected, separated by a comma. `terse` is a CSV-based
+ format. `json+` is like `json`, except it adds a full dump of the latency
+ buckets.
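+
+ For example, to emit both the human-readable summary and JSON for one run
+ (the job file name is only illustrative)::
+
+     $ fio --output-format=normal,json jobfile.fio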
+
+.. option:: --terse-version=type
+
+ Set terse version output format (default 3; 2 and 4 are also supported).
+
+.. option:: --version
+
+ Print version info and exit.
+
+.. option:: --help
+
+ Print this page.
+
+.. option:: --cpuclock-test
+
+ Perform test and validation of internal CPU clock.
+
+.. option:: --crctest=test
+
+ Test the speed of the builtin checksumming functions. If no argument is
+ given, all of them are tested. Alternatively, a comma separated list can be
+ passed, in which case the given ones are tested.
+
+.. option:: --cmdhelp=command
+
+ Print help information for `command`. May be ``all`` for all commands.
+
+.. option:: --enghelp=[ioengine[,command]]
+
+ List all commands defined by :option:`ioengine`, or print help for `command`
+ defined by :option:`ioengine`. If no :option:`ioengine` is given, list all
+ available ioengines.
+
+.. option:: --showcmd=jobfile
+
+ Turn a job file into command line options.
+
+.. option:: --readonly
+
+ Turn on safety read-only checks, preventing writes. The ``--readonly``
+ option is an extra safety guard to prevent users from accidentally starting
+ a write workload when that is not desired. Fio will only write if
+ `rw=write/randwrite/rw/randrw` is given. This safety net can be used as an
+ extra precaution, as ``--readonly`` will also enable a write check in the
+ I/O engine core to prevent writes due to unknown user space bug(s).
+
+.. option:: --eta=when
+
+ When real-time ETA estimate should be printed. May be `always`, `never` or
+ `auto`.
+
+.. option:: --eta-newline=time
+
+ Force a new line for every `time` period passed.
+
+.. option:: --status-interval=time
+
+ Force full status dump every `time` period passed.
+
+.. option:: --section=name
+
+ Only run specified section in job file. Multiple sections can be specified.
+ The ``--section`` option allows one to combine related jobs into one file.
+ E.g. one job file could define light, moderate, and heavy sections. Tell
+ fio to run only the "heavy" section by giving the ``--section=heavy``
+ command line option. One can also specify the "write" operations in one
+ section and "verify" operations in another section. The ``--section`` option
+ only applies to job sections. The reserved *global* section is always
+ parsed and used.
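+
+ A job file with such sections might be sketched as follows (section names
+ and sizes are illustrative)::
+
+     ; -- start job file sections.fio --
+     [light]
+     rw=read
+     size=16m
+
+     [heavy]
+     rw=randrw
+     size=1g
+     ; -- end job file sections.fio --
+
+ Running only the "heavy" section::
+
+     $ fio --section=heavy sections.fio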
+
+.. option:: --alloc-size=kb
+
+ Set the internal smalloc pool to this size in KiB (default 1024). The
+ ``--alloc-size`` switch allows one to use a larger pool size for smalloc.
+ If running large jobs with randommap enabled, fio can run out of memory.
+ Smalloc is an internal allocator for shared structures from a fixed size
+ memory pool. The pool size defaults to 16M and can grow to 8 pools.
+
+ NOTE: While running, :file:`.fio_smalloc.*` backing store files are visible
+ in :file:`/tmp`.
+
+.. option:: --warnings-fatal
+
+ All fio parser warnings are fatal, causing fio to exit with an
+ error.
+
+.. option:: --max-jobs=nr
+
+ Maximum number of threads/processes to support.
+
+.. option:: --server=args
+
+ Start a backend server, with `args` specifying what to listen to.
+ See `Client/Server`_ section.
+
+.. option:: --daemonize=pidfile
+
+ Background a fio server, writing the pid to the given `pidfile` file.
+
+.. option:: --client=hostname
+
+ Instead of running the jobs locally, send and run them on the given host or
+ set of hosts. See `Client/Server`_ section.
+
+.. option:: --remote-config=file
+
+ Tell fio server to load this local file.
+
+.. option:: --idle-prof=option
+
+ Report CPU idleness on a system or per-CPU basis
+ (``--idle-prof=system,percpu``), or run unit work calibration only
+ (``--idle-prof=calibrate``).
+
+.. option:: --inflate-log=log
+
+ Inflate and output compressed log.
+
+.. option:: --trigger-file=file
+
+ Execute trigger cmd when file exists.
+
+.. option:: --trigger-timeout=t
+
+ Execute the trigger command after this amount of time has passed.
+
+.. option:: --trigger=cmd
+
+ Set this command as local trigger.
+
+.. option:: --trigger-remote=cmd
+
+ Set this command as remote trigger.
+
+.. option:: --aux-path=path
+
+ Use this path for fio state generated files.
+
+Any parameters following the options will be assumed to be job files, unless
+they match a job file parameter. Multiple job files can be listed and each job
+file will be regarded as a separate group. Fio will :option:`stonewall`
+execution between each group.
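+
+For instance, the following runs two job files as two separate groups, with an
+implicit stonewall between them (the file names are illustrative)::
+
+   $ fio seq-read.fio rand-write.fio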
+
+
+Job file format
+---------------
+
+As previously described, fio accepts one or more job files describing what it is
+supposed to do. The job file format is the classic ini file, where the names
+enclosed in [] brackets define the job name. You are free to use any ASCII name
+you want, except *global* which has special meaning. Following the job name is
+a sequence of zero or more parameters, one per line, that define the behavior of
+the job. If the first character in a line is a ';' or a '#', the entire line is
+discarded as a comment.
+
+A *global* section sets defaults for the jobs described in that file. A job may
+override a *global* section parameter, and a job file may even have several
+*global* sections if so desired. A job is only affected by a *global* section
+residing above it.
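+
+As a sketch, the following job file gives both jobs a 128MiB size from the
+*global* section, while the second job overrides the I/O pattern locally:
+
+.. code-block:: ini
+
+    [global]
+    size=128m
+    rw=read
+
+    [seq-reader]
+
+    [rand-reader]
+    ; overrides the global rw setting for this job only
+    rw=randread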
+
+The :option:`--cmdhelp` option also lists all options. If used with an `option`
+argument, :option:`--cmdhelp` will detail the given `option`.
+
+See the `examples/` directory for inspiration on how to write job files. Note
+that the copyright and license requirements currently apply to the
+`examples/` files.
+
+So let's look at a really simple job file that defines two processes, each
+randomly reading from a 128MiB file:
+
+.. code-block:: ini
+
+ ; -- start job file --
+ [global]
+ rw=randread
+ size=128m
+
+ [job1]
+
+ [job2]
+
+ ; -- end job file --
+
+As you can see, the job file sections themselves are empty as all the described
+parameters are shared. As no :option:`filename` option is given, fio makes up a
+`filename` for each of the jobs as it sees fit. On the command line, this job
+would look as follows::
+
+   $ fio --name=global --rw=randread --size=128m --name=job1 --name=job2
+
+
+Let's look at an example that has a number of processes writing randomly to
+files:
+
+.. code-block:: ini
+
+ ; -- start job file --
+ [random-writers]
+ ioengine=libaio
+ iodepth=4
+ rw=randwrite
+ bs=32k
+ direct=0
+ size=64m
+ numjobs=4
+ ; -- end job file --
+
+Here we have no *global* section, as we only have one job defined anyway. We
+want to use async I/O here, with a depth of 4 for each file. We also increase
+the block size used to 32KiB and set :option:`numjobs` to 4 to fork 4 identical
+jobs. The result is 4 processes each randomly writing to their own 64MiB
+file. Instead of using the above job file, you could have given the parameters
+on the command line. For this case, you would specify::
+
+   $ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
+
+When fio is utilized as a basis of any reasonably large test suite, it might be
+desirable to share a set of standardized settings across multiple job files.
+Instead of copy/pasting such settings, any section may pull in an external
+:file:`filename.fio` file with an *include filename* directive, as in the
+following example:
+
+.. code-block:: ini
+
+ ; -- start job file including.fio --
+ [global]
+ filename=/tmp/test
+ filesize=1m
+ include glob-include.fio
+
+ [test]
+ rw=randread
+ bs=4k
+ time_based=1
+ runtime=10
+ include test-include.fio
+ ; -- end job file including.fio --
+
+.. code-block:: ini
+
+ ; -- start job file glob-include.fio --
+ thread=1
+ group_reporting=1
+ ; -- end job file glob-include.fio --
+
+.. code-block:: ini
+
+ ; -- start job file test-include.fio --
+ ioengine=libaio
+ iodepth=4
+ ; -- end job file test-include.fio --
+
+Settings pulled into a section apply to that section only (unless it is the
+*global* section). Include directives may be nested, in that any included file
+further include directive(s). Include files may not contain [] sections.
+
+
+Environment variables
+~~~~~~~~~~~~~~~~~~~~~
+
+Fio also supports environment variable expansion in job files. Any sub-string of
+the form ``${VARNAME}`` as part of an option value (in other words, on the right
+of the '='), will be expanded to the value of the environment variable called
+`VARNAME`. If no such environment variable is defined, or `VARNAME` is the
+empty string, the empty string will be substituted.
+
+As an example, let's look at a sample fio invocation and job file::
+
+   $ SIZE=64m NUMJOBS=4 fio jobfile.fio
+
+.. code-block:: ini
+
+ ; -- start job file --
+ [random-writers]
+ rw=randwrite
+ size=${SIZE}
+ numjobs=${NUMJOBS}
+ ; -- end job file --
+
+This will expand to the following equivalent job file at runtime:
+
+.. code-block:: ini
+
+ ; -- start job file --
+ [random-writers]
+ rw=randwrite
+ size=64m
+ numjobs=4
+ ; -- end job file --
+
+Fio ships with a few example job files; you can also look there for inspiration.
+
+Reserved keywords
+~~~~~~~~~~~~~~~~~
+
+Additionally, fio has a set of reserved keywords that will be replaced
+internally with the appropriate value. Those keywords are:
+
+**$pagesize**
+
+ The architecture page size of the running system.
+
+**$mb_memory**
+
+ Megabytes of total memory in the system.
+
+**$ncpus**
+
+ Number of online available CPUs.
+
+These can be used on the command line or in the job file, and will be
+automatically substituted with the current system values when the job is
+run. Simple math is also supported on these keywords, so you can perform actions
+like::
+
+ size=8*$mb_memory
+
+and get that properly expanded to 8 times the size of memory in the machine.
+
+
+Job file parameters
+-------------------
+
+This section describes in detail each parameter associated with a job. Some
+parameters take an option of a given type, such as an integer or a
+string. Anywhere a numeric value is required, an arithmetic expression may be
+used, provided it is surrounded by parentheses. Supported operators are:
+
+ - addition (+)
+ - subtraction (-)
+ - multiplication (*)
+ - division (/)
+ - modulus (%)
+ - exponentiation (^)
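+
+As a sketch, either of these option values would be accepted (the option
+names are real, the values illustrative):
+
+.. code-block:: ini
+
+    ; block size of 4096 bytes, written as a power of two
+    bs=(2^12)
+    ; total I/O size of 10 MB expressed arithmetically
+    size=(10*1000*1000)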
+
+For time values in expressions, units are microseconds by default. This is
+different from time values not in expressions (not enclosed in
+parentheses). The following types are used:
+
+
+Parameter types
+~~~~~~~~~~~~~~~
+
+**str**
+ String. This is a sequence of alpha characters.
+
+**time**
+ Integer with possible time suffix. In seconds unless otherwise
+ specified, use e.g. 10m for 10 minutes. Accepts s/m/h for seconds, minutes,
+ and hours, and accepts 'ms' (or 'msec') for milliseconds, and 'us' (or
+ 'usec') for microseconds.
+
+.. _int:
+
+**int**
+ Integer. A whole number value, which may contain an integer prefix
+ and an integer suffix:
+
+ [*integer prefix*] **number** [*integer suffix*]
+
+ The optional *integer prefix* specifies the number's base. The default
+ is decimal. *0x* specifies hexadecimal.
+
+ The optional *integer suffix* specifies the number's units, and includes an
+ optional unit prefix and an optional unit. For quantities of data, the
+ default unit is bytes. For quantities of time, the default unit is seconds.
+
+ With :option:`kb_base` =1000, fio follows international standards for unit
+ prefixes. To specify power-of-10 decimal values defined in the
+ International System of Units (SI):
+
+ * *K* -- means kilo (K) or 1000
+ * *M* -- means mega (M) or 1000**2
+ * *G* -- means giga (G) or 1000**3
+ * *T* -- means tera (T) or 1000**4
+ * *P* -- means peta (P) or 1000**5
+
+ To specify power-of-2 binary values defined in IEC 80000-13:
+
+ * *Ki* -- means kibi (Ki) or 1024
+ * *Mi* -- means mebi (Mi) or 1024**2
+ * *Gi* -- means gibi (Gi) or 1024**3
+ * *Ti* -- means tebi (Ti) or 1024**4
+ * *Pi* -- means pebi (Pi) or 1024**5
+
+ With :option:`kb_base` =1024 (the default), the unit prefixes are opposite
+ from those specified in the SI and IEC 80000-13 standards to provide
+ compatibility with old scripts. For example, 4k means 4096.
+
+ For quantities of data, an optional unit of 'B' may be included
+ (e.g., 'kB' is the same as 'k').
+
+ The *integer suffix* is not case sensitive (e.g., m/mi mean mebi/mega,
+ not milli). 'b' and 'B' both mean byte, not bit.
+
+ Examples with :option:`kb_base` =1000:
+
+ * *4 KiB*: 4096, 4096b, 4096B, 4ki, 4kib, 4kiB, 4Ki, 4KiB
+ * *1 MiB*: 1048576, 1mi, 1024ki
+ * *1 MB*: 1000000, 1m, 1000k
+ * *1 TiB*: 1099511627776, 1ti, 1024gi, 1048576mi
+ * *1 TB*: 1000000000000, 1t, 1000g, 1000000m
+
+ Examples with :option:`kb_base` =1024 (default):
+
+ * *4 KiB*: 4096, 4096b, 4096B, 4k, 4kb, 4kB, 4K, 4KB
+ * *1 MiB*: 1048576, 1m, 1024k
+ * *1 MB*: 1000000, 1mi, 1000ki
+ * *1 TiB*: 1099511627776, 1t, 1024g, 1048576m
+ * *1 TB*: 1000000000000, 1ti, 1000gi, 1000000mi
+
+ To specify times (units are not case sensitive):
+
+ * *D* -- means days
+ * *H* -- means hours
+ * *M* -- means minutes
+ * *s* -- or *sec* means seconds (default)
+ * *ms* -- or *msec* means milliseconds
+ * *us* -- or *usec* means microseconds
+
+ If the option accepts an upper and lower range, use a colon ':' or
+ minus '-' to separate such values. See :ref:`irange <irange>`.
+
+.. _bool:
+
+**bool**
+ Boolean. Usually parsed as an integer, however only defined for
+ true and false (1 and 0).
+
+.. _irange:
+
+**irange**
+ Integer range with suffix. Allows value range to be given, such as
+ 1024-4096. A colon may also be used as the separator, e.g. 1k:4k. If the
+ option allows two sets of ranges, they can be specified with a ',' or '/'
+ delimiter: 1k-4k/8k-32k. Also see :ref:`int <int>`.
+
+**float_list**
+ A list of floating point numbers, separated by a ':' character.
+
+
+Units
+~~~~~
+
+.. option:: kb_base=int
+
+ Select the interpretation of unit prefixes in input parameters.
+
+ **1000**
+ Inputs comply with IEC 80000-13 and the International
+ System of Units (SI). Use:
+
+ - power-of-2 values with IEC prefixes (e.g., KiB)
+ - power-of-10 values with SI prefixes (e.g., kB)
+
+ **1024**
+ Compatibility mode (default). To avoid breaking old scripts:
+
+ - power-of-2 values with SI prefixes
+ - power-of-10 values with IEC prefixes
+
+ See :option:`bs` for more details on input parameters.
+
+ Outputs always use correct prefixes. Most outputs include both
+ side-by-side, like::
+
+ bw=2383.3kB/s (2327.4KiB/s)
+
+ If only one value is reported, then kb_base selects the one to use:
+
+ **1000** -- SI prefixes
+
+ **1024** -- IEC prefixes
+
+.. option:: unit_base=int
+
+ Base unit for reporting. Allowed values are:
+
+ **0**
+ Use auto-detection (default).
+ **8**
+ Byte based.
+ **1**
+ Bit based.
+
+
+With the above in mind, here follows the complete list of fio job parameters.
+
+
+Job description
+~~~~~~~~~~~~~~~
+
+.. option:: name=str
+
+ ASCII name of the job. This may be used to override the name printed by fio
+ for this job. Otherwise the job name is used. On the command line this
+ parameter has the special purpose of also signaling the start of a new job.
+
+.. option:: description=str
+
+ Text description of the job. Doesn't do anything except dump this text
+ description when this job is run. It's not parsed.
+
+.. option:: loops=int
+
+ Run the specified number of iterations of this job. Used to repeat the same
+ workload a given number of times. Defaults to 1.
+
+.. option:: numjobs=int
+
+ Create the specified number of clones of this job. May be used to setup a
+ larger number of threads/processes doing the same thing. Each thread is
+ reported separately; to see statistics for all clones as a whole, use
+ :option:`group_reporting` in conjunction with :option:`new_group`.
+ See :option:`--max-jobs`.
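+
+ As a sketch, cloning a job four times and reporting the clones as one group
+ might look like:
+
+ .. code-block:: ini
+
+     [writers]
+     rw=randwrite
+     size=64m
+     numjobs=4
+     group_reporting=1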
+
+
+Time related parameters
+~~~~~~~~~~~~~~~~~~~~~~~
+
+.. option:: runtime=time
+
+ Tell fio to terminate processing after the specified period of time. It
+ can be quite hard to determine for how long a specified job will run, so
+ this parameter is handy to cap the total runtime to a given time. When
+ the unit is omitted, the value is given in seconds.
+
+.. option:: time_based
+
+ If set, fio will run for the duration of the :option:`runtime` specified
+ even if the file(s) are completely read or written. It will simply loop over
+ the same workload as many times as the :option:`runtime` allows.
+
+.. option:: startdelay=irange(time)
+
+ Delay start of job for the specified number of seconds. Supports all time
+ suffixes to allow specification of hours, minutes, seconds and milliseconds
+ -- seconds are the default if a unit is omitted. Can be given as a range
+ which causes each thread to choose randomly out of the range.
+
+.. option:: ramp_time=time
+
+ If set, fio will run the specified workload for this amount of time before
+ logging any performance numbers. Useful for letting performance settle
+ before logging results, thus minimizing the runtime required for stable
+ results. Note that the ``ramp_time`` is considered lead in time for a job,
+ thus it will increase the total runtime if a special timeout or
+ :option:`runtime` is specified. When the unit is omitted, the value is
+ given in seconds.
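+
+ Combining the options above, a job that warms up for 10 seconds and then
+ runs for a fixed 60 seconds regardless of file size might be sketched as:
+
+ .. code-block:: ini
+
+     [timed]
+     rw=randread
+     size=1g
+     time_based=1
+     runtime=60
+     ramp_time=10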
+
+.. option:: clocksource=str
+
+ Use the given clocksource as the base of timing. The supported options are:
+
+ **gettimeofday**
+ :manpage:`gettimeofday(2)`
+
+ **clock_gettime**
+ :manpage:`clock_gettime(2)`
+
+ **cpu**
+ Internal CPU clock source
+
+ cpu is the preferred clocksource if it is reliable, as it is very fast (and
+ fio is heavy on time calls). Fio will automatically use this clocksource if
+ it's supported and considered reliable on the system it is running on,
+ unless another clocksource is specifically set. For x86/x86-64 CPUs, this
+ means supporting TSC Invariant.
+
+.. option:: gtod_reduce=bool
+
+ Enable all of the :manpage:`gettimeofday(2)` reducing options
+ (:option:`disable_clat`, :option:`disable_slat`, :option:`disable_bw_measurement`) plus
+ reduce precision of the timeout somewhat to really shrink the
+ :manpage:`gettimeofday(2)` call count. With this option enabled, we only do
+ about 0.4% of the :manpage:`gettimeofday(2)` calls we would have done if all
+ time keeping was enabled.
+
+.. option:: gtod_cpu=int
+
+ Sometimes it's cheaper to dedicate a single thread of execution to just
+ getting the current time. Fio (and databases, for instance) are very
+ intensive on :manpage:`gettimeofday(2)` calls. With this option, you can set
+ one CPU aside for doing nothing but logging current time to a shared memory
+ location. Then the other threads/processes that run I/O workloads need only
+ copy that segment, instead of entering the kernel with a
+ :manpage:`gettimeofday(2)` call. The CPU set aside for doing these time
+ calls will be excluded from other uses. Fio will manually clear it from the
+ CPU mask of other jobs.
+
+
+Target file/device
+~~~~~~~~~~~~~~~~~~
+
+.. option:: directory=str
+
+ Prefix filenames with this directory. Used to place files in a different
+ location than :file:`./`. You can specify a number of directories by
+ separating the names with a ':' character. These directories will be
+ assigned equally distributed to job clones creates with :option:`numjobs` as
+ long as they are using generated filenames. If specific `filename(s)` are
+ set fio will use the first listed directory, and thereby matching the
+ `filename` semantic which generates a file each clone if not specified, but
+ let all clones use the same if set.
+
+ See the :option:`filename` option for escaping certain characters.
+
+.. option:: filename=str
+
+ Fio normally makes up a `filename` based on the job name, thread number, and
+ file number. If you want to share files between threads in a job or several
+ jobs, specify a `filename` for each of them to override the default. If the
+ ioengine is file based, you can specify a number of files by separating the
+ names with a ':' colon. So if you wanted a job to open :file:`/dev/sda` and
+ :file:`/dev/sdb` as the two working files, you would use
+ ``filename=/dev/sda:/dev/sdb``.
+ On Windows, disk devices are accessed as :file:`\\\\.\\PhysicalDrive0` for
+ the first device, :file:`\\\\.\\PhysicalDrive1` for the second etc.
+ Note: Windows and FreeBSD prevent write access to areas
+ of the disk containing in-use data (e.g. filesystems). If the wanted
+ `filename` does need to include a colon, then escape that with a ``\``
+ character. For instance, if the `filename` is :file:`/dev/dsk/foo@3,0:c`,
+ then you would use ``filename="/dev/dsk/foo@3,0\:c"``. The
+ :file:`-` is a reserved name, meaning stdin or stdout, depending on the
+ read/write direction set.
+
+.. option:: filename_format=str
+
+ If sharing multiple files between jobs, it is usually necessary to have fio
+ generate the exact names that you want. By default, fio will name a file
+ based on the default file format specification of
+ :file:`jobname.jobnumber.filenumber`. With this option, that can be
+ customized. Fio will recognize and replace the following keywords in this
+ string:
+
+ **$jobname**
+ The name of the worker thread or process.
+ **$jobnum**
+ The incremental number of the worker thread or process.
+ **$filenum**
+ The incremental number of the file for that worker thread or
+ process.
+
+ To have dependent jobs share a set of files, this option can be set to have
+ fio generate filenames that are shared between the two. For instance, if
+ :file:`testfiles.$filenum` is specified, file number 4 for any job will be
+ named :file:`testfiles.4`. The default of :file:`$jobname.$jobnum.$filenum`
+ will be used if no other format specifier is given.
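+
+ For instance, a write job and a later verify job could share a set of files
+ via a common format (names and sizes are illustrative):
+
+ .. code-block:: ini
+
+     [global]
+     filename_format=testfiles.$filenum
+     nrfiles=4
+     size=64m
+
+     [write-phase]
+     rw=write
+
+     [verify-phase]
+     stonewall
+     rw=read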
+
+.. option:: unique_filename=bool
+
+ To avoid collisions between networked clients, fio defaults to prefixing any
+ generated filenames (with a directory specified) with the source of the
+ client connecting. To disable this behavior, set this option to 0.
+
+.. option:: opendir=str
+
+ Recursively open any files below directory `str`.
+
+.. option:: lockfile=str
+
+ Fio defaults to not locking any files before it does I/O to them. If a file
+ or file descriptor is shared, fio can serialize I/O to that file to make the
+ end result consistent. This is usual for emulating real workloads that share
+ files. The lock modes are:
+
+ **none**
+ No locking. The default.
+ **exclusive**
+ Only one thread or process may do I/O at a time, excluding all
+ others.
+ **readwrite**
+ Read-write locking on the file. Many readers may
+ access the file at the same time, but writes get exclusive access.
+
+.. option:: nrfiles=int
+
+ Number of files to use for this job. Defaults to 1.
+
+.. option:: openfiles=int
+
+ Number of files to keep open at the same time. Defaults to the same as
+ :option:`nrfiles`, can be set smaller to limit the number of simultaneous
+ opens.
+
+.. option:: file_service_type=str
+
+ Defines how fio decides which file from a job to service next. The following
+ types are defined:
+
+ **random**
+ Choose a file at random.
+
+ **roundrobin**
+ Round robin over opened files. This is the default.
+
+ **sequential**
+ Finish one file before moving on to the next. Multiple files can
+ still be open depending on :option:`openfiles`.
+
+ **zipf**
+ Use a *Zipf* distribution to decide what file to access.
+
+ **pareto**
+ Use a *Pareto* distribution to decide what file to access.
+
+ **gauss**
+ Use a *Gaussian* (normal) distribution to decide what file to
+ access.
+
+ For *random*, *roundrobin*, and *sequential*, a postfix can be appended to
+ tell fio how many I/Os to issue before switching to a new file. For example,
+ specifying ``file_service_type=random:8`` would cause fio to issue
+ 8 I/Os before selecting a new file at random. For the non-uniform
+ distributions, a floating point postfix can be given to influence how the
+ distribution is skewed. See :option:`random_distribution` for a description
+ of how that would work.
+
+.. option:: ioscheduler=str
+
+ Attempt to switch the device hosting the file to the specified I/O scheduler
+ before running.
+
+.. option:: create_serialize=bool
+
+ If true, serialize the file creation for the jobs. This may be handy to
+ avoid interleaving of data files, which may greatly depend on the filesystem
+ used and even the number of processors in the system.
+
+.. option:: create_fsync=bool
+
+ fsync the data file after creation. This is the default.
+
+.. option:: create_on_open=bool
+
+ Don't pre-setup the files for I/O, just create and open them when it's time
+ to do I/O to each file.
+
+.. option:: create_only=bool
+
+ If true, fio will only run the setup phase of the job. If files need to be
+ laid out or updated on disk, only that will be done. The actual job contents
+ are not executed.
+
+.. option:: allow_file_create=bool
+
+ If true, fio is permitted to create files as part of its workload. This is
+ the default behavior. If this option is false, then fio will error out if
+ the files it needs to use don't already exist. Default: true.
+
+.. option:: allow_mounted_write=bool
+
+ If this isn't set, fio will abort jobs that are destructive (e.g. that write)
+ to what appears to be a mounted device or partition. This should help prevent
+ users from inadvertently creating destructive tests, not realizing that the
+ test will destroy data on the mounted file system. Default: false.
+
+.. option:: pre_read=bool
+
+ If this is given, files will be pre-read into memory before starting the
+ given I/O operation. This will also clear the :option:`invalidate` flag,
+ since it is pointless to pre-read and then drop the cache. This will only
+ work for I/O engines that are seek-able, since they allow you to read the
+ same data multiple times. Thus it will not work on e.g. network or splice I/O.
+
+.. option:: unlink=bool
+
+ Unlink the job files when done. Not the default, as repeated runs of that
+ job would then waste time recreating the file set again and again.
+
+.. option:: unlink_each_loop=bool
+
+ Unlink job files after each iteration or loop.
+
+.. option:: zonesize=int
+
+ Divide a file into zones of the specified size. See :option:`zoneskip`.
+
+.. option:: zonerange=int
+
+ Give size of an I/O zone. See :option:`zoneskip`.
+
+.. option:: zoneskip=int
+
+ Skip the specified number of bytes when :option:`zonesize` data has been
+ read. The two zone options can be used to only do I/O on zones of a file.
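+
+ For example, to read only the first 4MiB of every 64MiB region of a file
+ (values illustrative):
+
+ .. code-block:: ini
+
+     [zoned]
+     rw=read
+     size=1g
+     zonesize=4m
+     zoneskip=60m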
+
+
+I/O type
+~~~~~~~~
+
+.. option:: direct=bool
+
+ If value is true, use non-buffered I/O. This is usually O_DIRECT. Note that
+ ZFS on Solaris doesn't support direct I/O. On Windows the synchronous
+ ioengines don't support direct I/O. Default: false.
+
+.. option:: atomic=bool
+
+ If value is true, attempt to use atomic direct I/O. Atomic writes are
+ guaranteed to be stable once acknowledged by the operating system. Only
+ Linux supports O_ATOMIC right now.
+
+.. option:: buffered=bool
+
+ If value is true, use buffered I/O. This is the opposite of the
+ :option:`direct` option. Defaults to true.
+
+.. option:: readwrite=str, rw=str
+
+ Type of I/O pattern. Accepted values are:
+
+ **read**
+ Sequential reads.
+ **write**
+ Sequential writes.
+ **trim**
+ Sequential trims (Linux block devices only).
+ **randwrite**
+ Random writes.
+ **randread**
+ Random reads.
+ **randtrim**
+ Random trims (Linux block devices only).
+ **rw,readwrite**
+ Sequential mixed reads and writes.
+ **randrw**
+ Random mixed reads and writes.
+ **trimwrite**
+ Sequential trim+write sequences. Blocks will be trimmed first,
+ then the same blocks will be written to.
+
+ Fio defaults to read if the option is not specified. For the mixed I/O
+ types, the default is to split them 50/50. For certain types of I/O the
+ result may still be skewed a bit, since the speed may be different. It is
+ possible to specify a number of I/O's to do before getting a new offset;
+ this is done by appending a ``:<nr>`` to the end of the string given. For a
+ random read, it would look like ``rw=randread:8`` for passing in an offset
+ modifier with a value of 8. If the suffix is used with a sequential I/O
+ pattern, then the value specified will be added to the generated offset for
+ each I/O. For instance, using ``rw=write:4k`` will skip 4k for every
+ write. It turns sequential I/O into sequential I/O with holes. See the
+ :option:`rw_sequencer` option.
+
+.. option:: rw_sequencer=str
+
+ If an offset modifier is given by appending a number to the ``rw=<str>``
+ line, then this option controls how that number modifies the I/O offset
+ being generated. Accepted values are:
+
+ **sequential**
+ Generate sequential offset.
+ **identical**
+ Generate the same offset.
+
+ ``sequential`` is only useful for random I/O, where fio would normally
+ generate a new random offset for every I/O. If you append e.g. 8 to randread,
+ you would get a new random offset for every 8 I/O's. The result would be a
+ seek for only every 8 I/O's, instead of for every I/O. Use ``rw=randread:8``
+ to specify that. As sequential I/O is already sequential, setting
+ ``sequential`` for that would not result in any differences. ``identical``
+	behaves in a similar fashion, except it issues the same offset the given
+	number of times (8, in the example above) before generating a new offset.
+
+.. option:: unified_rw_reporting=bool
+
+ Fio normally reports statistics on a per data direction basis, meaning that
+ reads, writes, and trims are accounted and reported separately. If this
+	option is set, fio sums the results and reports them as "mixed" instead.
+
+.. option:: randrepeat=bool
+
+ Seed the random number generator used for random I/O patterns in a
+ predictable way so the pattern is repeatable across runs. Default: true.
+
+.. option:: allrandrepeat=bool
+
+ Seed all random number generators in a predictable way so results are
+ repeatable across runs. Default: false.
+
+.. option:: randseed=int
+
+ Seed the random number generators based on this seed value, to be able to
+ control what sequence of output is being generated. If not set, the random
+ sequence depends on the :option:`randrepeat` setting.
+
+.. option:: fallocate=str
+
+ Whether pre-allocation is performed when laying down files.
+ Accepted values are:
+
+ **none**
+ Do not pre-allocate space.
+
+ **posix**
+ Pre-allocate via :manpage:`posix_fallocate(3)`.
+
+ **keep**
+ Pre-allocate via :manpage:`fallocate(2)` with
+ FALLOC_FL_KEEP_SIZE set.
+
+ **0**
+ Backward-compatible alias for **none**.
+
+ **1**
+ Backward-compatible alias for **posix**.
+
+ May not be available on all supported platforms. **keep** is only available
+ on Linux. If using ZFS on Solaris this must be set to **none** because ZFS
+ doesn't support it. Default: **posix**.
+
+.. option:: fadvise_hint=str
+
+ Use :manpage:`posix_fadvise(2)` to advise the kernel on what I/O patterns
+ are likely to be issued. Accepted values are:
+
+ **0**
+ Backwards-compatible hint for "no hint".
+
+ **1**
+		Backwards-compatible hint for "advise with fio workload type". This
+ uses **FADV_RANDOM** for a random workload, and **FADV_SEQUENTIAL**
+ for a sequential workload.
+
+ **sequential**
+ Advise using **FADV_SEQUENTIAL**.
+
+ **random**
+ Advise using **FADV_RANDOM**.
+
+.. option:: fadvise_stream=int
+
+ Use :manpage:`posix_fadvise(2)` to advise the kernel what stream ID the
+ writes issued belong to. Only supported on Linux. Note, this option may
+ change going forward.
+
+.. option:: offset=int
+
+ Start I/O at the given offset in the file. The data before the given offset
+ will not be touched. This effectively caps the file size at `real_size -
+ offset`. Can be combined with :option:`size` to constrain the start and
+ end range that I/O will be done within.
+
+.. option:: offset_increment=int
+
+ If this is provided, then the real offset becomes `offset + offset_increment
+ * thread_number`, where the thread number is a counter that starts at 0 and
+ is incremented for each sub-job (i.e. when :option:`numjobs` option is
+ specified). This option is useful if there are several jobs which are
+ intended to operate on a file in parallel disjoint segments, with even
+ spacing between the starting points.
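+
+	For example, a sketch (hypothetical device name) of four sub-jobs
+	writing disjoint 1GiB segments spaced 1GiB apart::
+
+		[parallel-segments]
+		filename=/dev/sdx
+		rw=write
+		size=1g
+		offset_increment=1g
+		numjobs=4
+
+	Sub-job `n` then starts its I/O at an offset of `n` GiB and writes 1GiB.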
+
+.. option:: number_ios=int
+
+ Fio will normally perform I/Os until it has exhausted the size of the region
+	set by :option:`size`, or if it exhausts the allocated time (or hits an error
+ condition). With this setting, the range/size can be set independently of
+ the number of I/Os to perform. When fio reaches this number, it will exit
+ normally and report status. Note that this does not extend the amount of I/O
+ that will be done, it will only stop fio if this condition is met before
+ other end-of-job criteria.
+
+.. option:: fsync=int
+
+ If writing to a file, issue a sync of the dirty data for every number of
+ blocks given. For example, if you give 32 as a parameter, fio will sync the
+ file for every 32 writes issued. If fio is using non-buffered I/O, we may
+ not sync the file. The exception is the sg I/O engine, which synchronizes
+ the disk cache anyway.
+
+.. option:: fdatasync=int
+
+ Like :option:`fsync` but uses :manpage:`fdatasync(2)` to only sync data and
+ not metadata blocks. In FreeBSD and Windows there is no
+ :manpage:`fdatasync(2)`, this falls back to using :manpage:`fsync(2)`.
+
+.. option:: write_barrier=int
+
+ Make every `N-th` write a barrier write.
+
+.. option:: sync_file_range=str:val
+
+ Use :manpage:`sync_file_range(2)` for every `val` number of write
+ operations. Fio will track range of writes that have happened since the last
+ :manpage:`sync_file_range(2)` call. `str` can currently be one or more of:
+
+ **wait_before**
+ SYNC_FILE_RANGE_WAIT_BEFORE
+ **write**
+ SYNC_FILE_RANGE_WRITE
+ **wait_after**
+ SYNC_FILE_RANGE_WAIT_AFTER
+
+ So if you do ``sync_file_range=wait_before,write:8``, fio would use
+ ``SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE`` for every 8
+ writes. Also see the :manpage:`sync_file_range(2)` man page. This option is
+ Linux specific.
+
+.. option:: overwrite=bool
+
+ If true, writes to a file will always overwrite existing data. If the file
+ doesn't already exist, it will be created before the write phase begins. If
+ the file exists and is large enough for the specified write phase, nothing
+ will be done.
+
+.. option:: end_fsync=bool
+
+ If true, fsync file contents when a write stage has completed.
+
+.. option:: fsync_on_close=bool
+
+ If true, fio will :manpage:`fsync(2)` a dirty file on close. This differs
+ from end_fsync in that it will happen on every file close, not just at the
+ end of the job.
+
+.. option:: rwmixread=int
+
+ Percentage of a mixed workload that should be reads. Default: 50.
+
+.. option:: rwmixwrite=int
+
+ Percentage of a mixed workload that should be writes. If both
+	:option:`rwmixread` and :option:`rwmixwrite` are given and the values do not
+ add up to 100%, the latter of the two will be used to override the
+ first. This may interfere with a given rate setting, if fio is asked to
+ limit reads or writes to a certain rate. If that is the case, then the
+ distribution may be skewed. Default: 50.
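+
+	As a sketch, a 70/30 random read/write mix (file name and size are
+	hypothetical)::
+
+		[mixed]
+		filename=/tmp/fio.test
+		size=256m
+		rw=randrw
+		rwmixread=70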
+
+.. option:: random_distribution=str:float[,str:float][,str:float]
+
+ By default, fio will use a completely uniform random distribution when asked
+ to perform random I/O. Sometimes it is useful to skew the distribution in
+	specific ways, ensuring that some parts of the data are hotter than others.
+ fio includes the following distribution models:
+
+ **random**
+ Uniform random distribution
+
+ **zipf**
+ Zipf distribution
+
+ **pareto**
+ Pareto distribution
+
+ **gauss**
+ Normal (Gaussian) distribution
+
+ **zoned**
+ Zoned random distribution
+
+ When using a **zipf** or **pareto** distribution, an input value is also
+ needed to define the access pattern. For **zipf**, this is the `zipf
+ theta`. For **pareto**, it's the `Pareto power`. Fio includes a test
+	program, :command:`genzipf`, that can be used to visualize what the given input
+ values will yield in terms of hit rates. If you wanted to use **zipf** with
+ a `theta` of 1.2, you would use ``random_distribution=zipf:1.2`` as the
+ option. If a non-uniform model is used, fio will disable use of the random
+ map. For the **gauss** distribution, a normal deviation is supplied as a
+ value between 0 and 100.
+
+ For a **zoned** distribution, fio supports specifying percentages of I/O
+ access that should fall within what range of the file or device. For
+ example, given a criteria of:
+
+ * 60% of accesses should be to the first 10%
+ * 30% of accesses should be to the next 20%
+	* 8% of accesses should be to the next 30%
+ * 2% of accesses should be to the next 40%
+
+ we can define that through zoning of the random accesses. For the above
+ example, the user would do::
+
+ random_distribution=zoned:60/10:30/20:8/30:2/40
+
+ similarly to how :option:`bssplit` works for setting ranges and percentages
+ of block sizes. Like :option:`bssplit`, it's possible to specify separate
+ zones for reads, writes, and trims. If just one set is given, it'll apply to
+ all of them.
+
+.. option:: percentage_random=int[,int][,int]
+
+ For a random workload, set how big a percentage should be random. This
+	defaults to 100%, in which case the workload is fully random. It can be set
+	anywhere from 0 to 100. Setting it to 0 would make the workload fully
+ sequential. Any setting in between will result in a random mix of sequential
+ and random I/O, at the given percentages. Comma-separated values may be
+ specified for reads, writes, and trims as described in :option:`blocksize`.
+
+.. option:: norandommap
+
+ Normally fio will cover every block of the file when doing random I/O. If
+ this option is given, fio will just get a new random offset without looking
+ at past I/O history. This means that some blocks may not be read or written,
+ and that some blocks may be read/written more than once. If this option is
+ used with :option:`verify` and multiple blocksizes (via :option:`bsrange`),
+ only intact blocks are verified, i.e., partially-overwritten blocks are
+ ignored.
+
+.. option:: softrandommap=bool
+
+	See :option:`norandommap`. If fio runs with the random block map enabled and
+	it fails to allocate the map, fio will continue without a random block map
+	if this option is set. As coverage will not be as complete as with random
+	maps, this option is disabled by default.
+
+.. option:: random_generator=str
+
+ Fio supports the following engines for generating
+ I/O offsets for random I/O:
+
+ **tausworthe**
+ Strong 2^88 cycle random number generator
+ **lfsr**
+ Linear feedback shift register generator
+ **tausworthe64**
+ Strong 64-bit 2^258 cycle random number generator
+
+ **tausworthe** is a strong random number generator, but it requires tracking
+ on the side if we want to ensure that blocks are only read or written
+ once. **LFSR** guarantees that we never generate the same offset twice, and
+ it's also less computationally expensive. It's not a true random generator,
+ however, though for I/O purposes it's typically good enough. **LFSR** only
+ works with single block sizes, not with workloads that use multiple block
+ sizes. If used with such a workload, fio may read or write some blocks
+ multiple times. The default value is **tausworthe**, unless the required
+ space exceeds 2^32 blocks. If it does, then **tausworthe64** is
+ selected automatically.
+
+
+Block size
+~~~~~~~~~~
+
+.. option:: blocksize=int[,int][,int], bs=int[,int][,int]
+
+ The block size in bytes used for I/O units. Default: 4096. A single value
+ applies to reads, writes, and trims. Comma-separated values may be
+ specified for reads, writes, and trims. A value not terminated in a comma
+ applies to subsequent types.
+
+ Examples:
+
+ **bs=256k**
+ means 256k for reads, writes and trims.
+
+ **bs=8k,32k**
+ means 8k for reads, 32k for writes and trims.
+
+ **bs=8k,32k,**
+ means 8k for reads, 32k for writes, and default for trims.
+
+ **bs=,8k**
+ means default for reads, 8k for writes and trims.
+
+ **bs=,8k,**
+		means default for reads, 8k for writes, and default for trims.
+
+.. option:: blocksize_range=irange[,irange][,irange], bsrange=irange[,irange][,irange]
+
+ A range of block sizes in bytes for I/O units. The issued I/O unit will
+ always be a multiple of the minimum size, unless
+ :option:`blocksize_unaligned` is set.
+
+ Comma-separated ranges may be specified for reads, writes, and trims as
+ described in :option:`blocksize`.
+
+ Example: ``bsrange=1k-4k,2k-8k``.
+
+.. option:: bssplit=str[,str][,str]
+
+ Sometimes you want even finer grained control of the block sizes issued, not
+ just an even split between them. This option allows you to weight various
+ block sizes, so that you are able to define a specific amount of block sizes
+ issued. The format for this option is::
+
+ bssplit=blocksize/percentage:blocksize/percentage
+
+ for as many block sizes as needed. So if you want to define a workload that
+ has 50% 64k blocks, 10% 4k blocks, and 40% 32k blocks, you would write::
+
+ bssplit=4k/10:64k/50:32k/40
+
+ Ordering does not matter. If the percentage is left blank, fio will fill in
+ the remaining values evenly. So a bssplit option like this one::
+
+ bssplit=4k/50:1k/:32k/
+
+	would have 50% 4k I/Os, and 25% each of 1k and 32k I/Os. The percentages
+	always add up to 100; if bssplit is given a set that adds up to more, it
+	will error out.
+
+ Comma-separated values may be specified for reads, writes, and trims as
+ described in :option:`blocksize`.
+
+ If you want a workload that has 50% 2k reads and 50% 4k reads, while having
+ 90% 4k writes and 10% 8k writes, you would specify::
+
+		bssplit=2k/50:4k/50,4k/90:8k/10
+
+.. option:: blocksize_unaligned, bs_unaligned
+
+ If set, fio will issue I/O units with any size within
+ :option:`blocksize_range`, not just multiples of the minimum size. This
+ typically won't work with direct I/O, as that normally requires sector
+ alignment.
+
+.. option:: bs_is_seq_rand
+
+ If this option is set, fio will use the normal read,write blocksize settings
+ as sequential,random blocksize settings instead. Any random read or write
+ will use the WRITE blocksize settings, and any sequential read or write will
+ use the READ blocksize settings.
+
+.. option:: blockalign=int[,int][,int], ba=int[,int][,int]
+
+ Boundary to which fio will align random I/O units. Default:
+ :option:`blocksize`. Minimum alignment is typically 512b for using direct
+ I/O, though it usually depends on the hardware block size. This option is
+ mutually exclusive with using a random map for files, so it will turn off
+ that option. Comma-separated values may be specified for reads, writes, and
+ trims as described in :option:`blocksize`.
+
+
+Buffers and memory
+~~~~~~~~~~~~~~~~~~
+
+.. option:: zero_buffers
+
+ Initialize buffers with all zeros. Default: fill buffers with random data.
+
+.. option:: refill_buffers
+
+ If this option is given, fio will refill the I/O buffers on every
+ submit. The default is to only fill it at init time and reuse that
+	data. Only makes sense if :option:`zero_buffers` isn't specified. If data
+ verification is enabled, `refill_buffers` is also automatically enabled.
+
+.. option:: scramble_buffers=bool
+
+ If :option:`refill_buffers` is too costly and the target is using data
+ deduplication, then setting this option will slightly modify the I/O buffer
+ contents to defeat normal de-dupe attempts. This is not enough to defeat
+ more clever block compression attempts, but it will stop naive dedupe of
+ blocks. Default: true.
+
+.. option:: buffer_compress_percentage=int
+
+ If this is set, then fio will attempt to provide I/O buffer content (on
+ WRITEs) that compress to the specified level. Fio does this by providing a
+ mix of random data and a fixed pattern. The fixed pattern is either zeroes,
+ or the pattern specified by :option:`buffer_pattern`. If the pattern option
+ is used, it might skew the compression ratio slightly. Note that this is per
+ block size unit, for file/disk wide compression level that matches this
+ setting, you'll also want to set :option:`refill_buffers`.
+
+.. option:: buffer_compress_chunk=int
+
+ See :option:`buffer_compress_percentage`. This setting allows fio to manage
+	how big the ranges of random data and zeroed data are. Without this set, fio
+ will provide :option:`buffer_compress_percentage` of blocksize random data,
+ followed by the remaining zeroed. With this set to some chunk size smaller
+ than the block size, fio can alternate random and zeroed data throughout the
+ I/O buffer.
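+
+	As a sketch, the following (hypothetical file name) writes buffers
+	that compress to roughly 50%, alternating random and zeroed data in
+	4k chunks within each 64k block::
+
+		[compressible]
+		filename=/tmp/fio.test
+		rw=write
+		bs=64k
+		size=256m
+		buffer_compress_percentage=50
+		buffer_compress_chunk=4k
+		refill_buffers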
+
+.. option:: buffer_pattern=str
+
+ If set, fio will fill the I/O buffers with this pattern. If not set, the
+	contents of the I/O buffers are defined by the other options related to buffer
+ contents. The setting can be any pattern of bytes, and can be prefixed with
+ 0x for hex values. It may also be a string, where the string must then be
+ wrapped with ``""``, e.g.::
+
+ buffer_pattern="abcd"
+
+ or::
+
+ buffer_pattern=-12
+
+ or::
+
+ buffer_pattern=0xdeadface
+
+	You can also combine everything together in any order::
+
+ buffer_pattern=0xdeadface"abcd"-12
+
+.. option:: dedupe_percentage=int
+
+ If set, fio will generate this percentage of identical buffers when
+ writing. These buffers will be naturally dedupable. The contents of the
+ buffers depend on what other buffer compression settings have been set. It's
+ possible to have the individual buffers either fully compressible, or not at
+ all. This option only controls the distribution of unique buffers.
+
+.. option:: invalidate=bool
+
+ Invalidate the buffer/page cache parts for this file prior to starting
+ I/O. Defaults to true.
+
+.. option:: sync=bool
+
+ Use synchronous I/O for buffered writes. For the majority of I/O engines,
+ this means using O_SYNC. Default: false.
+
+.. option:: iomem=str, mem=str
+
+ Fio can use various types of memory as the I/O unit buffer. The allowed
+ values are:
+
+ **malloc**
+ Use memory from :manpage:`malloc(3)` as the buffers. Default memory
+ type.
+
+ **shm**
+ Use shared memory as the buffers. Allocated through
+ :manpage:`shmget(2)`.
+
+ **shmhuge**
+ Same as shm, but use huge pages as backing.
+
+ **mmap**
+ Use mmap to allocate buffers. May either be anonymous memory, or can
+ be file backed if a filename is given after the option. The format
+ is `mem=mmap:/path/to/file`.
+
+ **mmaphuge**
+ Use a memory mapped huge file as the buffer backing. Append filename
+		after mmaphuge, e.g. `mem=mmaphuge:/hugetlbfs/file`.
+
+ **mmapshared**
+		Same as mmap, but use a MAP_SHARED mapping.
+
+ The area allocated is a function of the maximum allowed bs size for the job,
+ multiplied by the I/O depth given. Note that for **shmhuge** and
+ **mmaphuge** to work, the system must have free huge pages allocated. This
+ can normally be checked and set by reading/writing
+ :file:`/proc/sys/vm/nr_hugepages` on a Linux system. Fio assumes a huge page
+ is 4MiB in size. So to calculate the number of huge pages you need for a
+ given job file, add up the I/O depth of all jobs (normally one unless
+ :option:`iodepth` is used) and multiply by the maximum bs set. Then divide
+ that number by the huge page size. You can see the size of the huge pages in
+	:file:`/proc/meminfo`. If no huge pages are allocated (i.e. `nr_hugepages`
+	is zero), using **mmaphuge** or **shmhuge** will fail. Also
+ see :option:`hugepage-size`.
+
+ **mmaphuge** also needs to have hugetlbfs mounted and the file location
+ should point there. So if it's mounted in :file:`/huge`, you would use
+ `mem=mmaphuge:/huge/somefile`.
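+
+	As a worked example with hypothetical numbers: a single job with
+	``iodepth=16`` and a maximum block size of 256k needs 16 * 256KiB =
+	4MiB of buffer space, i.e. one 4MiB huge page. A sketch of such a
+	job::
+
+		[huge-backed]
+		filename=/tmp/fio.test
+		ioengine=libaio
+		direct=1
+		iodepth=16
+		bs=256k
+		rw=randread
+		size=256m
+		mem=shmhuge
+
+	(This assumes at least one huge page has been reserved via
+	:file:`/proc/sys/vm/nr_hugepages`.)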
+
+.. option:: iomem_align=int
+
+ This indicates the memory alignment of the I/O memory buffers. Note that
+ the given alignment is applied to the first I/O unit buffer, if using
+ :option:`iodepth` the alignment of the following buffers are given by the
+ :option:`bs` used. In other words, if using a :option:`bs` that is a
+	multiple of the page size of the system, all buffers will be aligned to
+ this value. If using a :option:`bs` that is not page aligned, the alignment
+ of subsequent I/O memory buffers is the sum of the :option:`iomem_align` and
+ :option:`bs` used.
+
+.. option:: hugepage-size=int
+
+ Defines the size of a huge page. Must at least be equal to the system
+ setting, see :file:`/proc/meminfo`. Defaults to 4MiB. Should probably
+ always be a multiple of megabytes, so using ``hugepage-size=Xm`` is the
+ preferred way to set this to avoid setting a non-pow-2 bad value.
+
+.. option:: lockmem=int
+
+ Pin the specified amount of memory with :manpage:`mlock(2)`. Can be used to
+ simulate a smaller amount of memory. The amount specified is per worker.
+
+
+I/O size
+~~~~~~~~
+
+.. option:: size=int
+
+	The total size of file I/O for this job. Fio will run until this many bytes
+	have been transferred, unless runtime is limited by other options (such as
+ :option:`runtime`, for instance, or increased/decreased by
+ :option:`io_size`). Unless specific :option:`nrfiles` and :option:`filesize`
+ options are given, fio will divide this size between the available files
+ specified by the job. If not set, fio will use the full size of the given
+ files or devices. If the files do not exist, size must be given. It is also
+ possible to give size as a percentage between 1 and 100. If ``size=20%`` is
+ given, fio will use 20% of the full size of the given files or devices.
+ Can be combined with :option:`offset` to constrain the start and end range
+ that I/O will be done within.
+
+.. option:: io_size=int, io_limit=int
+
+ Normally fio operates within the region set by :option:`size`, which means
+ that the :option:`size` option sets both the region and size of I/O to be
+ performed. Sometimes that is not what you want. With this option, it is
+ possible to define just the amount of I/O that fio should do. For instance,
+ if :option:`size` is set to 20GiB and :option:`io_size` is set to 5GiB, fio
+ will perform I/O within the first 20GiB but exit when 5GiB have been
+ done. The opposite is also possible -- if :option:`size` is set to 20GiB,
+ and :option:`io_size` is set to 40GiB, then fio will do 40GiB of I/O within
+ the 0..20GiB region.
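+
+	As a sketch (hypothetical device name), the first case above would be::
+
+		[capped-io]
+		filename=/dev/sdx
+		rw=randwrite
+		size=20g
+		io_size=5g
+
+	This confines I/O to the first 20GiB of the device, but stops after
+	5GiB has been written.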
+
+.. option:: filesize=int
+
+ Individual file sizes. May be a range, in which case fio will select sizes
+ for files at random within the given range and limited to :option:`size` in
+ total (if that is given). If not given, each created file is the same size.
+
+.. option:: file_append=bool
+
+ Perform I/O after the end of the file. Normally fio will operate within the
+ size of a file. If this option is set, then fio will append to the file
+ instead. This has identical behavior to setting :option:`offset` to the size
+ of a file. This option is ignored on non-regular files.
+
+.. option:: fill_device=bool, fill_fs=bool
+
+ Sets size to something really large and waits for ENOSPC (no space left on
+ device) as the terminating condition. Only makes sense with sequential
+	write. For a read workload, the mount point will be filled first, then I/O
+ started on the result. This option doesn't make sense if operating on a raw
+ device node, since the size of that is already known by the file system.
+ Additionally, writing beyond end-of-device will not return ENOSPC there.
+
+
+I/O engine
+~~~~~~~~~~
+
+.. option:: ioengine=str
+
+ Defines how the job issues I/O to the file. The following types are defined:
+
+ **sync**
+ Basic :manpage:`read(2)` or :manpage:`write(2)`
+ I/O. :manpage:`lseek(2)` is used to position the I/O location.
+
+ **psync**
+ Basic :manpage:`pread(2)` or :manpage:`pwrite(2)` I/O. Default on
+ all supported operating systems except for Windows.
+
+ **vsync**
+ Basic :manpage:`readv(2)` or :manpage:`writev(2)` I/O. Will emulate
+ queuing by coalescing adjacent I/Os into a single submission.
+
+ **pvsync**
+ Basic :manpage:`preadv(2)` or :manpage:`pwritev(2)` I/O.
+
+ **pvsync2**
+ Basic :manpage:`preadv2(2)` or :manpage:`pwritev2(2)` I/O.
+
+ **libaio**
+ Linux native asynchronous I/O. Note that Linux may only support
+ queued behaviour with non-buffered I/O (set ``direct=1`` or
+ ``buffered=0``).
+ This engine defines engine specific options.
+
+ **posixaio**
+ POSIX asynchronous I/O using :manpage:`aio_read(3)` and
+ :manpage:`aio_write(3)`.
+
+ **solarisaio**
+ Solaris native asynchronous I/O.
+
+ **windowsaio**
+ Windows native asynchronous I/O. Default on Windows.
+
+ **mmap**
+ File is memory mapped with :manpage:`mmap(2)` and data copied
+ to/from using :manpage:`memcpy(3)`.
+
+ **splice**
+ :manpage:`splice(2)` is used to transfer the data and
+ :manpage:`vmsplice(2)` to transfer data from user space to the
+ kernel.
+
+ **sg**
+ SCSI generic sg v3 I/O. May either be synchronous using the SG_IO
+ ioctl, or if the target is an sg character device we use
+ :manpage:`read(2)` and :manpage:`write(2)` for asynchronous
+ I/O. Requires filename option to specify either block or character
+ devices.
+
+ **null**
+ Doesn't transfer any data, just pretends to. This is mainly used to
+ exercise fio itself and for debugging/testing purposes.
+
+ **net**
+ Transfer over the network to given ``host:port``. Depending on the
+ :option:`protocol` used, the :option:`hostname`, :option:`port`,
+ :option:`listen` and :option:`filename` options are used to specify
+ what sort of connection to make, while the :option:`protocol` option
+ determines which protocol will be used. This engine defines engine
+ specific options.
+
+ **netsplice**
+ Like **net**, but uses :manpage:`splice(2)` and
+ :manpage:`vmsplice(2)` to map data and send/receive.
+ This engine defines engine specific options.
+
+ **cpuio**
+ Doesn't transfer any data, but burns CPU cycles according to the
+ :option:`cpuload` and :option:`cpuchunks` options. Setting
+ :option:`cpuload` =85 will cause that job to do nothing but burn 85%
+ of the CPU. In case of SMP machines, use :option:`numjobs`
+ =<no_of_cpu> to get desired CPU usage, as the cpuload only loads a
+ single CPU at the desired rate. A job never finishes unless there is
+ at least one non-cpuio job.
+
+ **guasi**
+		The GUASI I/O engine is the Generic Userspace Asynchronous Syscall
+ Interface approach to async I/O. See
+
+ http://www.xmailserver.org/guasi-lib.html
+
+ for more info on GUASI.
+
+ **rdma**
+ The RDMA I/O engine supports both RDMA memory semantics
+ (RDMA_WRITE/RDMA_READ) and channel semantics (Send/Recv) for the
+ InfiniBand, RoCE and iWARP protocols.
+
+ **falloc**
+		I/O engine that does regular :manpage:`fallocate(2)` calls to
+		simulate data transfer as fio ioengine.
+
+ DDIR_READ
+ does fallocate(,mode = FALLOC_FL_KEEP_SIZE,).
+
+ DDIR_WRITE
+ does fallocate(,mode = 0).
+
+ DDIR_TRIM
+ does fallocate(,mode = FALLOC_FL_KEEP_SIZE|FALLOC_FL_PUNCH_HOLE).
+
+ **e4defrag**
+		I/O engine that does regular EXT4_IOC_MOVE_EXT ioctls to simulate
+		defragment activity on DDIR_WRITE events.
+
+ **rbd**
+ I/O engine supporting direct access to Ceph Rados Block Devices
+ (RBD) via librbd without the need to use the kernel rbd driver. This
+ ioengine defines engine specific options.
+
+ **gfapi**
+		Uses the GlusterFS libgfapi sync interface to access GlusterFS
+		volumes directly, without having to go through FUSE. This ioengine
+		defines engine specific options.
+
+ **gfapi_async**
+		Uses the GlusterFS libgfapi async interface to access GlusterFS
+		volumes directly, without having to go through FUSE. This ioengine
+		defines engine specific options.
+
+ **libhdfs**
+		Read and write through Hadoop (HDFS). The :file:`filename` option
+		is used to specify the host,port pair of the HDFS name-node to
+		connect to. This engine interprets offsets a little differently. In
+		HDFS, files once created cannot be modified, so random writes are
+		not possible. To imitate this, the libhdfs engine expects a bunch
+		of small files to be created over HDFS, and will randomly pick one
+		of those files based on the offset generated by the fio backend
+		(see the example job file on how to create such files; use the
+		``rw=write`` option). Please note, you might want to set necessary
+		environment variables to work with hdfs/libhdfs properly. Each job
+		uses its own connection to HDFS.
+
+ **mtd**
+ Read, write and erase an MTD character device (e.g.,
+ :file:`/dev/mtd0`). Discards are treated as erases. Depending on the
+ underlying device type, the I/O may have to go in a certain pattern,
+ e.g., on NAND, writing sequentially to erase blocks and discarding
+ before overwriting. The writetrim mode works well for this
+ constraint.
+
+ **pmemblk**
+ Read and write using filesystem DAX to a file on a filesystem
+ mounted with DAX on a persistent memory device through the NVML
+ libpmemblk library.
+
+ **dev-dax**
+ Read and write using device DAX to a persistent memory device (e.g.,
+ /dev/dax0.0) through the NVML libpmem library.
+
+ **external**
+ Prefix to specify loading an external I/O engine object file. Append
+ the engine filename, e.g. ``ioengine=external:/tmp/foo.o`` to load
+ ioengine :file:`foo.o` in :file:`/tmp`.
+
+
+I/O engine specific parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In addition, there are some parameters which are only valid when a specific
+ioengine is in use. These are used identically to normal parameters, with the
+caveat that when used on the command line, they must come after the
+:option:`ioengine` that defines them is selected.
+
+.. option:: userspace_reap : [libaio]
+
+ Normally, with the libaio engine in use, fio will use the
+ :manpage:`io_getevents(2)` system call to reap newly returned events. With
+ this flag turned on, the AIO ring will be read directly from user-space to
+ reap events. The reaping mode is only enabled when polling for a minimum of
+ 0 events (e.g. when :option:`iodepth_batch_complete` `=0`).
+
+.. option:: hipri : [pvsync2]
+
+ Set RWF_HIPRI on I/O, indicating to the kernel that it's of higher priority
+ than normal.
+
+.. option:: cpuload=int : [cpuio]
+
+ Attempt to use the specified percentage of CPU cycles.
+
+.. option:: cpuchunks=int : [cpuio]
+
+ Split the load into cycles of the given time. In microseconds.
+
+.. option:: exit_on_io_done=bool : [cpuio]
+
+ Detect when I/O threads are done, then exit.
+
+.. option:: hostname=str : [netsplice] [net]
+
+ The host name or IP address to use for TCP or UDP based I/O. If the job is
+ a TCP listener or UDP reader, the host name is not used and must be omitted
+ unless it is a valid UDP multicast address.
+
+.. option:: namenode=str : [libhdfs]
+
+ The host name or IP address of a HDFS cluster namenode to contact.
+
+.. option:: port=int
+
+ [netsplice], [net]
+
+ The TCP or UDP port to bind to or connect to. If this is used with
+ :option:`numjobs` to spawn multiple instances of the same job type, then
+ this will be the starting port number since fio will use a range of
+ ports.
+
+ [libhdfs]
+
+	The listening port of the HDFS cluster namenode.
+
+.. option:: interface=str : [netsplice] [net]
+
+ The IP address of the network interface used to send or receive UDP
+ multicast.
+
+.. option:: ttl=int : [netsplice] [net]
+
+ Time-to-live value for outgoing UDP multicast packets. Default: 1.
+
+.. option:: nodelay=bool : [netsplice] [net]
+
+ Set TCP_NODELAY on TCP connections.
+
+.. option:: protocol=str : [netsplice] [net]
+
+.. option:: proto=str : [netsplice] [net]
+
+ The network protocol to use. Accepted values are:
+
+ **tcp**
+ Transmission control protocol.
+ **tcpv6**
+ Transmission control protocol V6.
+ **udp**
+ User datagram protocol.
+ **udpv6**
+ User datagram protocol V6.
+ **unix**
+ UNIX domain socket.
+
+ When the protocol is TCP or UDP, the port must also be given, as well as the
+ hostname if the job is a TCP listener or UDP reader. For unix sockets, the
+ normal filename option should be used and the port is invalid.
+
+.. option:: listen : [net]
+
+ For TCP network connections, tell fio to listen for incoming connections
+ rather than initiating an outgoing connection. The :option:`hostname` must
+ be omitted if this option is used.
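+
+	As a sketch, a receiver and sender pair (host name and port are
+	hypothetical); the receiver job::
+
+		[net-recv]
+		ioengine=net
+		protocol=tcp
+		port=8765
+		listen
+		rw=read
+		bs=64k
+		size=1g
+
+	and the matching sender job, run on the other host::
+
+		[net-send]
+		ioengine=net
+		protocol=tcp
+		hostname=receiver.example.com
+		port=8765
+		rw=write
+		bs=64k
+		size=1g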
+
+.. option:: pingpong : [net]
+
+ Normally a network writer will just continue writing data, and a network
+	reader will just consume packets. If ``pingpong=1`` is set, a writer will
+ send its normal payload to the reader, then wait for the reader to send the
+ same payload back. This allows fio to measure network latencies. The
+ submission and completion latencies then measure local time spent sending or
+ receiving, and the completion latency measures how long it took for the
+ other end to receive and send back. For UDP multicast traffic
+ ``pingpong=1`` should only be set for a single reader when multiple readers
+ are listening to the same address.
+
+.. option:: window_size : [net]
+
+ Set the desired socket buffer size for the connection.
+
+.. option:: mss : [net]
+
+ Set the TCP maximum segment size (TCP_MAXSEG).
+
+.. option:: donorname=str : [e4defrag]
+
+	File will be used as a block donor (swap extents between files).
+
+.. option:: inplace=int : [e4defrag]
+
+ Configure donor file blocks allocation strategy:
+
+ **0**
+ Default. Preallocate donor's file on init.
+ **1**
+ Allocate space immediately inside defragment event, and free right
+ after event.
+
+.. option:: clustername=str : [rbd]
+
+ Specifies the name of the Ceph cluster.
+
+.. option:: rbdname=str : [rbd]
+
+ Specifies the name of the RBD.
+
+.. option:: pool=str : [rbd]
+
+ Specifies the name of the Ceph pool containing RBD.
+
+.. option:: clientname=str : [rbd]
+
+ Specifies the username (without the 'client.' prefix) used to access the
+ Ceph cluster. If the *clustername* is specified, the *clientname* shall be
+ the full *type.id* string. If no type. prefix is given, fio will add
+ 'client.' by default.
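+
+	A sketch of an rbd job using these options (pool, image, and client names
+	are placeholders for your own cluster's)::
+
+		[rbd-test]
+		ioengine=rbd
+		clientname=admin
+		pool=rbd
+		rbdname=fio_test
+		rw=randwrite
+		bs=4k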
+
+.. option:: skip_bad=bool : [mtd]
+
+ Skip operations against known bad blocks.
+
+.. option:: hdfsdirectory : [libhdfs]
+
+	libhdfs will create chunks in this HDFS directory.
+
+.. option:: chunk_size : [libhdfs]
+
+	The size of the chunk to use for each file.
+
+
+I/O depth
+~~~~~~~~~
+
+.. option:: iodepth=int
+
+ Number of I/O units to keep in flight against the file. Note that
+ increasing *iodepth* beyond 1 will not affect synchronous ioengines (except
+ for small degrees when :option:`verify_async` is in use). Even async
+ engines may impose OS restrictions causing the desired depth not to be
+ achieved. This may happen on Linux when using libaio and not setting
+ :option:`direct` =1, since buffered I/O is not async on that OS. Keep an
+ eye on the I/O depth distribution in the fio output to verify that the
+ achieved depth is as expected. Default: 1.
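+
+	For example, an asynchronous random read job that can actually sustain a
+	deeper queue might look like this (block size, depth, and size are
+	illustrative)::
+
+		[deep-reads]
+		ioengine=libaio
+		direct=1
+		rw=randread
+		bs=4k
+		iodepth=32
+		size=1g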
+
+.. option:: iodepth_batch_submit=int, iodepth_batch=int
+
+	This defines how many pieces of I/O to submit at once. It defaults to 1
+	which means that we submit each I/O as soon as it is available, but can be
+	raised to submit bigger batches of I/O at a time. If it is set to 0 the
+	:option:`iodepth` value will be used.
+
+.. option:: iodepth_batch_complete_min=int, iodepth_batch_complete=int
+
+ This defines how many pieces of I/O to retrieve at once. It defaults to 1
+ which means that we'll ask for a minimum of 1 I/O in the retrieval process
+ from the kernel. The I/O retrieval will go on until we hit the limit set by
+ :option:`iodepth_low`. If this variable is set to 0, then fio will always
+ check for completed events before queuing more I/O. This helps reduce I/O
+ latency, at the cost of more retrieval system calls.
+
+.. option:: iodepth_batch_complete_max=int
+
+	This defines the maximum number of pieces of I/O to retrieve at once. This
+	variable should be used along with :option:`iodepth_batch_complete_min`,
+	specifying the range of minimum and maximum amounts of I/O that should be
+	retrieved. By default it is equal to the
+	:option:`iodepth_batch_complete_min` value.
+
+ Example #1::
+
+ iodepth_batch_complete_min=1
+ iodepth_batch_complete_max=<iodepth>
+
+	which means that we will retrieve at least 1 I/O and up to the whole
+	submitted queue depth. If no I/O has completed yet, we will wait.
+
+ Example #2::
+
+ iodepth_batch_complete_min=0
+ iodepth_batch_complete_max=<iodepth>
+
+	which means that we can retrieve up to the whole submitted queue depth, but
+	if no I/O has completed yet, we will NOT wait and will immediately exit the
+	system call. In this example we simply do polling.
+
+.. option:: iodepth_low=int
+
+ The low water mark indicating when to start filling the queue
+ again. Defaults to the same as :option:`iodepth`, meaning that fio will
+ attempt to keep the queue full at all times. If :option:`iodepth` is set to
+ e.g. 16 and *iodepth_low* is set to 4, then after fio has filled the queue of
+ 16 requests, it will let the depth drain down to 4 before starting to fill
+ it again.
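+
+	The scenario described above, expressed as job options::
+
+		iodepth=16
+		iodepth_low=4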
+
+.. option:: io_submit_mode=str
+
+ This option controls how fio submits the I/O to the I/O engine. The default
+ is `inline`, which means that the fio job threads submit and reap I/O
+ directly. If set to `offload`, the job threads will offload I/O submission
+ to a dedicated pool of I/O threads. This requires some coordination and thus
+ has a bit of extra overhead, especially for lower queue depth I/O where it
+ can increase latencies. The benefit is that fio can manage submission rates
+	independently of the device completion rates. This avoids skewed latency
+	reporting if I/O gets backed up on the device side (the coordinated
+	omission problem).
+
+
+I/O rate
+~~~~~~~~
+
+.. option:: thinktime=time
+
+ Stall the job for the specified period of time after an I/O has completed before issuing the
+ next. May be used to simulate processing being done by an application.
+ When the unit is omitted, the value is given in microseconds. See
+ :option:`thinktime_blocks` and :option:`thinktime_spin`.
+
+.. option:: thinktime_spin=time
+
+ Only valid if :option:`thinktime` is set - pretend to spend CPU time doing
+ something with the data received, before falling back to sleeping for the
+ rest of the period specified by :option:`thinktime`. When the unit is
+ omitted, the value is given in microseconds.
+
+.. option:: thinktime_blocks=int
+
+ Only valid if :option:`thinktime` is set - control how many blocks to issue,
+ before waiting `thinktime` usecs. If not set, defaults to 1 which will make
+ fio wait `thinktime` usecs after every block. This effectively makes any
+ queue depth setting redundant, since no more than 1 I/O will be queued
+ before we have to complete it and do our thinktime. In other words, this
+ setting effectively caps the queue depth if the latter is larger.
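+
+	As an illustration (values chosen arbitrarily), the following issues 8
+	blocks, then spins for 50 microseconds and sleeps for the remaining 950::
+
+		thinktime=1000
+		thinktime_spin=50
+		thinktime_blocks=8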
+
+.. option:: rate=int[,int][,int]
+
+ Cap the bandwidth used by this job. The number is in bytes/sec, the normal
+ suffix rules apply. Comma-separated values may be specified for reads,
+ writes, and trims as described in :option:`blocksize`.
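+
+	For example::
+
+		rate=1m,500k
+
+	would cap reads at 1MiB/sec and writes at 500KiB/sec, leaving trims
+	unlimited.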
+
+.. option:: rate_min=int[,int][,int]
+
+ Tell fio to do whatever it can to maintain at least this bandwidth. Failing
+ to meet this requirement will cause the job to exit. Comma-separated values
+ may be specified for reads, writes, and trims as described in
+ :option:`blocksize`.
+
+.. option:: rate_iops=int[,int][,int]
+
+ Cap the bandwidth to this number of IOPS. Basically the same as
+ :option:`rate`, just specified independently of bandwidth. If the job is
+ given a block size range instead of a fixed value, the smallest block size
+ is used as the metric. Comma-separated values may be specified for reads,
+ writes, and trims as described in :option:`blocksize`.
+
+.. option:: rate_iops_min=int[,int][,int]
+
+ If fio doesn't meet this rate of I/O, it will cause the job to exit.
+ Comma-separated values may be specified for reads, writes, and trims as
+ described in :option:`blocksize`.
+
+.. option:: rate_process=str
+
+	This option controls how fio manages rated I/O submissions. The default is
+	`linear`, which submits I/O in a linear fashion with fixed delays between
+	I/Os that get adjusted based on I/O completion rates. If this is set to
+	`poisson`, fio will submit I/O based on a more real world random request
+	flow, known as the Poisson process
+	(https://en.wikipedia.org/wiki/Poisson_point_process). The lambda will be
+	10^6 / IOPS for the given workload.
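+
+	A sketch combining an IOPS cap with Poisson arrivals (numbers arbitrary)::
+
+		rate_iops=1000
+		rate_process=poisson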
+
+
+I/O latency
+~~~~~~~~~~~
+
+.. option:: latency_target=time
+
+ If set, fio will attempt to find the max performance point that the given
+ workload will run at while maintaining a latency below this target. When
+ the unit is omitted, the value is given in microseconds. See
+ :option:`latency_window` and :option:`latency_percentile`.
+
+.. option:: latency_window=time
+
+ Used with :option:`latency_target` to specify the sample window that the job
+ is run at varying queue depths to test the performance. When the unit is
+ omitted, the value is given in microseconds.
+
+.. option:: latency_percentile=float
+
+	The percentage of I/Os that must fall within the criteria specified by
+	:option:`latency_target` and :option:`latency_window`. If not set, this
+	defaults to 100.0, meaning that all I/Os must be equal to or below the
+	value set by :option:`latency_target`.
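+
+	A sketch tying the three options together (targets are arbitrary): find
+	the highest performance point at which 99.9% of I/Os complete within
+	10ms, sampled over 5-second windows::
+
+		latency_target=10ms
+		latency_window=5s
+		latency_percentile=99.9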
+
+.. option:: max_latency=time
+
+ If set, fio will exit the job with an ETIMEDOUT error if it exceeds this
+ maximum latency. When the unit is omitted, the value is given in
+ microseconds.
+
+.. option:: rate_cycle=int
+
+ Average bandwidth for :option:`rate` and :option:`rate_min` over this number
+ of milliseconds.
+
+
+I/O replay
+~~~~~~~~~~
+
+.. option:: write_iolog=str
+
+ Write the issued I/O patterns to the specified file. See
+ :option:`read_iolog`. Specify a separate file for each job, otherwise the
+ iologs will be interspersed and the file may be corrupt.
+
+.. option:: read_iolog=str
+
+ Open an iolog with the specified file name and replay the I/O patterns it
+ contains. This can be used to store a workload and replay it sometime
+ later. The iolog given may also be a blktrace binary file, which allows fio
+ to replay a workload captured by :command:`blktrace`. See
+ :manpage:`blktrace(8)` for how to capture such logging data. For blktrace
+ replay, the file needs to be turned into a blkparse binary data file first
+ (``blkparse <device> -o /dev/null -d file_for_fio.bin``).
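+
+	A sketch of the capture-and-replay flow (device and file names are
+	illustrative)::
+
+		# capture a trace of /dev/sdb, then convert it for fio
+		blktrace -d /dev/sdb -o trace
+		blkparse trace -o /dev/null -d trace.bin
+
+		# replay the captured workload
+		fio --name=replay --read_iolog=trace.bin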
+
+.. option:: replay_no_stall=int
+
+	When replaying I/O with :option:`read_iolog` the default behavior is to
+	attempt to respect the time stamps within the log and replay them with the
+	appropriate delay between I/Os. By setting this variable fio will not
+	respect the timestamps and will attempt to replay them as fast as possible
+	while still respecting ordering. The result is the same I/O pattern to a
+	given device, but different timings.
+
+.. option:: replay_redirect=str
+
+	While replaying I/O patterns using :option:`read_iolog` the default behavior
+	is to replay the I/Os onto the major/minor device that each I/O was
+	recorded from. This is sometimes undesirable because on a different machine
+	those major/minor numbers can map to a different device. Changing hardware
+	on the same system can also result in a different major/minor mapping.
+	``replay_redirect`` causes all I/Os to be replayed onto the single specified
+	device regardless of the device they were recorded from. For example,
+	:option:`replay_redirect` = :file:`/dev/sdc` would cause all I/O
+ in the blktrace or iolog to be replayed onto :file:`/dev/sdc`. This means
+ multiple devices will be replayed onto a single device, if the trace
+ contains multiple devices. If you want multiple devices to be replayed
+ concurrently to multiple redirected devices you must blkparse your trace
+ into separate traces and replay them with independent fio invocations.
+ Unfortunately this also breaks the strict time ordering between multiple
+ device accesses.
+
+.. option:: replay_align=int