'#', the entire line is discarded as a comment.
So let's look at a really simple job file that defines two processes, each
-randomly reading from a 128MiB file.
+randomly reading from a 128MB file.
; -- start job file --
[global]
Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
-increased the buffer size used to 32KiB and define numjobs to 4 to
+increased the buffer size used to 32KB and define numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
-to their own 64MiB file. Instead of using the above job file, you could
+to their own 64MB file. Instead of using the above job file, you could
have given the parameters on the command line. For this case, you would
specify:
$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
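Written as a job file, the same parameters would look like the following; this is a sketch reconstructed from the command line above, with each option mapping directly to a parameter:

```ini
; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4
; -- end job file --
```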
+4.1 Environment variables
+-------------------------
+
fio also supports environment variable expansion in job files. Any
substring of the form "${VARNAME}" as part of an option value (in other
words, on the right of the `='), will be expanded to the value of the
fio ships with a few example job files, you can also look there for
inspiration.
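As a sketch of the expansion rule above (the job and variable names here are illustrative, not part of fio):

```ini
; -- start job file --
[random-read]
rw=randread
size=${SIZE}
; -- end job file --
```

Invoked as `SIZE=64m fio jobfile`, the ${SIZE} on the right of the `=' would be replaced with 64m before the option is parsed.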
+4.2 Reserved keywords
+---------------------
+
+Additionally, fio has a set of reserved keywords that will be replaced
+internally with the appropriate value. Those keywords are:
+
+$pagesize The architecture page size of the running system
+$mb_memory Megabytes of total memory in the system
+$ncpus Number of online available CPUs
+
+These can be used on the command line or in the job file, and will be
+automatically substituted with the current system values when the job
+is run.
+
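For instance, the keywords above could be used to scale a job to the running system; a sketch, with an illustrative job name:

```ini
; -- start job file --
; one process per online CPU, block size equal to the system page size
[per-cpu]
numjobs=$ncpus
bs=$pagesize
rw=randread
size=32m
; -- end job file --
```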
5.0 Detailed list of parameters
-------------------------------
a string. The following types are used:
str String. This is a sequence of alpha characters.
-time Integer with possible time postfix. In seconds unless otherwise
+time Integer with possible time suffix. In seconds unless otherwise
specified, use eg 10m for 10 minutes. Accepts s/m/h for seconds,
minutes, and hours.
-int SI integer. A whole number value, which may contain a postfix
- describing the base of the number. Accepted postfixes are k/m/g,
- meaning kilo, mega, and giga. So if you want to specify 4096,
- you could either write out '4096' or just give 4k. The postfixes
- signify base 2 values, so 1024 is 1k and 1024k is 1m and so on.
- If the option accepts an upper and lower range, use a colon ':'
- or minus '-' to separate such values. May also include a prefix
- to indicate numbers base. If 0x is used, the number is assumed to
- be hexadecimal. See irange.
+int SI integer. A whole number value, which may contain a suffix
+ describing the base of the number. Accepted suffixes are k/m/g/t/p,
+ meaning kilo, mega, giga, tera, and peta. The suffix is not case
+ sensitive. So if you want to specify 4096, you could either write
+ out '4096' or just give 4k. The suffixes signify base 2 values, so
+ 1024 is 1k and 1024k is 1m and so on. If the option accepts an upper
+ and lower range, use a colon ':' or minus '-' to separate such values.
+	may also include a prefix to indicate the number's base. If 0x is used,
+ the number is assumed to be hexadecimal. See irange.
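A few equivalent spellings of the same value, following the rules above (sketch):

```ini
bs=4096     ; plain integer
bs=4k       ; base-2 suffix: 4*1024
bs=4K       ; suffixes are not case sensitive
bs=0x1000   ; 0x prefix: hexadecimal, also 4096
```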
bool Boolean. Usually parsed as an integer, however only defined for
true and false (1 and 0).
-irange Integer range with postfix. Allows value range to be given, such
+irange Integer range with suffix. Allows value range to be given, such
as 1024-4096. A colon may also be used as the separator, eg
1k:4k. If the option allows two sets of ranges, they can be
specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
can specify a number of files by separating the names with a
':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
as the two working files, you would use
- filename=/dev/sda:/dev/sdb. '-' is a reserved name, meaning
- stdin or stdout. Which of the two depends on the read/write
+		filename=/dev/sda:/dev/sdb. If the filename needs to
+		include a colon, escape it with a '\' character. For
+ instance, if the filename is "/dev/dsk/foo@3,0:c", then you would
+ use filename="/dev/dsk/foo@3,0\:c". '-' is a reserved name,
+ meaning stdin or stdout. Which of the two depends on the read/write
direction set.
opendir=str Tell fio to recursively add any file it can find in this
IO's, instead of for every IO. Use rw=randread:8 to specify
that.
+kb_base=int	The base unit for a kilobyte. The de facto base is 2^10, 1024.
+		Storage manufacturers like to use 10^3 or 1000 as a base
+		ten unit instead, for obvious reasons. Allowed values are
+		1024 or 1000, with 1024 being the default.
+
randrepeat=bool For random IO workloads, seed the generator in a predictable
way so that results are repeatable across repetitions.
fill_device=bool Sets size to something really large and waits for ENOSPC (no
space left on device) as the terminating condition. Only makes
- sense with sequential write.
+		sense with sequential write. For a read workload, the mount
+		point will be filled first, then IO started on the result.
blocksize=int
bs=int The block size used for the io units. Defaults to 4k. Values
not sync the file. The exception is the sg io engine, which
synchronizes the disk cache anyway.
+fdatasync=int	Like fsync= but uses fdatasync() to only sync data and not
+ metadata blocks.
+
overwrite=bool If true, writes to a file will always overwrite existing
data. If the file doesn't already exist, it will be
created before the write phase begins. If the file exists
rwmixwrite=int How large a percentage of the mix should be writes. If both
	rwmixread and rwmixwrite are given and the values do not add
up to 100%, the latter of the two will be used to override
- the first.
+ the first. This may interfere with a given rate setting,
+ if fio is asked to limit reads or writes to a certain rate.
+ If that is the case, then the distribution may be skewed.
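To sketch the override rule above with hypothetical values:

```ini
; 60 + 50 does not add up to 100%; the latter option wins,
; so the effective mix is 50% writes and 50% reads.
rwmixread=60
rwmixwrite=50
```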
norandommap Normally fio will cover every block of the file when doing
random IO. If this option is given, fio will just get a
after every block.
rate=int Cap the bandwidth used by this job. The number is in bytes/sec,
- the normal postfix rules apply. You can use rate=500k to limit
+ the normal suffix rules apply. You can use rate=500k to limit
reads and writes to 500k each, or you can specify read and
writes separately. Using rate=1m,500k would limit reads to
1MB/sec and writes to 500KB/sec. Capping only reads or
that for shmhuge and mmaphuge to work, the system must have
free huge pages allocated. This can normally be checked
and set by reading/writing /proc/sys/vm/nr_hugepages on a
- Linux system. Fio assumes a huge page is 4MiB in size. So
+ Linux system. Fio assumes a huge page is 4MB in size. So
to calculate the number of huge pages you need for a given
job file, add up the io depth of all jobs (normally one unless
iodepth= is used) and multiply by the maximum bs set. Then
location should point there. So if it's mounted in /huge,
you would use mem=mmaphuge:/huge/somefile.
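The huge page calculation described above can be sketched numerically; the job values here are hypothetical, and the 4MB page size is the one fio assumes:

```shell
# Hypothetical job file: 4 jobs, each with iodepth=8 and a maximum
# bs of 1m. Total buffer memory divided by the huge page size gives
# the number of huge pages needed (before adding any slack).
jobs=4
iodepth=8
bs=$((1024 * 1024))            # 1m
hugepage=$((4 * 1024 * 1024))  # fio's assumed 4MB huge page
pages=$(( jobs * iodepth * bs / hugepage ))
echo "$pages"                  # prints 8
```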
+iomem_align=int	This indicates the memory alignment of the IO memory buffers.
+		Note that the given alignment is applied to the first IO unit
+		buffer; if using iodepth, the alignment of the following buffers
+		is given by the bs used. In other words, if using a bs that is
+		a multiple of the page size in the system, all buffers will
+		be aligned to this value. If using a bs that is not page
+		aligned, the alignment of subsequent IO memory buffers is the
+		sum of the iomem_align and bs used.
+
hugepage-size=int
Defines the size of a huge page. Must at least be equal
- to the system setting, see /proc/meminfo. Defaults to 4MiB.
+ to the system setting, see /proc/meminfo. Defaults to 4MB.
Should probably always be a multiple of megabytes, so using
hugepage-size=Xm is the preferred way to set this to avoid
	setting a bad, non-power-of-2 value.
pre_read=bool If this is given, files will be pre-read into memory before
starting the given IO operation. This will also clear
the 'invalidate' flag, since it is pointless to pre-read
- and then drop the cache.
+ and then drop the cache. This will only work for IO engines
+ that are seekable, since they allow you to read the same data
+ multiple times. Thus it will not work on eg network or splice
+ IO.
unlink=bool Unlink the job files when done. Not the default, as repeated
runs of that job would then waste time recreating the file
sha256 Use sha256 as the checksum function.
+ sha1 Use optimized sha1 as the checksum function.
+
meta Write extra information about each io
(timestamp, block number etc.). The block
number is verified.
This option can be used for repeated burn-in tests of a
system to make sure that the written data is also
- correctly read back.
+ correctly read back. If the data direction given is
+ a read or random read, fio will assume that it should
+ verify a previously written file. If the data direction
+ includes any form of write, the verify will be of the
+ newly written data.
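Per the description above, a burn-in test could first write checksummed data, then read it back in a later run; a sketch with illustrative sizes:

```ini
; -- start job file --
; write phase: store data with sha256 checksums
[burn-in]
rw=write
size=128m
verify=sha256
; a later run with rw=read and the same verify= setting would
; check the previously written file, per the rule above
; -- end job file --
```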
verifysort=bool If set, fio will sort written verify blocks when it deems
it faster to read them back in a sorted manner. This is
before quitting on a block verification failure. If this
option is set, fio will exit the job on the first observed
failure.
+
+verify_async=int Fio will normally verify IO inline from the submitting
+ thread. This option takes an integer describing how many
+ async offload threads to create for IO verification instead,
+ causing fio to offload the duty of verifying IO contents
+ to one or more separate threads. If using this offload
+ option, even sync IO engines can benefit from using an
+ iodepth setting higher than 1, as it allows them to have
+ IO in flight while verifies are running.
+
+verify_async_cpus=str Tell fio to set the given CPU affinity on the
+ async IO verification threads. See cpus_allowed for the
+ format used.
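A sketch combining the two options above (values are illustrative):

```ini
; offload checksum verification to 4 helper threads pinned to CPUs 0-3
verify=crc32
verify_async=4
verify_async_cpus=0-3
```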
stonewall	Wait for preceding jobs in the job file to exit, before
starting this one. Can be used to insert serialization
for doing these time calls will be excluded from other
uses. Fio will manually clear it from the CPU mask of other
jobs.
+continue_on_error=bool Normally fio will exit the job on the first observed
+ failure. If this option is set, fio will continue the job when
+ there is a 'non-fatal error' (EIO or EILSEQ) until the runtime
+ is exceeded or the I/O size specified is completed. If this
+ option is used, there are two more stats that are appended,
+ the total error count and the first error. The error field
+ given in the stats is the first error that was hit during the
+ run.
6.0 Interpreting the output
direction, the output looks like:
Client1 (g=0): err= 0:
- write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
+ write: io= 32MB, bw= 666KB/s, runt= 50320msec
slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92
clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82
- bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
+ bw (KB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
will look like this:
Run status group 0 (all jobs):
- READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
- WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
+ READ: io=64MB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
+ WRITE: io=64MB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
For each data direction, it prints:
jobname, groupid, error
READ status:
- KiB IO, bandwidth (KiB/sec), runtime (msec)
+ KB IO, bandwidth (KB/sec), runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation
WRITE status:
- KiB IO, bandwidth (KiB/sec), runtime (msec)
+ KB IO, bandwidth (KB/sec), runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation