bool	Boolean. Usually parsed as an integer, but only defined for
	true and false (1 and 0).
irange Integer range with postfix. Allows value range to be given, such
- as 1024-4096. Also see siint.
+	as 1024-4096. A colon may also be used as the separator, e.g.
+ 1k:4k. If the option allows two sets of ranges, they can be
+ specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
+ siint.
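As an illustration, a hypothetical job fragment using each of the irange
forms described above (bsrange is one option that takes an irange):

```ini
# made-up job fragment showing the irange forms
[range-example]
rw=randread
bsrange=1k-4k
# equivalently, with a colon separator:
# bsrange=1k:4k
# two sets of ranges (reads and writes), where the option allows it:
# bsrange=1k-4k/8k-32k
```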
With the above in mind, here follows the complete list of fio job
parameters.
special purpose of also signaling the start of a new
job.
+description=str Text description of the job. Doesn't do anything except
+ dump this text description when this job is run. It's
+ not parsed.
+
directory=str Prefix filenames with this directory. Used to places files
in a different location than "./".
filename=str Fio normally makes up a filename based on the job name,
thread number, and file number. If you want to share
files between threads in a job or several jobs, specify
- a filename for each of them to override the default.
+	a filename for each of them to override the default. If
+	the ioengine used is 'net', the filename is the host and
+	port to connect to, in the format host:port.
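For instance, a hypothetical job using the 'net' engine might look like
the following (host name and port are made-up values):

```ini
# made-up 'net' job; somehost:8888 is an example host:port pair
[netjob]
ioengine=net
filename=somehost:8888
rw=write
```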
rw=str Type of io pattern. Accepted values are:
For certain types of io the result may still be skewed a bit,
since the speed may be different.
+randrepeat=bool For random IO workloads, seed the generator in a predictable
+ way so that results are repeatable across repetitions.
+
size=siint The total size of file io for this job. This may describe
the size of the single file the job uses, or it may be
divided between the number of files in the job. If the
after a comma, it will apply to writes only. In other words,
the format is either bs=read_and_write or bs=read,write.
bs=4k,8k will thus use 4k blocks for reads, and 8k blocks
- for writes.
+ for writes. If you only wish to set the write size, you
+ can do so by passing an empty read size - bs=,8k will set
+ 8k for writes and leave the read default value.
bsrange=irange Instead of giving a single block size, specify a range
and fio will mix the issued io block sizes. The issued
we use read(2) and write(2) for asynchronous
io.
+ null Doesn't transfer any data, just pretends
+ to. This is mainly used to exercise fio
+ itself and for debugging/testing purposes.
+
+		net	Transfer over the network to given host:port.
+			'filename' must be set to host:port regardless
+			of whether the job sends or receives; when
+			receiving, only the port argument is used.
+
iodepth=int This defines how many io units to keep in flight against
the file. The default is 1 for each file defined in this
job, can be overridden with a larger value for higher
concurrency.
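A sketch of a job overriding the default depth, assuming an engine that
can make use of higher queue depths (the values here are made up):

```ini
# made-up job keeping 16 io units in flight
[depthjob]
ioengine=libaio
iodepth=16
rw=randread
```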
direct=bool If value is true, use non-buffered io. This is usually
- O_DIRECT. Defaults to true.
+ O_DIRECT.
+
+buffered=bool If value is true, use buffered io. This is the opposite
+ of the 'direct' option. Defaults to true.
offset=siint Start io at the given offset in the file. The data before
the given offset will not be touched. This effectively
thinktime=int Stall the job x microseconds after an io has completed before
issuing the next. May be used to simulate processing being
- done by an application.
+ done by an application. See thinktime_blocks.
+
+thinktime_blocks
+	Only valid if thinktime is set - controls how many blocks
+	to issue before waiting 'thinktime' usecs. If not set,
+ defaults to 1 which will make fio wait 'thinktime' usecs
+ after every block.
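Putting the two options together, a hypothetical job that issues 4 blocks
and then stalls for 1000 usecs before issuing the next batch:

```ini
# made-up job: 4 blocks per batch, 1000 usec stall between batches
[thinkjob]
rw=read
thinktime=1000
thinktime_blocks=4
```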
rate=int Cap the bandwidth used by this job to this number of KiB/sec.
jobs, and you want to delay starting some jobs to a certain
time.
-timeout=int Tell fio to terminate processing after the specified number
+runtime=int Tell fio to terminate processing after the specified number
of seconds. It can be quite hard to determine for how long
a specified job will run, so this parameter is handy to
cap the total runtime to a given time.
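For example, a hypothetical job capped at one minute of runtime:

```ini
# made-up job terminated after 60 seconds
[timedjob]
rw=write
runtime=60
```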
shm Use shared memory as the buffers. Allocated
through shmget(2).
- mmap Use anonymous memory maps as the buffers.
- Allocated through mmap(2).
+ shmhuge Same as shm, but use huge pages as backing.
+
+ mmap Use mmap to allocate buffers. May either be
+ anonymous memory, or can be file backed if
+ a filename is given after the option. The
+ format is mem=mmap:/path/to/file.
+
+ mmaphuge Use a memory mapped huge file as the buffer
+			backing. Append filename after mmaphuge, e.g.
+ mem=mmaphuge:/hugetlbfs/file
The area allocated is a function of the maximum allowed
- bs size for the job, multiplied by the io depth given.
+ bs size for the job, multiplied by the io depth given. Note
+ that for shmhuge and mmaphuge to work, the system must have
+ free huge pages allocated. This can normally be checked
+ and set by reading/writing /proc/sys/vm/nr_hugepages on a
+ Linux system. Fio assumes a huge page is 4MiB in size. So
+ to calculate the number of huge pages you need for a given
+ job file, add up the io depth of all jobs (normally one unless
+ iodepth= is used) and multiply by the maximum bs set. Then
+ divide that number by the huge page size. You can see the
+ size of the huge pages in /proc/meminfo. If no huge pages
+ are allocated by having a non-zero number in nr_hugepages,
+ using mmaphuge or shmhuge will fail. Also see hugepage-size.
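The huge page arithmetic described above can be sketched as follows; the
job values are made up for illustration, and the 4MiB page size is fio's
stated assumption:

```python
# Sketch of the huge page calculation described above.
# The job values below are made up for illustration.
HUGE_PAGE_SIZE = 4 * 1024 * 1024  # fio's assumed huge page size, 4MiB


def pages_needed(jobs):
    """jobs: list of (iodepth, max_bs_in_bytes) tuples, one per job."""
    total = sum(depth * max_bs for depth, max_bs in jobs)
    # round up to a whole number of huge pages
    return -(-total // HUGE_PAGE_SIZE)


# two jobs: iodepth 8 with 64KiB max bs, and iodepth 1 with 1MiB max bs
print(pages_needed([(8, 64 * 1024), (1, 1024 * 1024)]))  # -> 1
```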
+
+ mmaphuge also needs to have hugetlbfs mounted and the file
+ location should point there. So if it's mounted in /huge,
+ you would use mem=mmaphuge:/huge/somefile.
+
+hugepage-size=siint
+	Defines the size of a huge page. Must be at least equal
+	to the system setting, see /proc/meminfo. Defaults to 4MiB.
+	Should probably always be a multiple of megabytes, so using
+	hugepage-size=Xm is the preferred way to set this to avoid
+	setting a bad (non power-of-2) value.
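Putting the above together, a hypothetical job using huge page backed
mmap buffers; the /huge mount point and file name follow the example
given above, and assume hugetlbfs is mounted there:

```ini
# made-up job; assumes hugetlbfs is mounted on /huge
[hugejob]
mem=mmaphuge:/huge/somefile
hugepage-size=4m
rw=read
```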
exitall When one job finishes, terminate the rest. The default is
to wait for each job to finish, sometimes that is not the
create_fsync=bool fsync the data file after creation. This is the
default.
-unlink Unlink the job files when done. fio defaults to doing this,
- if it created the file itself.
+unlink=bool Unlink the job files when done. Not the default, as repeated
+ runs of that job would then waste time recreating the fileset
+ again and again.
loops=int Run the specified number of iterations of this job. Used
to repeat the same workload a given number of times. Defaults
fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:
-Threads running: 1: [_r] [24.79% done] [ 13509/ 8334 kb/s] [eta 00h:01m:31s]
+Threads: 1: [_r] [24.8% done] [ 13509/ 8334 kb/s] [eta 00h:01m:31s]
The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:
Client1 (g=0): err= 0:
write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
- slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92
- clat (msec): min= 0, max= 631, avg=48.50, dev=86.82
- bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68
+ slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92
+ clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82
+ bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
cpu : usr=1.49%, sys=0.25%, ctx=7969
+ IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
same disk, since they are then competing for disk access.
cpu= CPU usage. User and system time, along with the number
of context switches this thread went through.
+IO depths= The distribution of io depths over the job life time. The
+ numbers are divided into powers of 2, so for example the
+		16= entry includes depths up to that value but higher
+ than the previous entry. In other words, it covers the
+ range from 16 to 31.
After each client has been listed, the group statistics are printed. They
will look like this: