section residing above it. If the first character in a line is a ';' or a
'#', the entire line is discarded as a comment.
-So lets look at a really simple job file that define to threads, each
+So let's look at a really simple job file that defines two processes, each
randomly reading from a 128MiB file.
; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]
; -- end job file --

Or, equivalently, on the command line:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2
-Lets look at an example that have a number of processes writing randomly
+Let's look at an example that has a number of processes writing randomly
to files.
; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4
; -- end job file --

Or, equivalently, on the command line:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
+fio also supports environment variable expansion in job files. Any
+substring of the form "${VARNAME}" as part of an option value (in other
+words, on the right of the `='), will be expanded to the value of the
+environment variable called VARNAME. If no such environment variable
+is defined, or VARNAME is the empty string, the empty string will be
+substituted.
+
+As an example, let's look at a sample fio invocation and job file:
+
+$ SIZE=64m NUMJOBS=4 fio jobfile.fio
+
+; -- start job file --
+[random-writers]
+rw=randwrite
+size=${SIZE}
+numjobs=${NUMJOBS}
+; -- end job file --
+
+This will expand to the following equivalent job file at runtime:
+
+; -- start job file --
+[random-writers]
+rw=randwrite
+size=64m
+numjobs=4
+; -- end job file --
+
fio ships with a few example job files; you can also look there for
inspiration.
thread number, and file number. If you want to share
files between threads in a job or several jobs, specify
a filename for each of them to override the default. If
- the ioengine used is 'net', the filename is the host and
- port to connect to in the format of =host/port. If the
- ioengine is file based, you can specify a number of files
- by separating the names with a ':' colon. So if you wanted
- a job to open /dev/sda and /dev/sdb as the two working files,
- you would use filename=/dev/sda:/dev/sdb. '-' is a reserved
- name, meaning stdin or stdout. Which of the two depends
- on the read/write direction set.
+ the ioengine used is 'net', the filename is the host, port,
+ and protocol to use in the format of =host/port/protocol.
+ See ioengine=net for more. If the ioengine is file based, you
+ can specify a number of files by separating the names with a
+ ':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
+ as the two working files, you would use
+ filename=/dev/sda:/dev/sdb. '-' is a reserved name, meaning
+ stdin or stdout. Which of the two depends on the read/write
+ direction set.
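+
+		As an illustrative sketch (the device names and section
+		name are examples only), a job using two block devices
+		as its working files might look like:
+
+		; -- start job file --
+		[two-devices]
+		filename=/dev/sda:/dev/sdb
+		rw=randread
+		; -- end job file --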
opendir=str Tell fio to recursively add any file it can find in this
directory and down the file system tree.
vsync Basic readv(2) or writev(2) IO.
- libaio Linux native asynchronous io.
+ libaio Linux native asynchronous io. Note that Linux
+ may only support queued behaviour with
+ non-buffered IO (set direct=1 or buffered=0).
posixaio glibc posix asynchronous io.
net Transfer over the network to given host:port.
'filename' must be set appropriately to
- filename=host/port regardless of send
+ filename=host/port/protocol regardless of send
			or receive; if the latter, only the port
- argument is used.
+ argument is used. 'host' may be an IP address
+ or hostname, port is the port number to be used,
+ and protocol may be 'udp' or 'tcp'. If no
+ protocol is given, TCP is used.
netsplice Like net, but uses splice/vmsplice to
map data and send/receive.
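			As a sketch of the 'net' engine filename format
			described above ('localhost' and port 8888 are
			placeholder values), a transmitting job might be:

			; -- start job file --
			[net-send]
			ioengine=net
			filename=localhost/8888/tcp
			rw=write
			size=16m
			; -- end job file --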
the allowed CPUs to be 1 and 5, you would pass the decimal
value of (1 << 1 | 1 << 5), or 34. See man
sched_setaffinity(2). This may not work on all supported
- operating systems or kernel versions.
+ operating systems or kernel versions. This option doesn't
+ work well for a higher CPU count than what you can store in
+ an integer mask, so it can only control cpus 1-32. For
+ boxes with larger CPU counts, use cpus_allowed.
cpus_allowed=str Controls the same options as cpumask, but it allows a text
setting of the permitted CPUs instead. So to use CPUs 1 and
- 5, you would specify cpus_allowed=1,5.
+		5, you would specify cpus_allowed=1,5. This option also
+		allows a range of CPUs. Say you wanted a binding to CPUs
+		1, 5, and 8-15, you would set cpus_allowed=1,5,8-15.
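+
+		As a sketch, a job section pinned to those CPUs (the
+		section name is illustrative):
+
+		[pinned]
+		cpus_allowed=1,5,8-15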
startdelay=time Start this job the specified number of seconds after fio
has started. Only useful if the job file contains several
ramp_time=time If set, fio will run the specified workload for this amount
of time before logging any performance numbers. Useful for
letting performance settle before logging results, thus
- minimizing the runtime required for stable results.
+		minimizing the runtime required for stable results. Note
+		that the ramp_time is considered lead-in time for a job,
+		thus it will increase the total runtime if a special timeout
+		or runtime is specified.
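+
+		As a sketch, a job that lets performance settle for 10
+		seconds before logging 60 seconds of results (total wall
+		time roughly 70 seconds, since ramp_time adds to the
+		runtime):
+
+		[settled]
+		rw=randwrite
+		ramp_time=10
+		runtime=60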
invalidate=bool Invalidate the buffer/page cache parts for this file prior
to starting io. Defaults to true.
		the file needs to be turned into a blkparse binary data
		file first (blkparse <device> -d file_for_fio.bin).
-write_bw_log If given, write a bandwidth log of the jobs in this job
+write_bw_log=str If given, write a bandwidth log of the jobs in this job
file. Can be used to store data of the bandwidth of the
jobs in their lifetime. The included fio_generate_plots
script uses gnuplot to turn these text files into nice
- graphs.
+		graphs. See write_lat_log for behaviour of given
+		filename. For this option, the postfix is _bw.log.
+
-write_lat_log	Same as write_bw_log, except that this option stores io
-		completion latencies instead.
+write_lat_log=str Same as write_bw_log, except that this option stores io
+		completion latencies instead. If no filename is given
+		with this option, the default filename of "jobname_type.log"
+		is used. Even if the filename is given, fio will still
+		append the type of log. So if one specifies
+
+		write_lat_log=foo
+
+		The actual log names will be foo_clat.log and foo_slat.log.
+		This helps fio_generate_plots find the logs automatically.
lockmem=siint Pin down the specified amount of memory with mlock(2). Can
potentially be used instead of removing memory or booting
disk_util=bool Generate disk utilization statistics, if the platform
supports it. Defaults to on.
+disable_clat=bool Disable measurements of completion latency numbers. Useful
+ only for cutting back the number of calls to gettimeofday,
+ as that does impact performance at really high IOPS rates.
+ Note that to really get rid of a large amount of these
+ calls, this option must be used with disable_slat and
+ disable_bw as well.
+
+disable_slat=bool Disable measurements of submission latency numbers. See
+ disable_clat.
+
+disable_bw=bool Disable measurements of throughput/bandwidth numbers. See
+ disable_clat.
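+
+		As a sketch, a job cutting back gettimeofday() calls by
+		disabling all three measurement types (the section name
+		is illustrative):
+
+		[high-iops]
+		rw=randread
+		disable_clat=1
+		disable_slat=1
+		disable_bw=1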
+
+gtod_reduce=bool Enable all of the gettimeofday() reducing options
+ (disable_clat, disable_slat, disable_bw) plus reduce
+ precision of the timeout somewhat to really shrink
+ the gettimeofday() call count. With this option enabled,
+ we only do about 0.4% of the gtod() calls we would have
+ done if all time keeping was enabled.
+
+gtod_cpu=int Sometimes it's cheaper to dedicate a single thread of
+ execution to just getting the current time. Fio (and
+ databases, for instance) are very intensive on gettimeofday()
+ calls. With this option, you can set one CPU aside for
+ doing nothing but logging current time to a shared memory
+ location. Then the other threads/processes that run IO
+ workloads need only copy that segment, instead of entering
+ the kernel with a gettimeofday() call. The CPU set aside
+ for doing these time calls will be excluded from other
+ uses. Fio will manually clear it from the CPU mask of other
+ jobs.
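+
+		As a sketch, setting aside CPU 0 for timekeeping on a
+		multi-CPU box (the CPU number is an example):
+
+		[global]
+		gtod_cpu=0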
+
6.0 Interpreting the output
---------------------------
client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000;0;0;0.000000;0.000000;929380;1152890;25.510151%;1078276.333333;128948.113404;0;0;0;0;0;0.000000;0.000000;0;0;0.000000;0.000000;0;0;0.000000%;0.000000;0.000000;100.000000%;0.000000%;324;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%
;0.0%;0.0%;0.0%;0.0%;0.0%
+To enable terse output, use the --minimal command line option.
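+For example (the job file name is illustrative):
+
+$ fio --minimal jobfile.fio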
+
Split up, the format is as follows:
jobname, groupid, error