This might change in the future if I opt for an autoconf type setup.
-Options
--------
+Command line
+------------
$ fio
- -s IO is sequential
- -b block size in KiB for each io
-t <sec> Runtime in seconds
- -r For random io, sequence must be repeatable
- -R <on> If one thread fails to meet rate, quit all
- -o <on> Use direct IO is 1, buffered if 0
-l Generate per-job latency logs
-w Generate per-job bandwidth logs
- -f <file> Read <file> for job descriptions
- -O <file> Log output to file
+ -o <file> Log output to file
+ -m Minimal (terse) output
-h Print help info
-v Print version information and exit
+Any parameters following the options will be assumed to be job files.
+You can add as many as you want; each job file will be regarded as a
+separate group, and fio will stonewall its execution.
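Putting the options together, an invocation might look like the sketch below (the job file names are invented for illustration):

```shell
# Log output to fio.log; job1.fio and job2.fio are run as separate,
# stonewalled groups (file names here are hypothetical examples).
fio -o fio.log job1.fio job2.fio

# Same jobs, but with minimal (terse) output for scripted use:
fio -m job1.fio job2.fio
```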
+
Job file
--------
be used if they don't add up to 100%.
rwmixwrite=x 'x' percentage of rw mix ios will be writes. See
rwmixread.
+ rand_repeatable=x The sequence of random io blocks can be repeatable
+ across runs, if 'x' is 1.
size=x Set file size to x bytes (x string can include k/m/g)
ioengine=x 'x' may be: aio/libaio/linuxaio for Linux aio,
posixaio for POSIX aio, sync for regular read/write io,
usb-storage or sata/libata driven) devices.
iodepth=x For async io, allow 'x' ios in flight
overwrite=x If 'x', layout a write file first.
+ nrfiles=x Spread io load over 'x' number of files per job,
+ if possible.
prio=x Run io at prio X, 0-7 is the kernel allowed range
prioclass=x Run io at prio class X
bs=x Use 'x' for thread blocksize. May include k/m postfix.
cpumask=x Only allow job to run on CPUs defined by mask.
fsync=x If writing, fsync after every x blocks have been written
startdelay=x Start this thread x seconds after startup
- timeout=x Terminate x seconds after startup
+ timeout=x Terminate x seconds after startup. Can include a
+ normal time suffix if not given in seconds, such as
+ 'm' for minutes, 'h' for hours, and 'd' for days.
offset=x Start io at offset x (x string can include k/m/g)
invalidate=x Invalidate page cache for file prior to doing io
sync=x Use sync writes if x and writing
bwavgtime=x Average bandwidth stats over an x msec window.
create_serialize=x If 'x', serialize file creation.
create_fsync=x If 'x', run fsync() after file creation.
+ unlink If set, unlink files when done.
end_fsync=x If 'x', run fsync() after end-of-job.
loops=x Run the job 'x' number of times.
verify=x If 'x' == md5, use md5 for verifies. If 'x' == crc32,
exec_prerun=x Run 'x' before job io is begun.
exec_postrun=x Run 'x' after job io has finished.
ioscheduler=x Use ioscheduler 'x' for this job.
+ cpuload=x For a CPU io thread, percentage of CPU time to attempt
+ to burn.
+ cpuchunks=x Split burn cycles into pieces of x.
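Many of the parameters above are typically combined in a job file. The fragment below is a hypothetical sketch (job names and values invented; [global]/[jobname] section syntax as used by fio job files):

```ini
; Defaults inherited by every job below (values are illustrative)
[global]
ioengine=sync
size=64m
bs=4k
timeout=1m

; A sequential writer that fsyncs the file when the job ends
[seq-write]
rw=write
end_fsync=1

; A random reader running at a higher io priority
[rand-read]
rw=randread
prio=2
```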
Examples using a job file
fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:
-Threads now running: 2 : [ww] [5.73% done]
+Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]
The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:
The other values are fairly self explanatory - number of threads
currently running and doing io, and the estimated completion percentage
-and time.
+and time for the running group. The runtime of any following groups
+cannot be estimated, since their jobs have not yet started.
When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
busy constantly, 50% would be a disk idling half of the time.
+Terse output
+------------
+
+For scripted usage, where you typically want to generate tables or graphs
+of the results, fio can output them in a comma-separated format.
+The format is one long line of values, such as:
+
+client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109
+
+Split up, the format is as follows:
+
+ jobname, groupid, error
+ READ status:
+ KiB IO, bandwidth (KiB/sec), runtime (msec)
+ Submission latency: min, max, mean, deviation
+ Completion latency: min, max, mean, deviation
+	Bw: min, max, aggregate percentage of total, mean, deviation
+ WRITE status:
+ KiB IO, bandwidth (KiB/sec), runtime (msec)
+ Submission latency: min, max, mean, deviation
+ Completion latency: min, max, mean, deviation
+	Bw: min, max, aggregate percentage of total, mean, deviation
+ CPU usage: user, system, context switches
+
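The field layout above can be split back out with a small script. The sketch below (plain Python; the field names are my own labels, not fio's) parses the example line into the documented groups:

```python
# Parse fio's terse (comma-separated) output into labeled fields.
# Layout per the README: 3 job fields, 16 fields for each of the
# READ and WRITE blocks, then 3 CPU fields -> 38 values total.

TERSE = ("client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,"
         "22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,"
         "3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,"
         "21.436952,0,2534,55.465300%,1406.600000,2008.044216,"
         "0.000000%,0.431928%,1109")

def parse_terse(line):
    f = line.split(",")
    assert len(f) == 38, "unexpected field count"

    def io_block(i):
        # One READ or WRITE block starting at index i.
        return {
            "kib_io": int(f[i]),
            "bw_kib_s": int(f[i + 1]),
            "runtime_ms": int(f[i + 2]),
            "slat": tuple(float(x) for x in f[i + 3:i + 7]),   # min, max, mean, dev
            "clat": tuple(float(x) for x in f[i + 7:i + 11]),  # min, max, mean, dev
            "bw_min": int(f[i + 11]),
            "bw_max": int(f[i + 12]),
            "bw_agg_pct": float(f[i + 13].rstrip("%")),
            "bw_mean": float(f[i + 14]),
            "bw_dev": float(f[i + 15]),
        }

    return {
        "jobname": f[0],
        "groupid": int(f[1]),
        "error": int(f[2]),
        "read": io_block(3),
        "write": io_block(19),
        "cpu_user_pct": float(f[35].rstrip("%")),
        "cpu_sys_pct": float(f[36].rstrip("%")),
        "ctx_switches": int(f[37]),
    }

r = parse_terse(TERSE)
print(r["jobname"], r["read"]["bw_kib_s"], r["write"]["bw_kib_s"], r["ctx_switches"])
```

Running it on the example line prints the job name, read and write bandwidth, and context switch count.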
+
Author
------
-Fio was written by Jens Axboe <axboe@suse.de> to enable flexible testing
+Fio was written by Jens Axboe <axboe@kernel.dk> to enable flexible testing
of the Linux IO subsystem and schedulers. He got tired of writing
specific test applications to simulate a given workload, and found that
the existing io benchmark/test tools out there weren't flexible enough
to do what he wanted.
-Jens Axboe <axboe@suse.de> 20060609
+Jens Axboe <axboe@kernel.dk> 20060905