fio is a tool that will spawn a number of threads or processes doing a
particular type of io action as specified by the user. fio takes a
number of global parameters, each inherited by the thread unless
otherwise overridden by parameters given in the job description.
The typical use of fio is to write a job file matching the io load
one wants to simulate.
Source
------

fio resides in a git repo, the canonical place is:

git://brick.kernel.dk/data/git/fio.git

Snapshots are frequently generated and they include the git meta data as
well. You can download them here:

http://brick.kernel.dk/snaps/

Pascal Bleser <guru@unixtech.be> has fio RPMs in his repository, you
can find them here:

http://linux01.gwdg.de/~pbleser/rpm-navigation.php?cat=System/fio
Building
--------

Just type 'make' and 'make install'. If on FreeBSD, for now you have to
specify the FreeBSD Makefile with -f, eg:

$ make -f Makefile.FreeBSD && make -f Makefile.FreeBSD install

Likewise with OpenSolaris, use Makefile.solaris to compile there.
This might change in the future if I opt for an autoconf type setup.
Command line
------------

-t <sec>	Runtime in seconds
-l		Generate per-job latency logs
-w		Generate per-job bandwidth logs
-o <file>	Log output to file
-m		Minimal (terse) output
-v		Print version information and exit

Any parameters following the options will be assumed to be job files.
You can add as many as you want, each job file will be regarded as a
separate group and fio will stonewall its execution.
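For instance, a run with a 60 second runtime limit and terse output
logged to a file might look like this (the job file names are made up
for the example):

```
$ fio -t 60 -m -o results.log seq-read.fio rand-write.fio
```

Since each job file is its own group, seq-read.fio would run to
completion before rand-write.fio is started.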
Job file
--------

Only a few options can be controlled with command line parameters,
generally it's a lot easier to just write a simple job file to describe
the workload. The job file uses the classic ini format, as it's easy
to read and write for the user.

The job file parameters are:
name=x		Use 'x' as the identifier for this job.
directory=x	Use 'x' as the top level directory for storing files.
filename=x	Force the use of 'x' as the filename for all files
		in this thread. If not given, fio will make up
		a suitable filename based on the thread and file
		number.
rw=x		'x' may be: read, randread, write, randwrite,
		rw (read-write mix), randrw (read-write random mix).
rwmixcycle=x	Base cycle for switching between read and write
		in msecs.
rwmixread=x	'x' percentage of rw mix ios will be reads. If
		rwmixwrite is also given, the last of the two will
		be used if they don't add up to 100%.
rwmixwrite=x	'x' percentage of rw mix ios will be writes. See
		rwmixread.
rand_repeatable=x	The sequence of random io blocks can be repeatable
		across runs, if 'x' is 1.
size=x		Set file size to x bytes (x string can include k/m/g).
ioengine=x	'x' may be: aio/libaio/linuxaio for Linux aio,
		posixaio for POSIX aio, sync for regular read/write io,
		mmap for mmap'ed io, splice for using splice/vmsplice,
		or sgio for direct SG_IO io. The latter only works on
		Linux on SCSI (or SCSI-like devices, such as
		usb-storage or sata/libata driven) devices.
iodepth=x	For async io, allow 'x' ios in flight.
overwrite=x	If 'x', lay out a write file first.
nrfiles=x	Spread io load over 'x' number of files per job.
prio=x		Run io at prio X, 0-7 is the kernel allowed range.
prioclass=x	Run io at prio class X.
bs=x		Use 'x' for thread blocksize. May include k/m postfix.
bsrange=x-y	Mix thread block sizes randomly between x and y. May
		also include k/m postfix.
direct=x	1 for direct IO, 0 for buffered IO.
thinktime=x	"Think" x usec after each io.
rate=x		Throttle rate to x KiB/sec.
ratemin=x	Quit if rate of x KiB/sec can't be met.
ratecycle=x	ratemin averaged over x msecs.
cpumask=x	Only allow job to run on CPUs defined by mask.
fsync=x		If writing with buffered IO, fsync after every
		'x' blocks have been written.
end_fsync=x	If 'x', run fsync() after end-of-job.
startdelay=x	Start this thread x seconds after startup.
timeout=x	Terminate x seconds after startup. Can include a
		normal time suffix if not given in seconds, such as
		'm' for minutes, 'h' for hours, and 'd' for days.
offset=x	Start io at offset x (x string can include k/m/g).
invalidate=x	Invalidate page cache for file prior to doing io.
sync=x		Use sync writes if x and writing buffered IO.
mem=x		If x == malloc, use malloc for buffers. If x == shm,
		use shared memory for buffers. If x == mmap, use
		anonymous mmap for buffers.
exitall		When one thread quits, terminate the others.
bwavgtime=x	Average bandwidth stats over an x msec window.
create_serialize=x	If 'x', serialize file creation.
create_fsync=x	If 'x', run fsync() after file creation.
unlink		If set, unlink files when done.
loops=x		Run the job 'x' number of times.
verify=x	If 'x' == md5, use md5 for verifies. If 'x' == crc32,
		use crc32 for verifies. md5 is 'safer', but crc32 is
		a lot faster. Only makes sense for writing to a file.
stonewall	Wait for preceding jobs to end before running.
numjobs=x	Create 'x' similar entries for this job.
thread		Use pthreads instead of forked jobs.
zonesize=x
zoneskip=y	Zone options must be paired. If given, the job
		will skip y bytes for every x read/written. This
		can be used to gauge hard drive speed over the entire
		platter, without reading everything. Both x/y can
		include k/m/g suffix.
iolog=x		Open and read io pattern from file 'x'. The file must
		contain one io action per line in the following format:
		rw, offset, length
		with rw=0/1 for read/write, and the offset
		and length entries being in bytes.
write_iolog=x	Write an iolog to file 'x' in the same format as iolog.
		The iolog options are mutually exclusive; if both are
		given, the read iolog will be used.
write_bw_log	Write a bandwidth log.
write_lat_log	Write a latency log.
lockmem=x	Lock down x amount of memory on the machine, to
		simulate a machine with less memory available. x can
		include k/m/g suffix.
nice=x		Run job at given nice value.
exec_prerun=x	Run 'x' before job io is begun.
exec_postrun=x	Run 'x' after job io has finished.
ioscheduler=x	Use ioscheduler 'x' for this job.
cpuload=x	For a CPU io thread, percentage of CPU time to attempt
		to burn.
cpuchunks=x	Split burn cycles into pieces of x.
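As a quick illustration of the syntax, a job mixing a few of the above
parameters might look like this (the job name and all values are chosen
purely for illustration):

```ini
[mixed-test]
rw=randrw	; random mixed reads and writes
rwmixread=75	; 75% of the mixed ios are reads
bsrange=4k-64k	; block sizes picked randomly between 4KiB and 64KiB
ioengine=libaio	; Linux async io
iodepth=8	; up to 8 ios in flight
direct=1	; O_DIRECT io
size=512m	; file size
timeout=60	; stop after 60 seconds regardless
```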
Examples using a job file
-------------------------

Example 1) Two random readers

Let's say we want to simulate two threads reading randomly from a file
each. They will be doing IO in 4KiB chunks, using raw (O_DIRECT) IO.
Since they share most parameters, we'll put those in the [global]
section. Job 1 will use a 128MiB file, job 2 will use a 256MiB file.
[global]
ioengine=sync	; regular read/write(2), the default
rw=randread	; random reads
bs=4k		; io in 4KiB chunks
direct=1	; raw (O_DIRECT) io

[file1]
size=128m

[file2]
size=256m
Generally the [] bracketed name specifies a file name, but the "global"
keyword is reserved for setting options that are inherited by each
subsequent job description. It's possible to have several [global]
sections in the job file, each one adds options that are inherited by
jobs defined below it. The name can also point to a block device, such
as /dev/sda. To run the above job file, simply pass it to fio:

$ fio <jobfile>
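To sketch how layered [global] sections behave (job names and sizes
here are made up):

```ini
[global]
rw=randread
bs=4k

[file1]		; inherits randread and 4k blocks
size=128m

[global]
bs=64k		; jobs below now inherit 64k blocks

[file2]		; randread as before, but with 64k blocks
size=128m
```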
Example 2) Many random writers

Say we want to exercise the IO subsystem some more. We'll define 64
threads doing random buffered writes. We'll let each thread use async io
with a depth of 4 ios in flight. A job file would then look like this:
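Using only parameters described above, one possible version (the per-file
size is an arbitrary choice for the example):

```ini
[random-writers]
ioengine=libaio	; Linux async io
iodepth=4	; 4 ios in flight per thread
rw=randwrite	; random writes
direct=0	; buffered IO
size=32m	; size of each file (arbitrary choice)
numjobs=64	; 64 identical threads
```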
This will create files.[0-63] and perform the random writes to them.

There are endless ways to define jobs, the examples/ directory contains
a few more.
Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:
P	Thread setup, but not started.
I	Thread initialized, waiting.
R	Running, doing sequential reads.
r	Running, doing random reads.
W	Running, doing sequential writes.
w	Running, doing random writes.
M	Running, doing mixed sequential reads/writes.
m	Running, doing mixed random reads/writes.
F	Running, currently waiting for fsync().
V	Running, doing verification of written data.
E	Thread exited, not reaped by main thread yet.
_	Thread reaped.
The other values are fairly self explanatory - the number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:
Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969
The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=	Number of megabytes of io performed
bw=	Average bandwidth rate
runt=	The runtime of that thread
slat=	Submission latency (avg being the average, dev being the
	standard deviation). This is the time it took to submit
	the io. For sync io, the slat is really the completion
	latency, since queue/complete is one operation there.
clat=	Completion latency. Same names as slat, this denotes the
	time from submission to completion of the io pieces. For
	sync io, clat will usually be equal (or very close) to 0,
	as the time from submit to complete is basically just
	CPU time (io has already been done, see slat explanation).
bw=	Bandwidth. Same names as the xlat stats, but also includes
	an approximate percentage of total aggregate bandwidth
	this thread received in this group. This last value is
	only really useful if the threads in this group are on the
	same disk, since they are then competing for disk access.
cpu=	CPU usage. User and system time, along with the number
	of context switches this thread went through.
After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
For each data direction, it prints:

io=	Number of megabytes of io performed.
aggrb=	Aggregate bandwidth of threads in this group.
minb=	The minimum average bandwidth a thread saw.
maxb=	The maximum average bandwidth a thread saw.
mint=	The smallest runtime of the threads in that group.
maxt=	The longest runtime of the threads in that group.
And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=	Number of ios performed by all groups.
merge=	Number of merges performed by the io scheduler.
ticks=	Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
util=	The disk utilization. A value of 100% means we kept the disk
	busy constantly, 50% would be a disk idling half of the time.
Terse output
------------

For scripted usage where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109
Split up, the format is as follows:

	jobname, groupid, error
	Read status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	Write status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	CPU usage: user, system, context switches
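Since the terse line is just comma separated fields, standard tools can
pull values out of it for scripting. A small sketch using cut, with the
example line from above truncated to the first handful of fields (field
positions per the list above: jobname is field 1, groupid field 2, error
field 3, read KiB IO field 4):

```shell
# Split one terse-format line into its leading fields.
line='client1,0,0,936,331,2894'
jobname=$(echo "$line" | cut -d, -f1)	# job name
groupid=$(echo "$line" | cut -d, -f2)	# group id
error=$(echo "$line" | cut -d, -f3)	# error code
read_kib=$(echo "$line" | cut -d, -f4)	# KiB of read IO
echo "$jobname group=$groupid err=$error read_kib=$read_kib"
# prints: client1 group=0 err=0 read_kib=936
```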
Author
------

Fio was written by Jens Axboe <axboe@kernel.dk> to enable flexible testing
of the Linux IO subsystem and schedulers. He got tired of writing
specific test applications to simulate a given workload, and found that
the existing io benchmark/test tools out there weren't flexible enough
to do what he wanted.

Jens Axboe <axboe@kernel.dk> 20060905