fio
---

fio is a tool that will spawn a number of threads or processes doing a
particular type of io action as specified by the user. fio takes a
number of global parameters, each inherited by the thread unless
parameters given directly to that thread override the setting. The
typical use of fio is to write a job file matching the io load one
wants to simulate.


Source
------

fio resides in a git repo; the canonical place is:

git://brick.kernel.dk/data/git/fio.git

Snapshots are generated frequently and include the git metadata as
well. You can download them here:

http://brick.kernel.dk/snaps/

Pascal Bleser <guru@unixtech.be> has fio RPMs in his repository; you
can find them here:

http://linux01.gwdg.de/~pbleser/rpm-navigation.php?cat=System/fio


Building
--------

Just type 'make' and 'make install'. If on FreeBSD, for now you have
to specify the FreeBSD Makefile with -f, e.g.:

$ make -f Makefile.FreeBSD && make -f Makefile.FreeBSD install

Likewise with OpenSolaris, use Makefile.solaris to compile there.
This might change in the future if I opt for an autoconf type setup.


Command line
------------

$ fio
	-t <sec>	Runtime in seconds
	-l		Generate per-job latency logs
	-w		Generate per-job bandwidth logs
	-o <file>	Log output to file
	-m		Minimal (terse) output
	-h		Print help info
	-v		Print version information and exit

Any parameters following the options will be assumed to be job files.
You can add as many as you want; each job file will be regarded as a
separate group, and fio will stonewall its execution.
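
For example, a hypothetical invocation (the job file names are just
placeholders) that logs output to a file and runs two job files as two
stonewalled groups:

$ fio -o results.log jobfile1 jobfile2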


Job file
--------

Only a few options can be controlled with command line parameters;
generally it's a lot easier to just write a simple job file to describe
the workload. The job file uses an ini-style format, as that is easy
for the user to read and write.

The job file parameters are:

	name=x		Use 'x' as the identifier for this job.
	directory=x	Use 'x' as the top level directory for storing files
	rw=x		'x' may be: read, randread, write, randwrite,
			rw (read-write mix), randrw (read-write random mix)
	rwmixcycle=x	Base cycle for switching between read and write
			in msecs.
	rwmixread=x	'x' percentage of rw mix ios will be reads. If
			rwmixwrite is also given, the last of the two will
			be used if they don't add up to 100%.
	rwmixwrite=x	'x' percentage of rw mix ios will be writes. See
			rwmixread.
	rand_repeatable=x	The sequence of random io blocks can be repeatable
			across runs, if 'x' is 1.
	size=x		Set file size to x bytes (x string can include k/m/g)
	ioengine=x	'x' may be: aio/libaio/linuxaio for Linux aio,
			posixaio for POSIX aio, sync for regular read/write io,
			mmap for mmap'ed io, splice for using splice/vmsplice,
			or sgio for direct SG_IO io. The latter only works on
			Linux on SCSI (or SCSI-like, such as usb-storage or
			sata/libata driven) devices.
	iodepth=x	For async io, allow 'x' ios in flight
	overwrite=x	If 'x', lay out a write file first.
	nrfiles=x	Spread io load over 'x' number of files per job,
			if possible.
	prio=x		Run io at prio X; 0-7 is the kernel allowed range
	prioclass=x	Run io at prio class X
	bs=x		Use 'x' for thread blocksize. May include k/m postfix.
	bsrange=x-y	Mix thread block sizes randomly between x and y. May
			also include k/m postfix.
	direct=x	1 for direct IO, 0 for buffered IO
	thinktime=x	"Think" x usec after each io
	rate=x		Throttle rate to x KiB/sec
	ratemin=x	Quit if rate of x KiB/sec can't be met
	ratecycle=x	ratemin averaged over x msecs
	cpumask=x	Only allow the job to run on CPUs defined by mask.
	fsync=x		If writing, fsync after every x blocks have been written
	startdelay=x	Start this thread x seconds after startup
	timeout=x	Terminate x seconds after startup. Can include a
			normal time suffix if not given in seconds, such as
			'm' for minutes, 'h' for hours, and 'd' for days.
	offset=x	Start io at offset x (x string can include k/m/g)
	invalidate=x	Invalidate page cache for the file prior to doing io
	sync=x		Use sync writes if x is set and writing
	mem=x		If x == malloc, use malloc for buffers. If x == shm,
			use shm for buffers. If x == mmap, use anon mmap.
	exitall		When one thread quits, terminate the others
	bwavgtime=x	Average bandwidth stats over an x msec window.
	create_serialize=x	If 'x', serialize file creation.
	create_fsync=x	If 'x', run fsync() after file creation.
	end_fsync=x	If 'x', run fsync() after end-of-job.
	loops=x		Run the job 'x' number of times.
	verify=x	If 'x' == md5, use md5 for verifies. If 'x' == crc32,
			use crc32 for verifies. md5 is 'safer', but crc32 is
			a lot faster. Only makes sense for writing to a file.
	stonewall	Wait for preceding jobs to end before running.
	numjobs=x	Create 'x' similar entries for this job
	thread		Use pthreads instead of forked jobs
	zonesize=x
	zoneskip=y	Zone options must be paired. If given, the job
			will skip y bytes for every x read/written. This
			can be used to gauge hard drive speed over the entire
			platter, without reading everything. Both x/y can
			include k/m/g suffix.
	iolog=x		Open and read the io pattern from file 'x'. The file
			must contain one io action per line in the format:
			rw, offset, length
			where rw=0/1 for read/write, and the offset and
			length entries are in bytes. See the example iolog
			snippet below this list.
	write_iolog=x	Write an iolog to file 'x' in the same format as iolog.
			The iolog options are exclusive; if both are given,
			the read iolog will be performed.
	lockmem=x	Lock down x amount of memory on the machine, to
			simulate a machine with less memory available. x can
			include k/m/g suffix.
	nice=x		Run the job at the given nice value.
	exec_prerun=x	Run 'x' before job io is begun.
	exec_postrun=x	Run 'x' after job io has finished.
	ioscheduler=x	Use ioscheduler 'x' for this job.
	cpuload=x	For a CPU io thread, percentage of CPU time to attempt
			to burn.
	cpuchunks=x	Split burn cycles into pieces of x.
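
As an illustration of the iolog format described above, a hypothetical
iolog file containing two 4KiB reads followed by one 8KiB write could
look like this (rw=0/1 for read/write, offset and length in bytes):

0,0,4096
0,4096,4096
1,0,8192

A job would then point iolog= at that file to replay this io pattern.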


Examples using a job file
-------------------------

Example 1) Two random readers

Let's say we want to simulate two threads reading randomly from a file
each. They will be doing IO in 4KiB chunks, using raw (O_DIRECT) IO.
Since they share most parameters, we'll put those in the [global]
section. Job 1 will use a 128MiB file, job 2 will use a 256MiB file.

; ---snip---

[global]
ioengine=sync	; regular read/write(2), the default
rw=randread
bs=4k
direct=1

[file1]
size=128m

[file2]
size=256m

; ---snip---

Generally the [] bracketed name specifies a file name, but the "global"
keyword is reserved for setting options that are inherited by each
subsequent job description. It's possible to have several [global]
sections in the job file; each one adds options that are inherited by
the jobs defined below it. The name can also point to a block device,
such as /dev/sda. To run the above job file, simply do:

$ fio jobfile
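
As a variation on example 1, a hypothetical job file that reads
sequentially from a block device instead of a plain file (the device
name and size are just placeholders, and accessing a raw device will
usually require the appropriate privileges) might look like:

; ---snip---

[global]
rw=read
bs=128k
direct=1

[/dev/sda]
size=1g

; ---snip---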

Example 2) Many random writers

Say we want to exercise the IO subsystem some more. We'll define 64
threads doing random buffered writes. We'll let each thread use async io
with a depth of 4 ios in flight. A job file would then look like this:

; ---snip---

[global]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m

[files]
numjobs=64

; ---snip---

This will create files.[0-63] and perform the random writes to them.

There are endless ways to define jobs; the examples/ directory contains
a few more.


Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle	Run
----	---
P		Thread setup, but not started.
C		Thread created.
I		Thread initialized, waiting.
	R	Running, doing sequential reads.
	r	Running, doing random reads.
	W	Running, doing sequential writes.
	w	Running, doing random writes.
	M	Running, doing mixed sequential reads/writes.
	m	Running, doing mixed random reads/writes.
	F	Running, currently waiting for fsync()
V		Running, doing verification of written data.
E		Thread exited, not reaped by main thread yet.
_		Thread reaped.

The other values are fairly self explanatory - the number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate the runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below that are the io statistics, here for writes. In the order
listed, they denote:

io=	Number of megabytes of io performed
bw=	Average bandwidth rate
runt=	The runtime of that thread
	slat=	Submission latency (avg being the average, dev being the
		standard deviation). This is the time it took to submit
		the io. For sync io, the slat is really the completion
		latency, since queue/complete is one operation there.
	clat=	Completion latency. Same names as slat, this denotes the
		time from submission to completion of the io pieces. For
		sync io, clat will usually be equal (or very close) to 0,
		as the time from submit to complete is basically just
		CPU time (the io has already been done, see the slat
		explanation).
	bw=	Bandwidth. Same names as the xlat stats, but also includes
		an approximate percentage of the total aggregate bandwidth
		this thread received in this group. This last value is
		only really useful if the threads in this group are on the
		same disk, since they are then competing for disk access.
cpu=	CPU usage. User and system time, along with the number
	of context switches this thread went through.

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=	Number of megabytes of io performed.
aggrb=	Aggregate bandwidth of the threads in this group.
minb=	The minimum average bandwidth a thread saw.
maxb=	The maximum average bandwidth a thread saw.
mint=	The smallest runtime of the threads in that group.
maxt=	The longest runtime of the threads in that group.

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=	Number of ios performed by all groups.
merge=	Number of merges performed by the io scheduler.
ticks=	Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
util=	The disk utilization. A value of 100% means we kept the disk
	busy constantly; 50% would be a disk idling half of the time.


Terse output
------------

For scripted usage, where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109

Split up, the format is as follows:

	jobname, groupid, error
	READ status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	WRITE status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	CPU usage: user, system, context switches
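
As an illustration of pulling individual values out of the terse line
with standard shell tools (the job file name is a placeholder, and the
field positions follow the layout above, where the read and write
bandwidths are the 5th and 21st fields):

$ fio -m jobfile | awk -F, '{print $1": read "$5" KiB/s, write "$21" KiB/s"}'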


Author
------

Fio was written by Jens Axboe <axboe@kernel.dk> to enable flexible testing
of the Linux IO subsystem and schedulers. He got tired of writing
specific test applications to simulate a given workload, and found that
the existing io benchmark/test tools out there weren't flexible enough
to do what he wanted.

Jens Axboe <axboe@kernel.dk> 20060905