fio
---

fio is a tool that will spawn a number of threads or processes doing a
particular type of io action as specified by the user. fio takes a
number of global parameters, each inherited by the thread unless
other parameters overriding that setting are given to it.
The typical use of fio is to write a job file matching the io load
one wants to simulate.


Source
------

fio resides in a git repo; the canonical place is:

git://brick.kernel.dk/data/git/fio.git

Snapshots are frequently generated and they include the git metadata as
well. You can download them here:

http://brick.kernel.dk/snaps/

Pascal Bleser <guru@unixtech.be> has fio RPMs in his repository; you
can find them here:

http://linux01.gwdg.de/~pbleser/rpm-navigation.php?cat=System/fio


Building
--------

Just type 'make' and 'make install'. If on FreeBSD, for now you have to
specify the FreeBSD Makefile with -f, e.g.:

$ make -f Makefile.FreeBSD && make -f Makefile.FreeBSD install

Likewise with OpenSolaris, use Makefile.solaris to compile there.
This might change in the future if I opt for an autoconf type setup.


Command line
------------

$ fio
	-t <sec>	Runtime in seconds
	-l		Generate per-job latency logs
	-w		Generate per-job bandwidth logs
	-f <file>	Read <file> for job descriptions
	-o <file>	Log output to file
	-m		Minimal (terse) output
	-h		Print help info
	-v		Print version information and exit

Any parameters following the options will be assumed to be job files.
You can add as many as you want; each job file will be regarded as a
separate group, and fio will stonewall its execution.


Job file
--------

Only a few options can be controlled with command line parameters;
generally it's a lot easier to just write a simple job file describing
the workload. The job file is in the ini format, as it's easy to read
and write for the user.

The job file parameters are:

	name=x		Use 'x' as the identifier for this job.
	directory=x	Use 'x' as the top level directory for storing files
	rw=x		'x' may be: read, randread, write, randwrite,
			rw (read-write mix), randrw (read-write random mix)
	rwmixcycle=x	Base cycle for switching between read and write
			in msecs.
	rwmixread=x	'x' percentage of rw mix ios will be reads. If
			rwmixwrite is also given, the last of the two will
			be used if they don't add up to 100%.
	rwmixwrite=x	'x' percentage of rw mix ios will be writes. See
			rwmixread.
	rand_repeatable=x  The sequence of random io blocks can be repeatable
			across runs, if 'x' is 1.
	size=x		Set file size to x bytes (x string can include k/m/g)
	ioengine=x	'x' may be: aio/libaio/linuxaio for Linux aio,
			posixaio for POSIX aio, sync for regular read/write io,
			mmap for mmap'ed io, splice for using splice/vmsplice,
			or sgio for direct SG_IO io. The latter only works on
			Linux on SCSI (or SCSI-like devices, such as
			usb-storage or sata/libata driven) devices.
	iodepth=x	For async io, allow 'x' ios in flight
	overwrite=x	If 'x', lay out a write file first.
	prio=x		Run io at prio X, 0-7 is the kernel allowed range
	prioclass=x	Run io at prio class X
	bs=x		Use 'x' for thread blocksize. May include k/m postfix.
	bsrange=x-y	Mix thread block sizes randomly between x and y. May
			also include k/m postfix.
	direct=x	1 for direct IO, 0 for buffered IO
	thinktime=x	"Think" x usec after each io
	rate=x		Throttle rate to x KiB/sec
	ratemin=x	Quit if rate of x KiB/sec can't be met
	ratecycle=x	ratemin averaged over x msecs
	cpumask=x	Only allow job to run on CPUs defined by mask.
	fsync=x		If writing, fsync after every x blocks have been written
	startdelay=x	Start this thread x seconds after startup
	timeout=x	Terminate x seconds after startup. Can include a
			normal time suffix if not given in seconds, such as
			'm' for minutes, 'h' for hours, and 'd' for days.
	offset=x	Start io at offset x (x string can include k/m/g)
	invalidate=x	Invalidate page cache for file prior to doing io
	sync=x		Use sync writes if x and writing
	mem=x		If x == malloc, use malloc for buffers. If x == shm,
			use shm for buffers. If x == mmap, use anon mmap.
	exitall		When one thread quits, terminate the others
	bwavgtime=x	Average bandwidth stats over an x msec window.
	create_serialize=x  If 'x', serialize file creation.
	create_fsync=x	If 'x', run fsync() after file creation.
	end_fsync=x	If 'x', run fsync() after end-of-job.
	loops=x		Run the job 'x' number of times.
	verify=x	If 'x' == md5, use md5 for verifies. If 'x' == crc32,
			use crc32 for verifies. md5 is 'safer', but crc32 is
			a lot faster. Only makes sense for writing to a file.
	stonewall	Wait for preceding jobs to end before running.
	numjobs=x	Create 'x' similar entries for this job
	thread		Use pthreads instead of forked jobs
	zonesize=x
	zoneskip=y	Zone options must be paired. If given, the job
			will skip y bytes for every x read/written. This
			can be used to gauge hard drive speed over the entire
			platter, without reading everything. Both x/y can
			include k/m/g suffix.
	iolog=x		Open and read io pattern from file 'x'. The file must
			contain one io action per line in the following format:
			rw, offset, length
			where rw is 0/1 for read/write, and the offset
			and length entries are in bytes.
	write_iolog=x	Write an iolog to file 'x' in the same format as iolog.
			The iolog options are mutually exclusive; if both are
			given, the read iolog will be performed.
	lockmem=x	Lock down x amount of memory on the machine, to
			simulate a machine with less memory available. x can
			include k/m/g suffix.
	nice=x		Run job at given nice value.
	exec_prerun=x	Run 'x' before job io is begun.
	exec_postrun=x	Run 'x' after job io has finished.
	ioscheduler=x	Use ioscheduler 'x' for this job.

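Several of the options above can be combined in one job file. The
following is an illustrative sketch, not taken from the original
examples; the job names and values are made up:

; ---snip---

[global]
rw=write
bs=8k
size=32m
rate=500	; throttle each job to ~500 KiB/sec
fsync=32	; fsync after every 32 blocks written
verify=crc32	; verify written data with crc32

[writer1]

[writer2]
stonewall	; don't start until writer1 has finished

; ---snip---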

Examples using a job file
-------------------------

Example 1) Two random readers

Let's say we want to simulate two threads, each reading randomly from a
file. They will be doing IO in 4KiB chunks, using raw (O_DIRECT) IO.
Since they share most parameters, we'll put those in the [global]
section. Job 1 will use a 128MiB file, job 2 will use a 256MiB file.

; ---snip---

[global]
ioengine=sync	; regular read/write(2), the default
rw=randread
bs=4k
direct=1

[file1]
size=128m

[file2]
size=256m

; ---snip---

Generally the [] bracketed name specifies a file name, but the "global"
keyword is reserved for setting options that are inherited by each
subsequent job description. It's possible to have several [global]
sections in the job file; each one adds options that are inherited by
jobs defined below it. The name can also point to a block device, such
as /dev/sda. To run the above job file, simply do:

$ fio jobfile

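Since the bracketed name can be a block device, the same workload could
be pointed at raw devices instead of files. This variant is a sketch,
not from the original examples; the device names are placeholders, and
reading raw devices requires the appropriate privileges:

; ---snip---

[global]
ioengine=sync
rw=randread
bs=4k
direct=1

[/dev/sda]
size=128m

[/dev/sdb]
size=256m

; ---snip---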
Example 2) Many random writers

Say we want to exercise the IO subsystem some more. We'll define 64
threads doing random buffered writes. We'll let each thread use async io
with a depth of 4 ios in flight. A job file would then look like this:

; ---snip---

[global]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m

[files]
numjobs=64

; ---snip---

This will create files.[0-63] and perform the random writes to them.

There are endless ways to define jobs; the examples/ directory contains
a few more.


Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle	Run
----	---
P		Thread setup, but not started.
C		Thread created.
I		Thread initialized, waiting.
	R	Running, doing sequential reads.
	r	Running, doing random reads.
	W	Running, doing sequential writes.
	w	Running, doing random writes.
	M	Running, doing mixed sequential reads/writes.
	m	Running, doing mixed random reads/writes.
	F	Running, currently waiting for fsync().
V		Running, doing verification of written data.
E		Thread exited, not reaped by main thread yet.
_		Thread reaped.

The other values are fairly self-explanatory: the number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate the runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
    slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92
    clat (msec): min= 0, max= 631, avg=48.50, dev=86.82
    bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=	Number of megabytes of io performed
bw=	Average bandwidth rate
runt=	The runtime of that thread
	slat=	Submission latency (avg being the average, dev being the
		standard deviation). This is the time it took to submit
		the io. For sync io, the slat is really the completion
		latency, since queue/complete is one operation there.
	clat=	Completion latency. Same names as slat, this denotes the
		time from submission to completion of the io pieces. For
		sync io, clat will usually be equal (or very close) to 0,
		as the time from submit to complete is basically just
		CPU time (the io has already been done, see slat explanation).
	bw=	Bandwidth. Same names as the xlat stats, but also includes
		an approximate percentage of total aggregate bandwidth
		this thread received in this group. This last value is
		only really useful if the threads in this group are on the
		same disk, since they are then competing for disk access.
cpu=	CPU usage. User and system time, along with the number
	of context switches this thread went through.

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=	Number of megabytes of io performed.
aggrb=	Aggregate bandwidth of threads in this group.
minb=	The minimum average bandwidth a thread saw.
maxb=	The maximum average bandwidth a thread saw.
mint=	The smallest runtime of the threads in that group.
maxt=	The longest runtime of the threads in that group.

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=	Number of ios performed by all groups.
merge=	Number of merges performed by the io scheduler.
ticks=	Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
util=	The disk utilization. A value of 100% means we kept the disk
	busy constantly, 50% would be a disk idling half of the time.


Terse output
------------

For scripted usage, where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109

Split up, the format is as follows:

	jobname, groupid, error
	READ status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	WRITE status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	CPU usage: user, system, context switches

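As a quick illustration (not part of the original README), the terse
line is plain comma separated values, so standard shell tools can slice
it. The field positions follow the list above; the sample line is an
abbreviated copy of the one shown earlier:

```shell
# Pull the jobname, groupid, error and the first three READ fields
# (KiB io, bandwidth, runtime) out of a terse fio result line.
line='client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410'
echo "$line" | awk -F',' \
    '{ printf "%s: err=%s read=%s KiB, bw=%s KiB/sec, runt=%s msec\n", $1, $3, $4, $5, $6 }'
```

In a script you would typically read the line from fio's -o log file
rather than a shell variable.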

Author
------

Fio was written by Jens Axboe <axboe@suse.de> to enable flexible testing
of the Linux IO subsystem and schedulers. He got tired of writing
specific test applications to simulate a given workload, and found that
the existing io benchmark/test tools out there weren't flexible enough
to do what he wanted.

Jens Axboe <axboe@suse.de> 20060609