fio
---

fio is a tool that will spawn a number of threads or processes doing a
particular type of io action as specified by the user. fio takes a
number of global parameters, each inherited by the thread unless
parameters are given to it that override that setting. The typical use
of fio is to write a job file matching the io load one wants to
simulate.


Source
------

fio resides in a git repo; the canonical place is:

git://brick.kernel.dk/data/git/fio.git

Snapshots are frequently generated and they include the git metadata as
well. You can download them here:

http://brick.kernel.dk/snaps/

Pascal Bleser <guru@unixtech.be> has fio RPMs in his repository, you
can find them here:

http://linux01.gwdg.de/~pbleser/rpm-navigation.php?cat=System/fio


Building
--------

Just type 'make' and 'make install'. If on FreeBSD, for now you have to
specify the FreeBSD Makefile with -f, e.g.:

$ make -f Makefile.FreeBSD && make -f Makefile.FreeBSD install

Likewise with OpenSolaris, use Makefile.solaris to compile there.
This might change in the future if I opt for an autoconf type setup.


Command line
------------

$ fio
  -t <sec>   Runtime in seconds
  -l         Generate per-job latency logs
  -w         Generate per-job bandwidth logs
  -o <file>  Log output to file
  -m         Minimal (terse) output
  -h         Print help info
  -v         Print version information and exit

Any parameters following the options will be assumed to be job files.
You can add as many as you want; each job file will be regarded as a
separate group and fio will stonewall its execution.
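
For instance (jobfile1 and jobfile2 are just placeholder names), the
following runs two job files as two separate groups, one after the
other, with latency and bandwidth logs enabled and the output written
to a file:

$ fio -l -w -o results.log jobfile1 jobfile2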


Job file
--------

Only a few options can be controlled with command line parameters;
generally it's a lot easier to just write a simple job file to describe
the workload. The job file is written in the ini format, as it's easy
for the user to read and write.

The job file parameters are listed below (a short sample job file
follows the list):

  name=x              Use 'x' as the identifier for this job.
  directory=x         Use 'x' as the top level directory for storing files
  filename=x          Force the use of 'x' as the filename for all files
                      in this thread. If not given, fio will make up
                      a suitable filename based on the thread and file
                      number.
  rw=x                'x' may be: read, randread, write, randwrite,
                      rw (read-write mix), randrw (read-write random mix)
  rwmixcycle=x        Base cycle for switching between read and write
                      in msecs.
  rwmixread=x         'x' percentage of rw mix ios will be reads. If
                      rwmixwrite is also given, the last of the two will
                      be used if they don't add up to 100%.
  rwmixwrite=x        'x' percentage of rw mix ios will be writes. See
                      rwmixread.
  rand_repeatable=x   The sequence of random io blocks can be repeatable
                      across runs, if 'x' is 1.
  size=x              Set file size to x bytes (x string can include k/m/g)
  ioengine=x          'x' may be: aio/libaio/linuxaio for Linux aio,
                      posixaio for POSIX aio, sync for regular read/write io,
                      mmap for mmap'ed io, splice for using splice/vmsplice,
                      or sgio for direct SG_IO io. The latter only works on
                      Linux on SCSI (or SCSI-like devices, such as
                      usb-storage or sata/libata driven) devices.
  iodepth=x           For async io, allow 'x' ios in flight
  overwrite=x         If 'x', lay out a write file first.
  nrfiles=x           Spread io load over 'x' number of files per job,
                      if possible.
  prio=x              Run io at prio X; 0-7 is the kernel allowed range
  prioclass=x         Run io at prio class X
  bs=x                Use 'x' for thread blocksize. May include k/m postfix.
  bsrange=x-y         Mix thread block sizes randomly between x and y. May
                      also include k/m postfix.
  direct=x            1 for direct IO, 0 for buffered IO
  thinktime=x         "Think" x usec after each io
  rate=x              Throttle rate to x KiB/sec
  ratemin=x           Quit if rate of x KiB/sec can't be met
  ratecycle=x         ratemin averaged over x msecs
  cpumask=x           Only allow job to run on CPUs defined by mask.
  fsync=x             If writing, fsync after every x blocks have been written
  startdelay=x        Start this thread x seconds after startup
  timeout=x           Terminate x seconds after startup. Can include a
                      normal time suffix if not given in seconds, such as
                      'm' for minutes, 'h' for hours, and 'd' for days.
  offset=x            Start io at offset x (x string can include k/m/g)
  invalidate=x        Invalidate page cache for file prior to doing io
  sync=x              If writing, use sync writes when 'x' is set
  mem=x               If x == malloc, use malloc for buffers. If x == shm,
                      use shm for buffers. If x == mmap, use anon mmap.
  exitall             When one thread quits, terminate the others
  bwavgtime=x         Average bandwidth stats over an x msec window.
  create_serialize=x  If 'x', serialize file creation.
  create_fsync=x      If 'x', run fsync() after file creation.
  unlink              If set, unlink files when done.
  end_fsync=x         If 'x', run fsync() after end-of-job.
  loops=x             Run the job 'x' number of times.
  verify=x            If 'x' == md5, use md5 for verifies. If 'x' == crc32,
                      use crc32 for verifies. md5 is 'safer', but crc32 is
                      a lot faster. Only makes sense for writing to a file.
  stonewall           Wait for preceding jobs to end before running.
  numjobs=x           Create 'x' similar entries for this job
  thread              Use pthreads instead of forked jobs
  zonesize=x
  zoneskip=y          Zone options must be paired. If given, the job
                      will skip y bytes for every x read/written. This
                      can be used to gauge hard drive speed over the entire
                      platter, without reading everything. Both x/y can
                      include k/m/g suffix.
  iolog=x             Open and read io pattern from file 'x'. The file must
                      contain one io action per line in the following format:
                      rw, offset, length
                      where rw=0/1 for read/write, and the offset and length
                      entries are in bytes (see the sample iolog after this
                      list).
  write_iolog=x       Write an iolog to file 'x' in the same format as iolog.
                      The two iolog options are mutually exclusive; if both
                      are given, only the read iolog will be used.
  write_bw_log        Write a bandwidth log.
  write_lat_log       Write a latency log.
  lockmem=x           Lock down x amount of memory on the machine, to
                      simulate a machine with less memory available. x can
                      include k/m/g suffix.
  nice=x              Run job at given nice value.
  exec_prerun=x       Run 'x' before job io is begun.
  exec_postrun=x      Run 'x' after job io has finished.
  ioscheduler=x       Use ioscheduler 'x' for this job.
  cpuload=x           For a CPU io thread, percentage of CPU time to attempt
                      to burn.
  cpuchunks=x         Split burn cycles into pieces of x.
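
As promised above, here is a short sample job file that exercises a few
of these options (the values are only illustrative; adjust them for
your own setup):

; ---snip---

[global]
rw=randrw
rwmixread=70
bs=4k
direct=1
size=128m
timeout=60

[throttled]
rate=1000

[unthrottled]

; ---snip---

This defines two jobs doing a 70/30 random read/write mix in 4KiB
blocks; the first is throttled to 1000KiB/sec, the second runs at full
speed. A sample iolog file for the iolog= option, using the
"rw, offset, length" format described above, could look like this
(two reads followed by a write):

0,0,4096
0,4096,4096
1,8192,32768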


Examples using a job file
-------------------------

Example 1) Two random readers

Let's say we want to simulate two threads, each reading randomly from
its own file. They will be doing IO in 4KiB chunks, using raw (O_DIRECT)
IO. Since they share most parameters, we'll put those in the [global]
section. Job 1 will use a 128MiB file, and job 2 will use a 256MiB file.

; ---snip---

[global]
ioengine=sync  ; regular read/write(2), the default
rw=randread
bs=4k
direct=1

[file1]
size=128m

[file2]
size=256m

; ---snip---

Generally the [] bracketed name specifies a file name, but the "global"
keyword is reserved for setting options that are inherited by each
subsequent job description. It's possible to have several [global]
sections in the job file; each one adds options that are inherited by
the jobs defined below it. The name can also point to a block device,
such as /dev/sda. To run the above job file, simply do:

$ fio jobfile
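
As an illustration of the block device case, the same random-read
workload could be pointed directly at a device (the device name below
is just an example; reading is harmless, but be careful with write
workloads on real devices). The size here simply caps how much of the
device is read:

; ---snip---

[global]
rw=randread
bs=4k
direct=1
size=128m

[/dev/sdb]

; ---snip---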

Example 2) Many random writers

Say we want to exercise the IO subsystem some more. We'll define 64
threads doing random buffered writes. We'll let each thread use async io
with a depth of 4 ios in flight. A job file would then look like this:

; ---snip---

[global]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m

[files]
numjobs=64

; ---snip---

This will create files.[0-63] and perform the random writes to them.

There are endless ways to define jobs; the examples/ directory contains
a few more examples.
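
As one more sketch (not from the examples/ directory, just an
illustration of the verify related options listed earlier), a job that
writes a file and then checks the written data with crc32 checksums
might look like this:

; ---snip---

[global]
rw=write
bs=4k
size=64m
end_fsync=1
verify=crc32

[verify1]

; ---snip---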


Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle Run
---- ---
P         Thread setup, but not started.
C         Thread created.
I         Thread initialized, waiting.
     R    Running, doing sequential reads.
     r    Running, doing random reads.
     W    Running, doing sequential writes.
     w    Running, doing random writes.
     M    Running, doing mixed sequential reads/writes.
     m    Running, doing mixed random reads/writes.
     F    Running, currently waiting for fsync()
V         Running, doing verification of written data.
E         Thread exited, not reaped by main thread yet.
_         Thread reaped.

The other values are fairly self-explanatory - the number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate the runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
    slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92
    clat (msec): min= 0, max= 631, avg=48.50, dev=86.82
    bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=       Number of megabytes io performed
bw=       Average bandwidth rate
runt=     The runtime of that thread
  slat=   Submission latency (avg being the average, dev being the
          standard deviation). This is the time it took to submit
          the io. For sync io, the slat is really the completion
          latency, since queue/complete is one operation there.
  clat=   Completion latency. Same names as slat, this denotes the
          time from submission to completion of the io pieces. For
          sync io, clat will usually be equal (or very close) to 0,
          as the time from submit to complete is basically just
          CPU time (io has already been done, see slat explanation).
  bw=     Bandwidth. Same names as the slat and clat stats, but also
          includes an approximate percentage of total aggregate
          bandwidth this thread received in this group. This last
          value is only really useful if the threads in this group
          are on the same disk, since they are then competing for
          disk access.
cpu=      CPU usage. User and system time, along with the number
          of context switches this thread went through.

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=     Number of megabytes io performed.
aggrb=  Aggregate bandwidth of threads in this group.
minb=   The minimum average bandwidth a thread saw.
maxb=   The maximum average bandwidth a thread saw.
mint=   The smallest runtime of the threads in that group.
maxt=   The longest runtime of the threads in that group.

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=       Number of ios performed by all groups.
merge=     Number of merges performed by the io scheduler.
ticks=     Number of ticks we kept the disk busy.
in_queue=  Total time spent in the disk queue.
util=      The disk utilization. A value of 100% means we kept the disk
           busy constantly; 50% would mean the disk was idle half of
           the time.


Terse output
------------

For scripted usage where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109

Split up, the format is as follows:

  jobname, groupid, error
  READ status:
    KiB IO, bandwidth (KiB/sec), runtime (msec)
    Submission latency: min, max, mean, deviation
    Completion latency: min, max, mean, deviation
    Bw: min, max, aggregate percentage of total, mean, deviation
  WRITE status:
    KiB IO, bandwidth (KiB/sec), runtime (msec)
    Submission latency: min, max, mean, deviation
    Completion latency: min, max, mean, deviation
    Bw: min, max, aggregate percentage of total, mean, deviation
  CPU usage: user, system, context switches
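
For shell scripting, the fields can be picked apart with standard tools.
As a small sketch (the field positions assume the order listed above,
and 'jobfile' is a placeholder name), this prints the job name and the
read bandwidth in KiB/sec for each job:

$ fio -m jobfile | awk -F, '{ print $1, $5 }'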


Author
------

Fio was written by Jens Axboe <axboe@kernel.dk> to enable flexible testing
of the Linux IO subsystem and schedulers. He got tired of writing
specific test applications to simulate a given workload, and found that
the existing io benchmark/test tools out there weren't flexible enough
to do what he wanted.

Jens Axboe <axboe@kernel.dk> 20060905
