Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output


1.0 Overview and history
------------------------
fio was originally written to save me the hassle of writing special test
case programs when I wanted to test a specific workload, either for
performance reasons or to find/reproduce a bug. The process of writing
such a test app can be tiresome, especially if you have to do it often.
Hence I needed a tool that would be able to simulate a given io workload
without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number
of processes or threads involved, and they can each be using their own
way of generating io. You could have someone dirtying large amounts of
memory in a memory mapped file, or maybe several threads issuing
reads using asynchronous io. fio needed to be flexible enough to
simulate both of these cases, and many more.

2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical contents of the job file
are a global section defining shared parameters, and one or more job
sections describing the jobs involved. When run, fio parses this file
and sets everything up as described. If we break down a job from top to
bottom, it contains the following basic parameters:

        IO type         Defines the io pattern issued to the file(s).
                        We may only be reading sequentially from this
                        file(s), or we may be writing randomly. Or even
                        mixing reads and writes, sequentially or randomly.

        Block size      In how large chunks are we issuing io? This may be
                        a single value, or it may describe a range of
                        block sizes.

        IO size         How much data are we going to be reading/writing?

        IO engine       How do we issue io? We could be memory mapping the
                        file, we could be using regular read/write, we
                        could be using splice, async io, or even
                        SG (SCSI generic sg).

        IO depth        If the io engine is async, how large a queueing
                        depth do we want to maintain?

        IO type         Should we be doing buffered io, or direct/raw io?

        Num files       How many files are we spreading the workload over?

        Num threads     How many threads or processes should we spread
                        this workload over?

The above are the basic parameters defined for a workload. In addition,
there's a multitude of parameters that modify other aspects of how this
job behaves.


3.0 Running fio
---------------
See the README file for command line parameters; there are only a few
of them.

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters:

$ fio job_file

and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line; fio will serialize the running
of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.

If the job file contains only one job, you may as well just give the
parameters on the command line. The command line parameters are identical
to the job parameters, with a few extra options that control global
parameters (see README). For example, for the job file parameter iodepth=2,
the mirror command line option would be --iodepth 2 or --iodepth=2. You can
also use the command line for giving more than one job entry. For each
--name option that fio sees, it will start a new job with that name.
Command line entries following a --name entry will apply to that job,
until there are no more entries or a new --name entry is seen. This is
similar to the job file options, where each option applies to the current
job until a new [] job entry is seen.
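
As a quick sketch of the above (the job names here are made up for
illustration), two different jobs could be given directly on the command
line like so:

$ fio --name=global --size=64m --name=readers --rw=randread --name=writers --rw=randwrite

The first --name=global entry sets the shared size, and each following
--name starts a new job that picks up its own options until the next
--name is seen.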

fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.


4.0 Job file format
-------------------
As previously described, fio accepts one or more job files describing
what it is supposed to do. The job file format is the classic ini file,
where the names enclosed in [] brackets define the job name. You are free
to use any ASCII name you want, except 'global' which has special meaning.
A global section sets defaults for the jobs described in that file. A job
may override a global section parameter, and a job file may even have
several global sections if so desired. A job is only affected by a global
section residing above it. If the first character in a line is a ';', the
entire line is discarded as a comment.

So let's look at a really simple job file that defines two threads, each
randomly reading from a 128MiB file.

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --

As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
makes up a filename for each of the jobs as it sees fit. On the command
line, this job would look as follows:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2


Let's look at an example that has a number of processes writing randomly
to files.

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

; -- end job file --

Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
increased the buffer size used to 32KiB and set numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
to their own 64MiB file. Instead of using the above job file, you could
have given the parameters on the command line. For this case, you would
specify:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4

fio ships with a few example job files; you can also look there for
inspiration.

5.0 Detailed list of parameters
-------------------------------

This section describes in detail each parameter associated with a job.
Some parameters take an option of a given type, such as an integer or
a string. The following types are used:

str     String. This is a sequence of alpha characters.
int     Integer. A whole number value, which may be negative.
siint   SI integer. A whole number value, which may contain a postfix
        describing the base of the number. Accepted postfixes are k/m/g,
        meaning kilo, mega, and giga. So if you want to specify 4096,
        you could either write out '4096' or just give 4k. The postfixes
        signify base 2 values, so 1024 is 1k and 1024k is 1m and so on.
bool    Boolean. Usually parsed as an integer, however only defined for
        true and false (1 and 0).
irange  Integer range with postfix. Allows value range to be given, such
        as 1024-4096. Also see siint.
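
As an illustration of the siint postfixes (these are just example values,
not a complete job file), size=4096 and size=4k mean the same thing, and
size=1m is the same as size=1024k, i.e. 1048576 bytes.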

With the above in mind, here follows the complete list of fio job
parameters.

name=str        ASCII name of the job. This may be used to override the
                name printed by fio for this job. Otherwise the job
                name is used. On the command line this parameter has the
                special purpose of also signalling the start of a new
                job.

directory=str   Prefix filenames with this directory. Used to place files
                in a different location than "./".

filename=str    Fio normally makes up a filename based on the job name,
                thread number, and file number. If you want to share
                files between threads in a job or several jobs, specify
                a filename for each of them to override the default.

rw=str          Type of io pattern. Accepted values are:

                        read            Sequential reads
                        write           Sequential writes
                        randwrite       Random writes
                        randread        Random reads
                        rw              Sequential mixed reads and writes
                        randrw          Random mixed reads and writes

                For the mixed io types, the default is to split them 50/50.
                For certain types of io the result may still be skewed a bit,
                since the speed may be different.

size=siint      The total size of file io for this job. This may describe
                the size of the single file the job uses, or it may be
                divided between the number of files in the job. If the
                file already exists, the file size will be adjusted to this
                size if larger than the current file size. If this parameter
                is not given and the file exists, the file size will be used.

bs=siint        The block size used for the io units. Defaults to 4k.

bsrange=irange  Instead of giving a single block size, specify a range
                and fio will mix the issued io block sizes. The issued
                io unit will always be a multiple of the minimum value
                given.
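
                As a small sketch (the values are picked just for
                illustration), a job mixing block sizes between 1KiB and
                16KiB would use:

                bsrange=1k-16k

                in which case every issued io unit is a multiple of the
                1k minimum.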

nrfiles=int     Number of files to use for this job. Defaults to 1.

ioengine=str    Defines how the job issues io to the file. The following
                types are defined:

                        sync    Basic read(2) or write(2) io. lseek(2) is
                                used to position the io location.

                        libaio  Linux native asynchronous io.

                        posixaio glibc posix asynchronous io.

                        mmap    File is memory mapped and data copied
                                to/from using memcpy(3).

                        splice  splice(2) is used to transfer the data and
                                vmsplice(2) to transfer data from user
                                space to the kernel.

                        sg      SCSI generic sg v3 io. May either be
                                synchronous using the SG_IO ioctl, or if
                                the target is an sg character device
                                we use read(2) and write(2) for asynchronous
                                io.

iodepth=int     This defines how many io units to keep in flight against
                the file. The default is 1 for each file defined in this
                job, and can be overridden with a larger value for higher
                concurrency.
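
                For instance (a sketch only, with the numbers picked
                arbitrarily), an async job keeping 8 ios in flight could
                use:

                ioengine=libaio
                iodepth=8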

direct=bool     If value is true, use non-buffered io. This is usually
                O_DIRECT. Defaults to true.

offset=siint    Start io at the given offset in the file. The data before
                the given offset will not be touched. This effectively
                caps the file size at real_size - offset.

fsync=int       If writing to a file, issue a sync of the dirty data
                for every number of blocks given. For example, if you give
                32 as a parameter, fio will sync the file for every 32
                writes issued. If fio is using non-buffered io, we may
                not sync the file. The exception is the sg io engine, which
                synchronizes the disk cache anyway.
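
                As a sketch (values picked for illustration), a buffered
                write job that syncs every 32 writes would include:

                rw=write
                direct=0
                fsync=32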

overwrite=bool  If writing to a file, set up the file first and do overwrites.

end_fsync=bool  If true, fsync file contents when the job exits.

rwmixcycle=int  Value in milliseconds describing how often to switch between
                reads and writes for a mixed workload. The default is
                500 msecs.

rwmixread=int   How large a percentage of the mix should be reads.

rwmixwrite=int  How large a percentage of the mix should be writes. If both
                rwmixread and rwmixwrite are given and the values do not add
                up to 100%, the latter of the two will be used to override
                the first.
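
                As an example sketch (the percentage is picked for
                illustration), a mixed random workload that is roughly
                75% reads and 25% writes could be described with:

                rw=randrw
                rwmixread=75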

nice=int        Run the job with the given nice value. See man nice(2).

prio=int        Set the io priority value of this job. Linux limits us to
                a positive value between 0 and 7, with 0 being the highest.
                See man ionice(1).

prioclass=int   Set the io priority class. See man ionice(1).

thinktime=int   Stall the job x microseconds after an io has completed before
                issuing the next. May be used to simulate processing being
                done by an application.

rate=int        Cap the bandwidth used by this job to this number of KiB/sec.

ratemin=int     Tell fio to do whatever it can to maintain at least this
                bandwidth.

ratecycle=int   Average bandwidth for 'rate' and 'ratemin' over this number
                of milliseconds.
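
                As a sketch (the numbers are arbitrary), a job capped at
                2048KiB/sec, averaged over one second windows, could use:

                rate=2048
                ratecycle=1000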

cpumask=int     Set the CPU affinity of this job. The parameter given is a
                bitmask of allowed CPUs the job may run on. See man
                sched_setaffinity(2).

startdelay=int  Start this job the specified number of seconds after fio
                has started. Only useful if the job file contains several
                jobs, and you want to delay starting some jobs to a certain
                time.

timeout=int     Tell fio to terminate processing after the specified number
                of seconds. It can be quite hard to determine for how long
                a specified job will run, so this parameter is handy to
                cap the total runtime to a given time.

invalidate=bool Invalidate the buffer/page cache parts for this file prior
                to starting io. Defaults to true.

sync=bool       Use sync io for buffered writes. For the majority of the
                io engines, this means using O_SYNC.

mem=str         Fio can use various types of memory as the io unit buffer.
                The allowed values are:

                        malloc  Use memory from malloc(3) as the buffers.

                        shm     Use shared memory as the buffers. Allocated
                                through shmget(2).

                        mmap    Use anonymous memory maps as the buffers.
                                Allocated through mmap(2).

                The area allocated is a function of the maximum allowed
                bs size for the job, multiplied by the io depth given.

exitall         When one job finishes, terminate the rest. The default is
                to wait for each job to finish; sometimes that is not the
                desired action.

bwavgtime=int   Average the calculated bandwidth over the given time. Value
                is specified in milliseconds.

create_serialize=bool   If true, serialize the file creation for the jobs.
                        This may be handy to avoid interleaving of data
                        files, which may greatly depend on the filesystem
                        used and even the number of processors in the system.

create_fsync=bool       fsync the data file after creation. This is the
                        default.

unlink          Unlink the job files when done. fio defaults to doing this,
                if it created the file itself.

loops=int       Run the specified number of iterations of this job. Used
                to repeat the same workload a given number of times. Defaults
                to 1.

verify=str      If writing to a file, fio can verify the file contents
                after each iteration of the job. The allowed values are:

                        md5     Use an md5 sum of the data area and store
                                it in the header of each block.

                        crc32   Use a crc32 sum of the data area and store
                                it in the header of each block.

                This option can be used for repeated burn-in tests of a
                system to make sure that the written data is also
                correctly read back.
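
                A write-and-verify sketch (the values are just an
                illustration) could look like:

                rw=write
                bs=4k
                size=256m
                verify=md5

                fio will then check the md5 sum stored in each block
                header when the data is read back.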

stonewall       Wait for preceding jobs in the job file to exit, before
                starting this one. Can be used to insert serialization
                points in the job file.
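
                For instance (a sketch with made-up job names), to run a
                write phase and only then a read phase:

                [writers]
                rw=write
                size=64m

                [readers]
                stonewall
                rw=read
                size=64m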

numjobs=int     Create the specified number of clones of this job. May be
                used to set up a larger number of threads/processes doing
                the same thing.

thread          fio defaults to forking jobs; however, if this option is
                given, fio will use pthread_create(3) to create threads
                instead.

zonesize=siint  Divide a file into zones of the specified size. See zoneskip.

zoneskip=siint  Skip the specified number of bytes when zonesize data has
                been read. The two zone options can be used to only do
                io on zones of a file.
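
                As a sketch (sizes picked for illustration), reading only
                the first 64KiB of every 1MiB of the file would use:

                zonesize=64k
                zoneskip=960k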

write_iolog=str Write the issued io patterns to the specified file. See
                read_iolog.

read_iolog=str  Open an iolog with the specified file name and replay the
                io patterns it contains. This can be used to store a
                workload and replay it sometime later.
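
                As a sketch (the log file name is made up), one run could
                record its io pattern with:

                write_iolog=job.iolog

                and a later run could replay it with:

                read_iolog=job.iolog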

write_bw_log    If given, write a bandwidth log of the jobs in this job
                file. Can be used to store data on the bandwidth of the
                jobs over their lifetime. The included fio_generate_plots
                script uses gnuplot to turn these text files into nice
                graphs.

write_lat_log   Same as write_bw_log, except that this option stores io
                completion latencies instead.

lockmem=siint   Pin down the specified amount of memory with mlock(2). Can
                potentially be used instead of removing memory or booting
                with less memory to simulate a smaller amount of memory.

exec_prerun=str Before running this job, issue the command specified
                through system(3).

exec_postrun=str After the job completes, issue the command specified
                through system(3).

ioscheduler=str Attempt to switch the device hosting the file to the specified
                io scheduler before running.
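
                For example (assuming the running kernel has that
                scheduler available), ioscheduler=deadline would ask fio
                to switch the device to the deadline io scheduler before
                the job runs.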

cpuload=int     If the job is a CPU cycle eater, attempt to use the specified
                percentage of CPU cycles.

cpuchunks=int   If the job is a CPU cycle eater, split the load into
                cycles of the given time. In milliseconds.


6.0 Interpreting the output
---------------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle    Run
----    ---
P               Thread setup, but not started.
C               Thread created.
I               Thread initialized, waiting.
        R       Running, doing sequential reads.
        r       Running, doing random reads.
        W       Running, doing sequential writes.
        w       Running, doing random writes.
        M       Running, doing mixed sequential reads/writes.
        m       Running, doing mixed random reads/writes.
        F       Running, currently waiting for fsync()
V               Running, doing verification of written data.
E               Thread exited, not reaped by main thread yet.
_               Thread reaped.

The other values are fairly self-explanatory - the number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=             Number of megabytes of io performed
bw=             Average bandwidth rate
runt=           The runtime of that thread
        slat=   Submission latency (avg being the average, dev being the
                standard deviation). This is the time it took to submit
                the io. For sync io, the slat is really the completion
                latency, since queue/complete is one operation there.
        clat=   Completion latency. Same names as slat, this denotes the
                time from submission to completion of the io pieces. For
                sync io, clat will usually be equal (or very close) to 0,
                as the time from submit to complete is basically just
                CPU time (io has already been done, see slat explanation).
        bw=     Bandwidth. Same names as the xlat stats, but also includes
                an approximate percentage of total aggregate bandwidth
                this thread received in this group. This last value is
                only really useful if the threads in this group are on the
                same disk, since they are then competing for disk access.
cpu=            CPU usage. User and system time, along with the number
                of context switches this thread went through.

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=     Number of megabytes of io performed.
aggrb=  Aggregate bandwidth of threads in this group.
minb=   The minimum average bandwidth a thread saw.
maxb=   The maximum average bandwidth a thread saw.
mint=   The smallest runtime of the threads in that group.
maxt=   The longest runtime of the threads in that group.
And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=            Number of ios performed by all groups.
merge=          Number of merges performed by the io scheduler.
ticks=          Number of ticks we kept the disk busy.
in_queue=       Total time spent in the disk queue.
util=           The disk utilization. A value of 100% means we kept the disk
                busy constantly, 50% would be a disk idling half of the time.


7.0 Terse output
----------------

For scripted usage where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109

Split up, the format is as follows:

        jobname, groupid, error
        READ status:
                KiB IO, bandwidth (KiB/sec), runtime (msec)
                Submission latency: min, max, mean, deviation
                Completion latency: min, max, mean, deviation
                Bw: min, max, aggregate percentage of total, mean, deviation
        WRITE status:
                KiB IO, bandwidth (KiB/sec), runtime (msec)
                Submission latency: min, max, mean, deviation
                Completion latency: min, max, mean, deviation
                Bw: min, max, aggregate percentage of total, mean, deviation
        CPU usage: user, system, context switches
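
As a sketch of how the terse output might be consumed (the file name and
field numbers are just for illustration, derived from the list above), a
shell one-liner could pull the job name and the read bandwidth out of a
saved terse line:

$ awk -F, '{ print $1, $5 }' terse_output.txt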