1Table of contents
2-----------------
3
41. Overview
52. How fio works
63. Running fio
74. Job file format
85. Detailed list of parameters
96. Normal output
107. Terse output
118. Trace file format
129. CPU idleness profiling
13
141.0 Overview and history
15------------------------
16fio was originally written to save me the hassle of writing special test
17case programs when I wanted to test a specific workload, either for
18performance reasons or to find/reproduce a bug. The process of writing
19such a test app can be tiresome, especially if you have to do it often.
20Hence I needed a tool that would be able to simulate a given io workload
21without resorting to writing a tailored test case again and again.
22
23A test work load is difficult to define, though. There can be any number
24of processes or threads involved, and they can each be using their own
25way of generating io. You could have someone dirtying large amounts of
26memory in a memory mapped file, or maybe several threads issuing
27reads using asynchronous io. fio needed to be flexible enough to
28simulate both of these cases, and many more.
29
302.0 How fio works
31-----------------
32The first step in getting fio to simulate a desired io workload is
33writing a job file describing that specific setup. A job file may contain
34any number of threads and/or files - the typical contents of the job file
35is a global section defining shared parameters, and one or more job
36sections describing the jobs involved. When run, fio parses this file
37and sets everything up as described. If we break down a job from top to
38bottom, it contains the following basic parameters:
39
40 IO type Defines the io pattern issued to the file(s).
41 We may only be reading sequentially from this
42 file(s), or we may be writing randomly. Or even
43 mixing reads and writes, sequentially or randomly.
44
45 Block size In how large chunks are we issuing io? This may be
46 a single value, or it may describe a range of
47 block sizes.
48
49 IO size How much data are we going to be reading/writing.
50
51 IO engine How do we issue io? We could be memory mapping the
52 file, we could be using regular read/write, we
53 could be using splice, async io, syslet, or even
54 SG (SCSI generic sg).
55
56 IO depth If the io engine is async, how large a queuing
57 depth do we want to maintain?
58
59 IO type Should we be doing buffered io, or direct/raw io?
60
61 Num files How many files are we spreading the workload over.
62
63 Num threads How many threads or processes should we spread
64 this workload over.
65
66The above are the basic parameters defined for a workload; in addition,
67there's a multitude of parameters that modify other aspects of how this
68job behaves.
69
70
713.0 Running fio
72---------------
73See the README file for command line parameters; there are only a few
74of them.
75
76Running fio is normally the easiest part - you just give it the job file
77(or job files) as parameters:
78
79$ fio job_file
80
81and it will start doing what the job_file tells it to do. You can give
82more than one job file on the command line, fio will serialize the running
83of those files. Internally that is the same as using the 'stonewall'
84parameter described in the parameter section.
85
86If the job file contains only one job, you may as well just give the
87parameters on the command line. The command line parameters are identical
88to the job parameters, with a few extra that control global parameters
89(see README). For example, for the job file parameter iodepth=2, the
90mirror command line option would be --iodepth 2 or --iodepth=2. You can
91also use the command line for giving more than one job entry. For each
92--name option that fio sees, it will start a new job with that name.
93Command line entries following a --name entry will apply to that job,
94until there are no more entries or a new --name entry is seen. This is
95similar to the job file options, where each option applies to the current
96job until a new [] job entry is seen.
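
As a small illustration (the job names and sizes here are made up), the
following command line defines two jobs, each with its own options:

$ fio --name=seqread --rw=read --size=32m --name=randwrite --rw=randwrite --size=16m

The options following the second --name apply only to the 'randwrite' job,
just as options below a [] section header apply only to that job in a job
file.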
97
98fio does not need to run as root, except if the files or devices specified
99in the job section require that. Some other options may also be restricted,
100such as memory locking, io scheduler switching, and decreasing the nice value.
101
102
1034.0 Job file format
104-------------------
105As previously described, fio accepts one or more job files describing
106what it is supposed to do. The job file format is the classic ini file,
107where the names enclosed in [] brackets define the job name. You are free
108to use any ascii name you want, except 'global' which has special meaning.
109A global section sets defaults for the jobs described in that file. A job
110may override a global section parameter, and a job file may even have
111several global sections if so desired. A job is only affected by a global
112section residing above it. If the first character in a line is a ';' or a
113'#', the entire line is discarded as a comment.
114
115So let's look at a really simple job file that defines two processes, each
116randomly reading from a 128MB file.
117
118; -- start job file --
119[global]
120rw=randread
121size=128m
122
123[job1]
124
125[job2]
126
127; -- end job file --
128
129As you can see, the job file sections themselves are empty as all the
130described parameters are shared. As no filename= option is given, fio
131makes up a filename for each of the jobs as it sees fit. On the command
132line, this job would look as follows:
133
134$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2
135
136
137Let's look at an example that has a number of processes writing randomly
138to files.
139
140; -- start job file --
141[random-writers]
142ioengine=libaio
143iodepth=4
144rw=randwrite
145bs=32k
146direct=0
147size=64m
148numjobs=4
149
150; -- end job file --
151
152Here we have no global section, as we only have one job defined anyway.
153We want to use async io here, with a depth of 4 for each file. We also
154increased the buffer size used to 32KB and define numjobs to 4 to
155fork 4 identical jobs. The result is 4 processes each randomly writing
156to their own 64MB file. Instead of using the above job file, you could
157have given the parameters on the command line. For this case, you would
158specify:
159
160$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
161
162When fio is utilized as the basis of any reasonably large test suite, it might be
163desirable to share a set of standardized settings across multiple job files.
164Instead of copy/pasting such settings, any section may pull in an external
165.fio file with the 'include filename' directive, as in the following example:
166
167; -- start job file including.fio --
168[global]
169filename=/tmp/test
170filesize=1m
171include glob-include.fio
172
173[test]
174rw=randread
175bs=4k
176time_based=1
177runtime=10
178include test-include.fio
179; -- end job file including.fio --
180
181; -- start job file glob-include.fio --
182thread=1
183group_reporting=1
184; -- end job file glob-include.fio --
185
186; -- start job file test-include.fio --
187ioengine=libaio
188iodepth=4
189; -- end job file test-include.fio --
190
191Settings pulled into a section apply to that section only (except the global
192section). Include directives may be nested in that any included file may
193contain further include directive(s). Include files may not contain []
194sections.
195
196
1974.1 Environment variables
198-------------------------
199
200fio also supports environment variable expansion in job files. Any
201substring of the form "${VARNAME}" as part of an option value (in other
202words, on the right of the `='), will be expanded to the value of the
203environment variable called VARNAME. If no such environment variable
204is defined, or VARNAME is the empty string, the empty string will be
205substituted.
206
207As an example, let's look at a sample fio invocation and job file:
208
209$ SIZE=64m NUMJOBS=4 fio jobfile.fio
210
211; -- start job file --
212[random-writers]
213rw=randwrite
214size=${SIZE}
215numjobs=${NUMJOBS}
216; -- end job file --
217
218This will expand to the following equivalent job file at runtime:
219
220; -- start job file --
221[random-writers]
222rw=randwrite
223size=64m
224numjobs=4
225; -- end job file --
226
227fio ships with a few example job files, you can also look there for
228inspiration.
229
2304.2 Reserved keywords
231---------------------
232
233Additionally, fio has a set of reserved keywords that will be replaced
234internally with the appropriate value. Those keywords are:
235
236$pagesize The architecture page size of the running system
237$mb_memory Megabytes of total memory in the system
238$ncpus Number of online available CPUs
239
240These can be used on the command line or in the job file, and will be
241automatically substituted with the current system values when the job
242is run. Simple math is also supported on these keywords, so you can
243perform actions like:
244
245size=8*$mb_memory
246
247and get that properly expanded to 8 times the size of memory in the
248machine.
249
250
2515.0 Detailed list of parameters
252-------------------------------
253
254This section describes in detail each parameter associated with a job.
255Some parameters take an option of a given type, such as an integer or
256a string. Anywhere a numeric value is required, an arithmetic expression
257may be used, provided it is surrounded by parentheses. Supported operators
258are:
259
260 addition (+)
261 subtraction (-)
262 multiplication (*)
263 division (/)
264 modulus (%)
265 exponentiation (^)
266
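As a minimal sketch (the job name and size are made up), an expression can
be given wherever a number is expected, as long as it is wrapped in
parentheses:

; -- start job file --
[expr-example]
rw=read
; 64 MiB written out as an arithmetic expression
size=(64*1024*1024)
; -- end job file --
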
267For time values in expressions, units are microseconds by default. This is
268different than for time values not in expressions (not enclosed in
269parentheses). The following types are used:
270
271str String. This is a sequence of alpha characters.
272time Integer with possible time suffix. In seconds unless otherwise
273 specified, use eg 10m for 10 minutes. Accepts s/m/h for seconds,
274 minutes, and hours, and accepts 'ms' (or 'msec') for milliseconds,
275 and 'us' (or 'usec') for microseconds.
276int SI integer. A whole number value, which may contain a suffix
277 describing the base of the number. Accepted suffixes are k/m/g/t/p,
278 meaning kilo, mega, giga, tera, and peta. The suffix is not case
279 sensitive, and you may also include trailing 'b' (eg 'kb' is the same
280 as 'k'). So if you want to specify 4096, you could either write
281 out '4096' or just give 4k. The suffixes signify base 2 values, so
282 1024 is 1k and 1024k is 1m and so on, unless the suffix is explicitly
283 set to a base 10 value using 'kib', 'mib', 'gib', etc. If that is the
284 case, then 1000 is used as the multiplier. This can be handy for
285 disks, since manufacturers generally use base 10 values when listing
286 the capacity of a drive. If the option accepts an upper and lower
287 range, use a colon ':' or minus '-' to separate such values. May also
288	 include a prefix to indicate the number's base. If 0x is used, the number
289 is assumed to be hexadecimal. See irange.
290bool Boolean. Usually parsed as an integer, however only defined for
291 true and false (1 and 0).
292irange Integer range with suffix. Allows value range to be given, such
293 as 1024-4096. A colon may also be used as the separator, eg
294 1k:4k. If the option allows two sets of ranges, they can be
295 specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
296 int.
297float_list A list of floating numbers, separated by a ':' character.
298
299With the above in mind, here follows the complete list of fio job
300parameters.
301
302name=str ASCII name of the job. This may be used to override the
303 name printed by fio for this job. Otherwise the job
304 name is used. On the command line this parameter has the
305 special purpose of also signaling the start of a new
306 job.
307
308description=str Text description of the job. Doesn't do anything except
309 dump this text description when this job is run. It's
310 not parsed.
311
312directory=str Prefix filenames with this directory. Used to place files
313 in a different location than "./". See the 'filename' option
314 for escaping certain characters.
315
316filename=str Fio normally makes up a filename based on the job name,
317 thread number, and file number. If you want to share
318 files between threads in a job or several jobs, specify
319 a filename for each of them to override the default. If
320 the ioengine used is 'net', the filename is the host, port,
321 and protocol to use in the format of =host,port,protocol.
322 See ioengine=net for more. If the ioengine is file based, you
323 can specify a number of files by separating the names with a
324 ':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
325 as the two working files, you would use
326 filename=/dev/sda:/dev/sdb. On Windows, disk devices are
327 accessed as \\.\PhysicalDrive0 for the first device,
328 \\.\PhysicalDrive1 for the second etc. Note: Windows and
329 FreeBSD prevent write access to areas of the disk containing
330 in-use data (e.g. filesystems).
331 If the wanted filename does need to include a colon, then
332 escape that with a '\' character. For instance, if the filename
333 is "/dev/dsk/foo@3,0:c", then you would use
334 filename="/dev/dsk/foo@3,0\:c". '-' is a reserved name, meaning
335 stdin or stdout. Which of the two depends on the read/write
336 direction set.
337
338filename_format=str
339 If sharing multiple files between jobs, it is usually necessary
340 to have fio generate the exact names that you want. By default,
341 fio will name a file based on the default file format
342 specification of jobname.jobnumber.filenumber. With this
343 option, that can be customized. Fio will recognize and replace
344 the following keywords in this string:
345
346 $jobname
347 The name of the worker thread or process.
348
349 $jobnum
350 The incremental number of the worker thread or
351 process.
352
353 $filenum
354 The incremental number of the file for that worker
355 thread or process.
356
357 To have dependent jobs share a set of files, this option can
358 be set to have fio generate filenames that are shared between
359 the two. For instance, if testfiles.$filenum is specified,
360 file number 4 for any job will be named testfiles.4. The
361 default of $jobname.$jobnum.$filenum will be used if
362 no other format specifier is given.
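
As a sketch of that use case (the format string and sizes are illustrative),
two jobs can share one file by using a format string without $jobname or
$jobnum:

; -- start job file --
[global]
filename_format=testfiles.$filenum
size=32m

[writer]
rw=write

[reader]
stonewall
rw=read
; -- end job file --

Both jobs then operate on testfiles.0.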
363
364opendir=str Tell fio to recursively add any file it can find in this
365 directory and down the file system tree.
366
367lockfile=str Fio defaults to not locking any files before it does
368 IO to them. If a file or file descriptor is shared, fio
369 can serialize IO to that file to make the end result
370 consistent. This is usual for emulating real workloads that
371 share files. The lock modes are:
372
373 none No locking. The default.
374 exclusive Only one thread/process may do IO,
375 excluding all others.
376 readwrite Read-write locking on the file. Many
377 readers may access the file at the
378 same time, but writes get exclusive
379 access.
380
381readwrite=str
382rw=str Type of io pattern. Accepted values are:
383
384 read Sequential reads
385 write Sequential writes
386 randwrite Random writes
387 randread Random reads
388 rw,readwrite Sequential mixed reads and writes
389 randrw Random mixed reads and writes
390 trimwrite Mixed trims and writes. Blocks will be
391 trimmed first, then written to.
392
393 For the mixed io types, the default is to split them 50/50.
394 For certain types of io the result may still be skewed a bit,
395 since the speed may be different. It is possible to specify
396 a number of IO's to do before getting a new offset, this is
397 done by appending a ':<nr>' to the end of the string given.
398 For a random read, it would look like 'rw=randread:8' for
399 passing in an offset modifier with a value of 8. If the
400 suffix is used with a sequential IO pattern, then the value
401 specified will be added to the generated offset for each IO.
402 For instance, using rw=write:4k will skip 4k for every
403 write. It turns sequential IO into sequential IO with holes.
404 See the 'rw_sequencer' option.
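
As an example of the offset modifier (the sizes are arbitrary), the
following job writes 4k blocks sequentially but adds 4k to the offset for
every write, leaving a 4k hole between consecutive blocks:

; -- start job file --
[write-with-holes]
rw=write:4k
bs=4k
size=16m
; -- end job file --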
405
406rw_sequencer=str If an offset modifier is given by appending a number to
407 the rw=<str> line, then this option controls how that
408 number modifies the IO offset being generated. Accepted
409 values are:
410
411 sequential Generate sequential offset
412 identical Generate the same offset
413
414 'sequential' is only useful for random IO, where fio would
415 normally generate a new random offset for every IO. If you
416 append eg 8 to randread, you would get a new random offset for
417 every 8 IO's. The result would be a seek for only every 8
418 IO's, instead of for every IO. Use rw=randread:8 to specify
419 that. As sequential IO is already sequential, setting
420 'sequential' for that would not result in any differences.
421 'identical' behaves in a similar fashion, except it sends
422		the same offset 8 times before generating a new
423 offset.
424
425kb_base=int	The base unit for a kilobyte. The de facto base is 2^10, 1024.
426 Storage manufacturers like to use 10^3 or 1000 as a base
427	ten unit instead, for obvious reasons. Allowed values are
428 1024 or 1000, with 1024 being the default.
429
430unified_rw_reporting=bool Fio normally reports statistics on a per
431 data direction basis, meaning that read, write, and trim are
432 accounted and reported separately. If this option is set,
433	fio will sum the results and report them as "mixed"
434 instead.
435
436randrepeat=bool For random IO workloads, seed the generator in a predictable
437 way so that results are repeatable across repetitions.
438
439randseed=int Seed the random number generators based on this seed value, to
440 be able to control what sequence of output is being generated.
441 If not set, the random sequence depends on the randrepeat
442 setting.
443
444fallocate=str Whether pre-allocation is performed when laying down files.
445 Accepted values are:
446
447 none Do not pre-allocate space
448 posix Pre-allocate via posix_fallocate()
449 keep Pre-allocate via fallocate() with
450 FALLOC_FL_KEEP_SIZE set
451 0 Backward-compatible alias for 'none'
452 1 Backward-compatible alias for 'posix'
453
454 May not be available on all supported platforms. 'keep' is only
455	available on Linux. If using ZFS on Solaris this must be set to
456 'none' because ZFS doesn't support it. Default: 'posix'.
457
458fadvise_hint=bool By default, fio will use fadvise() to advise the kernel
459 on what IO patterns it is likely to issue. Sometimes you
460 want to test specific IO patterns without telling the
461 kernel about it, in which case you can disable this option.
462 If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
463 IO and POSIX_FADV_RANDOM for random IO.
464
465fadvise_stream=int Notify the kernel what write stream ID to place these
466 writes under. Only supported on Linux. Note, this option
467 may change going forward.
468
469size=int The total size of file io for this job. Fio will run until
470	this many bytes have been transferred, unless runtime is
471 limited by other options (such as 'runtime', for instance,
472 or increased/decreased by 'io_size'). Unless specific nrfiles
473 and filesize options are given, fio will divide this size
474 between the available files specified by the job. If not set,
475 fio will use the full size of the given files or devices.
476 If the files do not exist, size must be given. It is also
477 possible to give size as a percentage between 1 and 100. If
478 size=20% is given, fio will use 20% of the full size of the
479 given files or devices.
480
481io_size=int
482io_limit=int Normally fio operates within the region set by 'size', which
483 means that the 'size' option sets both the region and size of
484 IO to be performed. Sometimes that is not what you want. With
485 this option, it is possible to define just the amount of IO
486 that fio should do. For instance, if 'size' is set to 20G and
487 'io_size' is set to 5G, fio will perform IO within the first
488 20G but exit when 5G have been done. The opposite is also
489 possible - if 'size' is set to 20G, and 'io_size' is set to
490 40G, then fio will do 40G of IO within the 0..20G region.
491
492filesize=int Individual file sizes. May be a range, in which case fio
493 will select sizes for files at random within the given range
494 and limited to 'size' in total (if that is given). If not
495 given, each created file is the same size.
496
497file_append=bool Perform IO after the end of the file. Normally fio will
498 operate within the size of a file. If this option is set, then
499 fio will append to the file instead. This has identical
500 behavior to setting offset to the size of a file. This option
501 is ignored on non-regular files.
502
503fill_device=bool
504fill_fs=bool Sets size to something really large and waits for ENOSPC (no
505 space left on device) as the terminating condition. Only makes
506 sense with sequential write. For a read workload, the mount
507 point will be filled first then IO started on the result. This
508 option doesn't make sense if operating on a raw device node,
509 since the size of that is already known by the file system.
510 Additionally, writing beyond end-of-device will not return
511 ENOSPC there.
512
513blocksize=int
514bs=int The block size used for the io units. Defaults to 4k. Values
515	can be given for both reads and writes. If a single int is
516 given, it will apply to both. If a second int is specified
517 after a comma, it will apply to writes only. In other words,
518 the format is either bs=read_and_write or bs=read,write,trim.
519 bs=4k,8k will thus use 4k blocks for reads, 8k blocks for
520 writes, and 8k for trims. You can terminate the list with
521 a trailing comma. bs=4k,8k, would use the default value for
522	trims. If you only wish to set the write size, you
523 can do so by passing an empty read size - bs=,8k will set
524 8k for writes and leave the read default value.
525
526blockalign=int
527ba=int At what boundary to align random IO offsets. Defaults to
528	the same as 'blocksize' (the minimum blocksize given).
529	Minimum alignment is typically 512b when using direct IO,
530 though it usually depends on the hardware block size. This
531 option is mutually exclusive with using a random map for
532 files, so it will turn off that option.
533
534blocksize_range=irange
535bsrange=irange Instead of giving a single block size, specify a range
536 and fio will mix the issued io block sizes. The issued
537 io unit will always be a multiple of the minimum value
538 given (also see bs_unaligned). Applies to both reads and
539 writes, however a second range can be given after a comma.
540 See bs=.
541
542bssplit=str Sometimes you want even finer grained control of the
543 block sizes issued, not just an even split between them.
544 This option allows you to weight various block sizes,
545 so that you are able to define a specific amount of
546 block sizes issued. The format for this option is:
547
548 bssplit=blocksize/percentage:blocksize/percentage
549
550 for as many block sizes as needed. So if you want to define
551 a workload that has 50% 64k blocks, 10% 4k blocks, and
552 40% 32k blocks, you would write:
553
554 bssplit=4k/10:64k/50:32k/40
555
556 Ordering does not matter. If the percentage is left blank,
557 fio will fill in the remaining values evenly. So a bssplit
558 option like this one:
559
560 bssplit=4k/50:1k/:32k/
561
562 would have 50% 4k ios, and 25% 1k and 32k ios. The percentages
563	always add up to 100; if bssplit is given a range that adds
564 up to more, it will error out.
565
566 bssplit also supports giving separate splits to reads and
567 writes. The format is identical to what bs= accepts. You
568 have to separate the read and write parts with a comma. So
569 if you want a workload that has 50% 2k reads and 50% 4k reads,
570 while having 90% 4k writes and 10% 8k writes, you would
571 specify:
572
573 bssplit=2k/50:4k/50,4k/90:8k/10
574
575blocksize_unaligned
576bs_unaligned If this option is given, any byte size value within bsrange
577	may be used as a block range. This typically won't work with
578 direct IO, as that normally requires sector alignment.
579
580bs_is_seq_rand If this option is set, fio will use the normal read,write
581 blocksize settings as sequential,random instead. Any random
582 read or write will use the WRITE blocksize settings, and any
583 sequential read or write will use the READ blocksize setting.
584
585zero_buffers If this option is given, fio will init the IO buffers to
586 all zeroes. The default is to fill them with random data.
587
588refill_buffers If this option is given, fio will refill the IO buffers
589 on every submit. The default is to only fill it at init
590 time and reuse that data. Only makes sense if zero_buffers
591 isn't specified, naturally. If data verification is enabled,
592 refill_buffers is also automatically enabled.
593
594scramble_buffers=bool If refill_buffers is too costly and the target is
595 using data deduplication, then setting this option will
596 slightly modify the IO buffer contents to defeat normal
597 de-dupe attempts. This is not enough to defeat more clever
598 block compression attempts, but it will stop naive dedupe of
599 blocks. Default: true.
600
601buffer_compress_percentage=int If this is set, then fio will attempt to
602 provide IO buffer content (on WRITEs) that compress to
603 the specified level. Fio does this by providing a mix of
604 random data and a fixed pattern. The fixed pattern is either
605 zeroes, or the pattern specified by buffer_pattern. If the
606 pattern option is used, it might skew the compression ratio
607 slightly. Note that this is per block size unit, for file/disk
608 wide compression level that matches this setting, you'll also
609 want to set refill_buffers.
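
As a sketch of a compressible write workload (the percentage and sizes are
arbitrary), combining this option with refill_buffers so the file as a
whole compresses to roughly the requested level:

; -- start job file --
[compressible-writes]
rw=write
bs=64k
size=256m
refill_buffers
buffer_compress_percentage=50
; -- end job file --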
610
611buffer_compress_chunk=int See buffer_compress_percentage. This
612 setting allows fio to manage how big the ranges of random
613	data and zeroed data are. Without this set, fio will
614 provide buffer_compress_percentage of blocksize random
615 data, followed by the remaining zeroed. With this set
616 to some chunk size smaller than the block size, fio can
617 alternate random and zeroed data throughout the IO
618 buffer.
619
620buffer_pattern=str If set, fio will fill the io buffers with this
621	pattern. If not set, the contents of io buffers are defined by
622 the other options related to buffer contents. The setting can
623 be any pattern of bytes, and can be prefixed with 0x for hex
624 values. It may also be a string, where the string must then
625 be wrapped with "".
626
627dedupe_percentage=int If set, fio will generate this percentage of
628 identical buffers when writing. These buffers will be
629 naturally dedupable. The contents of the buffers depend on
630 what other buffer compression settings have been set. It's
631 possible to have the individual buffers either fully
632 compressible, or not at all. This option only controls the
633 distribution of unique buffers.
634
635nrfiles=int Number of files to use for this job. Defaults to 1.
636
637openfiles=int Number of files to keep open at the same time. Defaults to
638 the same as nrfiles, can be set smaller to limit the number
639	of simultaneous opens.
640
641file_service_type=str Defines how fio decides which file from a job to
642 service next. The following types are defined:
643
644 random Just choose a file at random.
645
646 roundrobin Round robin over open files. This
647 is the default.
648
649 sequential Finish one file before moving on to
650 the next. Multiple files can still be
651 open depending on 'openfiles'.
652
653 The string can have a number appended, indicating how
654 often to switch to a new file. So if option random:4 is
655 given, fio will switch to a new random file after 4 ios
656 have been issued.
657
658ioengine=str Defines how the job issues io to the file. The following
659 types are defined:
660
661 sync Basic read(2) or write(2) io. lseek(2) is
662 used to position the io location.
663
664 psync Basic pread(2) or pwrite(2) io.
665
666 vsync Basic readv(2) or writev(2) IO.
667
668 psyncv Basic preadv(2) or pwritev(2) IO.
669
670 libaio Linux native asynchronous io. Note that Linux
671 may only support queued behaviour with
672 non-buffered IO (set direct=1 or buffered=0).
673 This engine defines engine specific options.
674
675 posixaio glibc posix asynchronous io.
676
677 solarisaio Solaris native asynchronous io.
678
679 windowsaio Windows native asynchronous io.
680
681 mmap File is memory mapped and data copied
682 to/from using memcpy(3).
683
684 splice splice(2) is used to transfer the data and
685 vmsplice(2) to transfer data from user
686 space to the kernel.
687
688 syslet-rw Use the syslet system calls to make
689 regular read/write async.
690
691 sg SCSI generic sg v3 io. May either be
692 synchronous using the SG_IO ioctl, or if
693 the target is an sg character device
694 we use read(2) and write(2) for asynchronous
695 io.
696
697 null Doesn't transfer any data, just pretends
698 to. This is mainly used to exercise fio
699 itself and for debugging/testing purposes.
700
701 net Transfer over the network to given host:port.
702 Depending on the protocol used, the hostname,
703 port, listen and filename options are used to
704 specify what sort of connection to make, while
705 the protocol option determines which protocol
706 will be used.
707 This engine defines engine specific options.
708
709 netsplice Like net, but uses splice/vmsplice to
710 map data and send/receive.
711 This engine defines engine specific options.
712
713 cpuio Doesn't transfer any data, but burns CPU
714 cycles according to the cpuload= and
715 cpucycle= options. Setting cpuload=85
716 will cause that job to do nothing but burn
717 85% of the CPU. In case of SMP machines,
718 use numjobs=<no_of_cpu> to get desired CPU
719 usage, as the cpuload only loads a single
720 CPU at the desired rate.
721
722 guasi The GUASI IO engine is the Generic Userspace
723			Asynchronous Syscall Interface approach
724 to async IO. See
725
726 http://www.xmailserver.org/guasi-lib.html
727
728 for more info on GUASI.
729
730 rdma The RDMA I/O engine supports both RDMA
731 memory semantics (RDMA_WRITE/RDMA_READ) and
732 channel semantics (Send/Recv) for the
733 InfiniBand, RoCE and iWARP protocols.
734
735 falloc IO engine that does regular fallocate to
736 simulate data transfer as fio ioengine.
737 DDIR_READ does fallocate(,mode = keep_size,)
738 DDIR_WRITE does fallocate(,mode = 0)
739 DDIR_TRIM does fallocate(,mode = punch_hole)
740
741 e4defrag IO engine that does regular EXT4_IOC_MOVE_EXT
742 ioctls to simulate defragment activity in
743			response to a DDIR_WRITE event
744
745 rbd IO engine supporting direct access to Ceph
746 Rados Block Devices (RBD) via librbd without
747 the need to use the kernel rbd driver. This
748 ioengine defines engine specific options.
749
750 gfapi Using Glusterfs libgfapi sync interface to
751 direct access to Glusterfs volumes without
752			having to go through FUSE.
753
754 gfapi_async Using Glusterfs libgfapi async interface
755 to direct access to Glusterfs volumes without
756 having to go through FUSE. This ioengine
757 defines engine specific options.
758
759 libhdfs Read and write through Hadoop (HDFS).
760 The 'filename' option is used to specify host,
761			port of the hdfs name-node to connect to. This
762 engine interprets offsets a little
763 differently. In HDFS, files once created
764 cannot be modified. So random writes are not
765			possible. To imitate this, the libhdfs engine
766			expects a bunch of small files to be created
767			over HDFS, and the engine will randomly pick a
768 file out of those files based on the offset
769 generated by fio backend. (see the example
770 job file to create such files, use rw=write
771 option). Please note, you might want to set
772 necessary environment variables to work with
773 hdfs/libhdfs properly.
774
775 mtd Read, write and erase an MTD character device
776 (e.g., /dev/mtd0). Discards are treated as
777 erases. Depending on the underlying device
778 type, the I/O may have to go in a certain
779 pattern, e.g., on NAND, writing sequentially
780 to erase blocks and discarding before
781 overwriting. The writetrim mode works well
782 for this constraint.
783
784 external Prefix to specify loading an external
785 IO engine object file. Append the engine
786 filename, eg ioengine=external:/tmp/foo.o
787 to load ioengine foo.o in /tmp.
788
789iodepth=int This defines how many io units to keep in flight against
790 the file. The default is 1 for each file defined in this
791	job, and can be overridden with a larger value for higher
792 concurrency. Note that increasing iodepth beyond 1 will not
793	affect synchronous ioengines (except for small degrees when
794 verify_async is in use). Even async engines may impose OS
795 restrictions causing the desired depth not to be achieved.
796 This may happen on Linux when using libaio and not setting
797 direct=1, since buffered IO is not async on that OS. Keep an
798 eye on the IO depth distribution in the fio output to verify
799 that the achieved depth is as expected. Default: 1.
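
A minimal sketch of a job where iodepth actually takes effect (an async
engine with non-buffered IO; the sizes are arbitrary):

; -- start job file --
[async-randread]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=16
size=256m
; -- end job file --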
800
801iodepth_batch_submit=int
802iodepth_batch=int This defines how many pieces of IO to submit at once.
803 It defaults to 1 which means that we submit each IO
804 as soon as it is available, but can be raised to submit
805	bigger batches of IO at a time.
806
807iodepth_batch_complete=int This defines how many pieces of IO to retrieve
808 at once. It defaults to 1 which means that we'll ask
809 for a minimum of 1 IO in the retrieval process from
810 the kernel. The IO retrieval will go on until we
811 hit the limit set by iodepth_low. If this variable is
812 set to 0, then fio will always check for completed
813 events before queuing more IO. This helps reduce
814 IO latency, at the cost of more retrieval system calls.
815
816iodepth_low=int The low water mark indicating when to start filling
817 the queue again. Defaults to the same as iodepth, meaning
818 that fio will attempt to keep the queue full at all times.
819 If iodepth is set to eg 16 and iodepth_low is set to 4, then
820 after fio has filled the queue of 16 requests, it will let
821 the depth drain down to 4 before starting to fill it again.
822
823io_submit_mode=str This option controls how fio submits the IO to
824 the IO engine. The default is 'inline', which means that the
825 fio job threads submit and reap IO directly. If set to
826 'offload', the job threads will offload IO submission to a
827 dedicated pool of IO threads. This requires some coordination
828 and thus has a bit of extra overhead, especially for lower
829 queue depth IO where it can increase latencies. The benefit
830 is that fio can manage submission rates independently of
831 the device completion rates. This avoids skewed latency
832	reporting if IO gets backed up on the device side (the
833 coordinated omission problem).
834
835direct=bool If value is true, use non-buffered io. This is usually
836 O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
837 On Windows the synchronous ioengines don't support direct io.
838
839atomic=bool If value is true, attempt to use atomic direct IO. Atomic
840 writes are guaranteed to be stable once acknowledged by
841 the operating system. Only Linux supports O_ATOMIC right
842 now.
843
844buffered=bool If value is true, use buffered io. This is the opposite
845 of the 'direct' option. Defaults to true.
846
847offset=int Start io at the given offset in the file. The data before
848 the given offset will not be touched. This effectively
849 caps the file size at real_size - offset.
850
851offset_increment=int If this is provided, then the real offset becomes
852 offset + offset_increment * thread_number, where the thread
853 number is a counter that starts at 0 and is incremented for
854 each sub-job (i.e. when numjobs option is specified). This
855 option is useful if there are several jobs which are intended
856 to operate on a file in parallel disjoint segments, with
857 even spacing between the starting points.
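
As a sketch of that usage (the filename and sizes are made up), four
sub-jobs can be spread over disjoint 256m regions of the same file:

; -- start job file --
[disjoint-writers]
filename=/tmp/testfile
numjobs=4
size=256m
offset_increment=256m
rw=write
bs=1m
; -- end job file --

Sub-job n then starts its IO at offset n * 256m.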
858
859number_ios=int Fio will normally perform IOs until it has exhausted the size
860	of the region set by size=, or if it exhausts the allocated
861 time (or hits an error condition). With this setting, the
862 range/size can be set independently of the number of IOs to
863 perform. When fio reaches this number, it will exit normally
864 and report status. Note that this does not extend the amount
865 of IO that will be done, it will only stop fio if this
866 condition is met before other end-of-job criteria.
867
868fsync=int If writing to a file, issue a sync of the dirty data
869 for every number of blocks given. For example, if you give
870 32 as a parameter, fio will sync the file for every 32
871 writes issued. If fio is using non-buffered io, we may
872 not sync the file. The exception is the sg io engine, which
873 synchronizes the disk cache anyway.
874
875fdatasync=int Like fsync= but uses fdatasync() to only sync data and not
876 metadata blocks.
877	In FreeBSD and Windows there is no fdatasync(); this falls back to
878	using fsync().
879
880sync_file_range=str:val Use sync_file_range() for every 'val' number of
881 write operations. Fio will track range of writes that
882 have happened since the last sync_file_range() call. 'str'
883 can currently be one or more of:
884
885 wait_before SYNC_FILE_RANGE_WAIT_BEFORE
886 write SYNC_FILE_RANGE_WRITE
887 wait_after SYNC_FILE_RANGE_WAIT_AFTER
888
889 So if you do sync_file_range=wait_before,write:8, fio would
890 use SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE for
891 every 8 writes. Also see the sync_file_range(2) man page.
892 This option is Linux specific.
893
894overwrite=bool If true, writes to a file will always overwrite existing
895 data. If the file doesn't already exist, it will be
896 created before the write phase begins. If the file exists
897 and is large enough for the specified write phase, nothing
898 will be done.
899
900end_fsync=bool If true, fsync file contents when a write stage has completed.
901
902fsync_on_close=bool If true, fio will fsync() a dirty file on close.
903 This differs from end_fsync in that it will happen on every
904 file close, not just at the end of the job.
905
906rwmixread=int How large a percentage of the mix should be reads.
907
908rwmixwrite=int How large a percentage of the mix should be writes. If both
909	rwmixread and rwmixwrite are given and the values do not add
910 up to 100%, the latter of the two will be used to override
911 the first. This may interfere with a given rate setting,
912 if fio is asked to limit reads or writes to a certain rate.
913 If that is the case, then the distribution may be skewed.
914
915random_distribution=str:float By default, fio will use a completely uniform
916 random distribution when asked to perform random IO. Sometimes
917 it is useful to skew the distribution in specific ways,
918	ensuring that some parts of the data are hotter than others.
919 fio includes the following distribution models:
920
921 random Uniform random distribution
922 zipf Zipf distribution
923 pareto Pareto distribution
924
925 When using a zipf or pareto distribution, an input value
926 is also needed to define the access pattern. For zipf, this
927 is the zipf theta. For pareto, it's the pareto power. Fio
928	includes a test program, genzipf, that can be used to visualize
929 what the given input values will yield in terms of hit rates.
930 If you wanted to use zipf with a theta of 1.2, you would use
931 random_distribution=zipf:1.2 as the option. If a non-uniform
932 model is used, fio will disable use of the random map.
933
934percentage_random=int For a random workload, set how big a percentage should
935 be random. This defaults to 100%, in which case the workload
936	is fully random. It can be set anywhere from 0 to 100.
937 Setting it to 0 would make the workload fully sequential. Any
938 setting in between will result in a random mix of sequential
939 and random IO, at the given percentages. It is possible to
940 set different values for reads, writes, and trim. To do so,
941 simply use a comma separated list. See blocksize.
942
943norandommap Normally fio will cover every block of the file when doing
944 random IO. If this option is given, fio will just get a
945 new random offset without looking at past io history. This
946 means that some blocks may not be read or written, and that
947 some blocks may be read/written more than once. If this option
948 is used with verify= and multiple blocksizes (via bsrange=),
949 only intact blocks are verified, i.e., partially-overwritten
950 blocks are ignored.
951
952softrandommap=bool See norandommap. If fio runs with the random block map
953 enabled and it fails to allocate the map, if this option is
954 set it will continue without a random block map. As coverage
955 will not be as complete as with random maps, this option is
956 disabled by default.
957
958random_generator=str Fio supports the following engines for generating
959 IO offsets for random IO:
960
961 tausworthe Strong 2^88 cycle random number generator
962 lfsr Linear feedback shift register generator
963
964 Tausworthe is a strong random number generator, but it
965 requires tracking on the side if we want to ensure that
966 blocks are only read or written once. LFSR guarantees
967 that we never generate the same offset twice, and it's
968 also less computationally expensive. It's not a true
969	random generator, though for IO purposes it's
970 typically good enough. LFSR only works with single
971 block sizes, not with workloads that use multiple block
972 sizes. If used with such a workload, fio may read or write
973 some blocks multiple times.
974
975nice=int Run the job with the given nice value. See man nice(2).
976
977prio=int Set the io priority value of this job. Linux limits us to
978 a positive value between 0 and 7, with 0 being the highest.
979 See man ionice(1).
980
981prioclass=int Set the io priority class. See man ionice(1).
982
983thinktime=int Stall the job x microseconds after an io has completed before
984 issuing the next. May be used to simulate processing being
985 done by an application. See thinktime_blocks and
986 thinktime_spin.
987
988thinktime_spin=int
989 Only valid if thinktime is set - pretend to spend CPU time
990 doing something with the data received, before falling back
991 to sleeping for the rest of the period specified by
992 thinktime.
993
994thinktime_blocks=int
995 Only valid if thinktime is set - control how many blocks
996 to issue, before waiting 'thinktime' usecs. If not set,
997 defaults to 1 which will make fio wait 'thinktime' usecs
998 after every block. This effectively makes any queue depth
999 setting redundant, since no more than 1 IO will be queued
1000 before we have to complete it and do our thinktime. In
1001 other words, this setting effectively caps the queue depth
1002 if the latter is larger.
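
A small sketch of simulated application processing (the values are
arbitrary): issue 8 blocks, then stall for 500 microseconds, spending 100
of them spinning:

; -- start job file --
[think-reader]
rw=randread
bs=4k
size=64m
thinktime=500
thinktime_spin=100
thinktime_blocks=8
; -- end job file --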
1003
1004rate=int Cap the bandwidth used by this job. The number is in bytes/sec,
1005 the normal suffix rules apply. You can use rate=500k to limit
1006 reads and writes to 500k each, or you can specify read and
1007 writes separately. Using rate=1m,500k would limit reads to
1008 1MB/sec and writes to 500KB/sec. Capping only reads or
1009 writes can be done with rate=,500k or rate=500k,. The former
1010 will only limit writes (to 500KB/sec), the latter will only
1011 limit reads.
1012
1013ratemin=int Tell fio to do whatever it can to maintain at least this
1014	bandwidth. Failing to meet this requirement will cause
1015 the job to exit. The same format as rate is used for
1016 read vs write separation.
1017
1018rate_iops=int Cap the bandwidth to this number of IOPS. Basically the same
1019 as rate, just specified independently of bandwidth. If the
1020 job is given a block size range instead of a fixed value,
1021 the smallest block size is used as the metric. The same format
1022 as rate is used for read vs write separation.
1023
1024rate_iops_min=int If fio doesn't meet this rate of IO, it will cause
1025 the job to exit. The same format as rate is used for read vs
1026 write separation.
1027
1028latency_target=int If set, fio will attempt to find the max performance
1029 point that the given workload will run at while maintaining a
1030	latency below this target. The value is given in microseconds.
1031	See latency_window and latency_percentile.
1032
1033latency_window=int Used with latency_target to specify the sample window
1034 that the job is run at varying queue depths to test the
1035 performance. The value is given in microseconds.
1036
1037latency_percentile=float The percentage of IOs that must fall within the
1038 criteria specified by latency_target and latency_window. If not
1039 set, this defaults to 100.0, meaning that all IOs must be equal
1040	to or below the value set by latency_target.
1041
1042max_latency=int If set, fio will exit the job if it exceeds this maximum
1043 latency. It will exit with an ETIME error.
1044
1045ratecycle=int Average bandwidth for 'rate' and 'ratemin' over this number
1046 of milliseconds.
1047
1048cpumask=int Set the CPU affinity of this job. The parameter given is a
1049 bitmask of allowed CPU's the job may run on. So if you want
1050 the allowed CPUs to be 1 and 5, you would pass the decimal
1051 value of (1 << 1 | 1 << 5), or 34. See man
1052 sched_setaffinity(2). This may not work on all supported
1053 operating systems or kernel versions. This option doesn't
1054 work well for a higher CPU count than what you can store in
1055 an integer mask, so it can only control cpus 1-32. For
1056 boxes with larger CPU counts, use cpus_allowed.
1057
1058cpus_allowed=str Controls the same options as cpumask, but it allows a text
1059 setting of the permitted CPUs instead. So to use CPUs 1 and
1060	5, you would specify cpus_allowed=1,5. This option also
1061 allows a range of CPUs. Say you wanted a binding to CPUs
1062 1, 5, and 8-15, you would set cpus_allowed=1,5,8-15.
1063
1064cpus_allowed_policy=str Set the policy of how fio distributes the CPUs
1065 specified by cpus_allowed or cpumask. Two policies are
1066 supported:
1067
1068 shared All jobs will share the CPU set specified.
1069 split Each job will get a unique CPU from the CPU set.
1070
1071 'shared' is the default behaviour, if the option isn't
1072	specified. If split is specified, then fio will assign
1073 one cpu per job. If not enough CPUs are given for the jobs
1074 listed, then fio will roundrobin the CPUs in the set.
1075
1076numa_cpu_nodes=str Set this job running on specified NUMA nodes' CPUs. The
1077	arguments allow a comma delimited list of cpu numbers,
1078 A-B ranges, or 'all'. Note, to enable numa options support,
1079 fio must be built on a system with libnuma-dev(el) installed.
1080
1081numa_mem_policy=str Set this job's memory policy and corresponding NUMA
1082	nodes. Format of the arguments:
1083		<mode>[:<nodelist>]
1084	`mode' is one of the following memory policies:
1085		default, prefer, bind, interleave, local
1086	For `default' and `local' memory policies, no node
1087	needs to be specified.
1088	For `prefer', only one node is allowed.
1089	For `bind' and `interleave', they allow a comma delimited
1090 list of numbers, A-B ranges, or 'all'.
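
For example (the node numbers are illustrative), binding job memory to
NUMA nodes 0 and 1 while running on node 0's CPUs might look like:

numa_cpu_nodes=0
numa_mem_policy=bind:0-1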
1091
1092startdelay=time Start this job the specified number of seconds after fio
1093 has started. Only useful if the job file contains several
1094 jobs, and you want to delay starting some jobs to a certain
1095 time.
1096
1097runtime=time Tell fio to terminate processing after the specified number
1098 of seconds. It can be quite hard to determine for how long
1099 a specified job will run, so this parameter is handy to
1100 cap the total runtime to a given time.
1101
1102time_based If set, fio will run for the duration of the runtime
1103 specified even if the file(s) are completely read or
1104 written. It will simply loop over the same workload
1105 as many times as the runtime allows.
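
A brief sketch of a purely time-bounded job (the runtime and size are
arbitrary): loop over 1g of random reads for 10 minutes, regardless of how
often the file is covered:

; -- start job file --
[timed-randread]
rw=randread
size=1g
time_based
runtime=10m
; -- end job file --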
1106
1107ramp_time=time If set, fio will run the specified workload for this amount
1108 of time before logging any performance numbers. Useful for
1109 letting performance settle before logging results, thus
1110 minimizing the runtime required for stable results. Note
1111 that the ramp_time is considered lead in time for a job,
1112 thus it will increase the total runtime if a special timeout
1113 or runtime is specified.
1114
1115invalidate=bool Invalidate the buffer/page cache parts for this file prior
1116 to starting io. Defaults to true.
1117
1118sync=bool Use sync io for buffered writes. For the majority of the
1119 io engines, this means using O_SYNC.
1120
1121iomem=str
1122mem=str Fio can use various types of memory as the io unit buffer.
1123 The allowed values are:
1124
1125 malloc Use memory from malloc(3) as the buffers.
1126
1127 shm Use shared memory as the buffers. Allocated
1128 through shmget(2).
1129
1130 shmhuge Same as shm, but use huge pages as backing.
1131
1132 mmap Use mmap to allocate buffers. May either be
1133 anonymous memory, or can be file backed if
1134 a filename is given after the option. The
1135 format is mem=mmap:/path/to/file.
1136
1137 mmaphuge Use a memory mapped huge file as the buffer
1138 backing. Append filename after mmaphuge, ala
1139 mem=mmaphuge:/hugetlbfs/file
1140
1141 The area allocated is a function of the maximum allowed
1142 bs size for the job, multiplied by the io depth given. Note
1143 that for shmhuge and mmaphuge to work, the system must have
1144 free huge pages allocated. This can normally be checked
1145 and set by reading/writing /proc/sys/vm/nr_hugepages on a
1146 Linux system. Fio assumes a huge page is 4MB in size. So
1147 to calculate the number of huge pages you need for a given
1148 job file, add up the io depth of all jobs (normally one unless
1149 iodepth= is used) and multiply by the maximum bs set. Then
1150 divide that number by the huge page size. You can see the
1151 size of the huge pages in /proc/meminfo. If no huge pages
1152 are allocated by having a non-zero number in nr_hugepages,
1153 using mmaphuge or shmhuge will fail. Also see hugepage-size.
1154
1155 mmaphuge also needs to have hugetlbfs mounted and the file
1156 location should point there. So if it's mounted in /huge,
1157 you would use mem=mmaphuge:/huge/somefile.
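
As a worked example (the numbers are only illustrative): a single job with
iodepth=16 and a maximum bs of 256k needs 16 * 256k = 4MB of buffer space,
i.e. one huge page at the assumed 4MB huge page size. It could be reserved
with:

$ echo 1 > /proc/sys/vm/nr_hugepages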
1158
1159iomem_align=int This indicates the memory alignment of the IO memory buffers.
1160 Note that the given alignment is applied to the first IO unit
1161 buffer, if using iodepth the alignment of the following buffers
1162	is given by the bs used. In other words, if using a bs that is
1163	a multiple of the page size in the system, all buffers will
1164 be aligned to this value. If using a bs that is not page
1165 aligned, the alignment of subsequent IO memory buffers is the
1166 sum of the iomem_align and bs used.
1167
1168hugepage-size=int
1169 Defines the size of a huge page. Must at least be equal
1170 to the system setting, see /proc/meminfo. Defaults to 4MB.
1171 Should probably always be a multiple of megabytes, so using
1172 hugepage-size=Xm is the preferred way to set this to avoid
1173 setting a non-pow-2 bad value.
1174
1175exitall When one job finishes, terminate the rest. The default is
1176	to wait for each job to finish; sometimes that is not the
1177 desired action.
1178
1179bwavgtime=int Average the calculated bandwidth over the given time. Value
1180 is specified in milliseconds.
1181
1182iopsavgtime=int Average the calculated IOPS over the given time. Value
1183 is specified in milliseconds.
1184
1185create_serialize=bool If true, serialize the file creating for the jobs.
1186 This may be handy to avoid interleaving of data
1187 files, which may greatly depend on the filesystem
1188 used and even the number of processors in the system.
1189
1190create_fsync=bool fsync the data file after creation. This is the
1191 default.
1192
1193create_on_open=bool Don't pre-setup the files for IO, just create and open them
1194 when it's time to do IO to that file.
1195
1196create_only=bool If true, fio will only run the setup phase of the job.
1197 If files need to be laid out or updated on disk, only
1198 that will be done. The actual job contents are not
1199 executed.
1200
1201pre_read=bool If this is given, files will be pre-read into memory before
1202 starting the given IO operation. This will also clear
1203 the 'invalidate' flag, since it is pointless to pre-read
1204 and then drop the cache. This will only work for IO engines
1205 that are seekable, since they allow you to read the same data
1206 multiple times. Thus it will not work on eg network or splice
1207 IO.
1208
1209unlink=bool Unlink the job files when done. Not the default, as repeated
1210 runs of that job would then waste time recreating the file
1211 set again and again.
1212
1213loops=int Run the specified number of iterations of this job. Used
1214 to repeat the same workload a given number of times. Defaults
1215 to 1.
1216
1217verify_only Do not perform specified workload---only verify data still
1218 matches previous invocation of this workload. This option
1219 allows one to check data multiple times at a later date
1220 without overwriting it. This option makes sense only for
1221 workloads that write data, and does not support workloads
1222 with the time_based option set.
1223
1224do_verify=bool Run the verify phase after a write phase. Only makes sense if
1225 verify is set. Defaults to 1.
1226
1227verify=str If writing to a file, fio can verify the file contents
1228 after each iteration of the job. The allowed values are:
1229
1230 md5 Use an md5 sum of the data area and store
1231 it in the header of each block.
1232
1233 crc64 Use an experimental crc64 sum of the data
1234 area and store it in the header of each
1235 block.
1236
1237 crc32c Use a crc32c sum of the data area and store
1238 it in the header of each block.
1239
1240		crc32c-intel	Use hardware assisted crc32c calculation
1241 provided on SSE4.2 enabled processors. Falls
1242 back to regular software crc32c, if not
1243 supported by the system.
1244
1245 crc32 Use a crc32 sum of the data area and store
1246 it in the header of each block.
1247
1248 crc16 Use a crc16 sum of the data area and store
1249 it in the header of each block.
1250
1251 crc7 Use a crc7 sum of the data area and store
1252 it in the header of each block.
1253
1254 xxhash Use xxhash as the checksum function. Generally
1255 the fastest software checksum that fio
1256 supports.
1257
1258 sha512 Use sha512 as the checksum function.
1259
1260 sha256 Use sha256 as the checksum function.
1261
1262 sha1 Use optimized sha1 as the checksum function.
1263
1264 meta Write extra information about each io
1265 (timestamp, block number etc.). The block
1266 number is verified. The io sequence number is
1267 verified for workloads that write data.
1268 See also verify_pattern.
1269
1270 null Only pretend to verify. Useful for testing
1271 internals with ioengine=null, not for much
1272 else.
1273
1274 This option can be used for repeated burn-in tests of a
1275 system to make sure that the written data is also
1276 correctly read back. If the data direction given is
1277 a read or random read, fio will assume that it should
1278 verify a previously written file. If the data direction
1279 includes any form of write, the verify will be of the
1280 newly written data.
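
A minimal burn-in sketch along those lines (the block and file sizes are
arbitrary): write the file, then read it back and verify each block with
crc32c:

; -- start job file --
[write-and-verify]
rw=write
bs=4k
size=64m
verify=crc32c
; -- end job file --

Since the data direction includes a write, the verify pass checks the newly
written data (do_verify defaults to 1).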
1281
1282verifysort=bool If set, fio will sort written verify blocks when it deems
1283 it faster to read them back in a sorted manner. This is
1284 often the case when overwriting an existing file, since
1285 the blocks are already laid out in the file system. You
1286 can ignore this option unless doing huge amounts of really
1287 fast IO where the red-black tree sorting CPU time becomes
1288 significant.
1289
1290verify_offset=int Swap the verification header with data somewhere else
1291	in the block before writing. It's swapped back before
1292 verifying.
1293
1294verify_interval=int Write the verification header at a finer granularity
1295 than the blocksize. It will be written for chunks the
1296	size of verify_interval. blocksize should divide this
1297 evenly.
1298
verify_pattern=str	If set, fio will fill the io buffers with this
		pattern. Fio defaults to filling with totally random
		bytes, but sometimes it's interesting to fill with a known
		pattern for io verification purposes. Depending on the
		width of the pattern, fio will fill 1/2/3/4 bytes of the
		buffer at a time (the pattern can be either a decimal or a
		hex number). If the pattern is larger than a 32-bit quantity,
		it has to be a hex number that starts with either "0x" or
		"0X". Use with verify=meta.
1308
1309verify_fatal=bool Normally fio will keep checking the entire contents
1310 before quitting on a block verification failure. If this
1311 option is set, fio will exit the job on the first observed
1312 failure.
1313
1314verify_dump=bool If set, dump the contents of both the original data
1315 block and the data block we read off disk to files. This
1316 allows later analysis to inspect just what kind of data
1317 corruption occurred. Off by default.
1318
1319verify_async=int Fio will normally verify IO inline from the submitting
1320 thread. This option takes an integer describing how many
1321 async offload threads to create for IO verification instead,
1322 causing fio to offload the duty of verifying IO contents
1323 to one or more separate threads. If using this offload
1324 option, even sync IO engines can benefit from using an
1325 iodepth setting higher than 1, as it allows them to have
1326 IO in flight while verifies are running.
1327
1328verify_async_cpus=str Tell fio to set the given CPU affinity on the
1329 async IO verification threads. See cpus_allowed for the
1330 format used.
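
	For example, to offload verification to two helper threads pinned to
	the first two CPUs, one might use the following sketch (the CPU
	numbers are arbitrary):

	verify=crc32c
	verify_async=2
	verify_async_cpus=0,1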
1331
1332verify_backlog=int Fio will normally verify the written contents of a
1333 job that utilizes verify once that job has completed. In
1334 other words, everything is written then everything is read
1335 back and verified. You may want to verify continually
1336 instead for a variety of reasons. Fio stores the meta data
1337 associated with an IO block in memory, so for large
1338 verify workloads, quite a bit of memory would be used up
1339 holding this meta data. If this option is enabled, fio
1340 will write only N blocks before verifying these blocks.
1341
verify_backlog_batch=int	Control how many blocks fio will verify
		if verify_backlog is set. If not set, will default to
		the value of verify_backlog (meaning the entire queue
		is read back and verified). If verify_backlog_batch is
		less than verify_backlog, not all blocks will be verified;
		if verify_backlog_batch is larger than verify_backlog, some
		blocks will be verified more than once.
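
	A short sketch of continual verification, checking after every 1024
	written blocks instead of only at the end of the run (the numbers are
	arbitrary):

	rw=write
	verify=md5
	verify_backlog=1024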
1349
1350verify_state_save=bool When a job exits during the write phase of a verify
1351 workload, save its current state. This allows fio to replay
1352 up until that point, if the verify state is loaded for the
1353 verify read phase. The format of the filename is, roughly,
1354 <type>-<jobname>-<jobindex>-verify.state. <type> is "local"
1355 for a local run, "sock" for a client/server socket connection,
1356 and "ip" (192.168.0.1, for instance) for a networked
1357 client/server connection.
1358
1359verify_state_load=bool If a verify termination trigger was used, fio stores
1360 the current write state of each thread. This can be used at
1361 verification time so that fio knows how far it should verify.
1362 Without this information, fio will run a full verification
1363 pass, according to the settings in the job file used.
1364
1365stonewall
1366wait_for_previous Wait for preceding jobs in the job file to exit, before
1367 starting this one. Can be used to insert serialization
1368 points in the job file. A stone wall also implies starting
1369 a new reporting group.
1370
1371new_group Start a new reporting group. See: group_reporting.
1372
1373numjobs=int Create the specified number of clones of this job. May be
1374 used to setup a larger number of threads/processes doing
1375 the same thing. Each thread is reported separately; to see
1376 statistics for all clones as a whole, use group_reporting in
1377 conjunction with new_group.
1378
1379group_reporting It may sometimes be interesting to display statistics for
1380 groups of jobs as a whole instead of for each individual job.
1381 This is especially true if 'numjobs' is used; looking at
1382 individual thread/process output quickly becomes unwieldy.
1383 To see the final report per-group instead of per-job, use
1384 'group_reporting'. Jobs in a file will be part of the same
		reporting group, unless separated by a stonewall, or by
1386 using 'new_group'.
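
	For instance, the sketch below runs four identical clones and reports
	them as a single group (the values are illustrative):

	[readers]
	rw=randread
	bs=4k
	size=1g
	numjobs=4
	group_reporting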
1387
thread		fio defaults to forking jobs. If this option is given,
		fio will use pthread_create(3) to create threads
		instead.
1391
1392zonesize=int Divide a file into zones of the specified size. See zoneskip.
1393
1394zoneskip=int Skip the specified number of bytes when zonesize data has
1395 been read. The two zone options can be used to only do
1396 io on zones of a file.
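
	As a sketch, the following reads 64MB of data and then skips ahead
	192MB, repeatedly, i.e. it samples the start of every 256MB region of
	the file (the sizes are arbitrary):

	rw=read
	zonesize=64m
	zoneskip=192m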
1397
1398write_iolog=str Write the issued io patterns to the specified file. See
1399 read_iolog. Specify a separate file for each job, otherwise
1400 the iologs will be interspersed and the file may be corrupt.
1401
1402read_iolog=str Open an iolog with the specified file name and replay the
1403 io patterns it contains. This can be used to store a
1404 workload and replay it sometime later. The iolog given
1405 may also be a blktrace binary file, which allows fio
1406 to replay a workload captured by blktrace. See blktrace
1407 for how to capture such logging data. For blktrace replay,
1408 the file needs to be turned into a blkparse binary data
1409 file first (blkparse <device> -o /dev/null -d file_for_fio.bin).
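
	A hedged sketch of capture followed by replay from the command line;
	the first invocation records its io pattern, the second replays it
	(the job names and log file name are placeholders):

	$ fio --name=capture --rw=randrw --size=128m --write_iolog=capture.log
	$ fio --name=replay --read_iolog=capture.log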
1410
replay_no_stall=int When replaying I/O with read_iolog the default behavior
		is to attempt to respect the time stamps within the log and
		replay them with the appropriate delay between IOs. By
1414 setting this variable fio will not respect the timestamps and
1415 attempt to replay them as fast as possible while still
1416 respecting ordering. The result is the same I/O pattern to a
1417 given device, but different timings.
1418
replay_redirect=str While replaying I/O patterns using read_iolog the
		default behavior is to replay the IOs onto the major/minor
		device that each IO was recorded from. This is sometimes
1422 undesirable because on a different machine those major/minor
1423 numbers can map to a different device. Changing hardware on
1424 the same system can also result in a different major/minor
		mapping. Replay_redirect causes all IOs to be replayed onto
		the single specified device regardless of the device they
		were recorded from. i.e. replay_redirect=/dev/sdc would cause
		all IO in the blktrace to be replayed onto /dev/sdc. This
		means multiple devices will be replayed onto a single device,
		if the trace contains multiple devices. If you want multiple
		devices to be replayed concurrently to multiple redirected
		devices, you must blkparse your trace into separate traces
		and replay them with independent fio invocations.
		Unfortunately this also breaks the strict time ordering
		between multiple device accesses.
1435
1436replay_align=int Force alignment of IO offsets and lengths in a trace
1437 to this power of 2 value.
1438
1439replay_scale=int Scale sector offsets down by this factor when
1440 replaying traces.
1441
1442write_bw_log=str If given, write a bandwidth log of the jobs in this job
1443 file. Can be used to store data of the bandwidth of the
1444 jobs in their lifetime. The included fio_generate_plots
1445 script uses gnuplot to turn these text files into nice
1446 graphs. See write_lat_log for behaviour of given
1447 filename. For this option, the suffix is _bw.x.log, where
1448 x is the index of the job (1..N, where N is the number of
1449 jobs).
1450
1451write_lat_log=str Same as write_bw_log, except that this option stores io
1452 submission, completion, and total latencies instead. If no
1453 filename is given with this option, the default filename of
1454 "jobname_type.log" is used. Even if the filename is given,
1455 fio will still append the type of log. So if one specifies
1456
1457 write_lat_log=foo
1458
1459 The actual log names will be foo_slat.x.log, foo_clat.x.log,
1460 and foo_lat.x.log, where x is the index of the job (1..N,
1461 where N is the number of jobs). This helps fio_generate_plot
		where N is the number of jobs). This helps fio_generate_plots
		find the logs automatically.
1464write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
1465 given with this option, the default filename of
		"jobname_type.x.log" is used, where x is the index of the job
1467 (1..N, where N is the number of jobs). Even if the filename
1468 is given, fio will still append the type of log.
1469
1470log_avg_msec=int By default, fio will log an entry in the iops, latency,
1471 or bw log for every IO that completes. When writing to the
1472 disk log, that can quickly grow to a very large size. Setting
		this option makes fio average each log entry over the
1474 specified period of time, reducing the resolution of the log.
1475 Defaults to 0.
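
	For example, to log bandwidth and latency with one averaged entry per
	second rather than one entry per IO, a job might include the sketch
	below (the 'mylog' prefix is arbitrary):

	write_bw_log=mylog
	write_lat_log=mylog
	log_avg_msec=1000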
1476
1477log_offset=int If this is set, the iolog options will include the byte
1478 offset for the IO entry as well as the other data values.
1479
1480log_compression=int If this is set, fio will compress the IO logs as
1481 it goes, to keep the memory footprint lower. When a log
1482 reaches the specified size, that chunk is removed and
1483 compressed in the background. Given that IO logs are
1484 fairly highly compressible, this yields a nice memory
1485 savings for longer runs. The downside is that the
1486 compression will consume some background CPU cycles, so
1487 it may impact the run. This, however, is also true if
1488 the logging ends up consuming most of the system memory.
1489 So pick your poison. The IO logs are saved normally at the
1490 end of a run, by decompressing the chunks and storing them
1491 in the specified log file. This feature depends on the
1492 availability of zlib.
1493
1494log_store_compressed=bool If set, and log_compression is also set,
1495 fio will store the log files in a compressed format. They
1496 can be decompressed with fio, using the --inflate-log
1497 command line parameter. The files will be stored with a
1498 .fz suffix.
1499
1500block_error_percentiles=bool If set, record errors in trim block-sized
1501 units from writes and trims and output a histogram of
1502 how many trims it took to get to errors, and what kind
1503 of error was encountered.
1504
1505lockmem=int Pin down the specified amount of memory with mlock(2). Can
1506 potentially be used instead of removing memory or booting
1507 with less memory to simulate a smaller amount of memory.
1508 The amount specified is per worker.
1509
exec_prerun=str	Before running this job, issue the command specified
		through system(3). Output is redirected to a file called
		jobname.prerun.txt.

exec_postrun=str	After the job completes, issue the command specified
		through system(3). Output is redirected to a file called
		jobname.postrun.txt.
1517
1518ioscheduler=str Attempt to switch the device hosting the file to the specified
1519 io scheduler before running.
1520
1521disk_util=bool Generate disk utilization statistics, if the platform
1522 supports it. Defaults to on.
1523
1524disable_lat=bool Disable measurements of total latency numbers. Useful
1525 only for cutting back the number of calls to gettimeofday,
1526 as that does impact performance at really high IOPS rates.
1527 Note that to really get rid of a large amount of these
1528 calls, this option must be used with disable_slat and
1529 disable_bw as well.
1530
1531disable_clat=bool Disable measurements of completion latency numbers. See
1532 disable_lat.
1533
disable_slat=bool Disable measurements of submission latency numbers. See
		disable_lat.
1536
1537disable_bw=bool Disable measurements of throughput/bandwidth numbers. See
1538 disable_lat.
1539
1540clat_percentiles=bool Enable the reporting of percentiles of
1541 completion latencies.
1542
1543percentile_list=float_list Overwrite the default list of percentiles
1544 for completion latencies and the block error histogram.
		Each number is a floating point number in the range (0,100],
1546 and the maximum length of the list is 20. Use ':'
1547 to separate the numbers, and list the numbers in ascending
1548 order. For example, --percentile_list=99.5:99.9 will cause
1549 fio to report the values of completion latency below which
1550 99.5% and 99.9% of the observed latencies fell, respectively.
1551
1552clocksource=str Use the given clocksource as the base of timing. The
1553 supported options are:
1554
1555 gettimeofday gettimeofday(2)
1556
1557 clock_gettime clock_gettime(2)
1558
1559 cpu Internal CPU clock source
1560
1561 cpu is the preferred clocksource if it is reliable, as it
1562 is very fast (and fio is heavy on time calls). Fio will
1563 automatically use this clocksource if it's supported and
1564 considered reliable on the system it is running on, unless
		another clocksource is specifically set. For x86/x86-64 CPUs,
		this means the Invariant TSC feature must be supported.
1567
1568gtod_reduce=bool Enable all of the gettimeofday() reducing options
1569 (disable_clat, disable_slat, disable_bw) plus reduce
1570 precision of the timeout somewhat to really shrink
1571 the gettimeofday() call count. With this option enabled,
1572 we only do about 0.4% of the gtod() calls we would have
1573 done if all time keeping was enabled.
1574
1575gtod_cpu=int Sometimes it's cheaper to dedicate a single thread of
1576 execution to just getting the current time. Fio (and
1577 databases, for instance) are very intensive on gettimeofday()
1578 calls. With this option, you can set one CPU aside for
1579 doing nothing but logging current time to a shared memory
1580 location. Then the other threads/processes that run IO
1581 workloads need only copy that segment, instead of entering
1582 the kernel with a gettimeofday() call. The CPU set aside
1583 for doing these time calls will be excluded from other
1584 uses. Fio will manually clear it from the CPU mask of other
1585 jobs.
1586
1587continue_on_error=str Normally fio will exit the job on the first observed
1588 failure. If this option is set, fio will continue the job when
1589 there is a 'non-fatal error' (EIO or EILSEQ) until the runtime
1590 is exceeded or the I/O size specified is completed. If this
1591 option is used, there are two more stats that are appended,
1592 the total error count and the first error. The error field
1593 given in the stats is the first error that was hit during the
1594 run.
1595
1596 The allowed values are:
1597
1598 none Exit on any IO or verify errors.
1599
1600 read Continue on read errors, exit on all others.
1601
1602 write Continue on write errors, exit on all others.
1603
1604 io Continue on any IO error, exit on all others.
1605
1606 verify Continue on verify errors, exit on all others.
1607
1608 all Continue on all errors.
1609
1610 0 Backward-compatible alias for 'none'.
1611
1612 1 Backward-compatible alias for 'all'.
1613
ignore_error=str Sometimes you want to ignore some errors during a test;
		in that case you can specify an error list for each error
		type.
		ignore_error=READ_ERR_LIST,WRITE_ERR_LIST,VERIFY_ERR_LIST
		Errors for a given error type are separated with ':'. An
		error may be a symbol ('ENOSPC', 'ENOMEM') or an integer.
		Example:
			ignore_error=EAGAIN,ENOSPC:122
		This option will ignore EAGAIN from READ, and ENOSPC and
		122 (EDQUOT) from WRITE.
1623
error_dump=bool	If set, dump every error even if it is non-fatal; this is
		true by default. If disabled, only fatal errors will be
		dumped.
1626
1627cgroup=str Add job to this control group. If it doesn't exist, it will
1628 be created. The system must have a mounted cgroup blkio
1629 mount point for this to work. If your system doesn't have it
1630 mounted, you can do so with:
1631
1632 # mount -t cgroup -o blkio none /cgroup
1633
1634cgroup_weight=int Set the weight of the cgroup to this value. See
1635 the documentation that comes with the kernel, allowed values
1636 are in the range of 100..1000.
1637
1638cgroup_nodelete=bool Normally fio will delete the cgroups it has created after
1639 the job completion. To override this behavior and to leave
1640 cgroups around after the job completion, set cgroup_nodelete=1.
1641 This can be useful if one wants to inspect various cgroup
1642 files after job completion. Default: false
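
	Putting these together, a job constrained through the blkio
	controller could be sketched as follows (the group name and weight
	are made up):

	[throttled-job]
	rw=randread
	cgroup=fiotest
	cgroup_weight=500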
1643
1644uid=int Instead of running as the invoking user, set the user ID to
1645 this value before the thread/process does any work.
1646
1647gid=int Set group ID, see uid.
1648
1649flow_id=int The ID of the flow. If not specified, it defaults to being a
1650 global flow. See flow.
1651
1652flow=int Weight in token-based flow control. If this value is used, then
1653 there is a 'flow counter' which is used to regulate the
1654 proportion of activity between two or more jobs. fio attempts
1655 to keep this flow counter near zero. The 'flow' parameter
		stands for how much should be added to or subtracted from
		the flow counter on each iteration of the main I/O loop.
		That is, if
1658 one job has flow=8 and another job has flow=-1, then there
1659 will be a roughly 1:8 ratio in how much one runs vs the other.
1660
1661flow_watermark=int The maximum value that the absolute value of the flow
1662 counter is allowed to reach before the job must wait for a
1663 lower value of the counter.
1664
flow_sleep=int	The period of time, in microseconds, to wait after the flow
		watermark has been exceeded before retrying operations.
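
	Mirroring the 1:8 example above, two jobs sharing the default global
	flow could be sketched as:

	[job-a]
	rw=read
	flow=8

	[job-b]
	rw=write
	flow=-1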
1667
1668In addition, there are some parameters which are only valid when a specific
1669ioengine is in use. These are used identically to normal parameters, with the
1670caveat that when used on the command line, they must come after the ioengine
1671that defines them is selected.
1672
1673[libaio] userspace_reap Normally, with the libaio engine in use, fio will use
1674 the io_getevents system call to reap newly returned events.
1675 With this flag turned on, the AIO ring will be read directly
1676 from user-space to reap events. The reaping mode is only
1677 enabled when polling for a minimum of 0 events (eg when
1678 iodepth_batch_complete=0).
1679
1680[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
1681
1682[cpu] cpuchunks=int Split the load into cycles of the given time. In
1683 microseconds.
1684
1685[cpu] exit_on_io_done=bool Detect when IO threads are done, then exit.
1686
1687[netsplice] hostname=str
1688[net] hostname=str The host name or IP address to use for TCP or UDP based IO.
1689 If the job is a TCP listener or UDP reader, the hostname is not
1690 used and must be omitted unless it is a valid UDP multicast
1691 address.
1692
1693[netsplice] port=int
[net] port=int	The TCP or UDP port to bind to or connect to. If this is used
		with numjobs to spawn multiple instances of the same job type,
		then this will be the starting port number since fio will use
		a range of ports.
1697
1698[netsplice] interface=str
1699[net] interface=str The IP address of the network interface used to send or
1700 receive UDP multicast
1701
1702[netsplice] ttl=int
1703[net] ttl=int Time-to-live value for outgoing UDP multicast packets.
1704 Default: 1
1705
1706[netsplice] nodelay=bool
1707[net] nodelay=bool Set TCP_NODELAY on TCP connections.
1708
1709[netsplice] protocol=str
1710[netsplice] proto=str
1711[net] protocol=str
1712[net] proto=str The network protocol to use. Accepted values are:
1713
1714 tcp Transmission control protocol
1715 tcpv6 Transmission control protocol V6
1716 udp User datagram protocol
1717 udpv6 User datagram protocol V6
1718 unix UNIX domain socket
1719
1720 When the protocol is TCP or UDP, the port must also be given,
1721 as well as the hostname if the job is a TCP listener or UDP
1722 reader. For unix sockets, the normal filename option should be
1723 used and the port is invalid.
1724
1725[net] listen For TCP network connections, tell fio to listen for incoming
1726 connections rather than initiating an outgoing connection. The
1727 hostname must be omitted if this option is used.
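
	A minimal TCP sketch of these options, with a listening reader on one
	side and a connecting writer on the other (the host name, port and
	sizes are placeholders):

	; receiving side
	[listener]
	ioengine=net
	protocol=tcp
	port=8888
	listen
	rw=read
	bs=64k
	size=100m

	; sending side, run as a separate fio invocation
	[sender]
	ioengine=net
	protocol=tcp
	hostname=host.example.com
	port=8888
	rw=write
	bs=64k
	size=100m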
1728
[net] pingpong	Normally a network writer will just continue writing data, and
		a network reader will just consume packets. If pingpong=1
1731 is set, a writer will send its normal payload to the reader,
1732 then wait for the reader to send the same payload back. This
1733 allows fio to measure network latencies. The submission
1734 and completion latencies then measure local time spent
1735 sending or receiving, and the completion latency measures
1736 how long it took for the other end to receive and send back.
1737 For UDP multicast traffic pingpong=1 should only be set for a
1738 single reader when multiple readers are listening to the same
1739 address.
1740
1741[net] window_size Set the desired socket buffer size for the connection.
1742
1743[net] mss Set the TCP maximum segment size (TCP_MAXSEG).
1744
[e4defrag] donorname=str
		File will be used as a block donor (swap extents between files)
[e4defrag] inplace=int
		Configure donor file blocks allocation strategy
		0 (default): Preallocate donor's file on init
		1: Allocate space immediately inside defragment event,
		   and free right after event
1752
1753[mtd] skip_bad=bool Skip operations against known bad blocks.
1754
1755
17566.0 Interpreting the output
1757---------------------------
1758
1759fio spits out a lot of output. While running, fio will display the
1760status of the jobs created. An example of that would be:
1761
1762Threads: 1: [_r] [24.8% done] [ 13509/ 8334 kb/s] [eta 00h:01m:31s]
1763
1764The characters inside the square brackets denote the current status of
1765each thread. The possible values (in typical life cycle order) are:
1766
1767Idle Run
1768---- ---
1769P Thread setup, but not started.
1770C Thread created.
1771I Thread initialized, waiting or generating necessary data.
1772 p Thread running pre-reading file(s).
1773 R Running, doing sequential reads.
1774 r Running, doing random reads.
1775 W Running, doing sequential writes.
1776 w Running, doing random writes.
1777 M Running, doing mixed sequential reads/writes.
1778 m Running, doing mixed random reads/writes.
1779 F Running, currently waiting for fsync()
1780 f Running, finishing up (writing IO logs, etc)
1781 V Running, doing verification of written data.
1782E Thread exited, not reaped by main thread yet.
_		Thread reaped.
1784X Thread reaped, exited with an error.
1785K Thread reaped, exited due to signal.
1786
Fio will condense the thread string so as not to take up more space on the
command line than is needed. For instance, if you have 10 readers and 10
1789writers running, the output would look like this:
1790
1791Jobs: 20 (f=20): [R(10),W(10)] [4.0% done] [2103MB/0KB/0KB /s] [538K/0/0 iops] [eta 57m:36s]
1792
1793Fio will still maintain the ordering, though. So the above means that jobs
17941..10 are readers, and 11..20 are writers.
1795
1796The other values are fairly self explanatory - number of threads
1797currently running and doing io, rate of io since last check (read speed
1798listed first, then write speed), and the estimated completion percentage
1799and time for the running group. It's impossible to estimate runtime of
1800the following groups (if any). Note that the string is displayed in order,
1801so it's possible to tell which of the jobs are currently doing what. The
1802first character is the first job defined in the job file, and so forth.
1803
1804When fio is done (or interrupted by ctrl-c), it will show the data for
1805each thread, group of threads, and disks in that order. For each data
1806direction, the output looks like:
1807
1808Client1 (g=0): err= 0:
1809 write: io= 32MB, bw= 666KB/s, iops=89 , runt= 50320msec
1810 slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92
1811 clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82
1812 bw (KB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
1813 cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
1814 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
1815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
1816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
1817 issued r/w: total=0/32768, short=0/0
1818 lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%,
1819 lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0%
1820
1821The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
1823they denote:
1824
1825io= Number of megabytes io performed
1826bw= Average bandwidth rate
1827iops= Average IOs performed per second
1828runt= The runtime of that thread
1829 slat= Submission latency (avg being the average, stdev being the
1830 standard deviation). This is the time it took to submit
1831 the io. For sync io, the slat is really the completion
1832 latency, since queue/complete is one operation there. This
1833 value can be in milliseconds or microseconds, fio will choose
1834 the most appropriate base and print that. In the example
1835 above, milliseconds is the best scale. Note: in --minimal mode
1836 latencies are always expressed in microseconds.
1837 clat= Completion latency. Same names as slat, this denotes the
1838 time from submission to completion of the io pieces. For
1839 sync io, clat will usually be equal (or very close) to 0,
1840 as the time from submit to complete is basically just
1841 CPU time (io has already been done, see slat explanation).
1842 bw= Bandwidth. Same names as the xlat stats, but also includes
1843 an approximate percentage of total aggregate bandwidth
1844 this thread received in this group. This last value is
1845 only really useful if the threads in this group are on the
1846 same disk, since they are then competing for disk access.
1847cpu= CPU usage. User and system time, along with the number
1848 of context switches this thread went through, usage of
1849 system and user time, and finally the number of major
1850 and minor page faults.
1851IO depths= The distribution of io depths over the job life time. The
1852 numbers are divided into powers of 2, so for example the
1853 16= entries includes depths up to that value but higher
1854 than the previous entry. In other words, it covers the
1855 range from 16 to 31.
IO submit=	How many pieces of IO were submitted in a single submit
		call. Each entry denotes that amount and below, until
		the previous entry - eg, 8=100% means that we submitted
		anywhere in between 5-8 ios per submit call.
1860IO complete= Like the above submit number, but for completions instead.
1861IO issued= The number of read/write requests issued, and how many
1862 of them were short.
1863IO latencies= The distribution of IO completion latencies. This is the
1864 time from when IO leaves fio and when it gets completed.
1865 The numbers follow the same pattern as the IO depths,
1866 meaning that 2=1.6% means that 1.6% of the IO completed
1867 within 2 msecs, 20=12.8% means that 12.8% of the IO
1868 took more than 10 msecs, but less than (or equal to) 20 msecs.
1869
1870After each client has been listed, the group statistics are printed. They
1871will look like this:
1872
1873Run status group 0 (all jobs):
1874 READ: io=64MB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
1875 WRITE: io=64MB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
1876
1877For each data direction, it prints:
1878
1879io= Number of megabytes io performed.
1880aggrb= Aggregate bandwidth of threads in this group.
1881minb= The minimum average bandwidth a thread saw.
1882maxb= The maximum average bandwidth a thread saw.
1883mint= The smallest runtime of the threads in that group.
1884maxt= The longest runtime of the threads in that group.
1885
1886And finally, the disk statistics are printed. They will look like this:
1887
1888Disk stats (read/write):
1889 sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%
1890
1891Each value is printed for both reads and writes, with reads first. The
1892numbers denote:
1893
1894ios= Number of ios performed by all groups.
merge=		Number of merges performed by the io scheduler.
1896ticks= Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
1898util= The disk utilization. A value of 100% means we kept the disk
1899 busy constantly, 50% would be a disk idling half of the time.
1900
1901It is also possible to get fio to dump the current output while it is
1902running, without terminating the job. To do that, send fio the USR1 signal.
1903You can also get regularly timed dumps by using the --status-interval
1904parameter, or by creating a file in /tmp named fio-dump-status. If fio
1905sees this file, it will unlink it and dump the current output status.
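
For example, either of the following, run from another shell, will make a
running fio dump its current status (assuming pidof is available):

$ kill -USR1 $(pidof fio)
$ touch /tmp/fio-dump-status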
1906
1907
19087.0 Terse output
1909----------------
1910
1911For scripted usage where you typically want to generate tables or graphs
1912of the results, fio can output the results in a semicolon separated format.
1913The format is one long line of values, such as:
1914
19152;card0;0;0;7139336;121836;60004;1;10109;27.932460;116.933948;220;126861;3495.446807;1085.368601;226;126864;3523.635629;1089.012448;24063;99944;50.275485%;59818.274627;5540.657370;7155060;122104;60004;1;8338;29.086342;117.839068;388;128077;5032.488518;1234.785715;391;128085;5061.839412;1236.909129;23436;100928;50.287926%;59964.832030;5644.844189;14.595833%;19.394167%;123706;0;7313;0.1%;0.1%;0.1%;0.1%;0.1%;0.1%;100.0%;0.00%;0.00%;0.00%;0.00%;0.00%;0.00%;0.01%;0.02%;0.05%;0.16%;6.04%;40.40%;52.68%;0.64%;0.01%;0.00%;0.01%;0.00%;0.00%;0.00%;0.00%;0.00%
1916A description of this job goes here.
1917
1918The job description (if provided) follows on a second line.
1919
1920To enable terse output, use the --minimal command line option. The first
1921value is the version of the terse output format. If the output has to
1922be changed for some reason, this number will be incremented by 1 to
1923signify that change.
1924
1925Split up, the format is as follows:
1926
1927 terse version, fio version, jobname, groupid, error
1928 READ status:
1929 Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
1930 Submission latency: min, max, mean, deviation (usec)
1931 Completion latency: min, max, mean, deviation (usec)
1932 Completion latency percentiles: 20 fields (see below)
1933 Total latency: min, max, mean, deviation (usec)
1934 Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
1935 WRITE status:
1936 Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
1937 Submission latency: min, max, mean, deviation (usec)
1938 Completion latency: min, max, mean, deviation (usec)
1939 Completion latency percentiles: 20 fields (see below)
1940 Total latency: min, max, mean, deviation (usec)
1941 Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
1942 CPU usage: user, system, context switches, major faults, minor faults
1943 IO depths: <=1, 2, 4, 8, 16, 32, >=64
1944 IO latencies microseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
1945 IO latencies milliseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, 2000, >=2000
1946 Disk utilization: Disk name, Read ios, write ios,
1947 Read merges, write merges,
1948 Read ticks, write ticks,
1949 Time spent in queue, disk utilization percentage
1950 Additional Info (dependent on continue_on_error, default off): total # errors, first error code
1951
1952 Additional Info (dependent on description being set): Text description
1953
1954Completion latency percentiles can be a grouping of up to 20 sets, so
1955for the terse output fio writes all of them. Each field will look like this:
1956
1957 1.00%=6112
1958
1959which is the Xth percentile, and the usec latency associated with it.
1960
1961For disk utilization, all disks used by fio are shown. So for each disk
1962there will be a disk utilization section.
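
Given the field order listed above, a rough extraction sketch could look like
this; the field positions are inferred from that list and should be
double-checked against the terse version in use:

$ fio --minimal job.fio | awk -F';' '{print "job="$3, "read_kb="$6, "read_bw="$7}'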
1963
1964
19658.0 Trace file format
1966---------------------
There are two trace file formats that you can encounter. The older (v1) format
is unsupported since version 1.20-rc3 (March 2008). It will still be described
below in case you get an old trace and want to understand it.
1970
1971In any case the trace is a simple text file with a single action per line.
1972
1973
19748.1 Trace file format v1
1975------------------------
1976Each line represents a single io action in the following format:
1977
1978rw, offset, length
1979
1980where rw=0/1 for read/write, and the offset and length entries being in bytes.
1981
This format is not supported in Fio versions >= 1.20-rc3.
1983
1984
19858.2 Trace file format v2
1986------------------------
1987The second version of the trace file format was added in Fio version 1.17.
It allows one to access more than one file per trace and has a bigger set of
possible file actions.
1990
1991The first line of the trace file has to be:
1992
1993fio version 2 iolog
1994
1995Following this can be lines in two different formats, which are described below.
1996
1997The file management format:
1998
1999filename action
2000
2001The filename is given as an absolute path. The action can be one of these:
2002
2003add Add the given filename to the trace
2004open Open the file with the given filename. The filename has to have
2005 been added with the add action before.
2006close Close the file with the given filename. The file has to have been
2007 opened before.
2008
2009
2010The file io action format:
2011
2012filename action offset length
2013
2014The filename is given as an absolute path, and has to have been added and opened
2015before it can be used with this format. The offset and length are given in
2016bytes. The action can be one of these:
2017
2018wait Wait for 'offset' microseconds. Everything below 100 is discarded.
2019 The time is relative to the previous wait statement.
2020read Read 'length' bytes beginning from 'offset'
2021write Write 'length' bytes beginning from 'offset'
2022sync fsync() the file
2023datasync fdatasync() the file
2024trim trim the given file from the given 'offset' for 'length' bytes
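
Putting the two formats together, a small, hedged example of a v2 trace might
look like this (the path and numbers are arbitrary):

fio version 2 iolog
/tmp/testfile add
/tmp/testfile open
/tmp/testfile write 0 4096
/tmp/testfile write 4096 4096
/tmp/testfile read 0 4096
/tmp/testfile close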
2025
2026
20279.0 CPU idleness profiling
2028--------------------------
In some cases, we want to understand CPU overhead in a test. For example,
when testing patches, we may want to know whether they reduce CPU usage.
2031fio implements a balloon approach to create a thread per CPU that runs at
2032idle priority, meaning that it only runs when nobody else needs the cpu.
2033By measuring the amount of work completed by the thread, idleness of each
2034CPU can be derived accordingly.
2035
A unit of work is defined as touching a full page of unsigned characters. The
mean and standard deviation of the time to complete a unit of work are reported
in the "unit work" section. Options can be chosen to report detailed percpu
idleness or overall system idleness by aggregating percpu stats.
2040
2041
204210.0 Verification and triggers
2043------------------------------
2044Fio is usually run in one of two ways, when data verification is done. The
2045first is a normal write job of some sort with verify enabled. When the
2046write phase has completed, fio switches to reads and verifies everything
2047it wrote. The second model is running just the write phase, and then later
2048on running the same job (but with reads instead of writes) to repeat the
2049same IO patterns and verify the contents. Both of these methods depend
2050on the write phase being completed, as fio otherwise has no idea how much
2051data was written.
2052
2053With verification triggers, fio supports dumping the current write state
2054to local files. Then a subsequent read verify workload can load this state
2055and know exactly where to stop. This is useful for testing cases where
2056power is cut to a server in a managed fashion, for instance.
2057
2058A verification trigger consists of two things:
2059
20601) Storing the write state of each job
20612) Executing a trigger command
2062
2063The write state is relatively small, on the order of hundreds of bytes
2064to single kilobytes. It contains information on the number of completions
2065done, the last X completions, etc.
2066
2067A trigger is invoked either through creation ('touch') of a specified
2068file in the system, or through a timeout setting. If fio is run with
2069--trigger-file=/tmp/trigger-file, then it will continually check for
2070the existence of /tmp/trigger-file. When it sees this file, it will
2071fire off the trigger (thus saving state, and executing the trigger
2072command).
2073
2074For client/server runs, there's both a local and remote trigger. If
2075fio is running as a server backend, it will send the job states back
2076to the client for safe storage, then execute the remote trigger, if
2077specified. If a local trigger is specified, the server will still send
2078back the write state, but the client will then execute the trigger.
2079
208010.1 Verification trigger example
2081---------------------------------
Let's say we want to run a powercut test on the remote machine 'server'.
Our write workload is in write-test.fio. We want to cut power to 'server'
at some point during the run, and we'll run this test from the safety
of our local machine, 'localbox'. On the server, we'll start the fio
2086backend normally:
2087
2088server# fio --server
2089
2090and on the client, we'll fire off the workload:
2091
localbox$ fio --client=server --trigger-file=/tmp/my-trigger --trigger-remote="bash -c \"echo b > /proc/sysrq-trigger\""
2093
2094We set /tmp/my-trigger as the trigger file, and we tell fio to execute
2095
2096echo b > /proc/sysrq-trigger
2097
2098on the server once it has received the trigger and sent us the write
2099state. This will work, but it's not _really_ cutting power to the server,
2100it's merely abruptly rebooting it. If we have a remote way of cutting
2101power to the server through IPMI or similar, we could do that through
a local trigger command instead. Let's assume we have a script that does
2103IPMI reboot of a given hostname, ipmi-reboot. On localbox, we could
2104then have run fio with a local trigger instead:
2105
2106localbox$ fio --client=server --trigger-file=/tmp/my-trigger --trigger="ipmi-reboot server"
2107
2108For this case, fio would wait for the server to send us the write state,
2109then execute 'ipmi-reboot server' when that happened.
2110
10.2 Loading verify state
-------------------------
To load stored write state, the read verification job file must contain
2114the verify_state_load option. If that is set, fio will load the previously
2115stored state. For a local fio run this is done by loading the files directly,
2116and on a client/server run, the server backend will ask the client to send
2117the files over and load them from there.
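
As a sketch, the two phases could be split into separate job files; it is
assumed here that the verify job keeps the same job name and file so that the
saved state matches up (all names and sizes are placeholders):

; write-phase.fio
[disk-check]
rw=write
verify=crc32c
filename=/path/to/test.file
size=1g

; verify-phase.fio, run later with the same job name
[disk-check]
rw=read
verify=crc32c
verify_state_load=1
filename=/path/to/test.file
size=1g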