Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output


1.0 Overview and history
------------------------
fio was originally written to save me the hassle of writing special test
case programs when I wanted to test a specific workload, either for
performance reasons or to find/reproduce a bug. The process of writing
such a test app can be tiresome, especially if you have to do it often.
Hence I needed a tool that would be able to simulate a given io workload
without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number
of processes or threads involved, and they can each be using their own
way of generating io. You could have someone dirtying large amounts of
memory in a memory mapped file, or maybe several threads issuing
reads using asynchronous io. fio needed to be flexible enough to
simulate both of these cases, and many more.

2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical contents of the job file
is a global section defining shared parameters, and one or more job
sections describing the jobs involved. When run, fio parses this file
and sets everything up as described. If we break down a job from top to
bottom, it contains the following basic parameters:

	IO type		Defines the io pattern issued to the file(s).
			We may only be reading sequentially from the
			file(s), or we may be writing randomly. Or even
			mixing reads and writes, sequentially or randomly.

	Block size	In how large chunks are we issuing io? This may be
			a single value, or it may describe a range of
			block sizes.

	IO size		How much data are we going to be reading/writing?

	IO engine	How do we issue io? We could be memory mapping the
			file, we could be using regular read/write, we
			could be using splice, async io, syslet, or even
			SG (SCSI generic sg).

	IO depth	If the io engine is async, how large a queuing
			depth do we want to maintain?

	IO mode		Should we be doing buffered io, or direct/raw io?

	Num files	How many files are we spreading the workload over?

	Num threads	How many threads or processes should we spread
			this workload over?

The above are the basic parameters defined for a workload. In addition,
there's a multitude of parameters that modify other aspects of how this
job behaves.
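
To tie these together, here is a minimal sketch of a job file touching
each of the basic parameters above (a hypothetical job, with made-up
values, not one of the examples shipped with fio):

; -- start job file --
[basic-example]
; IO type: mixed random reads and writes
rw=randrw
; Block size and total IO size
bs=4k
size=256m
; IO engine and IO depth
ioengine=libaio
iodepth=8
; direct io rather than buffered
direct=1
; number of files, and number of processes
nrfiles=2
numjobs=2
; -- end job file --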


3.0 Running fio
---------------
See the README file for command line parameters, there are only a few
of them.

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters:

$ fio job_file

and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line, fio will serialize the running
of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.

If the job file contains only one job, you may as well just give the
parameters on the command line. The command line parameters are identical
to the job parameters, with a few extra that control global parameters
(see README). For example, for the job file parameter iodepth=2, the
mirror command line option would be --iodepth 2 or --iodepth=2. You can
also use the command line for giving more than one job entry. For each
--name option that fio sees, it will start a new job with that name.
Command line entries following a --name entry will apply to that job,
until there are no more entries or a new --name entry is seen. This is
similar to the job file options, where each option applies to the current
job until a new [] job entry is seen.

fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.


4.0 Job file format
-------------------
As previously described, fio accepts one or more job files describing
what it is supposed to do. The job file format is the classic ini file,
where the names enclosed in [] brackets define the job name. You are free
to use any ascii name you want, except 'global' which has special meaning.
A global section sets defaults for the jobs described in that file. A job
may override a global section parameter, and a job file may even have
several global sections if so desired. A job is only affected by a global
section residing above it. If the first character in a line is a ';' or a
'#', the entire line is discarded as a comment.

So let's look at a really simple job file that defines two processes, each
randomly reading from a 128MiB file.

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --

As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
makes up a filename for each of the jobs as it sees fit. On the command
line, this job would look as follows:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2


Let's look at an example that has a number of processes writing randomly
to files.

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

; -- end job file --

Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
increased the buffer size used to 32KiB and set numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
to their own 64MiB file. Instead of using the above job file, you could
have given the parameters on the command line. For this case, you would
specify:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4

fio also supports environment variable expansion in job files. Any
substring of the form "${VARNAME}" as part of an option value (in other
words, on the right of the `='), will be expanded to the value of the
environment variable called VARNAME. If no such environment variable
is defined, or VARNAME is the empty string, the empty string will be
substituted.

As an example, let's look at a sample fio invocation and job file:

$ SIZE=64m NUMJOBS=4 fio jobfile.fio

; -- start job file --
[random-writers]
rw=randwrite
size=${SIZE}
numjobs=${NUMJOBS}
; -- end job file --

This will expand to the following equivalent job file at runtime:

; -- start job file --
[random-writers]
rw=randwrite
size=64m
numjobs=4
; -- end job file --

fio ships with a few example job files, you can also look there for
inspiration.


5.0 Detailed list of parameters
-------------------------------

This section describes in detail each parameter associated with a job.
Some parameters take an option of a given type, such as an integer or
a string. The following types are used:

str	String. This is a sequence of alpha characters.
int	Integer. A whole number value, which can be negative. If prefixed
	with 0x, the integer is assumed to be of base 16 (hexadecimal).
time	Integer with possible time postfix. In seconds unless otherwise
	specified, use eg 10m for 10 minutes. Accepts s/m/h for seconds,
	minutes, and hours.
siint	SI integer. A whole number value, which may contain a postfix
	describing the base of the number. Accepted postfixes are k/m/g,
	meaning kilo, mega, and giga. So if you want to specify 4096,
	you could either write out '4096' or just give 4k. The postfixes
	signify base 2 values, so 1024 is 1k and 1024k is 1m and so on.
	If the option accepts an upper and lower range, use a colon ':'
	or minus '-' to separate such values. See irange.
bool	Boolean. Usually parsed as an integer, however only defined for
	true and false (1 and 0).
irange	Integer range with postfix. Allows value range to be given, such
	as 1024-4096. A colon may also be used as the separator, eg
	1k:4k. If the option allows two sets of ranges, they can be
	specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
	siint.

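As a quick illustration of these value types, consider this hypothetical
job (made-up values, chosen just to show one option of each type):

; -- start job file --
[types-example]
; time: run for 10 minutes
runtime=10m
; siint: 1 GiB total, using the base 2 'g' postfix
size=1g
; bool: 1 means true
direct=1
; irange: block sizes between 1 KiB and 4 KiB
bsrange=1k-4k
; -- end job file --
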
With the above in mind, here follows the complete list of fio job
parameters.

name=str	ASCII name of the job. This may be used to override the
		name printed by fio for this job. Otherwise the job
		name is used. On the command line this parameter has the
		special purpose of also signaling the start of a new
		job.

description=str	Text description of the job. Doesn't do anything except
		dump this text description when this job is run. It's
		not parsed.

directory=str	Prefix filenames with this directory. Used to place files
		in a different location than "./".

filename=str	Fio normally makes up a filename based on the job name,
		thread number, and file number. If you want to share
		files between threads in a job or several jobs, specify
		a filename for each of them to override the default. If
		the ioengine used is 'net', the filename is the host and
		port to connect to in the format of =host/port. If the
		ioengine is file based, you can specify a number of files
		by separating the names with a ':' colon. So if you wanted
		a job to open /dev/sda and /dev/sdb as the two working files,
		you would use filename=/dev/sda:/dev/sdb. '-' is a reserved
		name, meaning stdin or stdout. Which of the two depends
		on the read/write direction set.

opendir=str	Tell fio to recursively add any file it can find in this
		directory and down the file system tree.

lockfile=str	Fio defaults to not locking any files before it does
		IO to them. If a file or file descriptor is shared, fio
		can serialize IO to that file to make the end result
		consistent. This is usual for emulating real workloads that
		share files. The lock modes are:

			none		No locking. The default.
			exclusive	Only one thread/process may do IO,
					excluding all others.
			readwrite	Read-write locking on the file. Many
					readers may access the file at the
					same time, but writes get exclusive
					access.

		The option may be post-fixed with a lock batch number. If
		set, then each thread/process may do that amount of IOs to
		the file before giving up the lock. Since lock acquisition is
		expensive, batching the lock/unlocks will speed up IO.
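
For example, assuming the lock batch number is appended with a ':'
separator (the separator used by other postfixed options in this
document - this is an assumption, check your fio version), a
hypothetical job where four writers share one file might look like:

; -- start job file --
[shared-file-writers]
filename=/tmp/shared.dat
; exclusive locking, holding the lock for batches of 8 IOs
lockfile=exclusive:8
rw=randwrite
numjobs=4
; -- end job file --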

readwrite=str
rw=str		Type of io pattern. Accepted values are:

			read		Sequential reads
			write		Sequential writes
			randwrite	Random writes
			randread	Random reads
			rw		Sequential mixed reads and writes
			randrw		Random mixed reads and writes

		For the mixed io types, the default is to split them 50/50.
		For certain types of io the result may still be skewed a bit,
		since the speed may be different. It is possible to specify
		a number of IO's to do before getting a new offset - this
		is only useful for random IO, where fio would normally
		generate a new random offset for every IO. If you append
		eg 8 to randread, you would get a new random offset for
		every 8 IO's. The result would be a seek for only every 8
		IO's, instead of for every IO. Use rw=randread:8 to specify
		that.

randrepeat=bool	For random IO workloads, seed the generator in a predictable
		way so that results are repeatable across repetitions.

fadvise_hint=bool By default, fio will use fadvise() to advise the kernel
		on what IO patterns it is likely to issue. Sometimes you
		want to test specific IO patterns without telling the
		kernel about it, in which case you can disable this option.
		If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
		IO and POSIX_FADV_RANDOM for random IO.

size=siint	The total size of file io for this job. Fio will run until
		this many bytes have been transferred, unless runtime is
		limited by other options (such as 'runtime', for instance).
		Unless specific nr_files and filesize options are given,
		fio will divide this size between the available files
		specified by the job.

filesize=siint	Individual file sizes. May be a range, in which case fio
		will select sizes for files at random within the given range
		and limited to 'size' in total (if that is given). If not
		given, each created file is the same size.

fill_device=bool Sets size to something really large and waits for ENOSPC (no
		space left on device) as the terminating condition. Only makes
		sense with sequential write.

blocksize=siint
bs=siint	The block size used for the io units. Defaults to 4k. Values
		can be given for both reads and writes. If a single siint is
		given, it will apply to both. If a second siint is specified
		after a comma, it will apply to writes only. In other words,
		the format is either bs=read_and_write or bs=read,write.
		bs=4k,8k will thus use 4k blocks for reads, and 8k blocks
		for writes. If you only wish to set the write size, you
		can do so by passing an empty read size - bs=,8k will set
		8k for writes and leave the read default value.

blocksize_range=irange
bsrange=irange	Instead of giving a single block size, specify a range
		and fio will mix the issued io block sizes. The issued
		io unit will always be a multiple of the minimum value
		given (also see bs_unaligned). Applies to both reads and
		writes, however a second range can be given after a comma.
		See bs=.

bssplit=str	Sometimes you want even finer grained control of the
		block sizes issued, not just an even split between them.
		This option allows you to weight various block sizes,
		so that you are able to define a specific amount of
		block sizes issued. The format for this option is:

			bssplit=blocksize/percentage:blocksize/percentage

		for as many block sizes as needed. So if you want to define
		a workload that has 50% 64k blocks, 10% 4k blocks, and
		40% 32k blocks, you would write:

			bssplit=4k/10:64k/50:32k/40

		Ordering does not matter. If the percentage is left blank,
		fio will fill in the remaining values evenly. So a bssplit
		option like this one:

			bssplit=4k/50:1k/:32k/

		would have 50% 4k ios, and 25% 1k and 32k ios. The percentages
		always add up to 100; if bssplit is given a range that adds
		up to more, it will error out.

blocksize_unaligned
bs_unaligned	If this option is given, any byte size value within bsrange
		may be used as a block size. This typically won't work with
		direct IO, as that normally requires sector alignment.

zero_buffers	If this option is given, fio will init the IO buffers to
		all zeroes. The default is to fill them with random data.

refill_buffers	If this option is given, fio will refill the IO buffers
		on every submit. The default is to only fill it at init
		time and reuse that data. Only makes sense if zero_buffers
		isn't specified, naturally. If data verification is enabled,
		refill_buffers is also automatically enabled.

nrfiles=int	Number of files to use for this job. Defaults to 1.

openfiles=int	Number of files to keep open at the same time. Defaults to
		the same as nrfiles, can be set smaller to limit the number
		of simultaneous opens.

file_service_type=str  Defines how fio decides which file from a job to
		service next. The following types are defined:

			random		Just choose a file at random.

			roundrobin	Round robin over open files. This
					is the default.

		The string can have a number appended, indicating how
		often to switch to a new file. So if option random:4 is
		given, fio will switch to a new random file after 4 ios
		have been issued.
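
Putting nrfiles and file_service_type together, here is a hypothetical
job (made-up values) that spreads random reads over eight files, picking
a new random file every 4 ios:

; -- start job file --
[multi-file-random]
nrfiles=8
; switch to a new random file after every 4 ios
file_service_type=random:4
rw=randread
size=128m
; -- end job file --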

ioengine=str	Defines how the job issues io to the file. The following
		types are defined:

			sync		Basic read(2) or write(2) io. lseek(2) is
					used to position the io location.

			psync		Basic pread(2) or pwrite(2) io.

			vsync		Basic readv(2) or writev(2) IO.

			libaio		Linux native asynchronous io.

			posixaio	glibc posix asynchronous io.

			solarisaio	Solaris native asynchronous io.

			mmap		File is memory mapped and data copied
					to/from using memcpy(3).

			splice		splice(2) is used to transfer the data and
					vmsplice(2) to transfer data from user
					space to the kernel.

			syslet-rw	Use the syslet system calls to make
					regular read/write async.

			sg		SCSI generic sg v3 io. May either be
					synchronous using the SG_IO ioctl, or if
					the target is an sg character device
					we use read(2) and write(2) for asynchronous
					io.

			null		Doesn't transfer any data, just pretends
					to. This is mainly used to exercise fio
					itself and for debugging/testing purposes.

			net		Transfer over the network to given host:port.
					'filename' must be set appropriately to
					filename=host/port regardless of send
					or receive, if the latter only the port
					argument is used.

			netsplice	Like net, but uses splice/vmsplice to
					map data and send/receive.

			cpuio		Doesn't transfer any data, but burns CPU
					cycles according to the cpuload= and
					cpucycle= options. Setting cpuload=85
					will cause that job to do nothing but burn
					85% of the CPU. In case of SMP machines,
					use numjobs=<no_of_cpu> to get desired CPU
					usage, as the cpuload only loads a single
					CPU at the desired rate.

			guasi		The GUASI IO engine is the Generic Userspace
					Asynchronous Syscall Interface approach
					to async IO. See

					http://www.xmailserver.org/guasi-lib.html

					for more info on GUASI.

			external	Prefix to specify loading an external
					IO engine object file. Append the engine
					filename, eg ioengine=external:/tmp/foo.o
					to load ioengine foo.o in /tmp.

iodepth=int	This defines how many io units to keep in flight against
		the file. The default is 1 for each file defined in this
		job, and it can be overridden with a larger value for higher
		concurrency.

iodepth_batch_submit=int
iodepth_batch=int This defines how many pieces of IO to submit at once.
		It defaults to 1, which means that we submit each IO
		as soon as it is available, but it can be raised to submit
		bigger batches of IO at a time.

iodepth_batch_complete=int This defines how many pieces of IO to retrieve
		at once. It defaults to 1, which means that we'll ask
		for a minimum of 1 IO in the retrieval process from
		the kernel. The IO retrieval will go on until we
		hit the limit set by iodepth_low. If this variable is
		set to 0, then fio will always check for completed
		events before queuing more IO. This helps reduce
		IO latency, at the cost of more retrieval system calls.

iodepth_low=int	The low water mark indicating when to start filling
		the queue again. Defaults to the same as iodepth, meaning
		that fio will attempt to keep the queue full at all times.
		If iodepth is set to eg 16 and iodepth_low is set to 4, then
		after fio has filled the queue of 16 requests, it will let
		the depth drain down to 4 before starting to fill it again.
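
The 16/4 example from the iodepth_low description translates to this
hypothetical job snippet:

; -- start job file --
[deep-queue]
ioengine=libaio
; keep up to 16 requests in flight, refill once the queue drains to 4
iodepth=16
iodepth_low=4
rw=randread
direct=1
; -- end job file --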

direct=bool	If value is true, use non-buffered io. This is usually
		O_DIRECT.

buffered=bool	If value is true, use buffered io. This is the opposite
		of the 'direct' option. Defaults to true.

offset=siint	Start io at the given offset in the file. The data before
		the given offset will not be touched. This effectively
		caps the file size at real_size - offset.

fsync=int	If writing to a file, issue a sync of the dirty data
		for every number of blocks given. For example, if you give
		32 as a parameter, fio will sync the file for every 32
		writes issued. If fio is using non-buffered io, we may
		not sync the file. The exception is the sg io engine, which
		synchronizes the disk cache anyway.

overwrite=bool	If true, writes to a file will always overwrite existing
		data. If the file doesn't already exist, it will be
		created before the write phase begins. If the file exists
		and is large enough for the specified write phase, nothing
		will be done.

end_fsync=bool	If true, fsync file contents when the job exits.

fsync_on_close=bool If true, fio will fsync() a dirty file on close.
		This differs from end_fsync in that it will happen on every
		file close, not just at the end of the job.

rwmixread=int	How large a percentage of the mix should be reads.

rwmixwrite=int	How large a percentage of the mix should be writes. If both
		rwmixread and rwmixwrite are given and the values do not add
		up to 100%, the latter of the two will be used to override
		the first.
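
For instance, a hypothetical mixed workload that is 70% reads:

; -- start job file --
[mostly-reads]
rw=randrw
; 70% of the mix are reads, the remaining 30% writes
rwmixread=70
size=128m
; -- end job file --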

norandommap	Normally fio will cover every block of the file when doing
		random IO. If this option is given, fio will just get a
		new random offset without looking at past io history. This
		means that some blocks may not be read or written, and that
		some blocks may be read/written more than once. This option
		is mutually exclusive with verify= for that reason, since
		fio doesn't track potential block rewrites which may alter
		the calculated checksum for that block.

softrandommap	See norandommap. If fio runs with the random block map
		enabled and it fails to allocate the map, setting this
		option makes it continue without a random block map. As
		coverage will not be as complete as with random maps, this
		option is disabled by default.

nice=int	Run the job with the given nice value. See man nice(2).

prio=int	Set the io priority value of this job. Linux limits us to
		a positive value between 0 and 7, with 0 being the highest.
		See man ionice(1).

prioclass=int	Set the io priority class. See man ionice(1).

thinktime=int	Stall the job for the given number of microseconds after
		an io has completed before issuing the next. May be used
		to simulate processing being done by an application. See
		thinktime_blocks and thinktime_spin.

thinktime_spin=int
		Only valid if thinktime is set - pretend to spend CPU time
		doing something with the data received, before falling back
		to sleeping for the rest of the period specified by
		thinktime.

thinktime_blocks=int
		Only valid if thinktime is set - control how many blocks
		to issue, before waiting 'thinktime' usecs. If not set,
		defaults to 1, which will make fio wait 'thinktime' usecs
		after every block.

rate=int	Cap the bandwidth used by this job to this number of KiB/sec.

ratemin=int	Tell fio to do whatever it can to maintain at least this
		bandwidth. Failing to meet this requirement will cause
		the job to exit.

rate_iops=int	Cap the bandwidth to this number of IOPS. Basically the same
		as rate, just specified independently of bandwidth. If the
		job is given a block size range instead of a fixed value,
		the smallest block size is used as the metric.

rate_iops_min=int If fio doesn't meet this rate of IO, it will cause
		the job to exit.

ratecycle=int	Average bandwidth for 'rate' and 'ratemin' over this number
		of milliseconds.

cpumask=int	Set the CPU affinity of this job. The parameter given is a
		bitmask of allowed CPUs the job may run on. So if you want
		the allowed CPUs to be 1 and 5, you would pass the decimal
		value of (1 << 1 | 1 << 5), or 34. See man
		sched_setaffinity(2). This may not work on all supported
		operating systems or kernel versions.

cpus_allowed=str Controls the same options as cpumask, but it allows a text
		setting of the permitted CPUs instead. So to use CPUs 1 and
		5, you would specify cpus_allowed=1,5.

startdelay=time	Start this job the specified number of seconds after fio
		has started. Only useful if the job file contains several
		jobs, and you want to delay starting some jobs to a certain
		time.

runtime=time	Tell fio to terminate processing after the specified number
		of seconds. It can be quite hard to determine for how long
		a specified job will run, so this parameter is handy to
		cap the total runtime to a given time.

time_based	If set, fio will run for the duration of the runtime
		specified even if the file(s) are completely read or
		written. It will simply loop over the same workload
		as many times as the runtime allows.

ramp_time=time	If set, fio will run the specified workload for this amount
		of time before logging any performance numbers. Useful for
		letting performance settle before logging results, thus
		minimizing the runtime required for stable results. Note
		that the ramp_time is considered lead in time for a job,
		thus it will increase the total runtime if a special timeout
		or runtime is specified.
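
These time options combine naturally. A hypothetical job that warms up
for 30 seconds and then measures a fixed two minutes regardless of how
quickly the file is covered:

; -- start job file --
[steady-state]
rw=randread
size=128m
; loop the workload for a fixed wall-clock duration
time_based
runtime=2m
; warm up for 30 seconds before logging any numbers
ramp_time=30s
; -- end job file --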

invalidate=bool	Invalidate the buffer/page cache parts for this file prior
		to starting io. Defaults to true.

sync=bool	Use sync io for buffered writes. For the majority of the
		io engines, this means using O_SYNC.

iomem=str
mem=str		Fio can use various types of memory as the io unit buffer.
		The allowed values are:

			malloc		Use memory from malloc(3) as the buffers.

			shm		Use shared memory as the buffers. Allocated
					through shmget(2).

			shmhuge		Same as shm, but use huge pages as backing.

			mmap		Use mmap to allocate buffers. May either be
					anonymous memory, or can be file backed if
					a filename is given after the option. The
					format is mem=mmap:/path/to/file.

			mmaphuge	Use a memory mapped huge file as the buffer
					backing. Append filename after mmaphuge, ala
					mem=mmaphuge:/hugetlbfs/file

		The area allocated is a function of the maximum allowed
		bs size for the job, multiplied by the io depth given. Note
		that for shmhuge and mmaphuge to work, the system must have
		free huge pages allocated. This can normally be checked
		and set by reading/writing /proc/sys/vm/nr_hugepages on a
		Linux system. Fio assumes a huge page is 4MiB in size. So
		to calculate the number of huge pages you need for a given
		job file, add up the io depth of all jobs (normally one unless
		iodepth= is used) and multiply by the maximum bs set. Then
		divide that number by the huge page size. You can see the
		size of the huge pages in /proc/meminfo. If no huge pages
		are allocated by having a non-zero number in nr_hugepages,
		using mmaphuge or shmhuge will fail. Also see hugepage-size.

		mmaphuge also needs to have hugetlbfs mounted and the file
		location should point there. So if it's mounted in /huge,
		you would use mem=mmaphuge:/huge/somefile.
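
To make the huge page arithmetic concrete (a hypothetical job, with
made-up values): with bs=64k and iodepth=16, the buffer area is
64k * 16 = 1MiB, which fits within a single 4MiB huge page:

; -- start job file --
[huge-backed]
; requires free huge pages, see /proc/sys/vm/nr_hugepages
mem=shmhuge
ioengine=libaio
; 64k * 16 = 1MiB of buffer space, so one 4MiB huge page suffices
bs=64k
iodepth=16
rw=randread
; -- end job file --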
| 658 | |
| 659 | hugepage-size=siint |
| 660 | Defines the size of a huge page. Must at least be equal |
| 661 | to the system setting, see /proc/meminfo. Defaults to 4MiB. |
| 662 | Should probably always be a multiple of megabytes, so using |
| 663 | hugepage-size=Xm is the preferred way to set this to avoid |
| 664 | setting a non-pow-2 bad value. |
| 665 | |
exitall		When one job finishes, terminate the rest. The default is
		to wait for each job to finish, but sometimes that is not
		the desired action.
| 669 | |
| 670 | bwavgtime=int Average the calculated bandwidth over the given time. Value |
| 671 | is specified in milliseconds. |
| 672 | |
create_serialize=bool	If true, serialize file creation for the jobs.
| 674 | This may be handy to avoid interleaving of data |
| 675 | files, which may greatly depend on the filesystem |
| 676 | used and even the number of processors in the system. |
| 677 | |
| 678 | create_fsync=bool fsync the data file after creation. This is the |
| 679 | default. |
| 680 | |
| 681 | unlink=bool Unlink the job files when done. Not the default, as repeated |
| 682 | runs of that job would then waste time recreating the file |
| 683 | set again and again. |
| 684 | |
| 685 | loops=int Run the specified number of iterations of this job. Used |
| 686 | to repeat the same workload a given number of times. Defaults |
| 687 | to 1. |
| 688 | |
| 689 | do_verify=bool Run the verify phase after a write phase. Only makes sense if |
| 690 | verify is set. Defaults to 1. |
| 691 | |
| 692 | verify=str If writing to a file, fio can verify the file contents |
| 693 | after each iteration of the job. The allowed values are: |
| 694 | |
| 695 | md5 Use an md5 sum of the data area and store |
| 696 | it in the header of each block. |
| 697 | |
| 698 | crc64 Use an experimental crc64 sum of the data |
| 699 | area and store it in the header of each |
| 700 | block. |
| 701 | |
| 702 | crc32c Use a crc32c sum of the data area and store |
| 703 | it in the header of each block. |
| 704 | |
		crc32c-intel Use hardware assisted crc32c calculation
			provided by SSE4.2 enabled processors.
| 707 | |
| 708 | crc32 Use a crc32 sum of the data area and store |
| 709 | it in the header of each block. |
| 710 | |
| 711 | crc16 Use a crc16 sum of the data area and store |
| 712 | it in the header of each block. |
| 713 | |
| 714 | crc7 Use a crc7 sum of the data area and store |
| 715 | it in the header of each block. |
| 716 | |
| 717 | sha512 Use sha512 as the checksum function. |
| 718 | |
| 719 | sha256 Use sha256 as the checksum function. |
| 720 | |
| 721 | meta Write extra information about each io |
| 722 | (timestamp, block number etc.). The block |
| 723 | number is verified. |
| 724 | |
| 725 | null Only pretend to verify. Useful for testing |
| 726 | internals with ioengine=null, not for much |
| 727 | else. |
| 728 | |
| 729 | This option can be used for repeated burn-in tests of a |
| 730 | system to make sure that the written data is also |
| 731 | correctly read back. |
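For instance, a minimal burn-in style job (file size and block size are
made up for illustration) could combine a write phase with crc32c
verification:

```ini
; Write 64MiB in 4k blocks, then read the data back and check
; the crc32c checksum stored in each block header.
[write-and-verify]
rw=write
bs=4k
size=64m
verify=crc32c
do_verify=1
```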
| 732 | |
| 733 | verifysort=bool If set, fio will sort written verify blocks when it deems |
| 734 | it faster to read them back in a sorted manner. This is |
| 735 | often the case when overwriting an existing file, since |
| 736 | the blocks are already laid out in the file system. You |
| 737 | can ignore this option unless doing huge amounts of really |
| 738 | fast IO where the red-black tree sorting CPU time becomes |
| 739 | significant. |
| 740 | |
verify_offset=siint	Swap the verification header with data somewhere else
			in the block before writing. It is swapped back before
			verifying.
| 744 | |
verify_interval=siint	Write the verification header at a finer granularity
			than the blocksize. It will be written for chunks the
			size of verify_interval. blocksize should divide this
			evenly.
| 749 | |
| 750 | verify_pattern=int If set, fio will fill the io buffers with this |
| 751 | pattern. Fio defaults to filling with totally random |
| 752 | bytes, but sometimes it's interesting to fill with a known |
| 753 | pattern for io verification purposes. Depending on the |
		width of the pattern, fio will fill 1/2/3/4 bytes of the
		buffer at a time. The verify_pattern cannot be larger than
| 756 | a 32-bit quantity. |
| 757 | |
| 758 | verify_fatal=bool Normally fio will keep checking the entire contents |
| 759 | before quitting on a block verification failure. If this |
| 760 | option is set, fio will exit the job on the first observed |
| 761 | failure. |
| 762 | |
stonewall	Wait for preceding jobs in the job file to exit, before
| 764 | starting this one. Can be used to insert serialization |
| 765 | points in the job file. A stone wall also implies starting |
| 766 | a new reporting group. |
| 767 | |
| 768 | new_group Start a new reporting group. If this option isn't given, |
| 769 | jobs in a file will be part of the same reporting group |
| 770 | unless separated by a stone wall (or if it's a group |
| 771 | by itself, with the numjobs option). |
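A sketch of a job file using stonewall to serialize two jobs (job names
and sizes are made up):

```ini
; seq-write runs first; rand-read waits for it to exit.
; The stonewall also starts a new reporting group.
[seq-write]
rw=write
size=32m

[rand-read]
stonewall
rw=randread
size=32m
```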
| 772 | |
numjobs=int	Create the specified number of clones of this job. May be
		used to set up a larger number of threads/processes doing
| 775 | the same thing. We regard that grouping of jobs as a |
| 776 | specific group. |
| 777 | |
| 778 | group_reporting If 'numjobs' is set, it may be interesting to display |
| 779 | statistics for the group as a whole instead of for each |
		individual job. This is especially true if 'numjobs' is
		large, as looking at individual thread/process output quickly
		becomes unwieldy. If 'group_reporting' is specified, fio
| 783 | will show the final report per-group instead of per-job. |
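As an illustration, the following hypothetical job clones itself four
times and reports a single set of statistics for the whole group:

```ini
; Four identical random readers, reported as one group.
[readers]
rw=randread
size=32m
numjobs=4
group_reporting
```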
| 784 | |
| 785 | thread fio defaults to forking jobs, however if this option is |
| 786 | given, fio will use pthread_create(3) to create threads |
| 787 | instead. |
| 788 | |
| 789 | zonesize=siint Divide a file into zones of the specified size. See zoneskip. |
| 790 | |
| 791 | zoneskip=siint Skip the specified number of bytes when zonesize data has |
| 792 | been read. The two zone options can be used to only do |
| 793 | io on zones of a file. |
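For example, to touch only the first quarter of every 256m region of a
file, a job (with made-up sizes) could read zonesize bytes and then skip
zoneskip bytes:

```ini
; Read 64m, skip 192m, repeat - i.e. io is only done on the
; first 64m of each 256m zone of the file.
[zoned-read]
rw=read
zonesize=64m
zoneskip=192m
size=1g
```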
| 794 | |
| 795 | write_iolog=str Write the issued io patterns to the specified file. See |
| 796 | read_iolog. |
| 797 | |
| 798 | read_iolog=str Open an iolog with the specified file name and replay the |
| 799 | io patterns it contains. This can be used to store a |
| 800 | workload and replay it sometime later. The iolog given |
| 801 | may also be a blktrace binary file, which allows fio |
| 802 | to replay a workload captured by blktrace. See blktrace |
		for how to capture such logging data. For blktrace replay,
		the trace first needs to be turned into a blkparse binary
		data file (blkparse <device> -o /dev/null -d file_for_fio.bin).
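A capture-and-replay pair might look like the sketch below (two separate
fio invocations; the log file name is hypothetical):

```ini
; First invocation: record the issued io pattern.
[record]
rw=randwrite
bs=4k
size=32m
write_iolog=pattern.log

; A later, separate invocation can then replay it:
; [replay]
; read_iolog=pattern.log
```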
| 806 | |
| 807 | write_bw_log If given, write a bandwidth log of the jobs in this job |
| 808 | file. Can be used to store data of the bandwidth of the |
| 809 | jobs in their lifetime. The included fio_generate_plots |
| 810 | script uses gnuplot to turn these text files into nice |
| 811 | graphs. |
| 812 | |
| 813 | write_lat_log Same as write_bw_log, except that this option stores io |
| 814 | completion latencies instead. |
| 815 | |
| 816 | lockmem=siint Pin down the specified amount of memory with mlock(2). Can |
| 817 | potentially be used instead of removing memory or booting |
| 818 | with less memory to simulate a smaller amount of memory. |
| 819 | |
| 820 | exec_prerun=str Before running this job, issue the command specified |
| 821 | through system(3). |
| 822 | |
exec_postrun=str After the job completes, issue the command specified
		through system(3).
| 825 | |
| 826 | ioscheduler=str Attempt to switch the device hosting the file to the specified |
| 827 | io scheduler before running. |
| 828 | |
| 829 | cpuload=int If the job is a CPU cycle eater, attempt to use the specified |
| 830 | percentage of CPU cycles. |
| 831 | |
| 832 | cpuchunks=int If the job is a CPU cycle eater, split the load into |
| 833 | cycles of the given time. In milliseconds. |
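Together with the cpuio ioengine, these two options can sketch a pure CPU
burner (the engine name and values here are illustrative, and assume a
fio build that includes cpuio):

```ini
; Burn roughly 50% of one CPU, in 50 msec chunks, for 10 seconds.
[cpu-eater]
ioengine=cpuio
cpuload=50
cpuchunks=50
runtime=10
```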
| 834 | |
| 835 | disk_util=bool Generate disk utilization statistics, if the platform |
| 836 | supports it. Defaults to on. |
| 837 | |
| 838 | |
| 839 | 6.0 Interpreting the output |
| 840 | --------------------------- |
| 841 | |
| 842 | fio spits out a lot of output. While running, fio will display the |
| 843 | status of the jobs created. An example of that would be: |
| 844 | |
| 845 | Threads: 1: [_r] [24.8% done] [ 13509/ 8334 kb/s] [eta 00h:01m:31s] |
| 846 | |
| 847 | The characters inside the square brackets denote the current status of |
| 848 | each thread. The possible values (in typical life cycle order) are: |
| 849 | |
| 850 | Idle Run |
| 851 | ---- --- |
| 852 | P Thread setup, but not started. |
| 853 | C Thread created. |
| 854 | I Thread initialized, waiting. |
| 855 | R Running, doing sequential reads. |
| 856 | r Running, doing random reads. |
| 857 | W Running, doing sequential writes. |
| 858 | w Running, doing random writes. |
| 859 | M Running, doing mixed sequential reads/writes. |
| 860 | m Running, doing mixed random reads/writes. |
| 861 | F Running, currently waiting for fsync() |
| 862 | V Running, doing verification of written data. |
| 863 | E Thread exited, not reaped by main thread yet. |
| 864 | _ Thread reaped. |
| 865 | |
| 866 | The other values are fairly self explanatory - number of threads |
| 867 | currently running and doing io, rate of io since last check (read speed |
| 868 | listed first, then write speed), and the estimated completion percentage |
and time for the running group. It's impossible to estimate the runtime
of the following groups (if any).
| 871 | |
| 872 | When fio is done (or interrupted by ctrl-c), it will show the data for |
| 873 | each thread, group of threads, and disks in that order. For each data |
| 874 | direction, the output looks like: |
| 875 | |
| 876 | Client1 (g=0): err= 0: |
| 877 | write: io= 32MiB, bw= 666KiB/s, runt= 50320msec |
| 878 | slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92 |
| 879 | clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82 |
| 880 | bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68 |
| 881 | cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17 |
| 882 | IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0% |
| 883 | submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% |
| 884 | complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% |
| 885 | issued r/w: total=0/32768, short=0/0 |
| 886 | lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%, |
| 887 | lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0% |
| 888 | |
The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:
| 892 | |
| 893 | io= Number of megabytes io performed |
| 894 | bw= Average bandwidth rate |
| 895 | runt= The runtime of that thread |
| 896 | slat= Submission latency (avg being the average, stdev being the |
| 897 | standard deviation). This is the time it took to submit |
| 898 | the io. For sync io, the slat is really the completion |
| 899 | latency, since queue/complete is one operation there. This |
| 900 | value can be in milliseconds or microseconds, fio will choose |
| 901 | the most appropriate base and print that. In the example |
| 902 | above, milliseconds is the best scale. |
| 903 | clat= Completion latency. Same names as slat, this denotes the |
| 904 | time from submission to completion of the io pieces. For |
| 905 | sync io, clat will usually be equal (or very close) to 0, |
| 906 | as the time from submit to complete is basically just |
| 907 | CPU time (io has already been done, see slat explanation). |
| 908 | bw= Bandwidth. Same names as the xlat stats, but also includes |
| 909 | an approximate percentage of total aggregate bandwidth |
| 910 | this thread received in this group. This last value is |
| 911 | only really useful if the threads in this group are on the |
| 912 | same disk, since they are then competing for disk access. |
| 913 | cpu= CPU usage. User and system time, along with the number |
| 914 | of context switches this thread went through, usage of |
| 915 | system and user time, and finally the number of major |
| 916 | and minor page faults. |
| 917 | IO depths= The distribution of io depths over the job life time. The |
		numbers are divided into powers of 2, so for example the
		16= entry includes depths up to that value but higher
| 920 | than the previous entry. In other words, it covers the |
| 921 | range from 16 to 31. |
IO submit=	How many pieces of IO were submitted in a single submit
		call. Each entry denotes that amount and below, until
		the previous entry - e.g., 8=100% means that we submitted
		anywhere in between 5-8 ios per submit call.
| 926 | IO complete= Like the above submit number, but for completions instead. |
| 927 | IO issued= The number of read/write requests issued, and how many |
| 928 | of them were short. |
| 929 | IO latencies= The distribution of IO completion latencies. This is the |
| 930 | time from when IO leaves fio and when it gets completed. |
| 931 | The numbers follow the same pattern as the IO depths, |
| 932 | meaning that 2=1.6% means that 1.6% of the IO completed |
| 933 | within 2 msecs, 20=12.8% means that 12.8% of the IO |
| 934 | took more than 10 msecs, but less than (or equal to) 20 msecs. |
| 935 | |
| 936 | After each client has been listed, the group statistics are printed. They |
| 937 | will look like this: |
| 938 | |
| 939 | Run status group 0 (all jobs): |
| 940 | READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec |
| 941 | WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec |
| 942 | |
| 943 | For each data direction, it prints: |
| 944 | |
| 945 | io= Number of megabytes io performed. |
| 946 | aggrb= Aggregate bandwidth of threads in this group. |
| 947 | minb= The minimum average bandwidth a thread saw. |
| 948 | maxb= The maximum average bandwidth a thread saw. |
| 949 | mint= The smallest runtime of the threads in that group. |
| 950 | maxt= The longest runtime of the threads in that group. |
| 951 | |
| 952 | And finally, the disk statistics are printed. They will look like this: |
| 953 | |
| 954 | Disk stats (read/write): |
| 955 | sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00% |
| 956 | |
| 957 | Each value is printed for both reads and writes, with reads first. The |
| 958 | numbers denote: |
| 959 | |
| 960 | ios= Number of ios performed by all groups. |
merge=		Number of merges performed by the io scheduler.
| 962 | ticks= Number of ticks we kept the disk busy. |
in_queue=	Total time spent in the disk queue.
util=		The disk utilization. A value of 100% means we kept the disk
		busy constantly, while 50% would mean the disk was idle half
		of the time.
| 966 | |
| 967 | |
| 968 | 7.0 Terse output |
| 969 | ---------------- |
| 970 | |
| 971 | For scripted usage where you typically want to generate tables or graphs |
| 972 | of the results, fio can output the results in a semicolon separated format. |
| 973 | The format is one long line of values, such as: |
| 974 | |
| 975 | client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000;0;0;0.000000;0.000000;929380;1152890;25.510151%;1078276.333333;128948.113404;0;0;0;0;0;0.000000;0.000000;0;0;0.000000;0.000000;0;0;0.000000%;0.000000;0.000000;100.000000%;0.000000%;324;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;100.0%;0.0%;0.0%;0.0%;0.0%;0.0% |
| 976 | ;0.0%;0.0%;0.0%;0.0%;0.0% |
| 977 | |
| 978 | To enable terse output, use the --minimal command line option. |
| 979 | |
| 980 | Split up, the format is as follows: |
| 981 | |
| 982 | jobname, groupid, error |
| 983 | READ status: |
| 984 | KiB IO, bandwidth (KiB/sec), runtime (msec) |
| 985 | Submission latency: min, max, mean, deviation |
| 986 | Completion latency: min, max, mean, deviation |
| 987 | Bw: min, max, aggregate percentage of total, mean, deviation |
| 988 | WRITE status: |
| 989 | KiB IO, bandwidth (KiB/sec), runtime (msec) |
| 990 | Submission latency: min, max, mean, deviation |
| 991 | Completion latency: min, max, mean, deviation |
| 992 | Bw: min, max, aggregate percentage of total, mean, deviation |
| 993 | CPU usage: user, system, context switches, major faults, minor faults |
| 994 | IO depths: <=1, 2, 4, 8, 16, 32, >=64 |
| 995 | IO latencies: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, >=2000 |
| 996 | Text description |
| 997 | |