Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output


1.0 Overview and history
------------------------
fio was originally written to save me the hassle of writing special test
case programs when I wanted to test a specific workload, either for
performance reasons or to find/reproduce a bug. The process of writing
such a test app can be tiresome, especially if you have to do it often.
Hence I needed a tool that would be able to simulate a given io workload
without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number
of processes or threads involved, and they can each be using their own
way of generating io. You could have someone dirtying large amounts of
memory in a memory-mapped file, or maybe several threads issuing
reads using asynchronous io. fio needed to be flexible enough to
simulate both of these cases, and many more.

2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical contents of the job file
are a global section defining shared parameters, and one or more job
sections describing the jobs involved. When run, fio parses this file
and sets everything up as described. If we break down a job from top to
bottom, it contains the following basic parameters:

	IO type		Defines the io pattern issued to the file(s).
			We may only be reading sequentially from this
			file(s), or we may be writing randomly. Or even
			mixing reads and writes, sequentially or randomly.

	Block size	In how large chunks are we issuing io? This may be
			a single value, or it may describe a range of
			block sizes.

	IO size		How much data are we going to be reading/writing.

	IO engine	How do we issue io? We could be memory mapping the
			file, we could be using regular read/write, we
			could be using splice, async io, syslet, or even
			SG (SCSI generic sg).

	IO depth	If the io engine is async, how large a queuing
			depth do we want to maintain?

	IO mode		Should we be doing buffered io, or direct/raw io?

	Num files	How many files are we spreading the workload over?

	Num threads	How many threads or processes should we spread
			this workload over?

The above are the basic parameters defined for a workload. In addition,
there's a multitude of parameters that modify other aspects of how this
job behaves.
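
The basic parameters above map directly onto job file options, which are
all covered in detail later. As an illustrative sketch only (the job name
and values here are made up, not fio defaults):

```ini
; -- start job file --
[basic-example]
rw=randrw          ; IO type: mixed random reads and writes
bs=4k              ; Block size
size=256m          ; IO size
ioengine=libaio    ; IO engine
iodepth=8          ; IO depth
direct=1           ; IO mode: direct rather than buffered
nrfiles=2          ; Num files
numjobs=2          ; Num threads/processes
; -- end job file --
```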


3.0 Running fio
---------------
See the README file for command line parameters; there are only a few
of them.

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters:

$ fio job_file

and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line; fio will serialize the running
of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.

If the job file contains only one job, you may as well just give the
parameters on the command line. The command line parameters are identical
to the job parameters, with a few extra that control global parameters
(see README). For example, for the job file parameter iodepth=2, the
mirror command line option would be --iodepth 2 or --iodepth=2. You can
also use the command line for giving more than one job entry. For each
--name option that fio sees, it will start a new job with that name.
Command line entries following a --name entry will apply to that job,
until there are no more entries or a new --name entry is seen. This is
similar to the job file options, where each option applies to the current
job until a new [] job entry is seen.

fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.


4.0 Job file format
-------------------
As previously described, fio accepts one or more job files describing
what it is supposed to do. The job file format is the classic ini file,
where the names enclosed in [] brackets define the job name. You are free
to use any ascii name you want, except 'global' which has special meaning.
A global section sets defaults for the jobs described in that file. A job
may override a global section parameter, and a job file may even have
several global sections if so desired. A job is only affected by a global
section residing above it. If the first character in a line is a ';' or a
'#', the entire line is discarded as a comment.
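
For instance, a job can override a shared default set in the global
section, and comment lines are simply skipped. A sketch with arbitrary
values:

```ini
; -- start job file --
[global]
rw=read        ; shared default for all jobs below
bs=4k

[job1]         ; inherits rw=read and bs=4k unchanged

[job2]
bs=64k         ; overrides the global bs for this job only
; this entire line is a comment and is discarded
; -- end job file --
```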

So let's look at a really simple job file that defines two processes, each
randomly reading from a 128MiB file.

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --

As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
makes up a filename for each of the jobs as it sees fit. On the command
line, this job would look as follows:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2


Let's look at an example that has a number of processes writing randomly
to files.

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

; -- end job file --

Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
increased the buffer size used to 32KiB and set numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
to their own 64MiB file. Instead of using the above job file, you could
have given the parameters on the command line. For this case, you would
specify:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4

fio also supports environment variable expansion in job files. Any
substring of the form "${VARNAME}" as part of an option value (in other
words, on the right of the `='), will be expanded to the value of the
environment variable called VARNAME. If no such environment variable
is defined, or VARNAME is the empty string, the empty string will be
substituted.

As an example, let's look at a sample fio invocation and job file:

$ SIZE=64m NUMJOBS=4 fio jobfile.fio

; -- start job file --
[random-writers]
rw=randwrite
size=${SIZE}
numjobs=${NUMJOBS}
; -- end job file --

This will expand to the following equivalent job file at runtime:

; -- start job file --
[random-writers]
rw=randwrite
size=64m
numjobs=4
; -- end job file --

fio ships with a few example job files; you can also look there for
inspiration.


5.0 Detailed list of parameters
-------------------------------

This section describes in detail each parameter associated with a job.
Some parameters take an option of a given type, such as an integer or
a string. The following types are used:

str	String. This is a sequence of alpha characters.
int	Integer. A whole number value, which can be negative. If prefixed
	with 0x, the integer is assumed to be of base 16 (hexadecimal).
time	Integer with possible time postfix. In seconds unless otherwise
	specified, use eg 10m for 10 minutes. Accepts s/m/h for seconds,
	minutes, and hours.
siint	SI integer. A whole number value, which may contain a postfix
	describing the base of the number. Accepted postfixes are k/m/g,
	meaning kilo, mega, and giga. So if you want to specify 4096,
	you could either write out '4096' or just give 4k. The postfixes
	signify base 2 values, so 1024 is 1k and 1024k is 1m and so on.
	If the option accepts an upper and lower range, use a colon ':'
	or minus '-' to separate such values. See irange.
bool	Boolean. Usually parsed as an integer, however only defined for
	true and false (1 and 0).
irange	Integer range with postfix. Allows a value range to be given, such
	as 1024-4096. A colon may also be used as the separator, eg
	1k:4k. If the option allows two sets of ranges, they can be
	specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
	siint.

With the above in mind, here follows the complete list of fio job
parameters.

name=str	ASCII name of the job. This may be used to override the
		name printed by fio for this job. Otherwise the job
		name is used. On the command line this parameter has the
		special purpose of also signaling the start of a new
		job.

description=str	Text description of the job. Doesn't do anything except
		dump this text description when this job is run. It's
		not parsed.

directory=str	Prefix filenames with this directory. Used to place files
		in a different location than "./".

filename=str	Fio normally makes up a filename based on the job name,
		thread number, and file number. If you want to share
		files between threads in a job or several jobs, specify
		a filename for each of them to override the default. If
		the ioengine used is 'net', the filename is the host, port,
		and protocol to use in the format of =host/port/protocol.
		See ioengine=net for more. If the ioengine is file based, you
		can specify a number of files by separating the names with a
		':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
		as the two working files, you would use
		filename=/dev/sda:/dev/sdb. '-' is a reserved name, meaning
		stdin or stdout. Which of the two depends on the read/write
		direction set.

opendir=str	Tell fio to recursively add any file it can find in this
		directory and down the file system tree.

lockfile=str	Fio defaults to not locking any files before it does
		IO to them. If a file or file descriptor is shared, fio
		can serialize IO to that file to make the end result
		consistent. This is useful for emulating real workloads that
		share files. The lock modes are:

			none		No locking. The default.
			exclusive	Only one thread/process may do IO,
					excluding all others.
			readwrite	Read-write locking on the file. Many
					readers may access the file at the
					same time, but writes get exclusive
					access.

		The option may be post-fixed with a lock batch number. If
		set, then each thread/process may do that amount of IOs to
		the file before giving up the lock. Since lock acquisition is
		expensive, batching the lock/unlocks will speed up IO.
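
As a sketch, two jobs sharing one file with read-write locking and a
batch of 8 IOs per lock acquisition (the filename is made up, and the
':' separator for the batch postfix is an assumption here):

```ini
; -- start job file --
[global]
filename=/tmp/shared.file   ; hypothetical file shared by both jobs
lockfile=readwrite:8        ; rw locking, hold the lock for 8 IOs

[reader]
rw=randread

[writer]
rw=randwrite
; -- end job file --
```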

readwrite=str
rw=str		Type of io pattern. Accepted values are:

			read		Sequential reads
			write		Sequential writes
			randwrite	Random writes
			randread	Random reads
			rw		Sequential mixed reads and writes
			randrw		Random mixed reads and writes

		For the mixed io types, the default is to split them 50/50.
		For certain types of io the result may still be skewed a bit,
		since the speed may be different. It is possible to specify
		a number of IO's to do before getting a new offset - this
		is only useful for random IO, where fio would normally
		generate a new random offset for every IO. If you append
		eg 8 to randread, you would get a new random offset for
		every 8 IO's. The result would be a seek for only every 8
		IO's, instead of for every IO. Use rw=randread:8 to specify
		that.

randrepeat=bool	For random IO workloads, seed the generator in a predictable
		way so that results are repeatable across repetitions.

fadvise_hint=bool By default, fio will use fadvise() to advise the kernel
		on what IO patterns it is likely to issue. Sometimes you
		want to test specific IO patterns without telling the
		kernel about it, in which case you can disable this option.
		If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
		IO and POSIX_FADV_RANDOM for random IO.

size=siint	The total size of file io for this job. Fio will run until
		this many bytes have been transferred, unless runtime is
		limited by other options (such as 'runtime', for instance).
		Unless specific nrfiles and filesize options are given,
		fio will divide this size between the available files
		specified by the job.

filesize=siint	Individual file sizes. May be a range, in which case fio
		will select sizes for files at random within the given range
		and limited to 'size' in total (if that is given). If not
		given, each created file is the same size.

fill_device=bool Sets size to something really large and waits for ENOSPC (no
		space left on device) as the terminating condition. Only makes
		sense with sequential write.

blocksize=siint
bs=siint	The block size used for the io units. Defaults to 4k. Values
		can be given for both reads and writes. If a single siint is
		given, it will apply to both. If a second siint is specified
		after a comma, it will apply to writes only. In other words,
		the format is either bs=read_and_write or bs=read,write.
		bs=4k,8k will thus use 4k blocks for reads, and 8k blocks
		for writes. If you only wish to set the write size, you
		can do so by passing an empty read size - bs=,8k will set
		8k for writes and leave the read default value.

blocksize_range=irange
bsrange=irange	Instead of giving a single block size, specify a range
		and fio will mix the issued io block sizes. The issued
		io unit will always be a multiple of the minimum value
		given (also see bs_unaligned). Applies to both reads and
		writes, however a second range can be given after a comma.
		See bs=.

bssplit=str	Sometimes you want even finer grained control of the
		block sizes issued, not just an even split between them.
		This option allows you to weight various block sizes,
		so that you are able to define a specific amount of
		block sizes issued. The format for this option is:

			bssplit=blocksize/percentage:blocksize/percentage

		for as many block sizes as needed. So if you want to define
		a workload that has 50% 64k blocks, 10% 4k blocks, and
		40% 32k blocks, you would write:

			bssplit=4k/10:64k/50:32k/40

		Ordering does not matter. If the percentage is left blank,
		fio will fill in the remaining values evenly. So a bssplit
		option like this one:

			bssplit=4k/50:1k/:32k/

		would have 50% 4k ios, and 25% 1k and 32k ios. The percentages
		must always add up to 100; if bssplit is given a range that
		adds up to more, it will error out.

blocksize_unaligned
bs_unaligned	If this option is given, any byte size value within bsrange
		may be used as a block size. This typically won't work with
		direct IO, as that normally requires sector alignment.

zero_buffers	If this option is given, fio will init the IO buffers to
		all zeroes. The default is to fill them with random data.

refill_buffers	If this option is given, fio will refill the IO buffers
		on every submit. The default is to only fill them at init
		time and reuse that data. Only makes sense if zero_buffers
		isn't specified, naturally. If data verification is enabled,
		refill_buffers is also automatically enabled.

nrfiles=int	Number of files to use for this job. Defaults to 1.

openfiles=int	Number of files to keep open at the same time. Defaults to
		the same as nrfiles, can be set smaller to limit the number
		of simultaneous opens.

file_service_type=str  Defines how fio decides which file from a job to
		service next. The following types are defined:

			random		Just choose a file at random.

			roundrobin	Round robin over open files. This
					is the default.

			sequential	Finish one file before moving on to
					the next. Multiple files can still be
					open depending on 'openfiles'.

		The string can have a number appended, indicating how
		often to switch to a new file. So if option random:4 is
		given, fio will switch to a new random file after 4 ios
		have been issued.

ioengine=str	Defines how the job issues io to the file. The following
		types are defined:

			sync	Basic read(2) or write(2) io. lseek(2) is
				used to position the io location.

			psync	Basic pread(2) or pwrite(2) io.

			vsync	Basic readv(2) or writev(2) IO.

			libaio	Linux native asynchronous io. Note that Linux
				may only support queued behaviour with
				non-buffered IO (set direct=1 or buffered=0).

			posixaio glibc posix asynchronous io.

			solarisaio Solaris native asynchronous io.

			mmap	File is memory mapped and data copied
				to/from using memcpy(3).

			splice	splice(2) is used to transfer the data and
				vmsplice(2) to transfer data from user
				space to the kernel.

			syslet-rw Use the syslet system calls to make
				regular read/write async.

			sg	SCSI generic sg v3 io. May either be
				synchronous using the SG_IO ioctl, or if
				the target is an sg character device
				we use read(2) and write(2) for asynchronous
				io.

			null	Doesn't transfer any data, just pretends
				to. This is mainly used to exercise fio
				itself and for debugging/testing purposes.

			net	Transfer over the network to given host:port.
				'filename' must be set appropriately to
				filename=host/port/protocol regardless of send
				or receive; if the latter, only the port
				argument is used. 'host' may be an IP address
				or hostname, port is the port number to be used,
				and protocol may be 'udp' or 'tcp'. If no
				protocol is given, TCP is used.

			netsplice Like net, but uses splice/vmsplice to
				map data and send/receive.

			cpuio	Doesn't transfer any data, but burns CPU
				cycles according to the cpuload= and
				cpucycle= options. Setting cpuload=85
				will cause that job to do nothing but burn
				85% of the CPU. In case of SMP machines,
				use numjobs=<no_of_cpu> to get desired CPU
				usage, as the cpuload only loads a single
				CPU at the desired rate.

			guasi	The GUASI IO engine is the Generic Userspace
				Asynchronous Syscall Interface approach
				to async IO. See

				http://www.xmailserver.org/guasi-lib.html

				for more info on GUASI.

			external Prefix to specify loading an external
				IO engine object file. Append the engine
				filename, eg ioengine=external:/tmp/foo.o
				to load ioengine foo.o in /tmp.

iodepth=int	This defines how many io units to keep in flight against
		the file. The default is 1 for each file defined in this
		job, and can be overridden with a larger value for higher
		concurrency.

iodepth_batch_submit=int
iodepth_batch=int This defines how many pieces of IO to submit at once.
		It defaults to 1 which means that we submit each IO
		as soon as it is available, but can be raised to submit
		bigger batches of IO at a time.

iodepth_batch_complete=int This defines how many pieces of IO to retrieve
		at once. It defaults to 1 which means that we'll ask
		for a minimum of 1 IO in the retrieval process from
		the kernel. The IO retrieval will go on until we
		hit the limit set by iodepth_low. If this variable is
		set to 0, then fio will always check for completed
		events before queuing more IO. This helps reduce
		IO latency, at the cost of more retrieval system calls.

iodepth_low=int	The low water mark indicating when to start filling
		the queue again. Defaults to the same as iodepth, meaning
		that fio will attempt to keep the queue full at all times.
		If iodepth is set to eg 16 and iodepth_low is set to 4, then
		after fio has filled the queue of 16 requests, it will let
		the depth drain down to 4 before starting to fill it again.
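
Putting the depth options together, here is a sketch of an async job
that fills a queue of 16, submits in batches of 8, and lets the depth
drain to 4 before refilling (job name and sizes are arbitrary):

```ini
; -- start job file --
[depth-example]
ioengine=libaio
direct=1            ; libaio may only queue with non-buffered IO
rw=randread
size=128m
iodepth=16          ; keep up to 16 ios in flight
iodepth_batch=8     ; submit 8 ios at a time
iodepth_low=4       ; refill once the queue drains to 4
; -- end job file --
```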

direct=bool	If value is true, use non-buffered io. This is usually
		O_DIRECT.

buffered=bool	If value is true, use buffered io. This is the opposite
		of the 'direct' option. Defaults to true.

offset=siint	Start io at the given offset in the file. The data before
		the given offset will not be touched. This effectively
		caps the file size at real_size - offset.

fsync=int	If writing to a file, issue a sync of the dirty data
		for every number of blocks given. For example, if you give
		32 as a parameter, fio will sync the file for every 32
		writes issued. If fio is using non-buffered io, we may
		not sync the file. The exception is the sg io engine, which
		synchronizes the disk cache anyway.

overwrite=bool	If true, writes to a file will always overwrite existing
		data. If the file doesn't already exist, it will be
		created before the write phase begins. If the file exists
		and is large enough for the specified write phase, nothing
		will be done.

end_fsync=bool	If true, fsync file contents when the job exits.

fsync_on_close=bool If true, fio will fsync() a dirty file on close.
		This differs from end_fsync in that it will happen on every
		file close, not just at the end of the job.

rwmixread=int	How large a percentage of the mix should be reads.

rwmixwrite=int	How large a percentage of the mix should be writes. If both
		rwmixread and rwmixwrite are given and the values do not add
		up to 100%, the latter of the two will be used to override
		the first.

norandommap	Normally fio will cover every block of the file when doing
		random IO. If this option is given, fio will just get a
		new random offset without looking at past io history. This
		means that some blocks may not be read or written, and that
		some blocks may be read/written more than once. This option
		is mutually exclusive with verify= if and only if multiple
		blocksizes (via bsrange=) are used, since fio only tracks
		complete rewrites of blocks.

softrandommap	See norandommap. If fio runs with the random block map
		enabled and it fails to allocate the map, setting this
		option will let it continue without a random block map. As
		coverage will not be as complete as with random maps, this
		option is disabled by default.

nice=int	Run the job with the given nice value. See man nice(2).

prio=int	Set the io priority value of this job. Linux limits us to
		a positive value between 0 and 7, with 0 being the highest.
		See man ionice(1).

prioclass=int	Set the io priority class. See man ionice(1).

thinktime=int	Stall the job for the given number of microseconds after
		an io has completed before issuing the next. May be used
		to simulate processing being done by an application. See
		thinktime_blocks and thinktime_spin.

thinktime_spin=int
		Only valid if thinktime is set - pretend to spend CPU time
		doing something with the data received, before falling back
		to sleeping for the rest of the period specified by
		thinktime.

thinktime_blocks
		Only valid if thinktime is set - control how many blocks
		to issue, before waiting 'thinktime' usecs. If not set,
		defaults to 1 which will make fio wait 'thinktime' usecs
		after every block.

rate=int	Cap the bandwidth used by this job to this number of KiB/sec.

ratemin=int	Tell fio to do whatever it can to maintain at least this
		bandwidth. Failing to meet this requirement will cause
		the job to exit.

rate_iops=int	Cap the bandwidth to this number of IOPS. Basically the same
		as rate, just specified independently of bandwidth. If the
		job is given a block size range instead of a fixed value,
		the smallest block size is used as the metric.

rate_iops_min=int If fio doesn't meet this rate of IO, it will cause
		the job to exit.

ratecycle=int	Average bandwidth for 'rate' and 'ratemin' over this number
		of milliseconds.
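
A sketch combining the rate options: cap throughput at 8192 KiB/sec, but
abort the job if it cannot sustain at least 1024 KiB/sec, averaged over
1000 msec windows (all values here are arbitrary examples):

```ini
; -- start job file --
[rate-example]
rw=write
size=256m
rate=8192       ; cap bandwidth at 8192 KiB/sec
ratemin=1024    ; exit the job below 1024 KiB/sec
ratecycle=1000  ; average rate/ratemin over 1000 msec
; -- end job file --
```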

cpumask=int	Set the CPU affinity of this job. The parameter given is a
		bitmask of allowed CPUs the job may run on. So if you want
		the allowed CPUs to be 1 and 5, you would pass the decimal
		value of (1 << 1 | 1 << 5), or 34. See man
		sched_setaffinity(2). This may not work on all supported
		operating systems or kernel versions. This option doesn't
		work well for a higher CPU count than what you can store in
		an integer mask, so it can only control the first 32 CPUs.
		For boxes with larger CPU counts, use cpus_allowed.

cpus_allowed=str Controls the same options as cpumask, but it allows a text
		setting of the permitted CPUs instead. So to use CPUs 1 and
		5, you would specify cpus_allowed=1,5. This option also
		allows a range of CPUs. Say you wanted a binding to CPUs
		1, 5, and 8-15, you would set cpus_allowed=1,5,8-15.

startdelay=time	Start this job the specified number of seconds after fio
		has started. Only useful if the job file contains several
		jobs, and you want to delay starting some jobs to a certain
		time.

runtime=time	Tell fio to terminate processing after the specified number
		of seconds. It can be quite hard to determine for how long
		a specified job will run, so this parameter is handy to
		cap the total runtime to a given time.

time_based	If set, fio will run for the duration of the runtime
		specified even if the file(s) are completely read or
		written. It will simply loop over the same workload
		as many times as the runtime allows.

ramp_time=time	If set, fio will run the specified workload for this amount
		of time before logging any performance numbers. Useful for
		letting performance settle before logging results, thus
		minimizing the runtime required for stable results. Note
		that the ramp_time is considered lead-in time for a job,
		thus it will increase the total runtime if a special timeout
		or runtime is specified.
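
Combined, runtime, time_based, and ramp_time give a fixed-duration
measurement with a warm-up period. A sketch (values are arbitrary; per
the note above, the total wall time here would be roughly 70 seconds):

```ini
; -- start job file --
[timed-example]
rw=randread
size=64m
time_based      ; loop over the workload for the full runtime
runtime=60      ; measure for 60 seconds
ramp_time=10    ; warm up for 10 seconds before logging numbers
; -- end job file --
```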

invalidate=bool	Invalidate the buffer/page cache parts for this file prior
		to starting io. Defaults to true.

sync=bool	Use sync io for buffered writes. For the majority of the
		io engines, this means using O_SYNC.
| 635 | |
| 636 | iomem=str |
| 637 | mem=str Fio can use various types of memory as the io unit buffer. |
| 638 | The allowed values are: |
| 639 | |
| 640 | malloc Use memory from malloc(3) as the buffers. |
| 641 | |
| 642 | shm Use shared memory as the buffers. Allocated |
| 643 | through shmget(2). |
| 644 | |
| 645 | shmhuge Same as shm, but use huge pages as backing. |
| 646 | |
| 647 | mmap Use mmap to allocate buffers. May either be |
| 648 | anonymous memory, or can be file backed if |
| 649 | a filename is given after the option. The |
| 650 | format is mem=mmap:/path/to/file. |
| 651 | |
| 652 | mmaphuge Use a memory mapped huge file as the buffer |
| 653 | backing. Append filename after mmaphuge, ala |
| 654 | mem=mmaphuge:/hugetlbfs/file |
| 655 | |
| 656 | The area allocated is a function of the maximum allowed |
| 657 | bs size for the job, multiplied by the io depth given. Note |
| 658 | that for shmhuge and mmaphuge to work, the system must have |
| 659 | free huge pages allocated. This can normally be checked |
| 660 | and set by reading/writing /proc/sys/vm/nr_hugepages on a |
| 661 | Linux system. Fio assumes a huge page is 4MiB in size. So |
| 662 | to calculate the number of huge pages you need for a given |
| 663 | job file, add up the io depth of all jobs (normally one unless |
| 664 | iodepth= is used) and multiply by the maximum bs set. Then |
| 665 | divide that number by the huge page size. You can see the |
| 666 | size of the huge pages in /proc/meminfo. If no huge pages
| 667 | are allocated (i.e. nr_hugepages is zero), using mmaphuge or
| 668 | shmhuge will fail. Also see hugepage-size.
| 669 | |
| 670 | mmaphuge also needs to have hugetlbfs mounted and the file |
| 671 | location should point there. So if it's mounted in /huge, |
| 672 | you would use mem=mmaphuge:/huge/somefile. |
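As a worked sizing example (the numbers and file names here are purely illustrative): a job with iodepth=4 and a maximum bs of 1m needs 4 x 1MiB = 4MiB of buffer space, i.e. a single 4MiB huge page. Such a job, using huge page backed mmap buffers from a hugetlbfs mounted at /huge, might be sketched like this:

```ini
[global]
iodepth=4
bs=1m

[hugepage-buffers]
rw=read
filename=/data/testfile
mem=mmaphuge:/huge/fio-buffers
```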
| 673 | |
| 674 | hugepage-size=siint |
| 675 | Defines the size of a huge page. Must at least be equal |
| 676 | to the system setting, see /proc/meminfo. Defaults to 4MiB. |
| 677 | Should probably always be a multiple of megabytes, so using
| 678 | hugepage-size=Xm is the preferred way to set this, to avoid
| 679 | setting a bad non-power-of-2 value.
| 680 | |
| 681 | exitall When one job finishes, terminate the rest. The default is
| 682 | to wait for each job to finish; sometimes that is not the
| 683 | desired action.
| 684 | |
| 685 | bwavgtime=int Average the calculated bandwidth over the given time. Value |
| 686 | is specified in milliseconds. |
| 687 | |
| 688 | create_serialize=bool If true, serialize file creation for the jobs.
| 689 | This may be handy to avoid interleaving of data |
| 690 | files, which may greatly depend on the filesystem |
| 691 | used and even the number of processors in the system. |
| 692 | |
| 693 | create_fsync=bool fsync the data file after creation. This is the |
| 694 | default. |
| 695 | |
| 696 | create_on_open=bool Don't pre-create the files for IO, just open()
| 697 | them when it's time to do IO to that file.
| 698 | |
| 699 | unlink=bool Unlink the job files when done. Not the default, as repeated |
| 700 | runs of that job would then waste time recreating the file |
| 701 | set again and again. |
| 702 | |
| 703 | loops=int Run the specified number of iterations of this job. Used |
| 704 | to repeat the same workload a given number of times. Defaults |
| 705 | to 1. |
| 706 | |
| 707 | do_verify=bool Run the verify phase after a write phase. Only makes sense if |
| 708 | verify is set. Defaults to 1. |
| 709 | |
| 710 | verify=str If writing to a file, fio can verify the file contents |
| 711 | after each iteration of the job. The allowed values are: |
| 712 | |
| 713 | md5 Use an md5 sum of the data area and store |
| 714 | it in the header of each block. |
| 715 | |
| 716 | crc64 Use an experimental crc64 sum of the data |
| 717 | area and store it in the header of each |
| 718 | block. |
| 719 | |
| 720 | crc32c Use a crc32c sum of the data area and store |
| 721 | it in the header of each block. |
| 722 | |
| 723 | crc32c-intel Use hardware assisted crc32c calculation
| 724 | provided on SSE4.2 enabled processors. |
| 725 | |
| 726 | crc32 Use a crc32 sum of the data area and store |
| 727 | it in the header of each block. |
| 728 | |
| 729 | crc16 Use a crc16 sum of the data area and store |
| 730 | it in the header of each block. |
| 731 | |
| 732 | crc7 Use a crc7 sum of the data area and store |
| 733 | it in the header of each block. |
| 734 | |
| 735 | sha512 Use sha512 as the checksum function. |
| 736 | |
| 737 | sha256 Use sha256 as the checksum function. |
| 738 | |
| 739 | meta Write extra information about each io |
| 740 | (timestamp, block number etc.). The block |
| 741 | number is verified. |
| 742 | |
| 743 | null Only pretend to verify. Useful for testing |
| 744 | internals with ioengine=null, not for much |
| 745 | else. |
| 746 | |
| 747 | This option can be used for repeated burn-in tests of a |
| 748 | system to make sure that the written data is also |
| 749 | correctly read back. |
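For instance, a simple burn-in style job that writes a file and then verifies the contents with crc32c could be sketched like this (the size and filename are illustrative):

```ini
[write-and-verify]
rw=write
bs=4k
size=512m
filename=/data/burnin.0
verify=crc32c
do_verify=1
```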
| 750 | |
| 751 | verifysort=bool If set, fio will sort written verify blocks when it deems |
| 752 | it faster to read them back in a sorted manner. This is |
| 753 | often the case when overwriting an existing file, since |
| 754 | the blocks are already laid out in the file system. You |
| 755 | can ignore this option unless doing huge amounts of really |
| 756 | fast IO where the red-black tree sorting CPU time becomes |
| 757 | significant. |
| 758 | |
| 759 | verify_offset=siint Swap the verification header with data somewhere else
| 760 | in the block before writing. It's swapped back before
| 761 | verifying.
| 762 | |
| 763 | verify_interval=siint Write the verification header at a finer granularity
| 764 | than the blocksize. It will be written for chunks the
| 765 | size of verify_interval. blocksize should divide this
| 766 | evenly.
| 767 | |
| 768 | verify_pattern=int If set, fio will fill the io buffers with this |
| 769 | pattern. Fio defaults to filling with totally random |
| 770 | bytes, but sometimes it's interesting to fill with a known |
| 771 | pattern for io verification purposes. Depending on the |
| 772 | width of the pattern, fio will fill 1/2/3/4 bytes of the |
| 773 | buffer at a time. The verify_pattern cannot be larger than
| 774 | a 32-bit quantity. |
| 775 | |
| 776 | verify_fatal=bool Normally fio will keep checking the entire contents |
| 777 | before quitting on a block verification failure. If this |
| 778 | option is set, fio will exit the job on the first observed |
| 779 | failure. |
| 780 | |
| 781 | stonewall Wait for preceding jobs in the job file to exit before
| 782 | starting this one. Can be used to insert serialization |
| 783 | points in the job file. A stone wall also implies starting |
| 784 | a new reporting group. |
| 785 | |
| 786 | new_group Start a new reporting group. If this option isn't given, |
| 787 | jobs in a file will be part of the same reporting group |
| 788 | unless separated by a stone wall (or if it's a group |
| 789 | by itself, with the numjobs option). |
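As an illustration, the following job file runs a sequential write phase and then, because of the stonewall, a random read phase afterwards, with each phase reported as its own group (sizes are illustrative):

```ini
[seq-write]
rw=write
size=256m

[rand-read]
stonewall
rw=randread
size=256m
```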
| 790 | |
| 791 | numjobs=int Create the specified number of clones of this job. May be |
| 792 | used to setup a larger number of threads/processes doing |
| 793 | the same thing. We regard that grouping of jobs as a |
| 794 | specific group. |
| 795 | |
| 796 | group_reporting If 'numjobs' is set, it may be interesting to display |
| 797 | statistics for the group as a whole instead of for each |
| 798 | individual job. This is especially true if 'numjobs' is
| 799 | large, as looking at individual thread/process output quickly
| 800 | becomes unwieldy. If 'group_reporting' is specified, fio |
| 801 | will show the final report per-group instead of per-job. |
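A sketch of a job file that clones one job across 8 processes and reports them as a single group (the job name and size are illustrative):

```ini
[global]
group_reporting

[worker]
numjobs=8
rw=randread
size=128m
```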
| 802 | |
| 803 | thread fio defaults to forking jobs, however if this option is |
| 804 | given, fio will use pthread_create(3) to create threads |
| 805 | instead. |
| 806 | |
| 807 | zonesize=siint Divide a file into zones of the specified size. See zoneskip. |
| 808 | |
| 809 | zoneskip=siint Skip the specified number of bytes when zonesize data has |
| 810 | been read. The two zone options can be used to only do |
| 811 | io on zones of a file. |
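For example, to read only the first 64MiB of every 256MiB region of a file, one might combine the two options like this (the filename is illustrative):

```ini
[zoned-read]
rw=read
filename=/data/bigfile
zonesize=64m
zoneskip=192m
```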
| 812 | |
| 813 | write_iolog=str Write the issued io patterns to the specified file. See |
| 814 | read_iolog. |
| 815 | |
| 816 | read_iolog=str Open an iolog with the specified file name and replay the |
| 817 | io patterns it contains. This can be used to store a |
| 818 | workload and replay it sometime later. The iolog given |
| 819 | may also be a blktrace binary file, which allows fio |
| 820 | to replay a workload captured by blktrace. See blktrace |
| 821 | for how to capture such logging data. For blktrace replay,
| 822 | the file needs to be turned into a blkparse binary data
| 823 | file first (blkparse <device> -o /dev/null -d file_for_fio.bin).
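One possible record/replay sequence using these options (the log path is illustrative; the two jobs would be run in separate fio invocations, not together):

```ini
; first run: record the issued io pattern
[record]
rw=randread
size=64m
write_iolog=/tmp/fio.iolog

; later run: replay the recorded pattern
[replay]
read_iolog=/tmp/fio.iolog
```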
| 824 | |
| 825 | write_bw_log=str If given, write a bandwidth log of the jobs in this job |
| 826 | file. Can be used to store data of the bandwidth of the |
| 827 | jobs in their lifetime. The included fio_generate_plots |
| 828 | script uses gnuplot to turn these text files into nice |
| 829 | graphs. See write_lat_log for behaviour of given
| 830 | filename. For this option, the postfix is _bw.log.
| 831 | |
| 832 | write_lat_log=str Same as write_bw_log, except that this option stores io |
| 833 | completion latencies instead. If no filename is given |
| 834 | with this option, the default filename of "jobname_type.log" |
| 835 | is used. Even if the filename is given, fio will still |
| 836 | append the type of log. So if one specifies |
| 837 | |
| 838 | write_lat_log=foo |
| 839 | |
| 840 | The actual log names will be foo_clat.log and foo_slat.log. |
| 841 | This helps fio_generate_plots find the logs automatically.
| 842 | |
| 843 | lockmem=siint Pin down the specified amount of memory with mlock(2). Can |
| 844 | potentially be used instead of removing memory or booting |
| 845 | with less memory to simulate a smaller amount of memory. |
| 846 | |
| 847 | exec_prerun=str Before running this job, issue the command specified |
| 848 | through system(3). |
| 849 | |
| 850 | exec_postrun=str After the job completes, issue the command specified |
| 851 | through system(3).
| 852 | |
| 853 | ioscheduler=str Attempt to switch the device hosting the file to the specified |
| 854 | io scheduler before running. |
| 855 | |
| 856 | cpuload=int If the job is a CPU cycle eater, attempt to use the specified |
| 857 | percentage of CPU cycles. |
| 858 | |
| 859 | cpuchunks=int If the job is a CPU cycle eater, split the load into |
| 860 | cycles of the given time. In milliseconds. |
| 861 | |
| 862 | disk_util=bool Generate disk utilization statistics, if the platform |
| 863 | supports it. Defaults to on. |
| 864 | |
| 865 | disable_clat=bool Disable measurements of completion latency numbers. Useful |
| 866 | only for cutting back the number of calls to gettimeofday, |
| 867 | as that does impact performance at really high IOPS rates. |
| 868 | Note that to really get rid of a large amount of these |
| 869 | calls, this option must be used with disable_slat and |
| 870 | disable_bw as well. |
| 871 | |
| 872 | disable_slat=bool Disable measurements of submission latency numbers. See |
| 873 | disable_clat. |
| 874 | |
| 875 | disable_bw=bool Disable measurements of throughput/bandwidth numbers. See |
| 876 | disable_clat. |
| 877 | |
| 878 | gtod_reduce=bool Enable all of the gettimeofday() reducing options |
| 879 | (disable_clat, disable_slat, disable_bw) plus reduce |
| 880 | precision of the timeout somewhat to really shrink |
| 881 | the gettimeofday() call count. With this option enabled, |
| 882 | we only do about 0.4% of the gtod() calls we would have |
| 883 | done if all time keeping was enabled. |
| 884 | |
| 885 | gtod_cpu=int Sometimes it's cheaper to dedicate a single thread of |
| 886 | execution to just getting the current time. Fio (and |
| 887 | databases, for instance) are very intensive on gettimeofday() |
| 888 | calls. With this option, you can set one CPU aside for |
| 889 | doing nothing but logging current time to a shared memory |
| 890 | location. Then the other threads/processes that run IO |
| 891 | workloads need only copy that segment, instead of entering |
| 892 | the kernel with a gettimeofday() call. The CPU set aside |
| 893 | for doing these time calls will be excluded from other |
| 894 | uses. Fio will manually clear it from the CPU mask of other |
| 895 | jobs. |
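A sketch of how this might be used for a high-IOPS test on a multi-core box (the CPU number and job parameters are illustrative):

```ini
[global]
; dedicate CPU 1 to time keeping; fio clears it from the other jobs' masks
gtod_cpu=1

[iops-test]
rw=randread
iodepth=32
```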
| 896 | |
| 897 | |
| 898 | 6.0 Interpreting the output |
| 899 | --------------------------- |
| 900 | |
| 901 | fio spits out a lot of output. While running, fio will display the |
| 902 | status of the jobs created. An example of that would be: |
| 903 | |
| 904 | Threads: 1: [_r] [24.8% done] [ 13509/ 8334 kb/s] [eta 00h:01m:31s] |
| 905 | |
| 906 | The characters inside the square brackets denote the current status of |
| 907 | each thread. The possible values (in typical life cycle order) are: |
| 908 | |
| 909 | Idle Run |
| 910 | ---- --- |
| 911 | P Thread setup, but not started. |
| 912 | C Thread created. |
| 913 | I Thread initialized, waiting. |
| 914 | R Running, doing sequential reads. |
| 915 | r Running, doing random reads. |
| 916 | W Running, doing sequential writes. |
| 917 | w Running, doing random writes. |
| 918 | M Running, doing mixed sequential reads/writes. |
| 919 | m Running, doing mixed random reads/writes. |
| 920 | F Running, currently waiting for fsync() |
| 921 | V Running, doing verification of written data. |
| 922 | E Thread exited, not reaped by main thread yet. |
| 923 | _ Thread reaped. |
| 924 | |
| 925 | The other values are fairly self explanatory - number of threads |
| 926 | currently running and doing io, rate of io since last check (read speed |
| 927 | listed first, then write speed), and the estimated completion percentage |
| 928 | and time for the running group. It's impossible to estimate runtime of |
| 929 | the following groups (if any). |
| 930 | |
| 931 | When fio is done (or interrupted by ctrl-c), it will show the data for |
| 932 | each thread, group of threads, and disks in that order. For each data |
| 933 | direction, the output looks like: |
| 934 | |
| 935 | Client1 (g=0): err= 0: |
| 936 | write: io= 32MiB, bw= 666KiB/s, runt= 50320msec |
| 937 | slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92 |
| 938 | clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82 |
| 939 | bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68 |
| 940 | cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17 |
| 941 | IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0% |
| 942 | submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% |
| 943 | complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% |
| 944 | issued r/w: total=0/32768, short=0/0 |
| 945 | lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%, |
| 946 | lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0% |
| 947 | |
| 948 | The client number is printed, along with the group id and error of that |
| 949 | thread. Below are the io statistics, here for writes. In the order listed,
| 950 | they denote: |
| 951 | |
| 952 | io= Number of megabytes of io performed
| 953 | bw= Average bandwidth rate |
| 954 | runt= The runtime of that thread |
| 955 | slat= Submission latency (avg being the average, stdev being the |
| 956 | standard deviation). This is the time it took to submit |
| 957 | the io. For sync io, the slat is really the completion |
| 958 | latency, since queue/complete is one operation there. This |
| 959 | value can be in milliseconds or microseconds, fio will choose |
| 960 | the most appropriate base and print that. In the example |
| 961 | above, milliseconds is the best scale. |
| 962 | clat= Completion latency. Same names as slat, this denotes the |
| 963 | time from submission to completion of the io pieces. For |
| 964 | sync io, clat will usually be equal (or very close) to 0, |
| 965 | as the time from submit to complete is basically just |
| 966 | CPU time (io has already been done, see slat explanation). |
| 967 | bw= Bandwidth. Same names as the xlat stats, but also includes |
| 968 | an approximate percentage of total aggregate bandwidth |
| 969 | this thread received in this group. This last value is |
| 970 | only really useful if the threads in this group are on the |
| 971 | same disk, since they are then competing for disk access. |
| 972 | cpu= CPU usage. User and system time, along with the number |
| 973 | of context switches this thread went through, usage of |
| 974 | system and user time, and finally the number of major |
| 975 | and minor page faults. |
| 976 | IO depths= The distribution of io depths over the job life time. The |
| 977 | numbers are divided into powers of 2, so for example the |
| 978 | 16= entries includes depths up to that value but higher |
| 979 | than the previous entry. In other words, it covers the |
| 980 | range from 16 to 31. |
| 981 | IO submit= How many pieces of IO were submitted in a single submit
| 982 | call. Each entry denotes that amount and below, until
| 983 | the previous entry - e.g., 8=100% means that we submitted
| 984 | anywhere in between 5-8 ios per submit call.
| 985 | IO complete= Like the above submit number, but for completions instead. |
| 986 | IO issued= The number of read/write requests issued, and how many |
| 987 | of them were short. |
| 988 | IO latencies= The distribution of IO completion latencies. This is the |
| 989 | time from when IO leaves fio and when it gets completed. |
| 990 | The numbers follow the same pattern as the IO depths, |
| 991 | meaning that 2=1.6% means that 1.6% of the IO completed |
| 992 | within 2 msecs, 20=12.8% means that 12.8% of the IO |
| 993 | took more than 10 msecs, but less than (or equal to) 20 msecs. |
| 994 | |
| 995 | After each client has been listed, the group statistics are printed. They |
| 996 | will look like this: |
| 997 | |
| 998 | Run status group 0 (all jobs): |
| 999 | READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec |
| 1000 | WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec |
| 1001 | |
| 1002 | For each data direction, it prints: |
| 1003 | |
| 1004 | io= Number of megabytes of io performed.
| 1005 | aggrb= Aggregate bandwidth of threads in this group. |
| 1006 | minb= The minimum average bandwidth a thread saw. |
| 1007 | maxb= The maximum average bandwidth a thread saw. |
| 1008 | mint= The smallest runtime of the threads in that group. |
| 1009 | maxt= The longest runtime of the threads in that group. |
| 1010 | |
| 1011 | And finally, the disk statistics are printed. They will look like this: |
| 1012 | |
| 1013 | Disk stats (read/write): |
| 1014 | sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00% |
| 1015 | |
| 1016 | Each value is printed for both reads and writes, with reads first. The |
| 1017 | numbers denote: |
| 1018 | |
| 1019 | ios= Number of ios performed by all groups. |
| 1020 | merge= Number of merges performed by the io scheduler.
| 1021 | ticks= Number of ticks we kept the disk busy.
| 1022 | in_queue= Total time spent in the disk queue.
| 1023 | util= The disk utilization. A value of 100% means we kept the disk |
| 1024 | busy constantly, 50% would be a disk idling half of the time. |
| 1025 | |
| 1026 | |
| 1027 | 7.0 Terse output |
| 1028 | ---------------- |
| 1029 | |
| 1030 | For scripted usage where you typically want to generate tables or graphs |
| 1031 | of the results, fio can output the results in a semicolon separated format. |
| 1032 | The format is one long line of values, such as: |
| 1033 | |
| 1034 | client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000;0;0;0.000000;0.000000;929380;1152890;25.510151%;1078276.333333;128948.113404;0;0;0;0;0;0.000000;0.000000;0;0;0.000000;0.000000;0;0;0.000000%;0.000000;0.000000;100.000000%;0.000000%;324;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;100.0%;0.0%;0.0%;0.0%;0.0%;0.0% |
| 1035 | ;0.0%;0.0%;0.0%;0.0%;0.0% |
| 1036 | |
| 1037 | To enable terse output, use the --minimal command line option. |
| 1038 | |
| 1039 | Split up, the format is as follows: |
| 1040 | |
| 1041 | jobname, groupid, error |
| 1042 | READ status: |
| 1043 | KiB IO, bandwidth (KiB/sec), runtime (msec) |
| 1044 | Submission latency: min, max, mean, deviation |
| 1045 | Completion latency: min, max, mean, deviation |
| 1046 | Bw: min, max, aggregate percentage of total, mean, deviation |
| 1047 | WRITE status: |
| 1048 | KiB IO, bandwidth (KiB/sec), runtime (msec) |
| 1049 | Submission latency: min, max, mean, deviation |
| 1050 | Completion latency: min, max, mean, deviation |
| 1051 | Bw: min, max, aggregate percentage of total, mean, deviation |
| 1052 | CPU usage: user, system, context switches, major faults, minor faults |
| 1053 | IO depths: <=1, 2, 4, 8, 16, 32, >=64 |
| 1054 | IO latencies: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, >=2000 |
| 1055 | Text description |
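Since the format is a fixed order of semicolon separated fields, it is straightforward to post-process. A minimal sketch in Python, pulling out just the leading fields (the key names here are made up for illustration; the field order follows the list above):

```python
# Split a fio --minimal (terse) line and label the leading fields.
# Only jobname/groupid/error and the first READ fields are decoded here.
def parse_terse(line):
    fields = line.strip().split(";")
    return {
        "jobname": fields[0],
        "groupid": int(fields[1]),
        "error": int(fields[2]),
        # READ status: KiB of IO, bandwidth (KiB/sec), runtime (msec)
        "read_io_kb": int(fields[3]),
        "read_bw_kb": int(fields[4]),
        "read_runtime_ms": int(fields[5]),
    }

sample = "client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000"
print(parse_terse(sample)["read_bw_kb"])
```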
| 1056 | |