'#', the entire line is discarded as a comment.
So let's look at a really simple job file that defines two processes, each
-randomly reading from a 128MiB file.
+randomly reading from a 128MB file.
; -- start job file --
[global]
Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
-increased the buffer size used to 32KiB and define numjobs to 4 to
+increased the buffer size used to 32KB and define numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
-to their own 64MiB file. Instead of using the above job file, you could
+to their own 64MB file. Instead of using the above job file, you could
have given the parameters on the command line. For this case, you would
specify:
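
A sketch of such an invocation, assuming the libaio engine and the
parameters described above (the command line flags simply mirror the
job file options):

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --size=64m --numjobs=4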
IO's, instead of for every IO. Use rw=randread:8 to specify
that.
+kb_base=int	The base unit for a kilobyte. The de facto base is 2^10, 1024.
+		Storage manufacturers like to use 10^3 or 1000 as a base
+		ten unit instead, for obvious reasons. Allowed values are
+		1024 or 1000, with 1024 being the default.
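+
+		For instance, a sketch of a global section preferring SI
+		(base ten) units in reporting:
+
+		kb_base=1000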
+
randrepeat=bool For random IO workloads, seed the generator in a predictable
way so that results are repeatable across repetitions.
not sync the file. The exception is the sg io engine, which
synchronizes the disk cache anyway.
+fdatasync=int	Like fsync= but uses fdatasync() to only sync data and not
+		metadata blocks.
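+
+		E.g. to sync the written data after every 32 writes (a
+		sketch):
+
+		fdatasync=32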
+
overwrite=bool If true, writes to a file will always overwrite existing
data. If the file doesn't already exist, it will be
created before the write phase begins. If the file exists
that for shmhuge and mmaphuge to work, the system must have
free huge pages allocated. This can normally be checked
and set by reading/writing /proc/sys/vm/nr_hugepages on a
- Linux system. Fio assumes a huge page is 4MiB in size. So
+ Linux system. Fio assumes a huge page is 4MB in size. So
to calculate the number of huge pages you need for a given
job file, add up the io depth of all jobs (normally one unless
iodepth= is used) and multiply by the maximum bs set. Then
location should point there. So if it's mounted in /huge,
you would use mem=mmaphuge:/huge/somefile.
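
		As a quick worked example, assuming the 4MB huge page size
		mentioned above: a single job with iodepth=4 and bs=2m needs
		4*2MB = 8MB, i.e. two huge pages, which could be reserved
		with:

		$ echo 2 > /proc/sys/vm/nr_hugepages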
+iomem_align=int	This indicates the memory alignment of the IO memory buffers.
+		Note that the given alignment is applied to the first IO unit
+		buffer; if using iodepth, the alignment of the following buffers
+		is given by the bs used. In other words, if using a bs that is
+		a multiple of the page size in the system, all buffers will
+		be aligned to this value. If using a bs that is not page
+		aligned, the alignment of subsequent IO memory buffers is the
+		sum of the iomem_align and bs used.
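+
+		For example, a sketch aligning the first IO buffer to a 4KB
+		boundary:
+
+		iomem_align=4k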
+
hugepage-size=int
Defines the size of a huge page. Must at least be equal
- to the system setting, see /proc/meminfo. Defaults to 4MiB.
+ to the system setting, see /proc/meminfo. Defaults to 4MB.
Should probably always be a multiple of megabytes, so using
hugepage-size=Xm is the preferred way to set this to avoid
setting a non-pow-2 bad value.
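
		A sketch of matching this option to the running system:

		$ grep Hugepagesize /proc/meminfo

		and then setting hugepage-size accordingly, e.g.
		hugepage-size=2m for a 2048kB system huge page size.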
pre_read=bool If this is given, files will be pre-read into memory before
starting the given IO operation. This will also clear
the 'invalidate' flag, since it is pointless to pre-read
- and then drop the cache.
+ and then drop the cache. This will only work for IO engines
+ that are seekable, since they allow you to read the same data
+		multiple times. Thus it will not work on e.g. network or splice
+ IO.
unlink=bool Unlink the job files when done. Not the default, as repeated
runs of that job would then waste time recreating the file
before quitting on a block verification failure. If this
option is set, fio will exit the job on the first observed
failure.
+
+verify_async=int Fio will normally verify IO inline from the submitting
+ thread. This option takes an integer describing how many
+ async offload threads to create for IO verification instead,
+ causing fio to offload the duty of verifying IO contents
+ to one or more separate threads. If using this offload
+ option, even sync IO engines can benefit from using an
+ iodepth setting higher than 1, as it allows them to have
+ IO in flight while verifies are running.
+
+verify_async_cpus=str Tell fio to set the given CPU affinity on the
+ async IO verification threads. See cpus_allowed for the
+ format used.
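+
+		A sketch combining the two with a verifying write workload
+		(the verify method shown is just an example):
+
+		verify=crc32
+		verify_async=2
+		verify_async_cpus=0,1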
stonewall	Wait for preceding jobs in the job file to exit, before
starting this one. Can be used to insert serialization
direction, the output looks like:
Client1 (g=0): err= 0:
- write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
+ write: io= 32MB, bw= 666KB/s, runt= 50320msec
slat (msec): min= 0, max= 136, avg= 0.03, stdev= 1.92
clat (msec): min= 0, max= 631, avg=48.50, stdev=86.82
- bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
+ bw (KB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
cpu : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
will look like this:
Run status group 0 (all jobs):
- READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
- WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
+ READ: io=64MB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
+ WRITE: io=64MB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
For each data direction, it prints:
jobname, groupid, error
READ status:
- KiB IO, bandwidth (KiB/sec), runtime (msec)
+ KB IO, bandwidth (KB/sec), runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation
WRITE status:
- KiB IO, bandwidth (KiB/sec), runtime (msec)
+ KB IO, bandwidth (KB/sec), runtime (msec)
Submission latency: min, max, mean, deviation
Completion latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation