after a comma, it will apply to writes only. In other words,
the format is either bs=read_and_write or bs=read,write.
bs=4k,8k will thus use 4k blocks for reads, and 8k blocks
- for writes.
+ for writes. If you only wish to set the write size, you
+ can do so by passing an empty read size - bs=,8k will set
+		8k for writes and leave the read size at its default.
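As a quick sketch (job name, size, and rw pattern are made up for illustration, not taken from this document), a job file using the split form could look like:

```shell
# Hypothetical job file: 4k reads, 8k writes via the bs=read,write form.
cat > bs-example.fio <<'EOF'
[split-bs]
rw=rw
size=16m
bs=4k,8k
EOF
grep '^bs=' bs-example.fio    # prints bs=4k,8k
```

Using bs=,8k instead would leave reads at the default and only pin the write size.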
bsrange=irange Instead of giving a single block size, specify a range
and fio will mix the issued io block sizes. The issued
we use read(2) and write(2) for asynchronous
io.
+ null Doesn't transfer any data, just pretends
+ to. This is mainly used to exercise fio
+ itself and for debugging/testing purposes.
+
iodepth=int This defines how many io units to keep in flight against
the file. The default is 1 for each file defined in this
		job; it can be overridden with a larger value for higher
		concurrency.
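For illustration (job name and values are assumed), note that iodepth only has an effect with an asynchronous engine such as libaio:

```shell
# Hypothetical job file keeping 16 io units in flight.
# A synchronous engine would cap the effective depth at 1.
cat > depth-example.fio <<'EOF'
[deep]
ioengine=libaio
iodepth=16
rw=randread
bs=4k
size=64m
EOF
grep '^iodepth=' depth-example.fio    # prints iodepth=16
```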
thinktime=int Stall the job x microseconds after an io has completed before
issuing the next. May be used to simulate processing being
- done by an application.
+ done by an application. See thinktime_blocks.
+
+thinktime_blocks
+ Only valid if thinktime is set - control how many blocks
+	to issue before waiting 'thinktime' usecs. If not set, it
+	defaults to 1, which will make fio wait 'thinktime' usecs
+	after every block.
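A sketch combining the two options (job name and values are made up): issue 8 blocks, stall 1000 usecs, repeat.

```shell
# Hypothetical job: 8 blocks per burst, then a 1000 usec think time.
cat > thinktime-example.fio <<'EOF'
[sleepy]
rw=read
bs=4k
size=32m
thinktime=1000
thinktime_blocks=8
EOF
grep '^thinktime' thinktime-example.fio
```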
rate=int Cap the bandwidth used by this job to this number of KiB/sec.
shm Use shared memory as the buffers. Allocated
through shmget(2).
- mmap Use anonymous memory maps as the buffers.
- Allocated through mmap(2).
+ shmhuge Same as shm, but use huge pages as backing.
+
+ mmap Use mmap to allocate buffers. May either be
+ anonymous memory, or can be file backed if
+ a filename is given after the option. The
+ format is mem=mmap:/path/to/file.
+
+ mmaphuge Use a memory mapped huge file as the buffer
+ backing. Append filename after mmaphuge, ala
+ mem=mmaphuge:/hugetlbfs/file
The area allocated is a function of the maximum allowed
- bs size for the job, multiplied by the io depth given.
+ bs size for the job, multiplied by the io depth given. Note
+ that for shmhuge and mmaphuge to work, the system must have
+ free huge pages allocated. This can normally be checked
+ and set by reading/writing /proc/sys/vm/nr_hugepages on a
+ Linux system. Fio assumes a huge page is 4MiB in size. So
+ to calculate the number of huge pages you need for a given
+ job file, add up the io depth of all jobs (normally one unless
+ iodepth= is used) and multiply by the maximum bs set. Then
+ divide that number by the huge page size. You can see the
+	size of the huge pages in /proc/meminfo. If no huge pages
+	are allocated (i.e. nr_hugepages is zero), using mmaphuge
+	or shmhuge will fail. Also see hugepage-size.
+
+ mmaphuge also needs to have hugetlbfs mounted and the file
+ location should point there. So if it's mounted in /huge,
+ you would use mem=mmaphuge:/huge/somefile.
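As a worked sketch of that calculation (the iodepth and bs values are assumed, not from this document): two jobs with iodepth=8 each and a maximum bs of 1m need (8 + 8) * 1MiB = 16MiB of buffer space, which is four 4MiB huge pages.

```shell
# Worked example with assumed values: two jobs, iodepth=8 each,
# maximum bs of 1m, and fio's assumed 4MiB huge page size.
total_kb=$(( (8 + 8) * 1024 ))              # 16384 KiB of buffer space
hugepage_kb=$(( 4 * 1024 ))                 # 4MiB huge pages, in KiB
pages=$(( (total_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$pages"                               # prints 4
# Then allocate them (as root) before running the job:
# echo 4 > /proc/sys/vm/nr_hugepages
```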
+
+hugepage-size=siint
+	Defines the size of a huge page. Must be at least equal
+	to the system setting, see /proc/meminfo. Defaults to 4MiB.
+	Should probably always be a multiple of megabytes, so using
+	hugepage-size=Xm is the preferred way to set this, to avoid
+	setting a bad non-power-of-2 value.
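Tying mmaphuge and hugepage-size together, a job section might look like this sketch (the job name and the /huge mount point are assumed; hugetlbfs must actually be mounted there and pages preallocated via nr_hugepages):

```shell
# Hypothetical job using huge-page backed mmap buffers.
cat > huge-example.fio <<'EOF'
[huge]
rw=write
bs=1m
size=16m
mem=mmaphuge:/huge/somefile
hugepage-size=4m
EOF
grep '^mem=' huge-example.fio    # prints mem=mmaphuge:/huge/somefile
```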
exitall When one job finishes, terminate the rest. The default is
to wait for each job to finish, sometimes that is not the