size if larger than the current file size. If this parameter
is not given and the file exists, the file size will be used.
-bs=siint The block size used for the io units. Defaults to 4k.
-
-read_bs=siint
-write_bs=siint If the workload is a mixed read-write workload, you can use
- these options to set separate block sizes.
+bs=siint The block size used for the io units. Defaults to 4k. Values
+		can be given for both reads and writes. If a single siint is
+ given, it will apply to both. If a second siint is specified
+ after a comma, it will apply to writes only. In other words,
+ the format is either bs=read_and_write or bs=read,write.
+ bs=4k,8k will thus use 4k blocks for reads, and 8k blocks
+ for writes. If you only wish to set the write size, you
+ can do so by passing an empty read size - bs=,8k will set
+ 8k for writes and leave the read default value.
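As a sketch of the split syntax described above (hypothetical job name and size; rw= and size= are options documented elsewhere in this file):

```ini
; 4k blocks for reads, 8k blocks for writes in a mixed workload
[bs-split-example]
rw=randrw
bs=4k,8k
size=128m
```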
bsrange=irange Instead of giving a single block size, specify a range
and fio will mix the issued io block sizes. The issued
io unit will always be a multiple of the minimum value
- given (also see bs_unaligned).
-
-read_bsrange=irange
-write_bsrange=irange
- If the workload is a mixed read-write workload, you can use
- one of these options to set separate block size ranges for
- reads and writes.
+ given (also see bs_unaligned). Applies to both reads and
+ writes, however a second range can be given after a comma.
+ See bs=.
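A minimal sketch of the comma form for bsrange, assuming a mixed random read-write job (hypothetical job name and size):

```ini
; reads use 1k-4k blocks, writes use 2k-8k blocks
[bsrange-split-example]
rw=randrw
bsrange=1k-4k,2k-8k
size=128m
```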
bs_unaligned If this option is given, any byte size value within bsrange
		may be used as a block size. This typically won't work with
we use read(2) and write(2) for asynchronous
io.
+ null Doesn't transfer any data, just pretends
+ to. This is mainly used to exercise fio
+ itself and for debugging/testing purposes.
+
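A minimal job sketch using the null engine for testing fio itself (hypothetical job name and size; no data is actually transferred):

```ini
; exercise fio without touching real storage
[null-example]
ioengine=null
size=100m
```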
iodepth=int This defines how many io units to keep in flight against
the file. The default is 1 for each file defined in this
		job, and can be overridden with a larger value for higher
shm Use shared memory as the buffers. Allocated
through shmget(2).
- mmap Use anonymous memory maps as the buffers.
- Allocated through mmap(2).
+ shmhuge Same as shm, but use huge pages as backing.
+
+ mmap Use mmap to allocate buffers. May either be
+ anonymous memory, or can be file backed if
+ a filename is given after the option. The
+ format is mem=mmap:/path/to/file.
+
+ mmaphuge Use a memory mapped huge file as the buffer
+ backing. Append filename after mmaphuge, ala
+ mem=mmaphuge:/hugetlbfs/file
The area allocated is a function of the maximum allowed
- bs size for the job, multiplied by the io depth given.
+ bs size for the job, multiplied by the io depth given. Note
+ that for shmhuge and mmaphuge to work, the system must have
+ free huge pages allocated. This can normally be checked
+ and set by reading/writing /proc/sys/vm/nr_hugepages on a
+ Linux system. Fio assumes a huge page is 4MiB in size. So
+ to calculate the number of huge pages you need for a given
+ job file, add up the io depth of all jobs (normally one unless
+ iodepth= is used) and multiply by the maximum bs set. Then
+ divide that number by the huge page size. You can see the
+		size of the huge pages in /proc/meminfo. If no huge pages
+		are allocated (i.e. nr_hugepages is zero), using mmaphuge
+		or shmhuge will fail. Also see hugepage-size.
+
+ mmaphuge also needs to have hugetlbfs mounted and the file
+ location should point there. So if it's mounted in /huge,
+ you would use mem=mmaphuge:/huge/somefile.
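Putting the above together, a sketch assuming hugetlbfs is mounted at /huge (as in the example above) and the default 4MiB huge page size:

```ini
; iodepth 4 * max bs 64k = 256KiB of buffer area, which fits in a
; single 4MiB huge page; make sure /proc/sys/vm/nr_hugepages is
; at least 1 before running
[mmaphuge-example]
mem=mmaphuge:/huge/somefile
iodepth=4
bs=64k
size=256m
```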
+
+hugepage-size=siint
+ Defines the size of a huge page. Must at least be equal
+ to the system setting, see /proc/meminfo. Defaults to 4MiB.
+		Should always be a multiple of megabytes, so using
+		hugepage-size=Xm is the preferred way to set this, to avoid
+		setting an invalid (non power-of-two) value.
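A sketch of overriding the default, assuming a system configured with 2MiB huge pages (check /proc/meminfo for the actual size on your system):

```ini
; match the system's 2MiB huge page size instead of the 4MiB default
[hugepage-size-example]
hugepage-size=2m
mem=shmhuge
size=64m
```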
exitall When one job finishes, terminate the rest. The default is
		to wait for each job to finish; sometimes that is not the