$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
+When fio is used as the basis of a reasonably large test suite, it might be
+desirable to share a set of standardized settings across multiple job files.
+Instead of copy/pasting such settings, any section may pull in an external
+.fio file with an 'include filename' directive, as in the following example:
+
+; -- start job file including.fio --
+[global]
+filename=/tmp/test
+filesize=1m
+include glob-include.fio
+
+[test]
+rw=randread
+bs=4k
+time_based=1
+runtime=10
+include test-include.fio
+; -- end job file including.fio --
+
+; -- start job file glob-include.fio --
+thread=1
+group_reporting=1
+; -- end job file glob-include.fio --
+
+; -- start job file test-include.fio --
+ioengine=libaio
+iodepth=4
+; -- end job file test-include.fio --
+
+Settings pulled into a section apply to that section only; the exception is
+the global section, whose settings apply to all jobs as usual. Include
+directives may be nested: any included file may itself contain further
+include directives. Include files may not contain [] sections.
+
+
4.1 Environment variables
-------------------------
alternate random and zeroed data throughout the IO
buffer.
-buffer_pattern=str If set, fio will fill the io buffers with this pattern.
- If not set, the contents of io buffers is defined by the other
- options related to buffer contents. The setting can be any
- pattern of bytes, and can be prefixed with 0x for hex values.
+buffer_pattern=str If set, fio will fill the io buffers with this
+		pattern. If not set, the contents of the io buffers are
+		defined by the other options related to buffer contents.
+		The setting can be any pattern of bytes, and can be
+		prefixed with 0x for hex values. It may also be a string,
+		in which case the string must be wrapped with "".
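+
+As an illustrative (hypothetical) example, a small job fragment that fills
+write buffers with a fixed hex pattern might look like the following; the
+file name and size here are made up:
+
+; -- start job file --
+[pattern-write]
+filename=/tmp/pattern-test
+size=4m
+rw=write
+buffer_pattern=0xdeadbeef
+; -- end job file --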
+
+dedupe_percentage=int If set, fio will generate this percentage of
+		identical buffers when writing. These buffers will be
+		naturally dedupable. The contents of the buffers depend on
+		what other buffer compression settings have been set. The
+		individual buffers can be either fully compressible or not
+		compressible at all. This option only controls the
+		distribution of unique buffers.
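+
+As a sketch, a job fragment where roughly half of the written buffers are
+dedupable (the file name and sizes are made up for illustration):
+
+; -- start job file --
+[dedupe-write]
+filename=/tmp/dedupe-test
+size=64m
+rw=write
+dedupe_percentage=50
+buffer_compress_percentage=25
+; -- end job file --
+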
nrfiles=int Number of files to use for this job. Defaults to 1.
channel semantics (Send/Recv) for the
InfiniBand, RoCE and iWARP protocols.
- falloc IO engine that does regular fallocate to
- simulate data transfer as fio ioengine.
- DDIR_READ does fallocate(,mode = keep_size,)
- DDIR_WRITE does fallocate(,mode = 0)
- DDIR_TRIM does fallocate(,mode = punch_hole)
+ falloc IO engine that does regular fallocate to
+ simulate data transfer as fio ioengine.
+ DDIR_READ does fallocate(,mode = keep_size,)
+ DDIR_WRITE does fallocate(,mode = 0)
+ DDIR_TRIM does fallocate(,mode = punch_hole)
e4defrag IO engine that does regular EXT4_IOC_MOVE_EXT
- ioctls to simulate defragment activity in
- request to DDIR_WRITE event
+ ioctls to simulate defragment activity in
+ request to DDIR_WRITE event
+
+ rbd IO engine supporting direct access to Ceph
+ Rados Block Devices (RBD) via librbd without
+ the need to use the kernel rbd driver. This
+ ioengine defines engine specific options.
+
+			gfapi	Using Glusterfs libgfapi sync interface to
+				direct access to Glusterfs volumes without
+				having to go through FUSE. This ioengine
+				defines engine specific options.
+
+ gfapi_async Using Glusterfs libgfapi async interface
+ to direct access to Glusterfs volumes without
+ having to go through FUSE. This ioengine
+ defines engine specific options.
+
+			libhdfs	Read and write through Hadoop (HDFS).
+				The 'filename' option is used to specify the
+				host and port of the hdfs name-node to
+				connect to. This engine interprets offsets a
+				little differently. In HDFS, files once
+				created cannot be modified, so random writes
+				are not possible. To imitate this, the
+				libhdfs engine expects a bunch of small
+				files to be created over HDFS, and the
+				engine will randomly pick a file out of
+				those files based on the offset generated by
+				the fio backend (see the example job file on
+				how to create such files; use the rw=write
+				option). Please note that you might want to
+				set the necessary environment variables to
+				work with hdfs/libhdfs properly.
external Prefix to specify loading an external
IO engine object file. Append the engine
caps the file size at real_size - offset.
offset_increment=int If this is provided, then the real offset becomes
- the offset + offset_increment * thread_number, where the
- thread number is a counter that starts at 0 and is incremented
- for each job. This option is useful if there are several jobs
- which are intended to operate on a file in parallel in disjoint
- segments, with even spacing between the starting points.
+		offset + offset_increment * thread_number, where the thread
+		number is a counter that starts at 0 and is incremented for
+		each sub-job (i.e. when the numjobs option is specified).
+		This option is useful if there are several jobs which are
+		intended to operate on a file in parallel in disjoint
+		segments, with even spacing between the starting points.
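+
+For example (an illustrative fragment), with offset left at its default of
+0, numjobs=4 and offset_increment=64m, the four sub-jobs start at offsets
+0, 64m, 128m and 192m respectively:
+
+; -- start job file --
+[disjoint-writers]
+filename=/tmp/offset-test
+size=64m
+numjobs=4
+offset_increment=64m
+rw=write
+; -- end job file --
+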
number_ios=int Fio will normally perform IOs until it has exhausted the size
of the region set by size=, or if it exhaust the allocated
jobs in their lifetime. The included fio_generate_plots
script uses gnuplot to turn these text files into nice
graphs. See write_lat_log for behaviour of given
- filename. For this option, the suffix is _bw.log.
+ filename. For this option, the suffix is _bw.x.log, where
+ x is the index of the job (1..N, where N is the number of
+ jobs).
write_lat_log=str Same as write_bw_log, except that this option stores io
submission, completion, and total latencies instead. If no
write_lat_log=foo
- The actual log names will be foo_slat.log, foo_clat.log,
- and foo_lat.log. This helps fio_generate_plot fine the logs
- automatically.
+		The actual log names will be foo_slat.x.log, foo_clat.x.log,
+		and foo_lat.x.log, where x is the index of the job (1..N,
+		where N is the number of jobs). This helps fio_generate_plots
+		find the logs automatically.
write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
given with this option, the default filename of
- "jobname_type.log" is used. Even if the filename is given,
- fio will still append the type of log.
+		"jobname_type.x.log" is used, where x is the index of the
+		job (1..N, where N is the number of jobs). Even if the
+		filename is given, fio will still append the type of log.
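+
+For example (an illustrative fragment), the following job would produce the
+log files foo_iops.1.log and foo_iops.2.log:
+
+; -- start job file --
+[log-example]
+filename=/tmp/log-test
+size=16m
+rw=randread
+numjobs=2
+write_iops_log=foo
+; -- end job file --
+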
log_avg_msec=int By default, fio will log an entry in the iops, latency,
or bw log for every IO that completes. When writing to the
log_offset=int If this is set, the iolog options will include the byte
offset for the IO entry as well as the other data values.
+log_compression=int If this is set, fio will compress the IO logs as
+ it goes, to keep the memory footprint lower. When a log
+ reaches the specified size, that chunk is removed and
+ compressed in the background. Given that IO logs are
+ fairly highly compressible, this yields a nice memory
+ savings for longer runs. The downside is that the
+ compression will consume some background CPU cycles, so
+ it may impact the run. This, however, is also true if
+ the logging ends up consuming most of the system memory.
+ So pick your poison. The IO logs are saved normally at the
+ end of a run, by decompressing the chunks and storing them
+ in the specified log file. This feature depends on the
+ availability of zlib.
+
+log_store_compressed=bool If set, and log_compression is also set,
+ fio will store the log files in a compressed format. They
+ can be decompressed with fio, using the --inflate-log
+ command line parameter. The files will be stored with a
+ .fz suffix.
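+
+As a sketch (the job name, file names and sizes are illustrative), a run
+that stores compressed logs and later inflates them might look like:
+
+$ fio --name=ctest --filename=/tmp/ctest --size=64m --rw=randwrite \
+      --log_compression=10m --log_store_compressed=1 --write_lat_log=foo
+$ fio --inflate-log=foo_lat.1.log.fz > foo_lat.1.log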
+
lockmem=int Pin down the specified amount of memory with mlock(2). Can
potentially be used instead of removing memory or booting
with less memory to simulate a smaller amount of memory.