$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
+When fio is used as the basis of a reasonably large test suite, it may be
+desirable to share a set of standardized settings across multiple job files.
+Instead of copy/pasting such settings, any section may pull in an external
+.fio file with the 'include filename' directive, as in the following example:
+
+; -- start job file including.fio --
+[global]
+filename=/tmp/test
+filesize=1m
+include glob-include.fio
+
+[test]
+rw=randread
+bs=4k
+time_based=1
+runtime=10
+include test-include.fio
+; -- end job file including.fio --
+
+; -- start job file glob-include.fio --
+thread=1
+group_reporting=1
+; -- end job file glob-include.fio --
+
+; -- start job file test-include.fio --
+ioengine=libaio
+iodepth=4
+; -- end job file test-include.fio --
+
+Settings pulled into a section apply to that section only, except when they
+are pulled into the global section (in which case they apply to all jobs).
+Include directives may be nested: any included file may itself contain further
+include directive(s). Include files may not contain [] sections.
+
+
4.1 Environment variables
-------------------------
alternate random and zeroed data throughout the IO
buffer.
-buffer_pattern=str If set, fio will fill the io buffers with this pattern.
- If not set, the contents of io buffers is defined by the other
- options related to buffer contents. The setting can be any
- pattern of bytes, and can be prefixed with 0x for hex values.
+buffer_pattern=str If set, fio will fill the io buffers with this
+		pattern. If not set, the contents of the io buffers are
+		defined by the other options related to buffer contents. The
+		setting can be any pattern of bytes, and can be prefixed with
+		0x for hex values. It may also be a string, in which case the
+		string must be wrapped with "".
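+
+		As a sketch (the values below are illustrative only), a
+		hex pattern and a string pattern would be given as:
+
+		buffer_pattern=0xdeadbeef
+		buffer_pattern="abcd"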
+
+dedupe_percentage=int If set, fio will generate this percentage of
+		identical buffers when writing. These buffers will be
+		naturally dedupable. The contents of the buffers depend on
+		what other buffer compression settings have been set. It is
+		possible to have the individual buffers either fully
+		compressible or not compressible at all. This option only
+		controls the distribution of unique buffers.
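+
+		For example, a sketch of a job where roughly half of the
+		written buffers are duplicates (sizes and the 50 value are
+		illustrative assumptions, not recommendations):
+
+		; -- start job file dedupe-example.fio --
+		[dedupe-test]
+		rw=write
+		bs=64k
+		size=32m
+		dedupe_percentage=50
+		; -- end job file dedupe-example.fio --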
nrfiles=int Number of files to use for this job. Defaults to 1.
having to go through FUSE. This ioengine
defines engine specific options.
+	    libhdfs	Read and write through Hadoop (HDFS).
+			The 'filename' option is used to specify the
+			host and port of the hdfs name-node to connect
+			to. This engine interprets offsets a little
+			differently. In HDFS, files once created cannot
+			be modified, so random writes are not possible.
+			To imitate this, the libhdfs engine expects a
+			bunch of small files to be created over HDFS,
+			and the engine will randomly pick a file out of
+			those files based on the offset generated by
+			the fio backend (see the example job file on
+			how to create such files, using the rw=write
+			option). Please note that you might want to set
+			the necessary environment variables to work
+			with hdfs/libhdfs properly.
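+
+			As a sketch only (the host, port, and sizes
+			shown are illustrative assumptions), a job
+			pre-creating a set of small files over HDFS
+			might look like:
+
+			; -- start job file hdfs-example.fio --
+			[global]
+			ioengine=libhdfs
+			filename=localhost,9000
+
+			[create-files]
+			rw=write
+			bs=256k
+			nrfiles=16
+			size=16m
+			; -- end job file hdfs-example.fio --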
+
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, eg ioengine=external:/tmp/foo.o