alternate random and zeroed data throughout the IO
buffer.
-buffer_pattern=str If set, fio will fill the io buffers with this pattern.
- If not set, the contents of io buffers is defined by the other
- options related to buffer contents. The setting can be any
- pattern of bytes, and can be prefixed with 0x for hex values.
+buffer_pattern=str If set, fio will fill the io buffers with this
+		pattern. If not set, the contents of io buffers are defined
+		by the other options related to buffer contents. The setting
+		can be any pattern of bytes, and can be prefixed with 0x for
+		hex values. It may also be a string, in which case the
+		string must be wrapped with "".
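A buffer_pattern job might look like the following sketch; the job name, size, and pattern value are illustrative, not taken from the fio examples:

```ini
; Fill every write buffer with a repeating hex pattern.
[pattern-write]
rw=write
size=64m
buffer_pattern=0xdeadbeef
```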
+
+dedupe_percentage=int If set, fio will generate this percentage of
+		identical buffers when writing. These buffers will be
+		naturally dedupable. The contents of the buffers depend on
+		what other buffer compression settings have been set. Each
+		individual buffer can be made either fully compressible or
+		not compressible at all. This option only controls the
+		distribution of unique buffers.
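A dedupe_percentage job could be sketched as below; the percentage and size values are placeholders chosen for illustration:

```ini
; Half of the written buffers are identical and thus dedupable;
; the other half are unique.
[dedupe-write]
rw=write
size=128m
dedupe_percentage=50
```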
nrfiles=int Number of files to use for this job. Defaults to 1.
having to go through FUSE. This ioengine
defines engine specific options.
+		libhdfs	Read and write through Hadoop (HDFS).
+			The 'filename' option is used to specify the
+			host and port of the hdfs name-node to
+			connect to. This engine interprets offsets a
+			little differently. In HDFS, files once
+			created cannot be modified, so random writes
+			are not possible. To imitate this, the libhdfs
+			engine expects a bunch of small files to be
+			created over HDFS, and the engine will
+			randomly pick a file out of those files based
+			on the offset generated by the fio backend.
+			(See the example job file on how to create
+			such files; use the rw=write option.) Note
+			that you might need to set the necessary
+			environment variables for hdfs/libhdfs to
+			work properly.
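Pre-creating the small files described above could be sketched as the job below; the name-node address, file count, and sizes are assumptions for illustration, following the host,port filename convention stated in the text:

```ini
; Create a set of small files over HDFS for later random picking.
[hdfs-prefill]
ioengine=libhdfs
filename=localhost,9000
rw=write
bs=256k
nrfiles=16
size=4m
```

The CLASSPATH and related JVM environment variables typically need to point at the Hadoop jars before libhdfs can connect.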
+
external Prefix to specify loading an external
IO engine object file. Append the engine
filename, eg ioengine=external:/tmp/foo.o