of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.
+If the job file contains only one job, you may as well just give the
+parameters on the command line. The command line parameters are identical
+to the job parameters, with a few extra that control global parameters
+(see README). For example, for the job file parameter iodepth=2, the
+mirror command line option would be --iodepth 2 or --iodepth=2. You can
+also use the command line for giving more than one job entry. For each
+--name option that fio sees, it will start a new job with that name.
+Command line entries following a --name entry will apply to that job,
+until there are no more entries or a new --name entry is seen. This is
+similar to the job file options, where each option applies to the current
+job until a new [] job entry is seen.
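+
+For example, the following (job names and option values chosen just for
+illustration) starts two jobs, where rw=read applies only to job1 and
+rw=write only to job2:
+
+$ fio --name=job1 --rw=read --size=32m --name=job2 --rw=write --size=32m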
+
fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.
As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
-makes up a filename for each of the jobs as it sees fit.
+makes up a filename for each of the jobs as it sees fit. On the command
+line, this job would look as follows:
+
+$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2
+
Let's look at an example that has a number of processes writing randomly
to files.
We want to use async io here, with a depth of 4 for each file. We also
increase the buffer size used to 32KiB and define numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
-to their own 64MiB file.
+to their own 64MiB file. Instead of using the above job file, you could
+have given the parameters on the command line. For this case, you would
+specify:
+
+$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
fio ships with a few example job files; you can also look there for
inspiration.
name=str ASCII name of the job. This may be used to override the
name printed by fio for this job. Otherwise the job
- name is used.
+ name is used. On the command line this parameter has the
+ special purpose of also signalling the start of a new
+ job.
directory=str	Prefix filenames with this directory. Used to place files
in a different location than "./".
bsrange=irange Instead of giving a single block size, specify a range
and fio will mix the issued io block sizes. The issued
io unit will always be a multiple of the minimum value
- given.
+ given (also see bs_unaligned).
+
+bs_unaligned	If this option is given, any byte size value within bsrange
+		may be used as a block size. This typically won't work with
+		direct IO, as that normally requires sector alignment.
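+
+		For example, an (illustrative) job that allows any byte
+		size between the bsrange limits, using buffered IO since
+		direct IO normally requires sector alignment:
+
+		[unaligned-reads]
+		rw=randread
+		size=16m
+		bsrange=1k-8k
+		bs_unaligned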
nrfiles=int Number of files to use for this job. Defaults to 1.
up to 100%, the latter of the two will be used to override
the first.
+norandommap Normally fio will cover every block of the file when doing
+ random IO. If this option is given, fio will just get a
+ new random offset without looking at past io history. This
+ means that some blocks may not be read or written, and that
+ some blocks may be read/written more than once. This option
+ is mutually exclusive with verify= for that reason.
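+
+		An (illustrative) job using this option, doing random
+		reads without tracking which blocks have been covered:
+
+		[random-nomap]
+		rw=randread
+		size=16m
+		norandommap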
+
nice=int Run the job with the given nice value. See man nice(2).
prio=int Set the io priority value of this job. Linux limits us to
been read. The two zone options can be used to only do
io on zones of a file.
-write_iolog=str Write the issued io patterns to the specified file. See iolog.
+write_iolog=str Write the issued io patterns to the specified file. See
+ read_iolog.
-iolog=str Open an iolog with the specified file name and replay the
+read_iolog=str Open an iolog with the specified file name and replay the
io patterns it contains. This can be used to store a
workload and replay it sometime later.
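
		For example, one could record a workload with write_iolog
		and replay it later with read_iolog (the log file name is
		chosen just for illustration):

		$ fio --name=capture --rw=randwrite --size=16m --write_iolog=ios.log
		$ fio --name=replay --size=16m --read_iolog=ios.log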