nrfiles=int Number of files to use for this job. Defaults to 1.
+openfiles=int	Number of files to keep open at the same time. Defaults to
+		the same as nrfiles, can be set smaller to limit the number
+		of simultaneous opens.
+
+file_service_type=str Defines how fio decides which file from a job to
+ service next. The following types are defined:
+
+ random Just choose a file at random.
+
+ roundrobin Round robin over open files. This
+ is the default.
+
+		The string can have a number appended, indicating how
+		often to switch to a new file. So if 'random:4' is
+		given, fio will switch to a new random file after 4
+		ios have been issued.
+
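As a sketch, a job section combining these options might look like the
following; the job name and sizes are made up for illustration:

```ini
; Hypothetical job: fio picks one of 8 files at random, switching
; to a new random file after every 4 ios. At most 4 of the files
; are held open at any one time.
[random-files]
rw=randread
size=128m
nrfiles=8
openfiles=4
file_service_type=random:4
```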
ioengine=str Defines how the job issues io to the file. The following
types are defined:
or receive, if the latter only the port
argument is used.
+ cpu Doesn't transfer any data, but burns CPU
+ cycles according to the cpuload= and
+ cpucycle= options. Setting cpuload=85
+ will cause that job to do nothing but burn
+ 85% of the CPU.
+
+ external Prefix to specify loading an external
+ IO engine object file. Append the engine
+		filename, e.g. ioengine=external:/tmp/foo.o
+ to load ioengine foo.o in /tmp.
+
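A minimal sketch of a cpu engine job; the job name and runtime are
illustrative:

```ini
; Hypothetical job: transfers no data, just burns roughly 85% of
; a CPU for 30 seconds.
[burn-cpu]
ioengine=cpu
cpuload=85
runtime=30
```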
iodepth=int This defines how many io units to keep in flight against
		the file. The default is 1 for each file defined in this
		job, and can be overridden with a larger value for higher
		concurrency.
+iodepth_batch=int This defines how many pieces of IO to submit at once.
+ It defaults to the same as iodepth, but can be set lower
+ if one so desires.
+
iodepth_low=int The low water mark indicating when to start filling
the queue again. Defaults to the same as iodepth, meaning
that fio will attempt to keep the queue full at all times.
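A sketch of how the three depth options interact; the libaio engine and
the job name/size are assumptions for illustration:

```ini
; Hypothetical job: keep up to 16 ios in flight, submit them 8 at
; a time, and only start refilling once the queue has drained to
; 4 or fewer pending ios.
[deep-queue]
ioengine=libaio
rw=randread
size=256m
iodepth=16
iodepth_batch=8
iodepth_low=4
```

Note that iodepth only buys extra concurrency with an asynchronous engine;
a synchronous engine can never have more than one io in flight.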
numjobs=int Create the specified number of clones of this job. May be
used to setup a larger number of threads/processes doing
-		the same thing.
+		the same thing. Fio regards this set of clones as a
+		single group of jobs.
+
+group_reporting	If 'numjobs' is set, it may be interesting to display
+		statistics for the group as a whole instead of for each
+		individual job. This is especially true if 'numjobs' is
+		large, as looking at individual thread/process output
+		quickly becomes unwieldy. If 'group_reporting' is
+		specified, fio will show the final report per-group
+		instead of per-job.
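For example, a sketch of a job file using these two options together;
the job name and size are illustrative:

```ini
; Hypothetical job: four identical writer clones, with statistics
; reported once for the whole group instead of per clone.
[clones]
rw=write
size=64m
numjobs=4
group_reporting
```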
thread fio defaults to forking jobs, however if this option is
given, fio will use pthread_create(3) to create threads
----------------
For scripted usage where you typically want to generate tables or graphs
-of the results, fio can output the results in a comma separated format.
+of the results, fio can output the results in a semicolon separated format.
The format is one long line of values, such as:
-client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109
+client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000;0;0;0.000000;0.000000;929380;1152890;25.510151%;1078276.333333;128948.113404;0;0;0;0;0;0.000000;0.000000;0;0;0.000000;0.000000;0;0;0.000000%;0.000000;0.000000;100.000000%;0.000000%;324;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;100.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%;0.0%
Split up, the format is as follows:
Completion latency: min, max, mean, deviation
Bw: min, max, aggregate percentage of total, mean, deviation
CPU usage: user, system, context switches
+ IO depths: <=1, 2, 4, 8, 16, 32, >=64
+ IO latencies: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, >=2000
+ Text description
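For scripted consumers, splitting the record on ';' is enough to index
the fields. The sketch below uses a shortened sample line and only
assumes that the leading field is the job name, as in the sample above:

```python
# Minimal sketch: split a (shortened) fio terse-output record into
# its semicolon-separated fields. Field meanings follow the list
# above; only the leading job-name field is indexed here.
sample = "client1;0;0;1906777;1090804;1790;0;0;0.000000;0.000000"

fields = sample.split(";")
jobname = fields[0]

print(jobname)      # -> client1
print(len(fields))  # -> 10
```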