Also see the examples/ dir for sample job files.

Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads now running: 2 : [ww] [5.73% done]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

P  Thread setup, but not started.
C  Thread created and running, but not doing anything yet.
R  Running, doing sequential reads.
r  Running, doing random reads.
W  Running, doing sequential writes.
w  Running, doing random writes.
V  Running, doing verification of written data.
E  Thread exited, not yet reaped by the main thread.
_  Thread reaped.

The other values are fairly self-explanatory - the number of threads
currently running and doing io, and the estimated completion percentage.
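As a worked example, a status line like the one above can be decoded
programmatically. The regex and field names below are illustrative
helpers for scripting around fio's text output; they are not part of
fio itself, and would need adjusting if the output format differs:

```python
import re

# Matches a fio status line such as:
#   Threads now running: 2 : [ww] [5.73% done]
# The pattern and dict keys are illustrative, not a fio API.
STATUS_RE = re.compile(
    r"Threads now running: (?P<count>\d+) : "
    r"\[(?P<states>[PCRrWwVE_]+)\] \[(?P<pct>[\d.]+)% done\]"
)

def parse_status(line):
    """Return thread count, per-thread state characters, and % done."""
    m = STATUS_RE.match(line)
    if m is None:
        return None
    return {
        "running": int(m.group("count")),
        "states": list(m.group("states")),  # one character per thread
        "done": float(m.group("pct")),
    }
```

Fed the sample line, this yields two threads, both in state 'w'
(random writes), at 5.73% completion.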

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, each group of threads, and each disk, in that order. For
each data direction, the output looks like this:

Client1 (g=0): err= 0:
  write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
    slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92
    clat (msec): min= 0, max= 631, avg=48.50, dev=86.82
    bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and the error code
for that thread. Below that are the io statistics, here for writes. In
the order listed, they denote:

io=    Number of megabytes of io performed.
bw=    Average bandwidth rate.
runt=  The runtime of that thread.
slat=  Submission latency (avg being the average, dev being the
       standard deviation). This is the time it took to submit the io.
clat=  Completion latency. Same names as slat, this denotes the time
       from submission to completion of the io pieces.
bw=    Bandwidth. Same names as the latency stats, but it also includes
       an approximate percentage of the total aggregate bandwidth this
       thread received in its group. This last value is only really
       useful if the threads in the group are on the same disk, since
       they are then competing for disk access.
cpu=   CPU usage. User and system time, along with the number of
       context switches this thread went through.
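For post-processing, the slat/clat lines above can be reduced to a
dictionary of their min/max/avg/dev fields. The helper below is a
sketch assuming the line layout matches the sample output; it is not a
fio-provided API:

```python
import re

def parse_latency(line):
    """Split a line like 'slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92'
    into its stat name and a dict of numeric fields. Illustrative only."""
    name = line.split()[0]  # "slat", "clat" or "bw"
    fields = {k: float(v) for k, v in re.findall(r"(\w+)=\s*([\d.]+)", line)}
    return name, fields
```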

After each client has been listed, the group statistics are printed.
They will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=     Number of megabytes of io performed.
aggrb=  Aggregate bandwidth of the threads in this group.
minb=   The minimum average bandwidth a thread saw.
maxb=   The maximum average bandwidth a thread saw.
mint=   The minimum runtime of a thread.
maxt=   The maximum runtime of a thread.
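The relationship between these fields can be checked by hand. Assuming
io is in MiB, maxt in msec, and bandwidth in KiB/s (as in the sample
above), the aggregate bandwidth is roughly the total io divided by the
longest thread runtime:

```python
# Sanity-check the WRITE line from the sample group statistics:
#   WRITE: io=64MiB, aggrb=1302, ..., maxt=50320msec
io_kib = 64 * 1024          # io=64MiB, expressed in KiB
maxt_sec = 50320 / 1000.0   # maxt=50320msec, expressed in seconds
aggrb = io_kib / maxt_sec   # close to the reported 1302 KiB/s
```

Likewise, minb and maxb (666 and 669 KiB/s here) bracket the per-thread
average bandwidths seen in the individual client sections.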

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=       Number of ios performed by all groups.
merge=     Number of merges performed by the io scheduler.
ticks=     Number of ticks we kept the disk busy.
in_queue=  Total time spent in the disk queue.
util=      The disk utilization. A value of 100% means we kept the
           disk busy constantly; 50% would mean the disk was idle half
           of the time.
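As a sketch of what util expresses: it is the fraction of elapsed
wall-clock time during which the disk had io in flight, which is what
the ticks field counts. fio computes this internally; the helper below
only illustrates the arithmetic:

```python
def disk_util(busy_ticks_ms, elapsed_ms):
    """Percentage of elapsed time the disk was busy, capped at 100%.
    busy_ticks_ms corresponds to the ticks= field; illustrative only."""
    return min(100.0, 100.0 * busy_ticks_ms / elapsed_ms)
```

A disk busy for 500 ms out of a 1000 ms run is 50% utilized; once ticks
reach the elapsed time, util saturates at 100%, as in the sample above.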