fio
---

fio is a tool that will spawn a number of threads doing a particular
type of io action as specified by the user. fio takes a number of
global parameters, each inherited by every thread unless overridden
by parameters given to that particular thread.

Options
-------

$ fio
	-s		IO is sequential
	-b		Block size in KiB for each io
	-t <sec>	Runtime in seconds
	-r		For random io, sequence must be repeatable
	-R <on>		If one thread fails to meet rate, quit all
	-o <on>		Use direct IO if 1, buffered if 0
	-l		Generate per-job latency logs
	-w		Generate per-job bandwidth logs
	-f <file>	Read <file> for job descriptions
	-v		Print version information and exit

The <jobs> format is as follows:

	directory=x	Use 'x' as the top level directory for storing files
	rw=x		'x' may be: read, randread, write, or randwrite
	size=x		Set file size to x bytes (x string can include k/m/g)
	ioengine=x	'x' may be: aio/libaio/linuxaio for Linux aio,
			posixaio for POSIX aio, sync for regular read/write io,
			mmap for mmap'ed io, or sgio for direct SG_IO io. The
			latter only works on Linux on SCSI (or SCSI-like
			devices, such as usb-storage or sata/libata driven)
			devices.
	iodepth=x	For async io, allow 'x' ios in flight
	overwrite=x	If 'x', lay out a write file first.
	prio=x		Run io at prio X, 0-7 is the kernel allowed range
	prioclass=x	Run io at prio class X
	bs=x		Use 'x' for thread blocksize. May include k/m postfix.
	bsrange=x-y	Mix thread block sizes randomly between x and y. May
			also include k/m postfix.
	direct=x	1 for direct IO, 0 for buffered IO
	thinktime=x	"Think" x usec after each io
	rate=x		Throttle rate to x KiB/sec
	ratemin=x	Quit if rate of x KiB/sec can't be met
	ratecycle=x	ratemin averaged over x msecs
	cpumask=x	Only allow job to run on CPUs defined by mask.
	fsync=x		If writing, fsync after every x blocks have been written
	startdelay=x	Start this thread x seconds after startup
	timeout=x	Terminate x seconds after startup
	offset=x	Start io at offset x (x string can include k/m/g)
	invalidate=x	Invalidate page cache for file prior to doing io
	sync=x		Use sync writes if x and writing
	mem=x		If x == malloc, use malloc for buffers. If x == shm,
			use shm for buffers. If x == mmap, use anon mmap.
	exitall		When one thread quits, terminate the others
	bwavgtime=x	Average bandwidth stats over an x msec window.
	create_serialize=x	If 'x', serialize file creation.
	create_fsync=x	If 'x', run fsync() after file creation.
	loops=x		Run the job 'x' number of times.
	verify=x	If 'x' == md5, use md5 for verifies. If 'x' == crc32,
			use crc32 for verifies. md5 is 'safer', but crc32 is
			a lot faster. Only makes sense for writing to a file.
	stonewall	Wait for preceding jobs to end before running.
	numjobs=x	Create 'x' similar entries for this job
	thread		Use pthreads instead of forked jobs


Examples using a job file
-------------------------

A sample job file doing the same as above would look like this:

[read_file]
rw=read
bs=4096

[write_file]
rw=write
bs=16384

And fio would be invoked as:

$ fio -o1 -s -f file_with_above

The second example would look like this:

[rf1]
rw=read
prio=6

[rf2]
rw=read
prio=3

[rf3]
rw=read
prio=0
direct=1

And fio would be invoked as:

$ fio -o0 -s -b4096 -f file_with_above

'global' is a reserved keyword. When used as the filename, it sets the
default options for the threads following that section. It is possible
to have more than one global section in the file, as it only affects
subsequent jobs.

Also see the examples/ dir for sample job files.

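As a sketch of that (the section names and parameter values here are
illustrative, drawn from the parameter list above), a job file using a
global section might look like this:

[global]
rw=randread
size=32m
ioengine=libaio
iodepth=4

[file1]
bs=4096

[file2]
bs=16384
iodepth=8

Both file1 and file2 inherit the global settings; file2 additionally
overrides the queue depth.
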

Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads now running: 2 : [ww] [5.73% done]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle	Run
----	---
P		Thread setup, but not started.
C		Thread created and running, but not doing anything yet
R		Running, doing sequential reads.
r		Running, doing random reads.
W		Running, doing sequential writes.
w		Running, doing random writes.
V		Running, doing verification of written data.
E		Thread exited, not reaped by main thread yet.
_		Thread reaped.

The other values are fairly self explanatory - the number of threads
currently running and doing io, and the estimated completion percentage.

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=	Number of megabytes of io performed
bw=	Average bandwidth rate
runt=	The runtime of that thread
slat=	Submission latency (avg being the average, dev being the
	standard deviation). This is the time it took to submit the io.
	For sync io, the slat is really the completion latency, since
	queue/complete is one operation there.
clat=	Completion latency. Same names as slat, this denotes the time
	from submission to completion of the io pieces. For sync io,
	clat will usually be equal (or very close) to 0, as the time
	from submit to complete is basically just CPU time (the io has
	already been done, see the slat explanation).
bw=	Bandwidth. Same names as the slat and clat stats, but also
	includes an approximate percentage of the total aggregate
	bandwidth this thread received in its group. This last value
	is only really useful if the threads in this group are on the
	same disk, since they are then competing for disk access.
cpu=	CPU usage. User and system time, along with the number of
	context switches this thread went through.

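The per= figure can be cross-checked against the group statistics that
fio prints afterwards. A quick sketch, plugging in the numbers from the
sample output above:

```python
# Cross-check of the per= field from the sample above. Numbers are
# copied from the sample output; per= is the thread's average bandwidth
# as a share of the group's aggregate write bandwidth (aggrb).
thread_avg_bw = 664.02  # avg= from the bw line, in KiB/s
group_aggrb = 1302.0    # aggrb= from the WRITE line of the group stats

per = thread_avg_bw / group_aggrb * 100
print("per=%.2f%%" % per)  # per=51.00%
```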
After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=	Number of megabytes of io performed.
aggrb=	Aggregate bandwidth of the threads in this group.
minb=	The minimum average bandwidth a thread saw.
maxb=	The maximum average bandwidth a thread saw.
mint=	The minimum runtime of a thread.
maxt=	The maximum runtime of a thread.

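From the sample above, aggrb appears to be the group's total io divided
by the runtime of the slowest thread (maxt); since the printed io=
totals are rounded, the match is only approximate. A sketch of that
consistency check:

```python
# Consistency check on the sample group stats: aggrb should roughly
# equal the group's total io divided by maxt, the longest runtime.
read_io_kib = 64 * 1024   # READ:  io=64MiB
read_maxt = 2.955         # READ:  maxt=2955msec, in seconds

write_io_kib = 64 * 1024  # WRITE: io=64MiB
write_maxt = 50.320       # WRITE: maxt=50320msec, in seconds

print(round(read_io_kib / read_maxt))   # ~22178, matching aggrb=22178
print(round(write_io_kib / write_maxt)) # ~1302, matching aggrb=1302
```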
And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=	Number of ios performed by all groups.
merge=	Number of merges performed by the io scheduler.
ticks=	Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
util=	The disk utilization. A value of 100% means we kept the disk
	busy constantly, 50% would be a disk idling half of the time.
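As an illustration (not part of fio itself), the read/write pairs in
such a line can be pulled apart with a few lines of Python; the field
names below are taken from the sample line above:

```python
import re

# Sample disk stats line from above; fields containing a '/' hold a
# read/write pair (reads first), the rest hold a single value.
line = "sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%"

stats = {}
for key, value in re.findall(r"(\w+)=([\d./%]+)", line):
    if "/" in value:
        read, write = value.split("/")
        stats[key] = (float(read), float(write))
    else:
        stats[key] = float(value.rstrip("%"))

print(stats["ios"])   # (16398.0, 16511.0) - reads first, then writes
print(stats["util"])  # 100.0
```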