fio
---

fio is a tool that spawns a number of threads, each performing a
particular type of io action as specified by the user. fio takes a
number of global parameters, each inherited by a thread unless a
parameter given directly to that thread overrides the setting.

Source
------

fio resides in a git repo; the canonical place is:

git://brick.kernel.dk/data/git/fio.git

Snapshots are frequently generated as well, and they include the git
meta data. You can download them here:

http://brick.kernel.dk/snaps/


Options
-------

$ fio
	-s		IO is sequential
	-b		Block size in KiB for each io
	-t <sec>	Runtime in seconds
	-r		For random io, sequence must be repeatable
	-R <on>		If one thread fails to meet rate, quit all
	-o <on>		Use direct IO if 1, buffered if 0
	-l		Generate per-job latency logs
	-w		Generate per-job bandwidth logs
	-f <file>	Read <file> for job descriptions
	-v		Print version information and exit

The <jobs> format is as follows:

directory=x	Use 'x' as the top level directory for storing files
rw=x		'x' may be: read, randread, write, randwrite,
		rw (read-write mix), randrw (read-write random mix)
size=x		Set file size to x bytes (x string can include k/m/g)
ioengine=x	'x' may be: aio/libaio/linuxaio for Linux aio,
		posixaio for POSIX aio, sync for regular read/write io,
		mmap for mmap'ed io, or sgio for direct SG_IO io. The
		latter only works on Linux on SCSI (or SCSI-like
		devices, such as usb-storage or sata/libata driven)
		devices.
iodepth=x	For async io, allow 'x' ios in flight
overwrite=x	If 'x', lay out a write file first.
prio=x		Run io at prio X, 0-7 is the kernel allowed range
prioclass=x	Run io at prio class X
bs=x		Use 'x' for thread blocksize. May include k/m postfix.
bsrange=x-y	Mix thread block sizes randomly between x and y. May
		also include k/m postfix.
direct=x	1 for direct IO, 0 for buffered IO
thinktime=x	"Think" x usec after each io
rate=x		Throttle rate to x KiB/sec
ratemin=x	Quit if rate of x KiB/sec can't be met
ratecycle=x	ratemin averaged over x msecs
cpumask=x	Only allow the job to run on CPUs defined by mask.
fsync=x		If writing, fsync after every x blocks have been written
startdelay=x	Start this thread x seconds after startup
timeout=x	Terminate x seconds after startup
offset=x	Start io at offset x (x string can include k/m/g)
invalidate=x	Invalidate page cache for file prior to doing io
sync=x		Use sync writes if x and writing
mem=x		If x == malloc, use malloc for buffers. If x == shm,
		use shm for buffers. If x == mmap, use anon mmap.
exitall		When one thread quits, terminate the others
bwavgtime=x	Average bandwidth stats over an x msec window.
create_serialize=x	If 'x', serialize file creation.
create_fsync=x	If 'x', run fsync() after file creation.
loops=x		Run the job 'x' number of times.
verify=x	If 'x' == md5, use md5 for verifies. If 'x' == crc32,
		use crc32 for verifies. md5 is 'safer', but crc32 is
		a lot faster. Only makes sense for writing to a file.
stonewall	Wait for preceding jobs to end before running.
numjobs=x	Create 'x' similar entries for this job
thread		Use pthreads instead of forked jobs
zonesize=x
zoneskip=y	Zone options must be paired. If given, the job
		will skip y bytes for every x read/written. This
		can be used to gauge hard drive speed over the entire
		platter, without reading everything. Both x/y can
		include k/m/g suffix.
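As a sketch of how the zone options combine (the job name and values here are
illustrative, not taken from the original text), a job that reads 256k at the
start of every zone and then skips ahead 1g, sampling speed across the whole
platter, might look like:

```ini
; hypothetical job: read 256k, then skip 1g, repeatedly
[zone_sample]
rw=read
zonesize=256k
zoneskip=1g
```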


Examples using a job file
-------------------------

A sample job file doing the same as above would look like this:

[read_file]
rw=0
bs=4096

[write_file]
rw=1
bs=16384

And fio would be invoked as:

$ fio -o1 -s -f file_with_above
106 | ||
107 | The second example would look like this: | |
108 | ||
109 | [rf1] | |
110 | rw=0 | |
111 | prio=6 | |
112 | ||
113 | [rf2] | |
114 | rw=0 | |
115 | prio=3 | |
116 | ||
117 | [rf3] | |
118 | rw=0 | |
119 | prio=0 | |
120 | direct=1 | |
121 | ||
122 | And fio would be invoked as: | |
123 | ||
124 | $ fio -o0 -s -b4096 -f file_with_above | |
125 | ||
126 | 'global' is a reserved keyword. When used as the filename, it sets the | |
127 | default options for the threads following that section. It is possible | |
128 | to have more than one global section in the file, as it only affects | |
129 | subsequent jobs. | |
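As an illustration (reusing the rf1/rf2 jobs from the example above), the
second job file could be rewritten with a global section so the shared
settings appear only once:

```ini
; defaults inherited by every job below this section
[global]
rw=0

[rf1]
prio=6

[rf2]
prio=3
```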
130 | ||
131 | Also see the examples/ dir for sample job files. | |
132 | ||
133 | ||
134 | Interpreting the output | |
135 | ----------------------- | |
136 | ||
137 | fio spits out a lot of output. While running, fio will display the | |
138 | status of the jobs created. An example of that would be: | |
139 | ||
140 | Threads now running: 2 : [ww] [5.73% done] | |
141 | ||
142 | The characters inside the square brackets denote the current status of | |
143 | each thread. The possible values (in typical life cycle order) are: | |
144 | ||
145 | Idle Run | |
146 | ---- --- | |
147 | P Thread setup, but not started. | |
148 | C Thread created and running, but not doing anything yet | |
149 | R Running, doing sequential reads. | |
150 | r Running, doing random reads. | |
151 | W Running, doing sequential writes. | |
152 | w Running, doing random writes. | |
153 | V Running, doing verification of written data. | |
154 | E Thread exited, not reaped by main thread yet. | |
155 | _ Thread reaped. | |
156 | ||
157 | The other values are fairly self explanatory - number of thread currently | |
158 | running and doing io, and the estimated completion percentage. | |
159 | ||
160 | When fio is done (or interrupted by ctrl-c), it will show the data for | |
161 | each thread, group of threads, and disks in that order. For each data | |
162 | direction, the output looks like: | |
163 | ||
164 | Client1 (g=0): err= 0: | |
165 | write: io= 32MiB, bw= 666KiB/s, runt= 50320msec | |
166 | slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92 | |
167 | clat (msec): min= 0, max= 631, avg=48.50, dev=86.82 | |
168 | bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68 | |
169 | cpu : usr=1.49%, sys=0.25%, ctx=7969 | |
170 | ||
171 | The client number is printed, along with the group id and error of that | |
172 | thread. Below is the io statistics, here for writes. In the order listed, | |
173 | they denote: | |
174 | ||
175 | io= Number of megabytes io performed | |
176 | bw= Average bandwidth rate | |
177 | runt= The runtime of that thread | |
178 | slat= Submission latency (avg being the average, dev being the | |
179 | standard deviation). This is the time it took to submit | |
180 | the io. For sync io, the slat is really the completion | |
181 | latency, since queue/complete is one operation there. | |
182 | clat= Completion latency. Same names as slat, this denotes the | |
183 | time from submission to completion of the io pieces. For | |
184 | sync io, clat will usually be equal (or very close) to 0, | |
185 | as the time from submit to complete is basically just | |
186 | CPU time (io has already been done, see slat explanation). | |
187 | bw= Bandwidth. Same names as the xlat stats, but also includes | |
188 | an approximate percentage of total aggregate bandwidth | |
189 | this thread received in this group. This last value is | |
190 | only really useful if the threads in this group are on the | |
191 | same disk, since they are then competing for disk access. | |
192 | cpu= CPU usage. User and system time, along with the number | |
193 | of context switches this thread went through. | |
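Reading the sample output above, the per= figure appears to be the thread's
average bandwidth taken as a share of the group's aggregate bandwidth (the
aggrb value in the group statistics); that relation is an inference from the
numbers, not stated by the text, but it can be checked with a little
arithmetic:

```python
# values copied from the sample output above
thread_avg_bw = 664.02   # avg= for Client1's writes, KiB/s
group_aggrb = 1302.0     # aggrb= on the WRITE line of group 0, KiB/s

# assumed relation: per= is the thread's share of the group aggregate
per = 100.0 * thread_avg_bw / group_aggrb
print(round(per, 2))  # matches the reported per=51.00%
```

The match here is exact, which supports the reading, but treat it as a
plausible interpretation rather than a documented formula.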
194 | ||
195 | After each client has been listed, the group statistics are printed. They | |
196 | will look like this: | |
197 | ||
198 | Run status group 0 (all jobs): | |
199 | READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec | |
200 | WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec | |
201 | ||
202 | For each data direction, it prints: | |
203 | ||
204 | io= Number of megabytes io performed. | |
205 | aggrb= Aggregate bandwidth of threads in this group. | |
206 | minb= The minimum average bandwidth a thread saw. | |
207 | maxb= The maximum average bandwidth a thread saw. | |
208 | mint= The minimum runtime of a thread. | |
209 | maxt= The maximum runtime of a thread. | |
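As a rough sanity check on the sample above, the READ numbers are consistent
with aggrb being the group's total io divided by the runtime of the slowest
thread (maxt); this derivation is an assumption inferred from the sample, not
something the text states:

```python
# READ line from the sample group statistics
total_io_kib = 64 * 1024    # io=64MiB, expressed in KiB
maxt_sec = 2955 / 1000.0    # maxt=2955msec

aggrb = total_io_kib / maxt_sec
print(int(round(aggrb)))  # matches the reported aggrb=22178
```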
210 | ||
211 | And finally, the disk statistics are printed. They will look like this: | |
212 | ||
213 | Disk stats (read/write): | |
214 | sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00% | |
215 | ||
216 | Each value is printed for both reads and writes, with reads first. The | |
217 | numbers denote: | |
218 | ||
219 | ios= Number of ios performed by all groups. | |
220 | merge= Number of merges io the io scheduler. | |
221 | ticks= Number of ticks we kept the disk busy. | |
222 | io_queue= Total time spent in the disk queue. | |
223 | util= The disk utilization. A value of 100% means we kept the disk | |
224 | busy constantly, 50% would be a disk idling half of the time. |