fio
---

fio is a tool that will spawn a number of threads doing a particular
type of io action as specified by the user. fio takes a number of
global parameters, each inherited by the thread unless parameters
given to that thread override the setting.


Source
------

fio resides in a git repo, the canonical place is:

git://brick.kernel.dk/data/git/fio.git
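
Assuming git is installed, a local copy can typically be obtained with:

$ git clone git://brick.kernel.dk/data/git/fio.git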

Snapshots are frequently generated as well, and they include the git
meta data. You can download them here:

http://brick.kernel.dk/snaps/


Options
-------

$ fio
  -s          IO is sequential
  -b          block size in KiB for each io
  -t <sec>    Runtime in seconds
  -r          For random io, sequence must be repeatable
  -R <on>     If one thread fails to meet rate, quit all
  -o <on>     Use direct IO if 1, buffered if 0
  -l          Generate per-job latency logs
  -w          Generate per-job bandwidth logs
  -f <file>   Read <file> for job descriptions
  -v          Print version information and exit

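As a quick example (the job file name 'jobfile' is just a placeholder),
a sequential, direct-IO run that stops after 60 seconds could be
started as:

$ fio -s -o1 -t60 -f jobfile
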
The <jobs> format is as follows:

  directory=x     Use 'x' as the top level directory for storing files
  rw=x            'x' may be: read, randread, write, randwrite,
                  rw (read-write mix), randrw (read-write random mix)
  size=x          Set file size to x bytes (x string can include k/m/g)
  ioengine=x      'x' may be: aio/libaio/linuxaio for Linux aio,
                  posixaio for POSIX aio, sync for regular read/write io,
                  mmap for mmap'ed io, or sgio for direct SG_IO io. The
                  latter only works on Linux on SCSI (or SCSI-like
                  devices, such as usb-storage or sata/libata driven)
                  devices.
  iodepth=x       For async io, allow 'x' ios in flight
  overwrite=x     If 'x', layout a write file first.
  prio=x          Run io at prio X, 0-7 is the kernel allowed range
  prioclass=x     Run io at prio class X
  bs=x            Use 'x' for thread blocksize. May include k/m postfix.
  bsrange=x-y     Mix thread block sizes randomly between x and y. May
                  also include k/m postfix.
  direct=x        1 for direct IO, 0 for buffered IO
  thinktime=x     "Think" x usec after each io
  rate=x          Throttle rate to x KiB/sec
  ratemin=x       Quit if rate of x KiB/sec can't be met
  ratecycle=x     ratemin averaged over x msecs
  cpumask=x       Only allow job to run on CPUs defined by mask.
  fsync=x         If writing, fsync after every x blocks have been written
  startdelay=x    Start this thread x seconds after startup
  timeout=x       Terminate x seconds after startup
  offset=x        Start io at offset x (x string can include k/m/g)
  invalidate=x    Invalidate page cache for file prior to doing io
  sync=x          Use sync writes if x and writing
  mem=x           If x == malloc, use malloc for buffers. If x == shm,
                  use shm for buffers. If x == mmap, use anon mmap.
  exitall         When one thread quits, terminate the others
  bwavgtime=x     Average bandwidth stats over an x msec window.
  create_serialize=x  If 'x', serialize file creation.
  create_fsync=x  If 'x', run fsync() after file creation.
  loops=x         Run the job 'x' number of times.
  verify=x        If 'x' == md5, use md5 for verifies. If 'x' == crc32,
                  use crc32 for verifies. md5 is 'safer', but crc32 is
                  a lot faster. Only makes sense for writing to a file.
  stonewall       Wait for preceding jobs to end before running.
  numjobs=x       Create 'x' similar entries for this job
  thread          Use pthreads instead of forked jobs
  zonesize=x
  zoneskip=y      Zone options must be paired. If given, the job
                  will skip y bytes for every x read/written. This
                  can be used to gauge hard drive speed over the entire
                  platter, without reading everything. Both x/y can
                  include k/m/g suffix. See the example below.
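
As a sketch of the zone options (the job name, sizes, and block size
below are made up for illustration), a job that reads 1m out of every
256m of a 4g file could be described as:

[zoned_read]
rw=read
bs=64k
size=4g
zonesize=1m
zoneskip=255m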


Examples using a job file
-------------------------

A sample job file would look like this:

[read_file]
rw=read
bs=4096

[write_file]
rw=write
bs=16384

And fio would be invoked as:

$ fio -o1 -s -f file_with_above

The second example would look like this:

[rf1]
rw=read
prio=6

[rf2]
rw=read
prio=3

[rf3]
rw=read
prio=0
direct=1

And fio would be invoked as:

$ fio -o0 -s -b4096 -f file_with_above

'global' is a reserved keyword. When used as the filename, it sets the
default options for the threads following that section. It is possible
to have more than one global section in the file, as it only affects
subsequent jobs.
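
As a sketch (the job names and values are made up), a job file using a
global section to share settings between two jobs might look like:

[global]
rw=randread
bs=4096
direct=1

[job1]
prio=2

[job2]
prio=6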

Also see the examples/ dir for sample job files.


Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads now running: 2 : [ww] [5.73% done]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle  Run
----  ---
P           Thread setup, but not started.
C           Thread created and running, but not doing anything yet
      R     Running, doing sequential reads.
      r     Running, doing random reads.
      W     Running, doing sequential writes.
      w     Running, doing random writes.
V           Running, doing verification of written data.
E           Thread exited, not reaped by main thread yet.
_           Thread reaped.

The other values are fairly self explanatory - number of threads currently
running and doing io, and the estimated completion percentage.

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io= 32MiB, bw= 666KiB/s, runt= 50320msec
    slat (msec): min= 0, max= 136, avg= 0.03, dev= 1.92
    clat (msec): min= 0, max= 631, avg=48.50, dev=86.82
    bw (KiB/s) : min= 0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=     Number of megabytes io performed
bw=     Average bandwidth rate
runt=   The runtime of that thread
  slat= Submission latency (avg being the average, dev being the
        standard deviation). This is the time it took to submit
        the io. For sync io, the slat is really the completion
        latency, since queue/complete is one operation there.
  clat= Completion latency. Same names as slat, this denotes the
        time from submission to completion of the io pieces. For
        sync io, clat will usually be equal (or very close) to 0,
        as the time from submit to complete is basically just
        CPU time (io has already been done, see slat explanation).
  bw=   Bandwidth. Same names as the xlat stats, but also includes
        an approximate percentage of total aggregate bandwidth
        this thread received in this group. This last value is
        only really useful if the threads in this group are on the
        same disk, since they are then competing for disk access
        (see the note after this list).
cpu=    CPU usage. User and system time, along with the number
        of context switches this thread went through.
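
As a rough check against the sample output above: this thread averaged
664.02 KiB/s while the group's aggregate write bandwidth (aggrb, shown
below) is 1302 KiB/s, and 664.02 / 1302 is roughly 51%, which matches
per=51.00%.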

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=     Number of megabytes io performed.
aggrb=  Aggregate bandwidth of threads in this group.
minb=   The minimum average bandwidth a thread saw.
maxb=   The maximum average bandwidth a thread saw.
mint=   The minimum runtime of a thread.
maxt=   The maximum runtime of a thread.

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=       Number of ios performed by all groups.
merge=     Number of merges performed by the io scheduler.
ticks=     Number of ticks we kept the disk busy.
in_queue=  Total time spent in the disk queue.
util=      The disk utilization. A value of 100% means we kept the disk
           busy constantly, 50% would be a disk idling half of the time.