X-Git-Url: https://git.kernel.dk/?p=fio.git;a=blobdiff_plain;f=HOWTO;h=12974f3f4dcc563cd143a1084aaff17116d70d48;hp=d8c6b9414eef85be48779d7018682cf2bbd07470;hb=9a9c63f1608c6f63b10cd5575dee868c2d269c87;hpb=36ecec830e25ce8f707632c464e366b21a0b0f4d

diff --git a/HOWTO b/HOWTO
index d8c6b941..12974f3f 100644
--- a/HOWTO
+++ b/HOWTO
@@ -219,6 +219,25 @@ filename=str	Fio normally makes up a filename based on the job name,
 opendir=str	Tell fio to recursively add any file it can find in this
 		directory and down the file system tree.
 
+lockfile=str	Fio defaults to not locking any files before it does
+		IO to them. If a file or file descriptor is shared, fio
+		can serialize IO to that file to make the end result
+		consistent. This is useful for emulating real workloads that
+		share files. The lock modes are:
+
+			none		No locking. The default.
+			exclusive	Only one thread/process may do IO,
+					excluding all others.
+			readwrite	Read-write locking on the file. Many
+					readers may access the file at the
+					same time, but writes get exclusive
+					access.
+
+		The option may be post-fixed with a lock batch number. If
+		set, then each thread/process may do that many IOs to
+		the file before giving up the lock. Since lock acquisition is
+		expensive, batching the lock/unlocks will speed up IO.
+
 readwrite=str
 rw=str		Type of io pattern. Accepted values are:
 
@@ -317,6 +336,12 @@ bs_unaligned	If this option is given, any byte size value within bsrange
 zero_buffers	If this option is given, fio will init the IO buffers to
 		all zeroes. The default is to fill them with random data.
 
+refill_buffers	If this option is given, fio will refill the IO buffers
+		on every submit. The default is to only fill them at init
+		time and reuse that data. Only makes sense if zero_buffers
+		isn't specified, naturally. If data verification is enabled,
+		refill_buffers is also automatically enabled.
+
 nrfiles=int	Number of files to use for this job. Defaults to 1.
 
 openfiles=int	Number of files to keep open at the same time. Defaults to
@@ -350,6 +375,8 @@ ioengine=str	Defines how the job issues io to the file. The following
 			posixaio	glibc posix asynchronous io.
 
+			solarisaio	Solaris native asynchronous io.
+
 			mmap		File is memory mapped and data copied
 					to/from using memcpy(3).
@@ -435,7 +462,11 @@ fsync=int	If writing to a file, issue a sync of the dirty data
 		not sync the file. The exception is the sg io engine, which
 		synchronizes the disk cache anyway.
 
-overwrite=bool	If writing to a file, setup the file first and do overwrites.
+overwrite=bool	If true, writes to a file will always overwrite existing
+		data. If the file doesn't already exist, it will be
+		created before the write phase begins. If the file exists
+		and is large enough for the specified write phase, nothing
+		will be done.
 
 end_fsync=bool	If true, fsync file contents when the job exits.
 
@@ -443,10 +474,6 @@ fsync_on_close=bool	If true, fio will fsync() a dirty file on close.
 		This differs from end_fsync in that it will happen on
 		every file close, not just at the end of the job.
 
-rwmixcycle=int	Value in milliseconds describing how often to switch between
-		reads and writes for a mixed workload. The default is
-		500 msecs.
-
 rwmixread=int	How large a percentage of the mix should be reads.
 
 rwmixwrite=int	How large a percentage of the mix should be writes. If both
 
@@ -463,6 +490,12 @@ norandommap	Normally fio will cover every block of the file when doing
 		fio doesn't track potential block rewrites which may alter
 		the calculated checksum for that block.
 
+softrandommap	See norandommap. If fio runs with the random block map
+		enabled and it fails to allocate the map, this option
+		allows fio to continue the job without a random block map.
+		As coverage will not be as complete as with random maps,
+		this option is disabled by default.
+
 nice=int	Run the job with the given nice value. See man nice(2).
 
 prio=int	Set the io priority value of this job. Linux limits us to
@@ -793,6 +826,8 @@ Client1 (g=0): err= 0:
     bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
   cpu        : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
   IO depths  : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
+     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
+     complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/32768, short=0/0
     lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%,
     lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0%
@@ -830,6 +865,11 @@ IO depths=	The distribution of io depths over the job life time. The
 		16=	entries includes depths up to that value but higher than the
 		previous entry. In other words, it covers the range from 16 to 31.
 
+IO submit=	How many pieces of IO were submitted in a single submit
+		call. Each entry denotes that amount and below, until
+		the previous entry - eg, 8=100% means that we submitted
+		anywhere between 5 and 8 IOs per submit call.
+IO complete=	Like the above submit number, but for completions instead.
 IO issued=	The number of read/write requests issued, and how many
 		of them were short.
 IO latencies=	The distribution of IO completion latencies. This is the
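As a rough illustration of how the options documented in this change fit together, a job file along the lines of the sketch below shares one file between a reader and a writer. The section names, file name, and size are made up for the example, and the lockfile batch post-fix is left out since its exact syntax is not spelled out above.

; illustrative job file - names and sizes are invented for the example
[global]
; both jobs do IO to the same file, so serialize access with
; read-write locking: readers may share the lock, writes get exclusive access
filename=/tmp/fio-shared.dat
lockfile=readwrite
size=64m
; refill the IO buffers on every submit instead of reusing the data
; filled in at init time
refill_buffers
; if the random block map cannot be allocated, continue without it
softrandommap

[reader]
rw=randread

[writer]
rw=randwrite

With lockfile=readwrite, the reader may hold the lock at the same time as other readers while the writer gets exclusive access, matching the mode descriptions above; switching to lockfile=exclusive would instead let only one thread/process do IO to the file at a time.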