X-Git-Url: https://git.kernel.dk/?p=fio.git;a=blobdiff_plain;f=HOWTO;h=d18d59b4b824d498104143ff25553d8769cf8ba3;hp=c37a9e09ac53d69d94fa925a5486b8fc8387eb02;hb=dd32be11d1e158ce16a0266816df1a7b86389b32;hpb=2cafffbea5d2ed2f20d73efa0d82baa9046e0b12

diff --git a/HOWTO b/HOWTO
index c37a9e09..d18d59b4 100644
--- a/HOWTO
+++ b/HOWTO
@@ -11,6 +11,8 @@ Table of contents
8. Trace file format
9. CPU idleness profiling
10. Verification and triggers
+11. Log File Formats
+

1.0 Overview and history
------------------------
@@ -51,8 +53,8 @@ bottom, it contains the following basic parameters:

	IO engine	How do we issue io? We could be memory mapping the
			file, we could be using regular read/write, we
-			could be using splice, async io, syslet, or even
-			SG (SCSI generic sg).
+			could be using splice, async io, or even SG
+			(SCSI generic sg).

	IO depth	If the io engine is async, how large a queuing
			depth do we want to maintain?
@@ -327,18 +329,15 @@ directory=str	Prefix filenames with this directory. Used to place files
filename=str	Fio normally makes up a filename based on the job name,
		thread number, and file number. If you want to share
		files between threads in a job or several jobs, specify
-		a filename for each of them to override the default. If
-		the ioengine used is 'net', the filename is the host, port,
-		and protocol to use in the format of =host,port,protocol.
-		See ioengine=net for more. If the ioengine is file based, you
-		can specify a number of files by separating the names with a
-		':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
-		as the two working files, you would use
-		filename=/dev/sda:/dev/sdb. On Windows, disk devices are
-		accessed as \\.\PhysicalDrive0 for the first device,
-		\\.\PhysicalDrive1 for the second etc. Note: Windows and
-		FreeBSD prevent write access to areas of the disk containing
-		in-use data (e.g. filesystems).
+		a filename for each of them to override the default.
+		If the ioengine is file based, you can specify a number of
+		files by separating the names with a ':' colon. So if you
+		wanted a job to open /dev/sda and /dev/sdb as the two working
+		files, you would use filename=/dev/sda:/dev/sdb. On Windows,
+		disk devices are accessed as \\.\PhysicalDrive0 for the first
+		device, \\.\PhysicalDrive1 for the second etc. Note: Windows
+		and FreeBSD prevent write access to areas of the disk
+		containing in-use data (e.g. filesystems).
		If the wanted filename does need to include a colon, then
		escape that with a '\' character. For instance, if the
		filename is "/dev/dsk/foo@3,0:c", then you would use
@@ -372,6 +371,11 @@ filename_format=str
		default of $jobname.$jobnum.$filenum will be used if
		no other format specifier is given.

+unique_filename=bool	To avoid collisions between networked clients, fio
+		defaults to prefixing any generated filenames (with a directory
+		specified) with the source of the client connecting. To disable
+		this behavior, set this option to 0.
+
opendir=str	Tell fio to recursively add any file it can find in this
		directory and down the file system tree.

@@ -401,6 +405,7 @@ rw=str		Type of io pattern. Accepted values are:
			trimwrite	Mixed trims and writes. Blocks
					will be trimmed first, then written to.

+		Fio defaults to read if the option is not specified.
		For the mixed io types, the default is to split them 50/50.
		For certain types of io the result may still be skewed a bit,
		since the speed may be different. It is possible to specify
@@ -671,10 +676,23 @@ file_service_type=str	Defines how fio decides which file from a job to
		the next. Multiple files can still be open depending on
		'openfiles'.

-		The string can have a number appended, indicating how
-		often to switch to a new file. So if option random:4 is
-		given, fio will switch to a new random file after 4 ios
-		have been issued.
+		zipf		Use a zipfian distribution to decide what file
+				to access.
+
+		pareto		Use a pareto distribution to decide what file
+				to access.
+
+		gauss		Use a gaussian (normal) distribution to decide
+				what file to access.
+
+		For random, roundrobin, and sequential, a postfix can be
+		appended to tell fio how many I/Os to issue before switching
+		to a new file. For example, specifying
+		'file_service_type=random:8' would cause fio to issue 8 I/Os
+		before selecting a new file at random. For the non-uniform
+		distributions, a floating point postfix can be given to
+		influence how the distribution is skewed. See
+		'random_distribution' for a description of how that would work.

ioengine=str	Defines how the job issues io to the file. The following
		types are defined:
@@ -682,11 +700,14 @@ ioengine=str	Defines how the job issues io to the file. The following
			sync	Basic read(2) or write(2) io. lseek(2) is
				used to position the io location.

-			psync	Basic pread(2) or pwrite(2) io.
+			psync	Basic pread(2) or pwrite(2) io. Default on all
+				supported operating systems except for Windows.

			vsync	Basic readv(2) or writev(2) IO.

-			psyncv	Basic preadv(2) or pwritev(2) IO.
+			pvsync	Basic preadv(2) or pwritev(2) IO.
+
+			pvsync2	Basic preadv2(2) or pwritev2(2) IO.

			libaio	Linux native asynchronous io. Note that Linux
				may only support queued behaviour with
@@ -698,6 +719,7 @@ ioengine=str	Defines how the job issues io to the file. The following
			solarisaio Solaris native asynchronous io.

			windowsaio Windows native asynchronous io.
+				Default on Windows.

			mmap	File is memory mapped and data copied
				to/from using memcpy(3).
@@ -706,9 +728,6 @@ ioengine=str	Defines how the job issues io to the file. The following
				vmsplice(2) to transfer data from user
				space to the kernel.

-			syslet-rw Use the syslet system calls to make
-				regular read/write async.
-
			sg	SCSI generic sg v3 io. May either be
				synchronous using the SG_IO ioctl, or if
				the target is an sg character device
@@ -733,12 +752,13 @@ ioengine=str	Defines how the job issues io to the file. The following
			cpuio	Doesn't transfer any data, but burns CPU
				cycles according to the cpuload= and
-				cpucycle= options. Setting cpuload=85
+				cpuchunks= options. Setting cpuload=85
				will cause that job to do nothing but burn
				85% of the CPU. In case of SMP machines,
				use numjobs= to get desired CPU usage, as
				the cpuload only loads a single
-				CPU at the desired rate.
+				CPU at the desired rate. A job never finishes
+				unless there is at least one non-cpuio job.

			guasi	The GUASI IO engine is the Generic Userspace
				Asynchronous Syscall Interface approach
@@ -796,6 +816,9 @@ ioengine=str	Defines how the job issues io to the file. The following
				overwriting. The writetrim mode works well
				for this constraint.

+			pmemblk	Read and write through the NVML libpmemblk
+				interface.
+
			external Prefix to specify loading an external
				IO engine object file. Append the engine
				filename, eg ioengine=external:/tmp/foo.o
@@ -963,6 +986,8 @@ random_distribution=str:float	By default, fio will use a completely uniform
		random		Uniform random distribution
		zipf		Zipf distribution
		pareto		Pareto distribution
+		gauss		Normal (gaussian) distribution
+		zoned		Zoned random distribution

		When using a zipf or pareto distribution, an input value
		is also needed to define the access pattern. For zipf, this
@@ -971,7 +996,28 @@ random_distribution=str:float	By default, fio will use a completely uniform
		what the given input values will yield in terms of hit rates.
		If you wanted to use zipf with a theta of 1.2, you would use
		random_distribution=zipf:1.2 as the option. If a non-uniform
-		model is used, fio will disable use of the random map.
+		model is used, fio will disable use of the random map. For
+		the gauss distribution, a normal deviation is supplied as
+		a value between 0 and 100.
+
+		For a zoned distribution, fio supports specifying percentages
+		of IO access that should fall within what range of the file or
+		device. For example, given criteria of:
+
+			60% of accesses should be to the first 10%
+			30% of accesses should be to the next 20%
+			8% of accesses should be to the next 30%
+			2% of accesses should be to the next 40%
+
+		we can define that through zoning of the random accesses. For
+		the above example, the user would do:
+
+		random_distribution=zoned:60/10:30/20:8/30:2/40
+
+		similarly to how bssplit works for setting ranges and
+		percentages of block sizes. Like bssplit, it's possible to
+		specify separate zones for reads, writes, and trims. If just
+		one set is given, it'll apply to all of them.

percentage_random=int	For a random workload, set how big a percentage should
		be random. This defaults to 100%, in which case the workload
@@ -1022,7 +1068,8 @@ nice=int	Run the job with the given nice value. See man nice(2).

prio=int	Set the io priority value of this job. Linux limits us to
		a positive value between 0 and 7, with 0 being the highest.
-		See man ionice(1).
+		See man ionice(1). Refer to an appropriate manpage for
+		other operating systems since the meaning of priority may differ.

prioclass=int	Set the io priority class. See man ionice(1).

@@ -1128,7 +1175,7 @@ cpus_allowed_policy=str Set the policy of how fio distributes the CPUs
		one cpu per job. If not enough CPUs are given for the jobs
		listed, then fio will roundrobin the CPUs in the set.

-numa_cpu_nodes=str Set this job running on spcified NUMA nodes' CPUs. The
+numa_cpu_nodes=str Set this job running on specified NUMA nodes' CPUs. The
		arguments allow comma delimited list of cpu numbers,
		A-B ranges, or 'all'. Note, to enable numa options support, fio
		must be built on a system with libnuma-dev(el) installed.
@@ -1178,6 +1225,7 @@ mem=str	Fio can use various types of memory as the io unit buffer.
		The allowed values are:

			malloc	Use memory from malloc(3) as the buffers.
+				Default memory type.

			shm	Use shared memory as the buffers. Allocated
				through shmget(2).
@@ -1238,10 +1286,14 @@ exitall_on_error	When one job finishes in error, terminate the rest.
		The default is to wait for each job to finish.

bwavgtime=int	Average the calculated bandwidth over the given time. Value
-		is specified in milliseconds.
+		is specified in milliseconds. If the job also does bandwidth
+		logging through 'write_bw_log', then the minimum of this option
+		and 'log_avg_msec' will be used. Default: 500ms.

iopsavgtime=int	Average the calculated IOPS over the given time. Value
-		is specified in milliseconds.
+		is specified in milliseconds. If the job also does IOPS logging
+		through 'write_iops_log', then the minimum of this option and
+		'log_avg_msec' will be used. Default: 500ms.

create_serialize=bool	If true, serialize the file creating for the jobs.
			This may be handy to avoid interleaving of data
@@ -1541,7 +1593,7 @@ write_bw_log=str If given, write a bandwidth log of the jobs in this job
		filename. For this option, the suffix is _bw.x.log, where
		x is the index of the job (1..N, where N is the number of
		jobs). If 'per_job_logs' is false, then the filename will not
-		include the job index.
+		include the job index. See 'Log File Formats'.

write_lat_log=str Same as write_bw_log, except that this option stores io
		submission, completion, and total latencies instead. If no
@@ -1554,9 +1606,9 @@ write_lat_log=str Same as write_bw_log, except that this option stores io
		The actual log names will be foo_slat.x.log, foo_clat.x.log,
		and foo_lat.x.log, where x is the index of the job (1..N,
		where N is the number of jobs). This helps fio_generate_plot
-		fine the logs automatically. If 'per_job_logs' is false, then
-		the filename will not include the job index.
-
+		find the logs automatically. If 'per_job_logs' is false, then
+		the filename will not include the job index. See 'Log File
+		Formats'.

write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
		given with this option, the default filename of
@@ -1564,19 +1616,20 @@ write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
		(1..N, where N is the number of jobs). Even if the filename
		is given, fio will still append the type of log. If
		'per_job_logs' is false, then the filename will not include
-		the job index.
+		the job index. See 'Log File Formats'.

log_avg_msec=int By default, fio will log an entry in the iops, latency,
		or bw log for every IO that completes. When writing to the
		disk log, that can quickly grow to a very large size. Setting
		this option makes fio average each log entry over the specified
		period of time, reducing the resolution of the log.
-		See log_max as well. Defaults to 0, logging all entries.
+		See log_max_value as well. Defaults to 0, logging all entries.
+
+log_max_value=bool If log_avg_msec is set, fio logs the average over that
+		window. If you instead want to log the maximum value, set this
+		option to 1. Defaults to 0, meaning that averaged values are
+		logged.
-log_max=bool	If log_avg_msec is set, fio logs the average over that window.
-		If you instead want to log the maximum value, set this option
-		to 1. Defaults to 0, meaning that averaged values are logged.
-.

log_offset=int	If this is set, the iolog options will include the byte
		offset for the IO entry as well as the other data values.
@@ -1787,12 +1840,12 @@ that defines them is selected.

[pvsync2] hipri	Set RWF_HIPRI on IO, indicating to the kernel that
		it's of higher priority than normal.

-[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
+[cpuio] cpuload=int Attempt to use the specified percentage of CPU cycles.

-[cpu] cpuchunks=int Split the load into cycles of the given time. In
+[cpuio] cpuchunks=int Split the load into cycles of the given time. In
		microseconds.

-[cpu] exit_on_io_done=bool Detect when IO threads are done, then exit.
+[cpuio] exit_on_io_done=bool Detect when IO threads are done, then exit.

[netsplice] hostname=str
[net] hostname=str The host name or IP address to use for TCP or UDP based IO.
@@ -1862,6 +1915,15 @@ be the starting port number since fio will use a range of ports.
		1 : allocate space immediately inside defragment event,
		    and free right after event

+[rbd] clustername=str Specifies the name of the Ceph cluster.
+[rbd] rbdname=str Specifies the name of the RBD.
+[rbd] pool=str Specifies the name of the Ceph pool containing RBD.
+[rbd] clientname=str Specifies the username (without the 'client.' prefix)
+		used to access the Ceph cluster. If the clustername is
+		specified, the clientname shall be the full type.id
+		string. If no type. prefix is given, fio will add
+		'client.' by default.
+
[mtd] skip_bad=bool Skip operations against known bad blocks.

[libhdfs] hdfsdirectory libhdfs will create chunk in this HDFS directory
@@ -1962,7 +2024,9 @@ runt=		The runtime of that thread
cpu=		CPU usage. User and system time, along with the number
		of context switches this thread went through, usage of
		system and user time, and finally the number of major
-		and minor page faults.
+		and minor page faults. The CPU utilization numbers are
+		averages for the jobs in that reporting group, while the
+		context and fault counters are summed.
IO depths=	The distribution of io depths over the job life time. The
		numbers are divided into powers of 2, so for example the
		16= entries includes depths up to that value but higher
@@ -2230,3 +2294,36 @@ the verify_state_load option. If that is set, fio will load the previously
stored state. For a local fio run this is done by loading the files directly,
and on a client/server run, the server backend will ask the client to send
the files over and load them from there.
+
+
+11.0 Log File Formats
+---------------------
+
+Fio supports a variety of log file formats, for logging latencies, bandwidth,
+and IOPS. The logs share a common format, which looks like this:
+
+time (msec), value, data direction, offset
+
+Time for the log entry is always in milliseconds. The value logged depends
+on the type of log; it will be one of the following:
+
+	Latency log		Value is latency in usecs
+	Bandwidth log		Value is in KB/sec
+	IOPS log		Value is IOPS
+
+Data direction is one of the following:
+
+	0			IO is a READ
+	1			IO is a WRITE
+	2			IO is a TRIM
+
+The offset is the offset, in bytes, from the start of the file, for that
+particular IO. The logging of the offset can be toggled with 'log_offset'.
+
+If windowed logging is enabled through 'log_avg_msec', then fio doesn't log
+individual IOs. Instead it logs the average values over the specified
+period of time. Since 'data direction' and 'offset' are per-IO values,
+they aren't applicable if windowed logging is enabled. If windowed logging
+is enabled and 'log_max_value' is set, then fio logs maximum values in
+that window instead of averages.
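
For illustration, a bandwidth log written with 'write_bw_log' (and with
'log_offset' enabled, so the fourth column is present) might contain entries
like the following; the numbers are made-up sample values rather than output
from any real run:

	16, 253952, 0, 4096
	32, 249856, 0, 1052672
	48, 247808, 1, 0
	64, 251904, 1, 524288

Reading the first entry: at 16 msec a read completed, the bandwidth value
logged for it was 253952 KB/sec, and the IO started at byte offset 4096 of
the file. Had 'log_avg_msec' been set (say to 500), each line would instead
hold the average (or, with 'log_max_value', the maximum) over a 500 msec
window, and the last two columns would not be meaningful.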