one cpu per job. If not enough CPUs are given for the jobs
listed, then fio will roundrobin the CPUs in the set.
-numa_cpu_nodes=str Set this job running on spcified NUMA nodes' CPUs. The
+numa_cpu_nodes=str Set this job running on specified NUMA nodes' CPUs. The
			argument allows a comma-delimited list of cpu numbers,
			A-B ranges, or 'all'. Note, to enable numa options support,
fio must be built on a system with libnuma-dev(el) installed.
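As a sketch, a job pinned to the CPUs of NUMA nodes 0, 2 and 3 might look like
this (job name and target file are illustrative; requires fio built with
libnuma-dev(el)):

```ini
; illustrative job file, not a recommended configuration
[numa-pinned]
rw=randread
filename=/tmp/fio.dat    ; placeholder target
numa_cpu_nodes=0,2-3     ; run on the CPUs of nodes 0, 2 and 3
```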
through 'write_iops_log', then the minimum of this option and
'log_avg_msec' will be used. Default: 500ms.
-create_serialize=bool If true, serialize the file creating for the jobs.
+create_serialize=bool If true, serialize the file creation for the jobs.
This may be handy to avoid interleaving of data
files, which may greatly depend on the filesystem
used and even the number of processors in the system.
runs of that job would then waste time recreating the file
set again and again.
+unlink_each_loop=bool Unlink job files after each iteration or loop.
+
loops=int Run the specified number of iterations of this job. Used
to repeat the same workload a given number of times. Defaults
to 1.
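For example, a job that repeats the same sequential-read workload five times
could be written as (job name and size are illustrative):

```ini
[repeat-read]
rw=read
size=1g
loops=5     ; run the identical workload five times
```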
The actual log names will be foo_slat.x.log, foo_clat.x.log,
and foo_lat.x.log, where x is the index of the job (1..N,
where N is the number of jobs). This helps fio_generate_plot
- fine the logs automatically. If 'per_job_logs' is false, then
+ find the logs automatically. If 'per_job_logs' is false, then
the filename will not include the job index. See 'Log File
Formats'.
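For instance, with a log prefix of foo and two jobs, the per-job latency logs
would be named along these lines:

```
foo_slat.1.log  foo_clat.1.log  foo_lat.1.log    ; job 1
foo_slat.2.log  foo_clat.2.log  foo_lat.2.log    ; job 2
```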
the --inflate-log command line parameter. The files will be
stored with a .fz suffix.
+log_unix_epoch=bool If set, fio will log Unix timestamps to the log
+ files produced by enabling write_type_log for each log type, instead
+ of the default zero-based timestamps.
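As a sketch, enabling Unix-epoch timestamps for a latency log might look like
this (job name and log prefix are illustrative):

```ini
[epoch-logs]
rw=randwrite
write_lat_log=mylat    ; produces mylat_*.x.log files
log_unix_epoch=1       ; log Unix timestamps instead of zero-based ones
```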
+
block_error_percentiles=bool If set, record errors in trim block-sized
units from writes and trims and output a histogram of
how many trims it took to get to errors, and what kind
[mtd] skip_bad=bool Skip operations against known bad blocks.
[libhdfs] hdfsdirectory libhdfs will create chunks in this HDFS directory
-[libhdfs] chunck_size the size of the chunck to use for each file.
+[libhdfs] chunk_size the size of the chunk to use for each file.
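A minimal libhdfs job sketch using these options might look like this
(directory and size values are illustrative; any additional connection
options the engine requires are omitted):

```ini
[hdfs-test]
ioengine=libhdfs
hdfsdirectory=/fio-data    ; illustrative HDFS directory for the chunks
chunk_size=1048576         ; illustrative 1 MiB chunk per file
rw=read
```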
6.0 Interpreting the output