having to go through FUSE. This ioengine
defines engine specific options.
- hdfs Read and write through Hadoop (HDFS).
+ libhdfs Read and write through Hadoop (HDFS).
+		The 'filename' option is used to specify
+		the host,port of the HDFS name-node to
+		connect to. This engine interprets
+		offsets a little differently: since
+		files in HDFS cannot be modified once
+		created, random writes are not possible.
+		To imitate this, the libhdfs engine
+		expects a bunch of small files to be
+		created over HDFS, and the engine will
+		randomly pick a file out of those files
+		based on the offset generated by the fio
+		backend (see the example job file on how
+		to create such files, using the rw=write
+		option). Note that you might need to set
+		the necessary environment variables to
+		work with hdfs/libhdfs properly.
external Prefix to specify loading an external
IO engine object file. Append the engine
having to go through FUSE. This ioengine defines engine specific
options.
.TP
-.B hdfs
-Read and write through Hadoop (HDFS)
+.B libhdfs
+Read and write through Hadoop (HDFS). The \fBfilename\fR option is used to
+specify the host,port of the HDFS name-node to connect to. This engine
+interprets offsets a little differently: since files in HDFS cannot be
+modified once created, random writes are not possible. To imitate this, the
+libhdfs engine expects a bunch of small files to be created over HDFS, and the
+engine will randomly pick a file out of those files based on the offset
+generated by the fio backend (see the example job file on how to create such
+files, using the rw=write option). Note that you might need to set the
+necessary environment variables to work with hdfs/libhdfs properly.
.RE
.P
.RE
}
td->o.numa_memnodes = strdup(nodelist);
numa_free_nodemask(verify_bitmask);
-
+
break;
case MPOL_LOCAL:
case MPOL_DEFAULT:
},
#endif
#ifdef CONFIG_LIBHDFS
- { .ival = "hdfs",
+ { .ival = "libhdfs",
.help = "Hadoop Distributed Filesystem (HDFS) engine"
},
#endif
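The documentation added above points the reader to an example job file for pre-creating the small files that the libhdfs engine picks from. A minimal sketch of such a job follows; the host name, port, sizes, and section names here are illustrative assumptions, not part of this patch:

```ini
; Hypothetical libhdfs job: host/port and size values are placeholders.
[global]
ioengine=libhdfs
filename=namenode.example.com,9000	; host,port of the HDFS name-node
bs=64k
size=256m

; First pass: create the bunch of small files (HDFS files are
; write-once, so random writes must be imitated via many files).
[create-files]
rw=write

; Second pass: the engine picks one of the files created above
; for each offset the fio backend generates.
[random-read]
stonewall
rw=randread
```

As the text notes, the usual libhdfs environment variables (e.g. CLASSPATH pointing at the Hadoop jars) need to be set for a working Hadoop installation before running such a job.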