Read and write through Hadoop (HDFS). The :file:`filename` option
is used to specify the host and port of the HDFS name-node to
connect to. This engine interprets offsets a little differently.
In HDFS, files once
created cannot be modified, so random writes are not possible. To
imitate this, the libhdfs engine expects a bunch of small files to be
created over HDFS, and will randomly pick a file from them
based on the offset generated by the fio backend (see the example
job file for how to create such files, using the ``rw=write`` option). Please
note, it may be necessary to set environment variables to work
with HDFS/libhdfs properly. Each job uses its own connection to
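
The workflow above can be sketched as a job file. This is only a sketch: the
name-node host, port, sizes, and file count below are illustrative, and it
assumes fio was built with libhdfs support and that the JVM/Hadoop environment
variables (e.g. ``JAVA_HOME``, ``CLASSPATH``) are already set correctly::

    ; Illustrative libhdfs job file (all values are placeholders).
    ; First pre-create a bunch of small files, then read them randomly.
    [global]
    ioengine=libhdfs
    filename=namenode.example.com,9000  ; host,port of the HDFS name-node
    bs=64k
    size=1m                             ; size of each small file
    nrfiles=16                          ; how many small files to use

    [create-files]
    rw=write                            ; writes create the files on HDFS

    [random-read]
    stonewall                           ; wait for file creation to finish
    rw=randread                         ; each offset picks one file above

Running the ``create-files`` section first matters because, as noted above,
HDFS files cannot be modified once created; the random-read job can then only
pick among files that already exist.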