Description of problem: We should document that when a MapReduce job is run with non-default volumes for input and output, the default volume also has to be started; otherwise the MapReduce job will fail.
Additional information: I ran: su - bigtop -c "hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-101.jar -input glusterfs://HadoopVol2/user/bigtop/in -output glusterfs://HadoopVol3/user/bigtop/out -mapper /bin/cat -reducer /usr/bin/wc" where HadoopVol{2..3} are non-default volumes. The default volume (HadoopVol1) has to be started (and not full) for the duration of the job, because the HadoopVol1/user/username/.staging directory is created during the process.
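A minimal sketch of the check the documentation could suggest before submitting a job: verify the default volume is started, and start it if not. The volume name HadoopVol1 is taken from this report; the `gluster volume status`/`gluster volume start` commands are the standard GlusterFS CLI, but treat this as an illustrative sketch rather than text from the guide.

```shell
# Sketch: make sure the default Gluster volume (HadoopVol1 in this report)
# is started before submitting any MapReduce job, even when input and
# output live on other volumes, because the .staging directory is
# created on the default volume.
DEFAULT_VOL=HadoopVol1

if command -v gluster >/dev/null 2>&1; then
  # "gluster volume status <vol>" fails for a stopped volume;
  # fall back to starting it in that case.
  gluster volume status "$DEFAULT_VOL" >/dev/null 2>&1 \
    || gluster volume start "$DEFAULT_VOL"
else
  # gluster CLI not on this host; print the command an admin would run.
  echo "gluster CLI not available; run: gluster volume start $DEFAULT_VOL"
fi
```

The same check could be wrapped around the `hadoop jar ... hadoop-streaming-*.jar` invocation shown above so the job never starts while HadoopVol1 is stopped.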
I have updated the installation guide in the "Using Hadoop" section (near the end) to state that HadoopVol needs to be running at all times when using Hadoop.