Description of problem:
------------------------
The output of the `gluster volume heal info' command lists bricks along with their internal hostnames (i.e. those returned by the `hostname' command on the instance). This is inconsistent with the output of other gluster commands like `gluster volume info', `gluster volume status', etc., which display the Public DNS names of the instances.

Another thing to note is that the output of the `gluster volume heal info' command shows the Public DNS names of the instances for those bricks which were brought down. For example, on a 6x3 volume, 3 out of 18 bricks were brought down. For these bricks that were not running, the command output displayed Public DNS names instead of the internal names.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.6.0.53-1.el6rhs.x86_64

How reproducible:
-----------------
Observed once

Steps to Reproduce:
--------------------
1. Create a 6x3 volume in an AWS gluster cluster and mount it on a client.
2. Kill a few bricks and create some files and directories on the mount point.
3. Run the `gluster volume heal info' command on one of the storage nodes.

Actual results:
-----------------
For all bricks that are connected, the command output displays the internal names of the instances, while the disconnected bricks are shown with the Public DNS names of the instances.

Expected results:
------------------
Show Public DNS names for all bricks.

Additional info:
I am eager to know the output of the 'gluster peer status' command in this case too. Could you provide the samples?
Here is the relevant upstream bug - https://bugzilla.redhat.com/show_bug.cgi?id=1208255
(In reply to SATHEESARAN from comment #2)
> I am eager to know the output of 'gluster peer status' command too in this
> case.
> Could you provide the samples ?

The output of the `gluster peer status' command in this case displays the Public DNS names of the instances.
Moving the bug to QE for verification, as the fix is available in 3.1.2 per comment 6.