Description of problem:
Running "gluster volume heal <volname> info" displays incorrect output. The status field reports "brick is remote" for only two of the bricks, even though three of the bricks are remote to the node where the command was run, and no status at all is shown for the other two bricks.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0qa30

How reproducible:
Consistent

Steps to Reproduce:
1. Create and start a 2x2 distribute-replicate volume.
2. Create some data on the mount point and, while that is in progress, bring down one brick.
3. Run "gluster volume heal <volname> info".

Actual results:
[root@QA-25 ~]# gluster v heal hosdu info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0
Status: brick is remote

Brick 172.17.251.65:/data/bricks/hosdu_brick3
Number of entries: 0

Brick 172.17.251.64:/data/bricks/hosdu_brick4
Number of entries: 0
Status: brick is remote
[root@QA-25 ~]#

Of the four bricks, only two are reported as remote, and no status is shown for the other two.

Expected results:
The status should reflect the heal state, for example "self-heal completed", "self-heal started", or "self-heal aborted".

Additional info:
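For reference, a minimal reproduction sketch following the steps above, assuming the brick layout shown in the output (the mount point, the choice of brick 2 as the one brought down, and killing its glusterfsd process are illustrative assumptions, not part of the original report):

gluster volume create hosdu replica 2 \
    172.17.251.63:/data/bricks/hosdu_brick1 172.17.251.66:/data/bricks/hosdu_brick2 \
    172.17.251.65:/data/bricks/hosdu_brick3 172.17.251.64:/data/bricks/hosdu_brick4
gluster volume start hosdu
mount -t glusterfs 172.17.251.63:/hosdu /mnt/hosdu
# take one brick offline, e.g. by killing the glusterfsd process serving hosdu_brick2
kill -TERM <pid of glusterfsd for hosdu_brick2>
# create some data on the mount point while that brick is down
dd if=/dev/zero of=/mnt/hosdu/testfile bs=1M count=100
gluster volume heal hosdu info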
CHANGE: http://review.gluster.com/3074 (self-heald: Add node-uuid option for determining brick position) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/3075 (mgmt/glusterd: Use the correct status string for filtering) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/3076 (self-heald: succeed heal info always) merged in master by Vijay Bellur (vijay)
Now all the nodes in the cluster are listed properly, along with the list of files.

[root@QA-24 ~]# gluster v heal hosdu info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0

While self-heal was in progress, the list of files was displayed properly.