Bug 807556

Summary: [glusterfs-3.3.0qa30] - volume heal <volname> info displays improper output
Product: [Community] GlusterFS
Component: replicate
Version: pre-release
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Reporter: M S Vishwanath Bhat <vbhat>
Assignee: Pranith Kumar K <pkarampu>
CC: gluster-bugs, mzywusko
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 17:12:19 UTC
Bug Blocks: 817967

Description M S Vishwanath Bhat 2012-03-28 06:57:40 UTC
Description of problem:
Running gluster volume heal <volname> info displays improper output. The Status field reports "brick is remote" for only two of the bricks, even though three of the bricks are remote from the node where the command was run, and no status at all is shown for the other two bricks.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0qa30

How reproducible:
Consistent

Steps to Reproduce:
1. Create and start a 2x2 distribute-replicate volume.
2. Create some data on the mount point and, while the writes are in progress, bring down one brick.
3. Run gluster volume heal <volname> info (a rough command sketch follows below).
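
For reference, one possible command sequence matching these steps. The volume name (hosdu), host IPs and brick paths are taken from the output below and are only illustrative; killing the brick's glusterfsd process is just one way to bring a brick down:

# create and start a 2x2 distribute-replicate volume
gluster volume create hosdu replica 2 \
    172.17.251.63:/data/bricks/hosdu_brick1 172.17.251.66:/data/bricks/hosdu_brick2 \
    172.17.251.65:/data/bricks/hosdu_brick3 172.17.251.64:/data/bricks/hosdu_brick4
gluster volume start hosdu

# mount the volume and generate some data in the background
mount -t glusterfs 172.17.251.63:/hosdu /mnt/hosdu
for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/hosdu/file$i bs=1M count=1; done &

# while the writes are running, bring down one brick
# (find the brick's glusterfsd PID with "gluster volume status hosdu" and kill it)
kill -9 <glusterfsd-pid-of-one-brick>

# check the self-heal information
gluster volume heal hosdu info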
  
Actual results:
[root@QA-25 ~]# gluster v heal hosdu  info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0
Status: brick is remote

Brick 172.17.251.65:/data/bricks/hosdu_brick3
Number of entries: 0

Brick 172.17.251.64:/data/bricks/hosdu_brick4
Number of entries: 0
Status: brick is remote
[root@QA-25 ~]#

Out of the four nodes, only two are reported as remote, and no status is shown for the other two.



Expected results:
Status should ideally report the self-heal state, e.g. self-heal completed, self-heal started, or self-heal aborted.


Additional info:

Comment 1 Anand Avati 2012-04-05 12:15:55 UTC
CHANGE: http://review.gluster.com/3074 (self-heald: Add node-uuid option for determining brick position) merged in master by Vijay Bellur (vijay)

Comment 2 Anand Avati 2012-04-05 12:16:28 UTC
CHANGE: http://review.gluster.com/3075 (mgmt/glusterd: Use the correct status string for filtering) merged in master by Vijay Bellur (vijay)

Comment 3 Anand Avati 2012-04-05 12:17:04 UTC
CHANGE: http://review.gluster.com/3076 (self-heald: succeed heal info always) merged in master by Vijay Bellur (vijay)

Comment 4 M S Vishwanath Bhat 2012-05-11 07:11:04 UTC
Now all the nodes in the cluster are listed properly along with the list of files.


[root@QA-24 ~]# gluster v heal hosdu info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0


While the self-heal was in progress, the list of files was displayed properly.
