Bug 807556 - [glusterfs-3.3.0qa30] - volume heal <volname> info displays improper output
Summary: [glusterfs-3.3.0qa30] - volume heal <volname> info displays improper output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-03-28 06:57 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:55 UTC
CC: 2 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:12:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description M S Vishwanath Bhat 2012-03-28 06:57:40 UTC
Description of problem:
Running gluster volume heal <volname> info displays improper output. In the Status field, "brick is remote" is reported for only two of the bricks, whereas three of the bricks are remote from the node where the command was run, and no status at all is shown for the other two bricks.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0qa30

How reproducible:
Consistent

Steps to Reproduce:
1. Create and start a 2x2 distributed-replicate volume.
2. Create some data on the mount point and, while the I/O is in progress, bring down one of the bricks.
3. Run gluster volume heal <volname> info (see the command sketch below).
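
For reference, below is a rough sketch of the reproduction sequence, reusing the volume name and brick layout from the output further down. The mount point, the dd workload and the way the brick is brought down are illustrative assumptions, not the exact steps used in this report.

gluster volume create hosdu replica 2 \
    172.17.251.63:/data/bricks/hosdu_brick1 172.17.251.66:/data/bricks/hosdu_brick2 \
    172.17.251.65:/data/bricks/hosdu_brick3 172.17.251.64:/data/bricks/hosdu_brick4
gluster volume start hosdu
mount -t glusterfs 172.17.251.63:/hosdu /mnt/hosdu

# write some data while one brick of a replica pair goes down
dd if=/dev/urandom of=/mnt/hosdu/file1 bs=1M count=100 &
gluster volume status hosdu        # note the PID of one brick process
kill -KILL <brick-pid>
wait

gluster volume heal hosdu info

With one brick of a replica pair down, the heal info command is expected to list, per brick, the entries pending self-heal along with a meaningful status.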
  
Actual results:
[root@QA-25 ~]# gluster v heal hosdu  info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0
Status: brick is remote

Brick 172.17.251.65:/data/bricks/hosdu_brick3
Number of entries: 0

Brick 172.17.251.64:/data/bricks/hosdu_brick4
Number of entries: 0
Status: brick is remote
[root@QA-25 ~]#

Out of the four bricks, only two are reported as remote, and no status is shown for the other two.



Expected results:
The status should ideally indicate the state of self-heal on each brick, for example "self-heal completed", "self-heal started" or "self-heal aborted".


Additional info:

Comment 1 Anand Avati 2012-04-05 12:15:55 UTC
CHANGE: http://review.gluster.com/3074 (self-heald: Add node-uuid option for determining brick position) merged in master by Vijay Bellur (vijay)

Comment 2 Anand Avati 2012-04-05 12:16:28 UTC
CHANGE: http://review.gluster.com/3075 (mgmt/glusterd: Use the correct status string for filtering) merged in master by Vijay Bellur (vijay)

Comment 3 Anand Avati 2012-04-05 12:17:04 UTC
CHANGE: http://review.gluster.com/3076 (self-heald: succeed heal info always) merged in master by Vijay Bellur (vijay)

Comment 4 M S Vishwanath Bhat 2012-05-11 07:11:04 UTC
All the nodes in the cluster are now listed properly, along with the list of files.


[root@QA-24 ~]# gluster v heal hosdu info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0


While the self-heal was in progress, the list of files was displayed properly.

