Bug 807556 - [glusterfs-3.3.0qa30] - volume heal <volname> info displays improper output
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: replicate
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Pranith Kumar K
Depends On:
Blocks: 817967
 
Reported: 2012-03-28 02:57 EDT by M S Vishwanath Bhat
Modified: 2016-05-31 21:55 EDT
CC: 2 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:12:19 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

None
Description M S Vishwanath Bhat 2012-03-28 02:57:40 EDT
Description of problem:
Running gluster volume heal <volname> info displays improper output. The status field reports "brick is remote" for only two of the bricks, even though three of the machines are remote to the node running the command, and no status at all is shown for the other two bricks.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0qa30

How reproducible:
Consistent

Steps to Reproduce:
1. Create and start a 2*2 distribute-replicate volume.
2. Create some data on the mount point and, while I/O is in progress, bring down a brick.
3. Now run gluster volume heal <volname> info
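The steps above can be sketched as a shell session; the hostnames and brick paths are placeholders, not the exact ones from this report, and the commands must be run on a node of a live GlusterFS trusted storage pool:

```shell
# Step 1: create and start a 2x2 distribute-replicate volume
# (4 bricks, replica count 2 => 2 replica pairs distributed across).
gluster volume create hosdu replica 2 \
    server1:/data/bricks/hosdu_brick1 \
    server2:/data/bricks/hosdu_brick2 \
    server3:/data/bricks/hosdu_brick3 \
    server4:/data/bricks/hosdu_brick4
gluster volume start hosdu

# Mount the volume and generate some data.
mount -t glusterfs server1:/hosdu /mnt/hosdu
for i in $(seq 1 100); do
    dd if=/dev/urandom of=/mnt/hosdu/file$i bs=1M count=1
done &

# Step 2: while the writes are in progress, bring down one brick,
# e.g. by killing its glusterfsd process on the brick's node.
kill "$(gluster volume status hosdu server2:/data/bricks/hosdu_brick2 \
        | awk '/hosdu_brick2/ {print $NF}')"

# Step 3: inspect the heal status per brick.
gluster volume heal hosdu info
```

With a brick down mid-write, the replica pair diverges and heal info should list the pending entries per brick; this bug is about the per-brick Status line it prints alongside those counts.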
  
Actual results:
[root@QA-25 ~]# gluster v heal hosdu  info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0
Status: brick is remote

Brick 172.17.251.65:/data/bricks/hosdu_brick3
Number of entries: 0

Brick 172.17.251.64:/data/bricks/hosdu_brick4
Number of entries: 0
Status: brick is remote
[root@QA-25 ~]#

Out of the four bricks, only two are reported as remote, and no status is shown for the other two.



Expected results:
The status should ideally report the self-heal state for each brick, e.g. "self-heal completed", "self-heal started", or "self-heal aborted".


Additional info:
Comment 1 Anand Avati 2012-04-05 08:15:55 EDT
CHANGE: http://review.gluster.com/3074 (self-heald: Add node-uuid option for determining brick position) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Anand Avati 2012-04-05 08:16:28 EDT
CHANGE: http://review.gluster.com/3075 (mgmt/glusterd: Use the correct status string for filtering) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 Anand Avati 2012-04-05 08:17:04 EDT
CHANGE: http://review.gluster.com/3076 (self-heald: succeed heal info always) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 4 M S Vishwanath Bhat 2012-05-11 03:11:04 EDT
Now all the nodes in the cluster are listed properly, along with their lists of files.


[root@QA-24 ~]# gluster v heal hosdu info
Heal operation on volume hosdu has been successful

Brick 172.17.251.63:/data/bricks/hosdu_brick1
Number of entries: 0

Brick 172.17.251.66:/data/bricks/hosdu_brick2
Number of entries: 0


When the self heal was happening it displayed list of files properly.
