Bug 1055968

Summary: AFR: "gluster volume heal <volume_name> info" doesn't report the FQDN of storage nodes.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: spandura
Component: replicate
Assignee: Ravishankar N <ravishankar>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.1
CC: nlevinki, pkarampu, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 17:21:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1265470, 1265633
Bug Blocks:

Description spandura 2014-01-21 10:58:03 UTC
Description of problem:
============================
When a volume is created with bricks that use the fully qualified domain names (FQDNs) of the storage nodes, "gluster volume heal <volume_name> info" should also report the FQDNs of the storage nodes, as "info healed" and "info split-brain" already do.
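
A quick way to confirm the mismatch is to compare the brick lines printed by the two commands (a minimal sketch, assuming the volume name "importer" used in the output below):

# Compare the brick identifiers reported by "heal info" against those
# reported by "heal info healed"; any difference in the host part
# (short hostname / IP vs. FQDN) demonstrates this bug.
diff <(gluster volume heal importer info | grep '^Brick') \
     <(gluster volume heal importer info healed | grep '^Brick')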

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.57rhs built on Jan 13 2014 06:59:05

How reproducible:
===================
Often

Steps to Reproduce:
===================
1. Create a replicated volume with bricks specified using the FQDNs of the storage nodes. Start the volume.

2. Bring one brick offline.

3. Create a FUSE mount. Create a few files and directories.

4. Execute "gluster volume heal <volume_name> info" (a consolidated command sketch follows these steps).
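
The steps above condensed into commands (a sketch only; the volume, host, and mount names below are placeholders, not the ones from this report):

# 1. Create and start a replicated volume whose bricks use FQDNs.
#    (Hypothetical host names; replica 2 with two bricks for brevity.)
gluster volume create repvol replica 2 \
    node1.example.com:/rhs/bricks/repvol node2.example.com:/rhs/bricks/repvol
gluster volume start repvol

# 2. Bring one brick offline, e.g. by stopping or killing its brick process
#    on node2.example.com (any method that takes down a single brick works).

# 3. Create a FUSE mount and write a few files and directories.
mkdir -p /mnt/repvol
mount -t glusterfs node1.example.com:/repvol /mnt/repvol
mkdir -p /mnt/repvol/dir1
touch /mnt/repvol/dir1/file{1..5}

# 4. Check which host names the heal-info output reports for the bricks.
gluster volume heal repvol info | grep '^Brick'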

Actual results:
====================
root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:51] >gluster v heal importer info | grep "Brick"
Brick domU-12-31-39-0A-99-B2:/rhs/bricks/importer/
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235:/rhs/bricks/importer/
Brick ip-10-2-34-53:/rhs/bricks/importer/
Brick ip-10-114-195-155:/rhs/bricks/importer/
Brick ip-10-159-26-108:/rhs/bricks/importer/

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:29:57] >gluster v heal importer info healed | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer


root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:07] >gluster v heal importer info split-brain | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 5:36:11] >gluster v info

Volume Name: exporter
Type: Distributed-Replicate
Volume ID: 31e01742-36c4-4fbf-bffb-bc9ae98920a7
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/exporter
Brick2: ip-10-82-210-192.ec2.internal:/rhs/bricks/exporter
Brick3: ip-10-234-21-235.ec2.internal:/rhs/bricks/exporter
Brick4: ip-10-2-34-53.ec2.internal:/rhs/bricks/exporter
Brick5: ip-10-114-195-155.ec2.internal:/rhs/bricks/exporter
Brick6: ip-10-159-26-108.ec2.internal:/rhs/bricks/exporter 

Expected results:
====================
"gluster volume heal <volume_name> info" should report the fqdn of the storage node.

Comment 2 Vivek Agarwal 2015-12-03 17:21:24 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.