Bug 1055968 - AFR : "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
Summary: AFR : "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On: 1265470 1265633
Blocks:
 
Reported: 2014-01-21 10:58 UTC by spandura
Modified: 2016-09-17 12:19 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:21:24 UTC
Embargoed:



Description spandura 2014-01-21 10:58:03 UTC
Description of problem:
============================
When a volume is created with bricks specified using the fully qualified domain names (FQDNs) of the storage nodes, "gluster volume heal <volume_name> info" should report the FQDN of each storage node in its output, just as "gluster volume heal <volume_name> info healed" and "gluster volume heal <volume_name> info split-brain" already do.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.57rhs built on Jan 13 2014 06:59:05

How reproducible:
===================
Often

Steps to Reproduce:
===================
1. Create a replicated volume with bricks having the fqdn of the storage nodes. Start the volume. 

2. Bring down a brick offline. 

3. Create fuse mount. Create few files and directories. 

4. Execute "gluster volume heal <volume_name> info"
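
A minimal command sequence illustrating the steps above (the volume name, hostnames, and brick paths are hypothetical, and the brick is taken offline here by killing its glusterfsd process, which is one common way to simulate a brick failure):

# 1. Create and start a replicated volume with FQDN-addressed bricks
gluster volume create testvol replica 2 \
    node1.example.com:/rhs/bricks/testvol node2.example.com:/rhs/bricks/testvol
gluster volume start testvol

# 2. Bring one brick offline, e.g. by killing its brick process
#    (the PID is listed by "gluster volume status testvol")
kill <brick-pid>

# 3. Mount the volume via FUSE and create a few files and directories
mount -t glusterfs node1.example.com:/testvol /mnt/testvol
mkdir /mnt/testvol/dir1 /mnt/testvol/dir2
touch /mnt/testvol/file1 /mnt/testvol/file2

# 4. Check which hostname format the heal info output uses
gluster volume heal testvol info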

Actual results:
====================
root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:51] >gluster v heal importer info | grep "Brick"
Brick domU-12-31-39-0A-99-B2:/rhs/bricks/importer/
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235:/rhs/bricks/importer/
Brick ip-10-2-34-53:/rhs/bricks/importer/
Brick ip-10-114-195-155:/rhs/bricks/importer/
Brick ip-10-159-26-108:/rhs/bricks/importer/

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:29:57] >gluster v heal importer info healed | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer


root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:07] >gluster v heal importer info split-brain | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 5:36:11] >gluster v info

Volume Name: exporter
Type: Distributed-Replicate
Volume ID: 31e01742-36c4-4fbf-bffb-bc9ae98920a7
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/exporter
Brick2: ip-10-82-210-192.ec2.internal:/rhs/bricks/exporter
Brick3: ip-10-234-21-235.ec2.internal:/rhs/bricks/exporter
Brick4: ip-10-2-34-53.ec2.internal:/rhs/bricks/exporter
Brick5: ip-10-114-195-155.ec2.internal:/rhs/bricks/exporter
Brick6: ip-10-159-26-108.ec2.internal:/rhs/bricks/exporter 

Expected results:
====================
"gluster volume heal <volume_name> info" should report the fqdn of the storage node.

Comment 2 Vivek Agarwal 2015-12-03 17:21:24 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which this issue was reported is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

