Description of problem:
============================
When a volume is created with bricks specified using the fully qualified domain names (FQDNs) of the storage nodes, "gluster volume heal <volume_name> info" should also report the FQDN of the storage node, the same way "info healed" and "info split-brain" already do.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.57rhs built on Jan 13 2014 06:59:05

How reproducible:
===================
Often

Steps to Reproduce:
===================
1. Create a replicated volume with bricks specified using the FQDNs of the storage nodes. Start the volume.
2. Bring a brick offline.
3. Create a fuse mount. Create a few files and directories.
4. Execute "gluster volume heal <volume_name> info".
(A condensed command sketch of these steps is given after the Expected results.)

Actual results:
====================
root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:51] >gluster v heal importer info | grep "Brick"
Brick domU-12-31-39-0A-99-B2:/rhs/bricks/importer/
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235:/rhs/bricks/importer/
Brick ip-10-2-34-53:/rhs/bricks/importer/
Brick ip-10-114-195-155:/rhs/bricks/importer/
Brick ip-10-159-26-108:/rhs/bricks/importer/

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:29:57] >gluster v heal importer info healed | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 6:30:07] >gluster v heal importer info split-brain | grep "Brick"
Brick domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/importer
Brick ip-10-82-210-192.ec2.internal:/rhs/bricks/importer
Brick ip-10-234-21-235.ec2.internal:/rhs/bricks/importer
Brick ip-10-2-34-53.ec2.internal:/rhs/bricks/importer
Brick ip-10-114-195-155.ec2.internal:/rhs/bricks/importer
Brick ip-10-159-26-108.ec2.internal:/rhs/bricks/importer

root@domU-12-31-39-0A-99-B2 [Jan-20-2014- 5:36:11] >gluster v info

Volume Name: exporter
Type: Distributed-Replicate
Volume ID: 31e01742-36c4-4fbf-bffb-bc9ae98920a7
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: domU-12-31-39-0A-99-B2.compute-1.internal:/rhs/bricks/exporter
Brick2: ip-10-82-210-192.ec2.internal:/rhs/bricks/exporter
Brick3: ip-10-234-21-235.ec2.internal:/rhs/bricks/exporter
Brick4: ip-10-2-34-53.ec2.internal:/rhs/bricks/exporter
Brick5: ip-10-114-195-155.ec2.internal:/rhs/bricks/exporter
Brick6: ip-10-159-26-108.ec2.internal:/rhs/bricks/exporter

Expected results:
====================
"gluster volume heal <volume_name> info" should report the FQDN of the storage node, like "info healed" and "info split-brain" do.
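For reference, a minimal command sketch of the reproduction steps above. The hostnames, brick paths, volume name, and mount point are placeholders for illustration only, not the ones used in this report:

    # on a storage node: create and start a replicated volume using FQDN bricks
    gluster volume create repvol replica 2 \
        node1.example.com:/rhs/bricks/repvol node2.example.com:/rhs/bricks/repvol
    gluster volume start repvol

    # bring one brick offline, e.g. by killing its brick process
    gluster volume status repvol      # note the PID of one brick process
    kill <brick-pid>

    # on a client: create a fuse mount and write a few files/directories
    mount -t glusterfs node1.example.com:/repvol /mnt/repvol
    mkdir /mnt/repvol/dir1
    touch /mnt/repvol/file{1..5}

    # compare the brick names reported by the heal commands
    gluster volume heal repvol info
    gluster volume heal repvol info healed
    gluster volume heal repvol info split-brain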
Patch URL: http://review.gluster.org/#/c/12212/
Patch merged. Not sure why bugzilla is not reflecting the status.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user