Description of problem:
The 'gluster volume heal $volname info' command shows the nodes' fully-qualified system hostnames instead of the hostnames provided to Gluster during volume creation (when they differ).

Version-Release number of selected component (if applicable):
[root@duke ~]# gluster --version
glusterfs 3.5.3 built on Nov 13 2014 11:06:07

(Also saw the same behavior on 3.6.2-1 from RPMs at http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-6.6/x86_64/)

How reproducible:
100%

Steps to Reproduce:
1. Create a GlusterFS volume on two Gluster nodes using hostname aliases (e.g. configured in /etc/hosts) that differ from the system-configured fully-qualified hostnames:
# gluster volume create gluster_vol replica 2 node1-ib:/bricks/brick1 node2-ib:/bricks/brick1
2. Start the volume:
# gluster volume start gluster_vol
3. Show the heal info:
# gluster volume heal gluster_vol info

Actual results:
Brick node1.localdomain.net:/bricks/brick1/
Number of entries: 0

Brick node2.localdomain.net:/bricks/brick1/
Number of entries: 0

Expected results:
Brick node1-ib:/bricks/brick1/
Number of entries: 0

Brick node2-ib:/bricks/brick1/
Number of entries: 0

Additional info:
This appears to be a display-only bug -- the Gluster replication traffic appears to use the proper hostnames/IPs -- it just doesn't render the brick hostnames in the "heal info" output correctly.
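For reference, a minimal end-to-end reproduction sketch (the alias names, addresses, and brick paths below are illustrative, mirroring the steps above; run the /etc/hosts step on both nodes):

# cat >> /etc/hosts <<'EOF'
10.10.10.1 node1-ib
10.10.10.2 node2-ib
EOF
# gluster peer probe node2-ib      <-- run from node1
# gluster volume create gluster_vol replica 2 node1-ib:/bricks/brick1 node2-ib:/bricks/brick1
# gluster volume start gluster_vol
# gluster volume info gluster_vol      <-- lists the -ib aliases as expected
# gluster volume heal gluster_vol info <-- lists the FQDNs instead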
Can you paste gluster peer status output?
Ah, my apologies for omitting that. Here is an unobfuscated paste of the pertinent details:

NODE 1:
=======================================
[root@duke ~]# gluster peer status
Number of Peers: 1

Hostname: duchess-ib
Uuid: 1a240151-668a-47ca-9cb5-9955f9fde38a
State: Peer in Cluster (Connected)

[root@duke ~]# gluster volume info

Volume Name: gluster_disk
Type: Replicate
Volume ID: 04954a9d-b93a-4401-aeaf-0d55aec47316
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: duke-ib:/bricks/brick1
Brick2: duchess-ib:/bricks/brick1

[root@duke ~]# gluster volume heal gluster_disk info
Brick duke.jonheese.local:/bricks/brick1/
Number of entries: 0

Brick duchess.jonheese.local:/bricks/brick1/
Number of entries: 0

[root@duke ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.1  duke-ib
10.10.10.2  duchess-ib

NODE 2:
=======================================
[root@duchess ~]# gluster peer status
Number of Peers: 1

Hostname: duke-ib
Uuid: e679b3e5-8f0e-4bc3-b784-6914046d6a0b
State: Peer in Cluster (Connected)

[root@duchess ~]# gluster volume info

Volume Name: gluster_disk
Type: Replicate
Volume ID: 04954a9d-b93a-4401-aeaf-0d55aec47316
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: duke-ib:/bricks/brick1
Brick2: duchess-ib:/bricks/brick1

[root@duchess ~]# gluster volume heal gluster_disk info
Brick duke.jonheese.local:/bricks/brick1/
Number of entries: 0

Brick duchess.jonheese.local:/bricks/brick1/
Number of entries: 0

[root@duchess ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.1  duke-ib
10.10.10.2  duchess-ib

Thank you.

Regards,
Jon Heese
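Follow-up, in case it helps with triage: the hostnames glusterd has persisted can also be checked straight from its working directory. The paths below assume the default working directory of /var/lib/glusterd; I believe the per-brick store files carry a hostname= line, but treat that as an assumption:

# cat /var/lib/glusterd/peers/*          <-- peer records (uuid, state, hostname)
# grep -r hostname /var/lib/glusterd/vols/gluster_disk/bricks/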
Bug 1176354 has some similarities, as it also involves finding the right hostname for a system. Atin is looking into a solution for this.
This bug is getting closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.