Description of problem:
=======================
Currently geo-replication status always returns the hostname, whereas volume info returns the IP/hostname depending on how the volume was configured.

[root@dhcp37-182 ~]# gluster volume info master

Volume Name: master
Type: Distributed-Replicate
Volume ID: 3ac902da-449b-4731-b950-e8d6a88f861e
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.182:/bricks/brick0/master_brick0
Brick2: 10.70.37.90:/bricks/brick0/master_brick1
Brick3: 10.70.37.102:/bricks/brick0/master_brick2
Brick4: 10.70.37.104:/bricks/brick0/master_brick3
Brick5: 10.70.37.170:/bricks/brick0/master_brick4
Brick6: 10.70.37.169:/bricks/brick0/master_brick5
Brick7: 10.70.37.182:/bricks/brick1/master_brick6
Brick8: 10.70.37.90:/bricks/brick1/master_brick7
Brick9: 10.70.37.102:/bricks/brick1/master_brick8
Brick10: 10.70.37.104:/bricks/brick1/master_brick9
Brick11: 10.70.37.170:/bricks/brick1/master_brick10
Brick12: 10.70.37.169:/bricks/brick1/master_brick11
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable

[root@dhcp37-182 ~]# gluster v geo status

MASTER NODE                          MASTER VOL    MASTER BRICK                     SLAVE USER    SLAVE                        SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick0     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:42
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick6     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick2     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick8     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick3     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:42
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick9     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick5     root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:41
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick11    root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:40
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick0/master_brick1     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick1/master_brick7     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick4     root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick10    root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A
[root@dhcp37-182 ~]#

Applications such as the scheduler script (schedule_georep.py), which compare the output of different gluster CLI commands (e.g. volume info and geo-rep status), therefore report every worker as offline:

[ WARN] Geo-rep workers Faulty/Offline, Faulty: [] Offline: ['10.70.37.182:/bricks/brick0/master_brick0', '10.70.37.90:/bricks/brick0/master_brick1', '10.70.37.102:/bricks/brick0/master_brick2', '10.70.37.104:/bricks/brick0/master_brick3', '10.70.37.170:/bricks/brick0/master_brick4', '10.70.37.169:/bricks/brick0/master_brick5', '10.70.37.182:/bricks/brick1/master_brick6', '10.70.37.90:/bricks/brick1/master_brick7', '10.70.37.102:/bricks/brick1/master_brick8', '10.70.37.104:/bricks/brick1/master_brick9', '10.70.37.170:/bricks/brick1/master_brick10', '10.70.37.169:/bricks/brick1/master_brick11']

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.9-1.el7rhgs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Configure the volume using IPs
2. Configure geo-replication between the master and the slave
3. Check geo-replication status and volume info

Actual results:
===============
Volume info shows IPs while geo-replication status shows hostnames for the master nodes.

Expected results:
=================
Geo-replication status should show the master nodes the same way the volume is configured.
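The failure mode can be illustrated with a minimal sketch of the comparison such a script performs. The hard-coded brick and worker lists below are simplified stand-ins for parsed CLI output, not schedule_georep.py's actual code:

```python
# Bricks as reported by `gluster volume info` (IP-based, as configured).
volume_bricks = {
    "10.70.37.182:/bricks/brick0/master_brick0",
    "10.70.37.90:/bricks/brick0/master_brick1",
}

# The same bricks as reported by geo-rep status, keyed by the
# hostname-based MASTER NODE column.
status_workers = {
    "dhcp37-182.lab.eng.blr.redhat.com:/bricks/brick0/master_brick0",
    "dhcp37-90.lab.eng.blr.redhat.com:/bricks/brick0/master_brick1",
}

# A plain string comparison finds no status entry for any brick:
# "10.70.37.182" never matches "dhcp37-182.lab.eng.blr.redhat.com",
# so every brick lands in the offline list even though its worker is Active.
offline = sorted(volume_bricks - status_workers)
print(offline)
```

With the fix, geo-rep status reports whichever form (IP or hostname) the volume was configured with, so the set difference above becomes empty.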
Upstream patch sent: http://review.gluster.org/14005
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/73026/
Verified with build:
glusterfs-3.7.9-3.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-3.el7rhgs.x86_64

If the volume is configured using IPs, the geo-rep status now shows the IPs. Moving the bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240