Bug 1327552 - [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Aravinda VK
QA Contact: Rahul Hinduja
Blocks: Gluster-HC-1 1311817 1327553 1328706
 
Reported: 2016-04-15 11:22 UTC by Rahul Hinduja
Modified: 2016-06-23 05:17 UTC

Fixed In Version: glusterfs-3.7.9-3
Doc Type: Bug Fix
Clones: 1327553
Environment:
Last Closed: 2016-06-23 05:17:50 UTC




Links
System: Red Hat Product Errata
ID: RHBA-2016:1240
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 Update 3
Last Updated: 2016-06-23 08:51:28 UTC

Description Rahul Hinduja 2016-04-15 11:22:09 UTC
Description of problem:
=======================

Currently, geo-replication status always reports the hostname for master nodes, whereas volume info reports the IP or hostname depending on how the volume was configured.

[root@dhcp37-182 ~]# gluster volume info master 
 
Volume Name: master
Type: Distributed-Replicate
Volume ID: 3ac902da-449b-4731-b950-e8d6a88f861e
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.182:/bricks/brick0/master_brick0
Brick2: 10.70.37.90:/bricks/brick0/master_brick1
Brick3: 10.70.37.102:/bricks/brick0/master_brick2
Brick4: 10.70.37.104:/bricks/brick0/master_brick3
Brick5: 10.70.37.170:/bricks/brick0/master_brick4
Brick6: 10.70.37.169:/bricks/brick0/master_brick5
Brick7: 10.70.37.182:/bricks/brick1/master_brick6
Brick8: 10.70.37.90:/bricks/brick1/master_brick7
Brick9: 10.70.37.102:/bricks/brick1/master_brick8
Brick10: 10.70.37.104:/bricks/brick1/master_brick9
Brick11: 10.70.37.170:/bricks/brick1/master_brick10
Brick12: 10.70.37.169:/bricks/brick1/master_brick11
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root@dhcp37-182 ~]# gluster v geo status
 
MASTER NODE                          MASTER VOL    MASTER BRICK                     SLAVE USER    SLAVE                        SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick0     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:42          
dhcp37-182.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick6     root          ssh://10.70.37.122::slave    10.70.37.144    Active     Changelog Crawl    2016-04-15 09:42:41          
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick2     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A                          
dhcp37-102.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick8     root          ssh://10.70.37.122::slave    10.70.37.218    Passive    N/A                N/A                          
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick3     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:42          
dhcp37-104.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick9     root          ssh://10.70.37.122::slave    10.70.37.175    Active     Changelog Crawl    2016-04-15 09:42:41          
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick5     root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:41          
dhcp37-169.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick11    root          ssh://10.70.37.122::slave    10.70.37.122    Active     Changelog Crawl    2016-04-15 09:42:40          
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick0/master_brick1     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A                          
dhcp37-90.lab.eng.blr.redhat.com     master        /bricks/brick1/master_brick7     root          ssh://10.70.37.122::slave    10.70.37.217    Passive    N/A                N/A                          
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick0/master_brick4     root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A                          
dhcp37-170.lab.eng.blr.redhat.com    master        /bricks/brick1/master_brick10    root          ssh://10.70.37.122::slave    10.70.37.123    Passive    N/A                N/A                          
[root@dhcp37-182 ~]# 

Applications such as the scheduler script (schedule_georep.py), which compare output from different gluster CLI commands (for example, volume info and geo-rep status), therefore report all workers as offline, as in the warning below:

 [  WARN] Geo-rep workers Faulty/Offline, Faulty: [] Offline: ['10.70.37.182:/bricks/brick0/master_brick0', '10.70.37.90:/bricks/brick0/master_brick1', '10.70.37.102:/bricks/brick0/master_brick2', '10.70.37.104:/bricks/brick0/master_brick3', '10.70.37.170:/bricks/brick0/master_brick4', '10.70.37.169:/bricks/brick0/master_brick5', '10.70.37.182:/bricks/brick1/master_brick6', '10.70.37.90:/bricks/brick1/master_brick7', '10.70.37.102:/bricks/brick1/master_brick8', '10.70.37.104:/bricks/brick1/master_brick9', '10.70.37.170:/bricks/brick1/master_brick10', '10.70.37.169:/bricks/brick1/master_brick11']
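For illustration, here is a minimal sketch (in Python, with hypothetical helper and field names; the actual schedule_georep.py logic differs in detail) of the kind of comparison that misfires:

def find_offline_workers(volinfo_bricks, status_rows):
    # volinfo_bricks: brick list from `gluster volume info`, e.g.
    #   ["10.70.37.182:/bricks/brick0/master_brick0", ...]
    # status_rows: parsed `gluster volume geo-replication status` output,
    #   one dict per worker
    known = set("%s:%s" % (row["master_node"], row["brick"])
                for row in status_rows)
    # Volume info reports IPs while status reports hostnames, so no
    # brick ever matches a status row and every brick ends up flagged
    # Offline, exactly as in the WARN message above.
    return [b for b in volinfo_bricks if b not in known]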



Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.7.9-1.el7rhgs.x86_64

How reproducible:
=================
1/1


Steps to Reproduce:
===================
1. Configure a volume using IP addresses
2. Configure geo-replication between the master and slave volumes
3. Check geo-replication status and compare it with volume info

Actual results:
===============

Volume info shows IPs, while geo-replication status shows hostnames for the master nodes.

Expected results:
=================

Geo-replication status should report master nodes the same way the volume is configured (IP or hostname).

Comment 2 Aravinda VK 2016-04-18 06:55:50 UTC
Upstream patch sent:
http://review.gluster.org/14005
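
The general direction of the fix, sketched under the assumption that status should report the brick host exactly as recorded in volume info rather than the locally resolved hostname (the function below is hypothetical; see the patch above for the actual implementation):

import socket
import subprocess
import xml.etree.ElementTree as ET

def master_node_for_brick(volname, brick_path):
    # Parse `gluster volume info <vol> --xml` and return the host part
    # of the matching brick, so status mirrors the volume configuration.
    out = subprocess.check_output(
        ["gluster", "volume", "info", volname, "--xml"])
    for brick in ET.fromstring(out).iter("brick"):
        host, _, path = (brick.text or "").partition(":")
        if path == brick_path:
            return host  # IP or hostname, exactly as configured
    # Fall back to the locally resolved hostname if the brick is absent.
    return socket.gethostname()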

Comment 4 Aravinda VK 2016-04-26 09:06:08 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/73026/

Comment 6 Rahul Hinduja 2016-05-02 14:28:44 UTC
Verified with build: 
glusterfs-3.7.9-3.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-3.el7rhgs.x86_64

If the volumes are configured using IPs, geo-rep status now shows the IPs. Moving the bug to Verified state.

Comment 9 errata-xmlrpc 2016-06-23 05:17:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

