Bug 1855966

Summary: [IPV6] Geo-replication session fails to start sync with IPV6 hostnames
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: geo-replication
Assignee: Sunny Kumar <sunkumar>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.5
CC: csaba, dwalveka, khiremat, puebele, rcyriac, rhs-bugs, rkothiya, sabose, sacharya, sajmoham, sheggodu, storage-qa-internal, sunkumar
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.5.z Batch Update 3
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-6.0-41
Doc Type: Bug Fix
Doc Text:
Previously, when IPv6 addresses were used during brick creation, geo-replication failed to start, leaving the geo-replication sessions faulty. With this update, geo-replication parses the full brick path and hostname from the volfile, and geo-replication sessions start as expected.
Story Points: ---
Clone Of: 1855965
Environment: rhhiv
Last Closed: 2020-12-17 04:51:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1855965    
Attachments:
  tar of /var/log/glusterfs/geo-replication from master node (flags: none)

Description SATHEESARAN 2020-07-11 11:42:41 UTC
Description of problem:
-----------------------
A geo-replication session was established with IPv6 hostnames, but when the geo-rep session is started, it never actually starts syncing.

Version-Release number of selected component (if applicable):
---------------------------------------------------------------
RHGS 3.5.2 ( glusterfs-6.0-37.1.el8rhgs )
glusterfs-geo-replication-6.0-37.1.el8rhgs 

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Establish a geo-rep session from master to slave, where both master and slave use IPv6 only (no IPv4).
2. Start the session. (See the sketch of these steps below.)
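
For reference, a minimal sketch of these reproduction steps as a Python wrapper around the gluster CLI is given below; the volume and host names (mastervol, slavevol, slave.example.com) are hypothetical placeholders and not taken from the actual test setup.

#!/usr/bin/env python3
# Minimal sketch of the reproduction steps; names are hypothetical placeholders.
import subprocess

MASTER_VOL = "mastervol"
SLAVE_HOST = "slave.example.com"   # assumed to resolve only over IPv6
SLAVE_VOL = "slavevol"
slave = f"{SLAVE_HOST}::{SLAVE_VOL}"

# Step 1: establish the geo-rep session from master to slave.
subprocess.run(["gluster", "volume", "geo-replication", MASTER_VOL, slave,
                "create", "push-pem"], check=True)

# Step 2: start the session, then check whether it ever leaves the Created state.
subprocess.run(["gluster", "volume", "geo-replication", MASTER_VOL, slave,
                "start"], check=True)
subprocess.run(["gluster", "volume", "geo-replication", MASTER_VOL, slave,
                "status"], check=True)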

Actual results:
-----------------
The session fails to start; the geo-rep session remains in the Created state.

Expected results:
-----------------
The session should start and sync data from master to slave.


Additional info:

Comment 2 SATHEESARAN 2020-07-11 11:44:24 UTC
Created attachment 1700689 [details]
tar of /var/log/glusterfs/geo-replication from master node

Archived content of /var/log/glusterfs/geo-replication is attached

Comment 4 SATHEESARAN 2020-07-12 07:56:26 UTC
The issue has been root-caused and the upstream patch [1] has been posted.

[1] - https://review.gluster.org/#/c/glusterfs/+/24706/
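
For context, the failure boils down to how a brick entry of the form "<host>:<brick-path>" is split: a literal IPv6 address contains ':' characters itself, so splitting on the first ':' truncates the address. The actual patch makes geo-rep take the full brick and hostname from the volfile; the snippet below is only a minimal illustration of the colon ambiguity, and split_hostname_brick is a hypothetical helper, not code from the patch.

# Illustration of the IPv6 colon ambiguity (not the actual patch code).
def split_hostname_brick(entry):
    # "fd00::10:/bricks/brick1/b1".split(":", 1) would yield
    # ("fd00", ":10:/bricks/brick1/b1") -- a truncated host and a bogus brick.
    # Splitting on the last ':' keeps the IPv6 address intact, since brick
    # paths themselves contain no ':'.
    host, brick = entry.rsplit(":", 1)
    return host, brick

print(split_hostname_brick("fd00::10:/bricks/brick1/b1"))
# -> ('fd00::10', '/bricks/brick1/b1')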

Comment 5 SATHEESARAN 2020-07-13 10:11:35 UTC
This issue is not a blocker for RHGS 3.5.2-async, as was initially proposed.
The issue occurs only when IPv6 addresses are used directly at the slave site.

I repeated the testing with IPv6 FQDNs and the issue is gone.

Comment 15 SATHEESARAN 2020-11-02 08:04:04 UTC
Tested with glusterfs-6.0-46.el8rhgs and found that files are synced to the secondary volume when the geo-rep session is configured with IPv4 addresses.


Here is more detailed information about the test:
0. Created the primary cluster with FQDNs and enabled cluster.enable-shared-storage, which creates the gluster-shared-storage volume used for meta operations.
1. Created the source/primary volume with FQDNs on the primary cluster.
2. Created the destination/secondary cluster and volume with IPv4.
3. Mounted the primary volume on the primary cluster and created a few files on the FUSE-mounted volume.
4. Created the geo-rep session from the primary to the secondary volume using IP addresses for the secondary host.
5. Configured the geo-rep session to make use of the meta volume.
6. Started the session.
7. Followed the geo-rep status to make sure that the sync completed.
8. Mounted the volume on the secondary cluster and compared the sha256sums against the primary volume; they matched. (See the sketch below.)
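
For illustration, the checksum comparison in step 8 amounts to something like the sketch below; the mount point paths (/mnt/primary, /mnt/secondary) are hypothetical placeholders for the FUSE mounts of the two volumes.

#!/usr/bin/env python3
# Minimal sketch of the step-8 checksum comparison; mount paths are placeholders.
import hashlib
from pathlib import Path

PRIMARY_MNT = Path("/mnt/primary")
SECONDARY_MNT = Path("/mnt/secondary")

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare every regular file on the primary mount with its counterpart on the
# secondary mount; report anything missing or mismatching.
for src in PRIMARY_MNT.rglob("*"):
    if not src.is_file():
        continue
    dst = SECONDARY_MNT / src.relative_to(PRIMARY_MNT)
    if not dst.is_file():
        print(f"MISSING  {dst}")
    elif sha256_of(src) != sha256_of(dst):
        print(f"MISMATCH {src}")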

Comment 21 errata-xmlrpc 2020-12-17 04:51:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603