Bug 1855966 - [IPV6] Geo-replication session fails to start sync with IPV6 hostnames
Summary: [IPV6] Geo-replication session fails to start sync with IPV6 hostnames
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Sunny Kumar
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1855965
 
Reported: 2020-07-11 11:42 UTC by SATHEESARAN
Modified: 2020-12-17 04:52 UTC
CC: 13 users

Fixed In Version: glusterfs-6.0-41
Doc Type: Bug Fix
Doc Text:
Previously, when IPv6 addresses were used during brick creation, geo-replication failed to start, resulting in faulty geo-replication sessions. With this update, geo-replication correctly parses the full brick path and hostname from the volfile, and sessions start as expected.
Clone Of: 1855965
Environment:
rhhiv
Last Closed: 2020-12-17 04:51:53 UTC
Embargoed:


Attachments
tar of /var/log/glusterfs/geo-replication from master node (80.00 KB, application/x-tar)
2020-07-11 11:44 UTC, SATHEESARAN


Links
System ID: Red Hat Product Errata RHBA-2020:5603
Last Updated: 2020-12-17 04:52:15 UTC

Description SATHEESARAN 2020-07-11 11:42:41 UTC
Description of problem:
-----------------------
A geo-replication session was established with IPv6 hostnames, but when the geo-rep session is started, it never actually begins syncing.

Version-Release number of selected component (if applicable):
---------------------------------------------------------------
RHGS 3.5.2 ( glusterfs-6.0-37.1.el8rhgs )
glusterfs-geo-replication-6.0-37.1.el8rhgs 

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Establish the geo-rep session from master to slave, where both master and slave use IPv6 only (no IPv4); a CLI sketch follows this list.
2. Start the session
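
For reference, a minimal CLI sketch of the two steps above. The volume and host names are placeholders, and the slave hostname is assumed to resolve only to AAAA (IPv6) records; passwordless SSH between the sites is assumed to be in place.

# 1. Create the geo-rep session (run on a master node; names are hypothetical)
gluster volume geo-replication mastervol slave1.example.com::slavevol create push-pem

# 2. Start the session and watch its status
gluster volume geo-replication mastervol slave1.example.com::slavevol start
gluster volume geo-replication mastervol slave1.example.com::slavevol status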

Actual results:
-----------------
The session fails to start; the geo-rep session remains in the Created state.

Expected results:
-----------------
The session should start syncing from master to slave.


Additional info:

Comment 2 SATHEESARAN 2020-07-11 11:44:24 UTC
Created attachment 1700689 [details]
tar of /var/log/glusterfs/geo-replication from master node

Archived content of /var/log/glusterfs/geo-replication is attached

Comment 4 SATHEESARAN 2020-07-12 07:56:26 UTC
The issue has been root-caused and an upstream patch[1] has been posted.

[1] - https://review.gluster.org/#/c/glusterfs/+/24706/
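
For context only, an illustration of the parsing pitfall (not the content of the upstream patch, which is linked above): a brick entry combines host and path as HOST:/PATH, and splitting it at the first ':' misparses raw IPv6 hosts because the address itself contains colons. A rough shell sketch with a made-up brick string:

brick='2001:db8::10:/bricks/brick1'
echo "${brick%%:*}"    # naive first-colon split -> "2001" (wrong host)
echo "${brick%:/*}"    # split at the ":/" boundary -> "2001:db8::10"
echo "/${brick##*:/}"  # remaining brick path -> "/bricks/brick1"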

Comment 5 SATHEESARAN 2020-07-13 10:11:35 UTC
This issue is not a blocker for RHGS 3.5.2-async, as initially proposed.
This issue happens with direct usage of IPv6 addresses at the slave site.

I have repeated the testing with an IPv6 FQDN and the issue is gone.
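
If it helps anyone reproducing this, a quick way to confirm whether the slave name in use is a raw IPv6 address or an FQDN that resolves over IPv6 (the hostname below is a placeholder):

getent ahostsv6 slave1.example.com   # lists IPv6 addresses the name resolves to
host -t AAAA slave1.example.com      # alternative check using bind-utils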

Comment 15 SATHEESARAN 2020-11-02 08:04:04 UTC
Tested with glusterfs-6.0-46.el8rhgs and found that files are synced to the secondary volume when the geo-rep session is configured with IPv4 addresses.


Here is more detailed information about the test (a hedged CLI sketch follows this list):
0. Created the primary cluster with FQDNs and enabled cluster.enable-shared-storage, which creates the gluster-shared-storage volume for meta operations
1. Created the source/primary volume with FQDNs on the primary cluster
2. Created the destination/secondary cluster and volume with IPv4
3. Mounted the primary volume on the primary cluster and created a few files on the FUSE-mounted volume
4. Created the geo-rep session from the primary to the secondary volume using the IP address of the secondary host
5. Configured the geo-rep session to make use of the meta volume
6. Started the session
7. Followed the geo-rep status to make sure that the sync completed
8. Mounted the volume on the secondary cluster and compared the sha256sums against the primary volume; they matched
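
A sketch of the commands behind the steps above; all volume names, hostnames and addresses are placeholders, with the secondary host reached over IPv4 as in the test.

# Step 0: on the primary cluster, shared storage for geo-rep meta operations
gluster volume set all cluster.enable-shared-storage enable

# Steps 4-6: create the session against the secondary host's IP, point it at
# the meta volume, then start it
gluster volume geo-replication primaryvol 192.0.2.20::secondaryvol create push-pem
gluster volume geo-replication primaryvol 192.0.2.20::secondaryvol config use_meta_volume true
gluster volume geo-replication primaryvol 192.0.2.20::secondaryvol start

# Step 7: watch the sync progress
gluster volume geo-replication primaryvol 192.0.2.20::secondaryvol status detail

# Step 8: mount both volumes and compare checksums
mount -t glusterfs primary1.example.com:/primaryvol /mnt/primary
mount -t glusterfs 192.0.2.20:/secondaryvol /mnt/secondary
(cd /mnt/primary   && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/primary.sum
(cd /mnt/secondary && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/secondary.sum
diff /tmp/primary.sum /tmp/secondary.sum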

Comment 21 errata-xmlrpc 2020-12-17 04:51:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

