Description of problem:
-----------------------
A geo-replication session was established with IPv6 hostnames, but when the geo-rep session is started, it never actually starts.

Version-Release number of selected component (if applicable):
---------------------------------------------------------------
RHGS 3.5.2 (glusterfs-6.0-37.1.el8rhgs)
glusterfs-geo-replication-6.0-37.1.el8rhgs

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Establish a geo-rep session from master to slave, where both master and slave use IPv6 only (no IPv4). Example commands are sketched under Additional info below.
2. Start the session.

Actual results:
---------------
The session fails to start; the geo-rep session remains in the Created state.

Expected results:
-----------------
The session should start and sync from master to slave.

Additional info:
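For reference, a minimal sketch of the standard gluster geo-replication CLI used to reproduce this; the volume names (mastervol, slavevol) and the slave host are placeholders, not the exact ones from this setup:

    # gluster system:: execute gsec_create
    # gluster volume geo-replication mastervol <ipv6-slave-host>::slavevol create push-pem
    # gluster volume geo-replication mastervol <ipv6-slave-host>::slavevol start
    # gluster volume geo-replication mastervol <ipv6-slave-host>::slavevol status

On the affected build, the status output keeps showing Created after the start command.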
Created attachment 1700689 [details]
tar of /var/log/glusterfs/geo-replication from the master node

Archived content of /var/log/glusterfs/geo-replication is attached.
The issue has been RCA'ed and an upstream patch [1] is posted.

[1] https://review.gluster.org/#/c/glusterfs/+/24706/
This issue is not a blocker for RHGS 3.5.2-async, as initially proposed: it only happens with direct use of IPv6 addresses at the slave site. I have repeated the testing with an IPv6 FQDN and the issue is gone (see the sketch below).
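For illustration only (the address and hostname below are documentation-range placeholders, not the values from this setup): the difference is purely in how the slave is referenced when creating the session. If DNS is not available, one quick way to get an IPv6-resolvable name is a hosts-file entry:

    # echo "2001:db8::10  slave-v6.example.com" >> /etc/hosts
    # gluster volume geo-replication mastervol slave-v6.example.com::slavevol create push-pem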
Tested with glusterfs-6.0-46.el8rhgs and found that files are synced to the secondary volume when the geo-rep session is configured with IPv4 addresses.

Detailed information about the test (a rough sketch of the corresponding commands follows this list):
0. Created the primary cluster with FQDNs and enabled cluster.enable-shared-storage, which creates the gluster_shared_storage volume used for meta operations.
1. Created the source/primary volume with FQDNs on the primary cluster.
2. Created the destination/secondary cluster and volume with IPv4.
3. Mounted the primary volume on the primary cluster and created a few files on the FUSE-mounted volume.
4. Created the geo-rep session from the primary to the secondary volume, using IPs for the secondary host.
5. Configured the geo-rep session to use the meta volume.
6. Started the session.
7. Followed the geo-rep status to make sure the sync completed.
8. Mounted the volume on the secondary cluster and compared sha256sums against the primary volume; they matched.
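A rough sketch of the commands behind steps 0 and 4-8, keyed to the numbers above; the volume names, secondary host, and mount paths are placeholders, not the exact ones used during verification:

    0. # gluster volume set all cluster.enable-shared-storage enable
    4. # gluster volume geo-replication primaryvol secondary-host::secondaryvol create push-pem
    5. # gluster volume geo-replication primaryvol secondary-host::secondaryvol config use_meta_volume true
    6. # gluster volume geo-replication primaryvol secondary-host::secondaryvol start
    7. # gluster volume geo-replication primaryvol secondary-host::secondaryvol status
    8. # sha256sum /mnt/primary/<file> ; sha256sum /mnt/secondary/<file>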
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603