Bug 1855966
| Summary: | [IPV6] Geo-replication session fails to start sync with IPV6 hostnames | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
| Component: | geo-replication | Assignee: | Sunny Kumar <sunkumar> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.5 | CC: | csaba, dwalveka, khiremat, puebele, rcyriac, rhs-bugs, rkothiya, sabose, sacharya, sajmoham, sheggodu, storage-qa-internal, sunkumar |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.5.z Batch Update 3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0-41 | Doc Type: | Bug Fix |
| Doc Text: | Previously, when IPv6 addresses were used during brick creation, geo-replication failed to start, leaving the session faulty. With this update, geo-replication correctly parses the full brick path and hostname from the volfile. | | |
| Story Points: | --- | | |
| Clone Of: | 1855965 | Environment: | rhhiv |
| Last Closed: | 2020-12-17 04:51:53 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1855965 | | |
| Attachments: | 1700689: tar of /var/log/glusterfs/geo-replication from master node | | |
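For context on the Doc Text above: parsing a `host:/brick` string by splitting on the first `:` is ambiguous when the host is a raw IPv6 address, since the address itself contains colons. The Python sketch below is illustrative only, not the actual glusterfs code or patch; the function names and sample bricks are hypothetical. It shows why anchoring the split on `:/` (a brick path is always absolute) recovers the full hostname:

```python
# Hypothetical illustration of the parsing problem this bug describes;
# not the actual glusterfs geo-replication code.

def parse_brick_naive(brick):
    # Split on the first ':' -- fine for hostnames and IPv4, but it
    # truncates a raw IPv6 address such as 'fd00::10:/bricks/b1'.
    host, _, path = brick.partition(":")
    return host, path

def parse_brick_ipv6_safe(brick):
    # Anchor the split on the last ':/' instead: everything before it
    # is the host, everything after it is the absolute brick path.
    host, sep, path = brick.rpartition(":/")
    if not sep:
        raise ValueError("not a host:/path brick: %r" % brick)
    return host, "/" + path

print(parse_brick_naive("fd00::10:/bricks/b1"))      # ('fd00', ':10:/bricks/b1') -- wrong
print(parse_brick_ipv6_safe("fd00::10:/bricks/b1"))  # ('fd00::10', '/bricks/b1')
print(parse_brick_ipv6_safe("node1.example.com:/bricks/b1"))  # unchanged for FQDNs
```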
Description by SATHEESARAN, 2020-07-11 11:42:41 UTC

Created attachment 1700689 [details]
tar of /var/log/glusterfs/geo-replication from master node

The archived content of /var/log/glusterfs/geo-replication is attached.

The issue has been root-caused and an upstream patch [1] has been posted.

[1] https://review.gluster.org/#/c/glusterfs/+/24706/

This issue is not a blocker for RHGS 3.5.2-async, as initially proposed. It occurs only with direct use of IPv6 addresses at the slave site; I repeated the testing with an IPv6 FQDN and the issue is gone.

Tested with glusterfs-6.0-46.el8rhgs and found that files are synced to the secondary volume when the geo-rep session is configured with IPv4 addresses. Here is the test in more detail:

0. Created the primary cluster with FQDNs and enabled cluster.enable-shared-storage, which creates the gluster-shared-storage volume for meta operations.
1. Created the source/primary volume with FQDNs on the primary cluster.
2. Created the destination/secondary cluster and volume with IPv4.
3. Mounted the primary volume on the primary cluster and created a few files on the FUSE-mounted volume.
4. Created the geo-rep session from the primary to the secondary volume, using IPs for the secondary host.
5. Configured the geo-rep session to make use of the meta volume.
6. Started the session.
7. Followed the geo-rep status to make sure the sync completed.
8. Mounted the volume on the secondary cluster and compared sha256sums against the primary volume; they matched (see the sketch at the end of this report).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
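As referenced in step 8 of the verification above, the checksum comparison amounts to hashing every file under both mounts and comparing the results. A minimal sketch, assuming the primary and secondary volumes are FUSE-mounted at the hypothetical paths /mnt/primary and /mnt/secondary:

```python
#!/usr/bin/env python3
# Sketch of the sha256 comparison from step 8 of the verification;
# the mount points are hypothetical placeholders.
import hashlib
import os

def sha256_tree(root):
    """Map each file path (relative to root) to its sha256 hex digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                # Hash in 1 MiB chunks to avoid loading large files at once.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[os.path.relpath(full, root)] = h.hexdigest()
    return digests

primary = sha256_tree("/mnt/primary")
secondary = sha256_tree("/mnt/secondary")
print("volumes in sync" if primary == secondary else "MISMATCH")
```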