Bug 1695436

Summary: geo-rep session creation fails with IPV6
Product: [Community] GlusterFS Reporter: Aravinda VK <avishwan>
Component: geo-replication Assignee: Aravinda VK <avishwan>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: high Docs Contact:
Priority: high    
Version: 6CC: amukherj, avishwan, bugs, csaba, khiremat, pasik, rhs-bugs, sankarshan, sasundar, storage-qa-internal
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-6.1 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1688833 Environment:
Last Closed: 2019-04-17 13:59:15 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1688833    
Bug Blocks: 1688231, 1688239    

Description Aravinda VK 2019-04-03 05:56:54 UTC
+++ This bug was initially created as a clone of Bug #1688833 +++

+++ This bug was initially created as a clone of Bug #1688231 +++

Description of problem:
-----------------------
This issue is seen with the RHHI-V use case: VM images are stored in Gluster volumes and geo-replicated to the secondary site for disaster recovery (DR).

When IPv6 is used, the additional mount option --xlator-option=transport.address-family=inet6 is required. But when geo-rep checks the slave space with gverify.sh, this mount option is not passed, so mounting either the master or the slave volume fails.
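For reference, a manual mount of the slave volume over IPv6 needs the same option; a minimal sketch (hostnames and log path are taken from the log snip below, the temporary mount point is a placeholder):

glusterfs --xlator-option=transport.address-family=inet6 \
          --volfile-server slave.lab.eng.blr.redhat.com --volfile-id slave \
          -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.XXXXXX

Without transport.address-family=inet6 the client resolves the volfile server over IPv4 (AF_INET), which fails on IPv6-only hosts.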

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.4.4 ( glusterfs-3.12.2-47 )

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create geo-rep session from the master to slave

Actual results:
--------------
Creation of geo-rep session fails at gverify.sh

Expected results:
-----------------
Creation of geo-rep session should be successful

Additional info:

--- Additional comment from SATHEESARAN on 2019-03-13 11:49:02 UTC ---

[root@ ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2620:52:0:4624:5054:ff:fee9:57f8 master.lab.eng.blr.redhat.com 
2620:52:0:4624:5054:ff:fe6d:d816 slave.lab.eng.blr.redhat.com 

[root@ ~]# gluster volume info
 
Volume Name: master
Type: Distribute
Volume ID: 9cf0224f-d827-4028-8a45-37f7bfaf1c78
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: master.lab.eng.blr.redhat.com:/gluster/brick1/master
Options Reconfigured:
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
user.cifs: off
features.shard: on
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet6
nfs.disable: on

[root@localhost ~]# gluster volume geo-replication master slave.lab.eng.blr.redhat.com::slave create push-pem
Unable to mount and fetch slave volume details. Please check the log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log
geo-replication command failed


Snip from gverify-slavemnt.log
<snip>
[2019-03-13 11:46:28.746494] I [MSGID: 100030] [glusterfsd.c:2646:main] 0-glusterfs: Started running glusterfs version 3.12.2 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server slave.lab.eng.blr.redhat.com --volfile-id slave -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.y1TCoY)
[2019-03-13 11:46:28.750595] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2019-03-13 11:46:28.753702] E [MSGID: 101075] [common-utils.c:482:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
[2019-03-13 11:46:28.753725] E [name.c:267:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753953] I [glusterfsd-mgmt.c:2337:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: slave.lab.eng.blr.redhat.com
[2019-03-13 11:46:28.753980] I [glusterfsd-mgmt.c:2358:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-03-13 11:46:28.753998] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-03-13 11:46:28.754073] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-03-13 11:46:28.754154] W [glusterfsd.c:1462:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xab) [0x7fc39d379bab] -->glusterfs(+0x11fcd) [0x56427db95fcd] -->glusterfs(cleanup_and_exit+0x6b) [0x56427db8eb2b] ) 0-: received signum (1), shutting down
[2019-03-13 11:46:28.754197] I [fuse-bridge.c:6611:fini] 0-fuse: Unmounting '/tmp/gverify.sh.y1TCoY'.
[2019-03-13 11:46:28.760213] I [fuse-bridge.c:6616:fini] 0-fuse: Closing fuse connection to '/tmp/gverify.sh.y1TCoY'.
</snip>
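The "getaddrinfo failed (family:2)" error above is the key: family 2 is AF_INET, i.e. the gverify.sh mount tried to resolve slave.lab.eng.blr.redhat.com over IPv4 because the inet6 address family was never passed down. A minimal sketch of the shape of the fix (assumed, with placeholder variable names; the actual change is in the reviews referenced in the comments below):

# sketch only: if the volume is configured for inet6, append the matching
# xlator option to the temporary mounts done by gverify.sh
inet6_opt=""
if gluster volume info "$master_vol" | grep -q "transport.address-family: inet6"; then
    inet6_opt="--xlator-option=transport.address-family=inet6"
fi
glusterfs $inet6_opt --volfile-server "$slave_host" --volfile-id "$slave_vol" \
          -l "$log_file" "$mount_point"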

--- Additional comment from Worker Ant on 2019-03-14 14:51:56 UTC ---

REVIEW: https://review.gluster.org/22363 (WIP geo-rep: IPv6 support) posted (#1) for review on master by Aravinda VK

--- Additional comment from Worker Ant on 2019-03-15 14:59:56 UTC ---

REVIEW: https://review.gluster.org/22363 (geo-rep: IPv6 support) merged (#3) on master by Aravinda VK

Comment 1 Worker Ant 2019-04-03 06:30:08 UTC
REVIEW: https://review.gluster.org/22488 (geo-rep: IPv6 support) posted (#1) for review on release-6 by Aravinda VK

Comment 2 Worker Ant 2019-04-17 13:59:15 UTC
REVIEW: https://review.gluster.org/22488 (geo-rep: IPv6 support) merged (#3) on release-6 by Shyamsundar Ranganathan

Comment 3 Shyamsundar 2019-04-22 13:33:13 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.1, please open a new bug report.

glusterfs-6.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html
[2] https://www.gluster.org/pipermail/gluster-users/