Please check if we can add a Gluster storage domain over IPv6, and use it for VM storage in the latest version of RHHI.
Does IPv6 work with RHGS 3.4 out of the box? Are there any configuration changes needed?
As per Amar, no changes are needed. Deployment of RHHI over IPv6 needs to be tested.
This QE validation should be targeted for RHV 4.2.8, as the RHHI QE team is occupied with bug verification and functional regression for the upcoming RHHI 2.0 GA with RHV 4.2. Providing qa_ack for the release milestone RHV 4.2.8.
(In reply to Sahina Bose from comment #2)
> As per Amar, no changes needed. Deployment of RHHI over IPv6 needs to be
> tested

As no further info is required, removing the needinfo on Amar.
1. Tried to test RHHI installation via gdeploy and faced an issue with gdeploy.

An IPv6 address has ":" between each group of four hex characters, and the gdeploy conf file also uses ":" as the separator in host-specific section headers, like:

[lv:host]

If we specify an IPv6 address as the host, gdeploy is not able to parse the host properly because it splits on ":", e.g.:

[lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work

2. Tried the ansible-based deployment as well and faced an issue with gluster peer probe; it is failing with IPv6:

[root@headwig hc-ansible-deployment]# gluster peer probe 2620:52:0:4628:1142:d118:4a2b:94d6
peer probe: failed: Probe returned with Transport endpoint is not connected

I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and gdeploy-2.0.2-27.el7rhgs.noarch.
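The section-header conflict can be reproduced with a plain shell split. This is only an illustrative sketch, assuming (as the `[lv:host]` syntax suggests) that the header is split on every ":"; the variable names are mine, not gdeploy's:

```shell
# gdeploy-style section header with an IPv6 address in the host slot
section='lv:2620:52:0:4628:1142:d118:4a2b:94d6'

# Splitting the header on every ':' (as a naive parser would) truncates
# the IPv6 host at its first colon.
module=${section%%:*}    # text before the first ':'  -> lv
rest=${section#*:}       # -> 2620:52:0:4628:1142:d118:4a2b:94d6
host=${rest%%:*}         # first field of the remainder -> 2620
echo "$module $host"     # prints: lv 2620
```

The IPv6 host comes out as just "2620", which matches the parse failure reported above.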
Amar, who can look at the RHGS issue with IPv6? Should this have worked with the downstream version mentioned above?
Any update on this?
(In reply to Gobinda Das from comment #5)
> 1- Tried to test rhhi installation via gdeploy and faced some issue with
> gdeploy.
>
> IPV6 has ":" in between each 4 characters and gdeploy conf file also has
> standard for specific host separated by ":" like:
>
> [lv:host], in host if we specify ipv6 then gdeploy is not able to parse host
> properly as it's spiting with ":"
>
> ex: [lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work
>
> 2- Tried with ansible based deployment as well and faced issue with gluster
> peer probe.
> gluster peer probe is failing with ipv6
>
> [root@headwig hc-ansible-deployment]# gluster peer probe
> 2620:52:0:4628:1142:d118:4a2b:94d6
> peer probe: failed: Probe returned with Transport endpoint is not connected

Can you please attach logs?

> I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and
> gdeploy-2.0.2-27.el7rhgs.noarch
Hi Yaniv,

Here is the glusterd log:

[2018-12-21 08:12:20.340784] I [MSGID: 106487] [glusterd-handler.c:1254:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 2620:52:0:4628:8561:98aa:f93e:8813 24007
[2018-12-21 08:12:20.342625] I [MSGID: 106128] [glusterd-handler.c:3679:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 2620:52:0:4628:8561:98aa:f93e:8813 (24007)
[2018-12-21 08:12:20.390089] W [MSGID: 106061] [glusterd-handler.c:3465:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-12-21 08:12:20.390165] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-12-21 08:12:20.390329] W [MSGID: 101002] [options.c:972:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-12-21 08:12:20.390477] I [dict.c:1168:data_to_uint16] (-->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(+0x6fca) [0x7fed1c8a0fca] -->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xe1) [0x7fed1c8a8891] -->/lib64/libglusterfs.so.0(data_to_uint16+0x111) [0x7fed28d44eb1] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument]
[2018-12-21 08:12:20.390530] E [MSGID: 101075] [common-utils.c:508:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Address family for hostname not supported)
[2018-12-21 08:12:20.390567] E [name.c:258:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host 2620:52:0:4628:8561:98aa:f93e:8813
[2018-12-21 08:12:20.392908] I [MSGID: 106498] [glusterd-handler.c:3608:glusterd_friend_add] 0-management: connect returned 0
[2018-12-21 08:12:20.393040] I [MSGID: 106004] [glusterd-handler.c:6413:__glusterd_peer_rpc_notify] 0-management: Peer <2620:52:0:4628:8561:98aa:f93e:8813> (<00000000-0000-0000-0000-000000000000>), in state <Establishing Connection>, has disconnected from glusterd.
[2018-12-21 08:12:20.394240] I [MSGID: 106599] [glusterd-nfs-svc.c:161:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed
[2018-12-21 08:12:20.395000] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped
[2018-12-21 08:12:20.395059] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped
[2018-12-21 08:12:20.395098] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2018-12-21 08:12:20.395134] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped
[2018-12-21 08:12:20.395185] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2018-12-21 08:12:20.395215] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped
[2018-12-21 08:12:20.395253] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2018-12-21 08:12:20.395282] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped
[2018-12-21 08:12:20.395523] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
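The `getaddrinfo failed (Address family for hostname not supported)` line suggests glusterd was still resolving hosts with the IPv4 address family. Gluster selects the family via the `transport.address-family` option in /etc/glusterfs/glusterd.vol; a minimal sketch of the relevant stanza is below (restart glusterd after editing; the exact set of options in this file varies by build, so treat this as an illustration rather than a drop-in file):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.address-family inet6
end-volume
```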
I got a private build from Milind Changire (http://brewweb.devel.redhat.com/brew/taskinfo?taskID=19612683) and verified that gluster peer probe with IPv6 works with this build. I am now testing RHHI deployment with the gluster ansible role. gdeploy does not currently support IPv6; raised a BZ for this (https://bugzilla.redhat.com/show_bug.cgi?id=1662394). I will update with the RHHI deployment over IPv6 results soon.
Still facing an issue from the gluster side. After the volume is created successfully, the bricks are not coming online.

Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test                          N/A       N/A        N       N/A

Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: test1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test1/test1                   N/A       N/A        N       N/A

Task Status of Volume test1
------------------------------------------------------------------------------
There are no active volume tasks
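For reference, the brick spec in the status output above is of the `host:/path` form; with an IPv6 host the address itself contains colons, so any consumer of that string has to anchor the split on the `:/` boundary rather than the first `:`. A small illustrative shell sketch (gluster handles this internally; the variable names here are mine):

```shell
# Brick spec as printed by 'gluster volume status', host:/path form
brick='2620:52:0:4628:33a6:4655:b142:33cd:/gluster_bricks/test'

# Anchor the split on ':/' so the IPv6 host survives intact
host=${brick%:/*}     # strip the trailing ':/path' -> the full address
path=/${brick#*:/}    # strip up to the ':/'        -> /gluster_bricks/test
echo "$host"          # prints: 2620:52:0:4628:33a6:4655:b142:33cd
echo "$path"          # prints: /gluster_bricks/test
```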
Rejy: a build should be available on Feb 16.
Cleaning up the QA whiteboard, as this feature will be validated with RHV 4.3. Also included this bug for RHHI-V 1.6.
Verified with RHVH 4.3 and RHGS 3.4.4 (glusterfs-3.12.2-47.el7rhgs).

1. After creating the HE setup over pure IPv6, a glusterfs storage domain can be created with an IPv6 gluster server.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1121