Bug 1624708 - RHHI: add gluster storage domain over IPv6
Summary: RHHI: add gluster storage domain over IPv6
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard: TestOnly
Depends On: 1618669 1664590 1676886
Blocks: RHEV_IPv6 1662394 RHHI-V-1-6-Engineering-RFE-BZs 1680596
 
Reported: 2018-09-03 06:36 UTC by Dan Kenigsberg
Modified: 2020-01-13 11:01 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
IPv6 support is now available on Red Hat Hyperconverged Infrastructure for Virtualization 1.6. Support is provided for IPv6-only environments, including gateways and DNS providers. Refer to documentation for configuration details.
Clone Of:
Clones: 1680596
Environment:
Last Closed: 2019-05-09 06:09:23 UTC
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2019:1121 0 None None None 2019-05-09 06:09:43 UTC

Description Dan Kenigsberg 2018-09-03 06:36:54 UTC
Please check whether we can add a gluster storage domain over IPv6, and use it for VM storage in the latest version of RHHI.

Comment 1 Sahina Bose 2018-09-10 11:57:36 UTC
Does IPv6 work with RHGS 3.4 out of the box? Are there any configuration changes needed?

Comment 2 Sahina Bose 2018-10-08 08:08:41 UTC
As per Amar, no changes needed. Deployment of RHHI over IPv6 needs to be tested.

Comment 3 SATHEESARAN 2018-10-08 11:14:15 UTC
This QE validation should be targeted for RHV 4.2.8, as the RHHI QE team is occupied with bug verification and functional regression for the upcoming RHHI 2.0 GA with RHV 4.2.

Providing qa_ack for release_milestone for RHV 4.2.8

Comment 4 SATHEESARAN 2018-10-08 11:14:52 UTC
(In reply to Sahina Bose from comment #2)
> As per Amar, no changes needed. Deployment of RHHI over IPv6 needs to be
> tested

As no further info is required, removing the needinfo on Amar.

Comment 5 Gobinda Das 2018-11-02 11:21:01 UTC
1. Tried to test the RHHI installation via gdeploy and hit an issue with gdeploy.

An IPv6 address contains ":" between each group of four hex digits, and the gdeploy conf file also uses ":" as the delimiter in host-specific section headers of the form [lv:host]. If an IPv6 address is given as the host, gdeploy cannot parse the header correctly because it splits on ":".

ex: [lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work

2. Tried an ansible-based deployment as well and hit an issue with gluster peer probe: the peer probe fails over IPv6.

[root@headwig hc-ansible-deployment]# gluster peer probe 2620:52:0:4628:1142:d118:4a2b:94d6
peer probe: failed: Probe returned with Transport endpoint is not connected

I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and gdeploy-2.0.2-27.el7rhgs.noarch
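
To illustrate the ":" collision from (1) above, here is a minimal, hypothetical shell sketch (not gdeploy's actual parser): splitting the section body on every ":" fragments an IPv6 host, while splitting only at the first ":" keeps the address intact.

body='lv:2620:52:0:4628:1142:d118:4a2b:94d6'

# A naive split on every ':' sees 9 fields, so the host is mangled:
echo "$body" | awk -F: '{print NF " fields"}'

# Splitting only at the first ':' recovers section and host cleanly:
echo "section=${body%%:*}"   # -> section=lv
echo "host=${body#*:}"       # -> host=2620:52:0:4628:1142:d118:4a2b:94d6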

Comment 6 Sahina Bose 2018-11-05 09:06:42 UTC
Amar, who can look at the RHGS issue with IPv6? Should this have worked with the downstream version mentioned above?

Comment 7 Gobinda Das 2018-12-03 05:01:57 UTC
Any update on this?

Comment 8 Yaniv Kaul 2018-12-23 07:37:33 UTC
(In reply to Gobinda Das from comment #5)
> 1. Tried to test the RHHI installation via gdeploy and hit an issue with
> gdeploy.
> 
> An IPv6 address contains ":" between each group of four hex digits, and
> the gdeploy conf file also uses ":" as the delimiter in host-specific
> section headers of the form [lv:host]. If an IPv6 address is given as
> the host, gdeploy cannot parse the header correctly because it splits
> on ":".
> 
> ex: [lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work
> 
> 2. Tried an ansible-based deployment as well and hit an issue with
> gluster peer probe: the peer probe fails over IPv6.
> 
> [root@headwig hc-ansible-deployment]# gluster peer probe
> 2620:52:0:4628:1142:d118:4a2b:94d6
> peer probe: failed: Probe returned with Transport endpoint is not connected

Can you please attach logs?

> 
> I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and
> gdeploy-2.0.2-27.el7rhgs.noarch

Comment 9 Gobinda Das 2018-12-24 05:06:02 UTC
Hi Yaniv,
Here is the glusterd log

[2018-12-21 08:12:20.340784] I [MSGID: 106487] [glusterd-handler.c:1254:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 2620:52:0:4628:8561:98aa:f93e:8813 24007
[2018-12-21 08:12:20.342625] I [MSGID: 106128] [glusterd-handler.c:3679:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 2620:52:0:4628:8561:98aa:f93e:8813 (24007)
[2018-12-21 08:12:20.390089] W [MSGID: 106061] [glusterd-handler.c:3465:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-12-21 08:12:20.390165] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-12-21 08:12:20.390329] W [MSGID: 101002] [options.c:972:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-12-21 08:12:20.390477] I [dict.c:1168:data_to_uint16] (-->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(+0x6fca) [0x7fed1c8a0fca] -->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xe1) [0x7fed1c8a8891] -->/lib64/libglusterfs.so.0(data_to_uint16+0x111) [0x7fed28d44eb1] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument]
[2018-12-21 08:12:20.390530] E [MSGID: 101075] [common-utils.c:508:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Address family for hostname not supported)
[2018-12-21 08:12:20.390567] E [name.c:258:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host 2620:52:0:4628:8561:98aa:f93e:8813
[2018-12-21 08:12:20.392908] I [MSGID: 106498] [glusterd-handler.c:3608:glusterd_friend_add] 0-management: connect returned 0
[2018-12-21 08:12:20.393040] I [MSGID: 106004] [glusterd-handler.c:6413:__glusterd_peer_rpc_notify] 0-management: Peer <2620:52:0:4628:8561:98aa:f93e:8813> (<00000000-0000-0000-0000-000000000000>), in state <Establishing Connection>, has disconnected from glusterd.
[2018-12-21 08:12:20.394240] I [MSGID: 106599] [glusterd-nfs-svc.c:161:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed
[2018-12-21 08:12:20.395000] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped
[2018-12-21 08:12:20.395059] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped
[2018-12-21 08:12:20.395098] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2018-12-21 08:12:20.395134] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped
[2018-12-21 08:12:20.395185] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2018-12-21 08:12:20.395215] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped
[2018-12-21 08:12:20.395253] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2018-12-21 08:12:20.395282] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped
[2018-12-21 08:12:20.395523] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
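
For context, the getaddrinfo failure above ("Address family for hostname not supported") is what one would expect when glusterd is not set up for inet6. As a sketch of the usual documented change (an assumption here, not confirmed as the fix applied in this bug), the inet6 address family is enabled in /etc/glusterfs/glusterd.vol on every node and glusterd is restarted:

# /etc/glusterfs/glusterd.vol -- add inside the 'volume management' block:
#     option transport.address-family inet6
systemctl restart glusterd
gluster peer probe 2620:52:0:4628:8561:98aa:f93e:8813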

Comment 10 Gobinda Das 2018-12-28 07:13:28 UTC
I got a private build from Milind Changire (http://brewweb.devel.redhat.com/brew/taskinfo?taskID=19612683) and verified that gluster peer probe over IPv6 works with this build.
I am now testing RHHI deployment with the gluster-ansible role.
Gdeploy does not currently support IPv6; raised a BZ for this (https://bugzilla.redhat.com/show_bug.cgi?id=1662394).
I will update with the result of the RHHI deployment over IPv6 soon.

Comment 11 Gobinda Das 2019-01-02 07:36:22 UTC
Still facing an issue on the gluster side. After the volume is created successfully, the bricks do not come online.

Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test                          N/A       N/A        N       N/A  
 
Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: test1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test1/test1                   N/A       N/A        N       N/A  
 
Task Status of Volume test1
------------------------------------------------------------------------------
There are no active volume tasks
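
A sketch of first diagnostics one might run for bricks that stay offline (assuming default log locations; the brick log file name is derived from the brick path with "/" replaced by "-"):

# glusterd and brick logs usually show why the brick process failed to start:
less /var/log/glusterfs/glusterd.log
less /var/log/glusterfs/bricks/gluster_bricks-test.log

# After correcting the transport settings, restart the brick processes:
gluster volume start test force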

Comment 12 Dan Kenigsberg 2019-02-07 13:19:17 UTC
Rejy: a build would be available Feb 16.

Comment 13 SATHEESARAN 2019-02-12 09:20:38 UTC
Cleaning up the QA whiteboard, as this feature will be validated with RHV 4.3.
Also included this bug in RHHI-V 1.6.

Comment 14 SATHEESARAN 2019-03-21 11:04:06 UTC
Verified with RHVH 4.3 and RHGS 3.4.4 (glusterfs-3.12.2-47.el7rhgs).
1. After creating the Hosted Engine setup over pure IPv6, a glusterfs storage domain can be created with an IPv6 gluster server.

Comment 16 errata-xmlrpc 2019-05-09 06:09:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1121

