Bug 1680596 - [Doc RFE] Document how to set up a cluster (server) using IPv6 networking exclusively
Summary: [Doc RFE] Document how to set up a cluster (server) using IPv6 networking exc...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Maintaining_RHHI
Version: rhhiv-1.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6
Assignee: Laura Bailey
QA Contact: SATHEESARAN
URL:
Whiteboard: TestOnly
Depends On: 1618669 1624708 1664590 1676886
Blocks: RHEV_IPv6 1662394 RHHI-V-1-6-Documentation-RFE-BZs
 
Reported: 2019-02-25 11:52 UTC by Anjana Suparna Sriram
Modified: 2019-05-20 04:22 UTC (History)
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1624708
Environment:
Last Closed: 2019-05-20 04:22:05 UTC
Embargoed:



Description Anjana Suparna Sriram 2019-02-25 11:52:22 UTC
+++ This bug was initially created as a clone of Bug #1624708 +++

Please check if we can add Gluster storage over IPv6, and use it for VM storage in the latest version of RHHI.

--- Additional comment from Sahina Bose on 2018-09-10 11:57:36 UTC ---

Does IPv6 work with RHGS 3.4 out of the box? Are there any configuration changes needed?

--- Additional comment from Sahina Bose on 2018-10-08 08:08:41 UTC ---

As per Amar, no changes are needed. Deployment of RHHI over IPv6 needs to be tested.

--- Additional comment from SATHEESARAN on 2018-10-08 11:14:15 UTC ---

This QE validation should be targeted for RHV 4.2.8, as the RHHI QE team is occupied with bug verification and functional regression for the upcoming RHHI 2.0 GA with RHV 4.2.

Providing qa_ack for release_milestone for RHV 4.2.8

--- Additional comment from SATHEESARAN on 2018-10-08 11:14:52 UTC ---

(In reply to Sahina Bose from comment #2)
> As per Amar, no changes needed. Deployment of RHHI over IPv6 needs to be
> tested

As no further info is required, removing the needinfo on Amar.

--- Additional comment from Gobinda Das on 2018-11-02 11:21:01 UTC ---

1. Tried to test RHHI installation via gdeploy and hit an issue with gdeploy.

An IPv6 address has ":" between each group of four hex characters, and the gdeploy conf file also uses ":" as the separator in host-specific section headers, like:

[lv:host]; if we specify an IPv6 address as the host, gdeploy cannot parse the host correctly because it splits the header on ":".

ex: [lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work

2. Tried the Ansible-based deployment as well and hit an issue with gluster peer probe.
gluster peer probe fails with an IPv6 address:

[root@headwig hc-ansible-deployment]# gluster peer probe 2620:52:0:4628:1142:d118:4a2b:94d6
peer probe: failed: Probe returned with Transport endpoint is not connected

I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and gdeploy-2.0.2-27.el7rhgs.noarch
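
Two hedged sketches for the issues above, not verified against these exact builds:

For the gdeploy section-header parsing, a common way to sidestep the ":" ambiguity is to refer to each host by an FQDN that resolves to its IPv6 address instead of using the literal address, so the header keeps a single ":" (the hostname below is hypothetical and assumes name resolution is already in place):

[lv1:host1.example.com]
# remaining lv options unchanged; only the host part of the section header changes

For the peer probe failure, glusterd generally needs the inet6 address family enabled on every node before probing over IPv6. The option lives in /etc/glusterfs/glusterd.vol; the excerpt below is a sketch of the stock file, with only the address-family line as the addition:

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    # enable IPv6 listeners and name resolution (add/uncomment on all nodes)
    option transport.address-family inet6
    ...
end-volume

# then restart glusterd on each node
systemctl restart glusterd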

--- Additional comment from Sahina Bose on 2018-11-05 09:06:42 UTC ---

Amar, who can look at the RHGS issue with IPv6? Should this have worked with the downstream version mentioned above?

--- Additional comment from Gobinda Das on 2018-12-03 05:01:57 UTC ---

Any update on this?

--- Additional comment from Yaniv Kaul on 2018-12-23 07:37:33 UTC ---

(In reply to Gobinda Das from comment #5)
> 1- Tried to test rhhi installation via gdeploy and faced some issue with
> gdeploy.
> 
> IPV6 has ":" in between each 4 characters and gdeploy conf file also has
> standard for specific host separated by ":" like:
> 
> [lv:host], in host if we specify ipv6 then gdeploy is not able to parse host
> properly as it's spiting with ":"
> 
> ex: [lv:2620:52:0:4628:1142:d118:4a2b:94d6] // This does not work
> 
> 2- Tried with ansible based deployment as well and faced issue with gluster
> peer probe.
> gluster peer probe is failing with ipv6
> 
> [root@headwig hc-ansible-deployment]# gluster peer probe
> 2620:52:0:4628:1142:d118:4a2b:94d6
> peer probe: failed: Probe returned with Transport endpoint is not connected

Can you please attach logs?

> 
> I am using glusterfs-server-3.12.2-18.el7rhgs.x86_64 and
> gdeploy-2.0.2-27.el7rhgs.noarch

--- Additional comment from Gobinda Das on 2018-12-24 05:06:02 UTC ---

Hi Yaniv,
Here is the glusterd log

[2018-12-21 08:12:20.340784] I [MSGID: 106487] [glusterd-handler.c:1254:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 2620:52:0:4628:8561:98aa:f93e:8813 24007
[2018-12-21 08:12:20.342625] I [MSGID: 106128] [glusterd-handler.c:3679:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 2620:52:0:4628:8561:98aa:f93e:8813 (24007)
[2018-12-21 08:12:20.390089] W [MSGID: 106061] [glusterd-handler.c:3465:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-12-21 08:12:20.390165] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-12-21 08:12:20.390329] W [MSGID: 101002] [options.c:972:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-12-21 08:12:20.390477] I [dict.c:1168:data_to_uint16] (-->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(+0x6fca) [0x7fed1c8a0fca] -->/usr/lib64/glusterfs/6dev/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xe1) [0x7fed1c8a8891] -->/lib64/libglusterfs.so.0(data_to_uint16+0x111) [0x7fed28d44eb1] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument]
[2018-12-21 08:12:20.390530] E [MSGID: 101075] [common-utils.c:508:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Address family for hostname not supported)
[2018-12-21 08:12:20.390567] E [name.c:258:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host 2620:52:0:4628:8561:98aa:f93e:8813
[2018-12-21 08:12:20.392908] I [MSGID: 106498] [glusterd-handler.c:3608:glusterd_friend_add] 0-management: connect returned 0
[2018-12-21 08:12:20.393040] I [MSGID: 106004] [glusterd-handler.c:6413:__glusterd_peer_rpc_notify] 0-management: Peer <2620:52:0:4628:8561:98aa:f93e:8813> (<00000000-0000-0000-0000-000000000000>), in state <Establishing Connection>, has disconnected from glusterd.
[2018-12-21 08:12:20.394240] I [MSGID: 106599] [glusterd-nfs-svc.c:161:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed
[2018-12-21 08:12:20.395000] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped
[2018-12-21 08:12:20.395059] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped
[2018-12-21 08:12:20.395098] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2018-12-21 08:12:20.395134] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped
[2018-12-21 08:12:20.395185] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2018-12-21 08:12:20.395215] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped
[2018-12-21 08:12:20.395253] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2018-12-21 08:12:20.395282] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped
[2018-12-21 08:12:20.395523] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
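
In the log above, gf_resolve_ip6 fails with "Address family for hostname not supported" and the probe target never resolves, which is consistent with glusterd still running with the default (IPv4) address family rather than inet6, as noted earlier. A quick check on each node, assuming the default config path, is:

# confirm the address-family option is present and not commented out
grep -n address-family /etc/glusterfs/glusterd.vol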

--- Additional comment from Gobinda Das on 2018-12-28 07:13:28 UTC ---

I got a private build from Milind Changire (http://brewweb.devel.redhat.com/brew/taskinfo?taskID=19612683) and verified that gluster peer probe over IPv6 works with this build.
I am now testing RHHI deployment with the gluster-ansible role.
Right now gdeploy does not support IPv6; raised a BZ for this (https://bugzilla.redhat.com/show_bug.cgi?id=1662394).
I will update with the RHHI-over-IPv6 deployment results soon.

--- Additional comment from Gobinda Das on 2019-01-02 07:36:22 UTC ---

Still facing an issue on the gluster side. After the volume is created successfully, the bricks do not come online.

Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test                          N/A       N/A        N       N/A  
 
Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: test1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 2620:52:0:4628:33a6:4655:b142:33cd:/g
luster_bricks/test1/test1                   N/A       N/A        N       N/A  
 
Task Status of Volume test1
------------------------------------------------------------------------------
There are no active volume tasks
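
For the bricks shown offline above, a hedged follow-up sketch (the volume name "test" is taken from the output above; commands are standard gluster CLI): after confirming the inet6 setting in /etc/glusterfs/glusterd.vol on every node, restart glusterd and force-start the volume so the brick processes are respawned with the new address family, then re-check status:

systemctl restart glusterd          # on each node
gluster volume start test force     # respawn brick processes
gluster volume status test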

--- Additional comment from Dan Kenigsberg on 2019-02-07 13:19:17 UTC ---

Rejy: a build would be available Feb 16.

--- Additional comment from SATHEESARAN on 2019-02-12 09:20:38 UTC ---

Cleaning up the QA whiteboard, as this feature will be validated with RHV 4.3.
Also included this bug for RHHI-V 1.6.

Comment 6 SATHEESARAN 2019-03-21 12:45:25 UTC
I have completed IPv6 testing with RHHI-V 1.6 and can provide information about the setup.
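
For reference, a minimal sketch of the name-resolution part of such a setup (addresses and hostnames below are hypothetical): every host needs an FQDN that resolves to its IPv6 address, either via DNS AAAA records or via /etc/hosts entries present on all hosts, and deployment then uses the FQDNs rather than IPv6 literals:

# /etc/hosts on every host (illustrative values)
2001:db8::11   host1.example.com   host1
2001:db8::12   host2.example.com   host2
2001:db8::13   host3.example.com   host3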

Comment 18 SATHEESARAN 2019-04-04 12:34:19 UTC
Verified with the doc links provided. IPv6-related caveats are well documented, and this feature is called out as a Tech Preview.

