Bug 1688217

Summary: [RFE][gluster-ansible] Need glusterd configuration via ansible roles for IPV6 cases
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: rhhi
Assignee: Prajith <pkesavap>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: high
Version: rhhiv-1.6
CC: dwalveka, godas, pasik, pkesavap, rhs-bugs, sabose, sasundar
Target Milestone: ---
Keywords: FutureFeature
Target Release: RHHI-V 1.8
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: gluster-ansible-features-1.0.5-2
Doc Type: Enhancement
Doc Text:
This enhancement adds support for IPv6 networking. With this release, gluster-ansible can configure IPv6 networking for Red Hat Gluster Storage.
Story Points: ---
Clone Of: 1688188
Environment:
Last Closed: 2020-08-04 14:50:55 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1688188    
Bug Blocks: 1721383, 1779976    

Description SATHEESARAN 2019-03-13 11:14:28 UTC
Description of problem:
-----------------------
To enable IPv6 with Gluster, the glusterd volume file needs to be edited to uncomment "option transport.address-family inet6", and glusterd needs to be restarted. This enables IPv6 support with the RHHI-V infrastructure.
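The manual change that the requested ansible role would automate can be sketched as below. This is a minimal, illustrative sketch: it works on a temporary copy of the volfile (on a real node the file is /etc/glusterfs/glusterd.vol), and the exact sed expression is an assumption, not taken from the gluster-ansible role.

```shell
# Sketch of the manual edit gluster-ansible is expected to automate.
# Operate on a throwaway copy; the real path is /etc/glusterfs/glusterd.vol.
volfile=$(mktemp)
cat > "$volfile" <<'EOF'
volume management
    type mgmt/glusterd
#   option transport.address-family inet6
end-volume
EOF

# Uncomment the inet6 option (assumed to ship commented out by default).
sed -i 's|^#[[:space:]]*option transport.address-family inet6|    option transport.address-family inet6|' "$volfile"

grep 'option transport.address-family inet6' "$volfile"

# On the real host, glusterd must then be restarted to pick up the change:
# systemctl restart glusterd
```

The role would wrap the same edit-and-restart sequence, so that deployments driven by IPv6 FQDNs get a working glusterd without manual intervention.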

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-roles-1.0.4-4

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Use IPv6 FQDNs for hostnames

Actual results:
---------------
glusterd is not configured for IPv6; the transport.address-family inet6 option remains commented out

Expected results:
-----------------
glusterd should be configured with the transport.address-family inet6 option enabled

Comment 3 SATHEESARAN 2019-07-10 05:51:16 UTC
The dependent bug is already ON_QA

Comment 4 SATHEESARAN 2019-07-10 05:51:54 UTC
Mistakenly moved it directly to VERIFIED

Comment 5 SATHEESARAN 2019-07-10 05:52:59 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 ( interim build - glusterfs-6.0-6 ) with ansible 2.8.1-1
with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

glusterd volfile has the required configuration as below:

# cat /etc/glusterfs/glusterd.vol 
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option transport.rdma.listen-port 24008
    option ping-timeout 0
    option event-threads 1
#   option lock-timer 180
    option transport.address-family inet6
#   option base-port 49152
    option max-port  60999
end-volume
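The verification above amounts to checking that the inet6 option is present and not commented out. That check can be scripted; the helper below is a hypothetical sketch (the function name and demo volfile are illustrative, and on a real node the path would be /etc/glusterfs/glusterd.vol):

```shell
# Hypothetical helper: report whether a glusterd volfile has IPv6 enabled,
# i.e. an uncommented "option transport.address-family inet6" line.
check_inet6() {
    # $1: path to the volfile (normally /etc/glusterfs/glusterd.vol)
    if grep -Eq '^[[:space:]]*option[[:space:]]+transport\.address-family[[:space:]]+inet6' "$1"; then
        echo "IPv6 enabled"
    else
        echo "IPv6 NOT enabled"
    fi
}

# Demo against a stripped-down copy of the verified volfile:
vol=$(mktemp)
cat > "$vol" <<'EOF'
volume management
    type mgmt/glusterd
    option transport.address-family inet6
end-volume
EOF
check_inet6 "$vol"   # prints "IPv6 enabled"
```

A commented line such as "#   option transport.address-family inet6" would not match the anchored pattern, so the check distinguishes the fixed state from the default one.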

Comment 9 Sahina Bose 2020-07-24 06:20:34 UTC
Prajith, can you please review?

Comment 10 Prajith 2020-07-24 07:01:03 UTC
The doc text looks good to me.

Comment 12 errata-xmlrpc 2020-08-04 14:50:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314