Bug 1688217 - [RFE][gluster-ansible] Need glusterd configuration via ansible roles for IPV6 cases
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHHI-V 1.8
Assignee: Prajith
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1688188
Blocks: 1721383 RHHI-V-1.8-Engineering-RFE-BZs
 
Reported: 2019-03-13 11:14 UTC by SATHEESARAN
Modified: 2020-08-04 14:51 UTC (History)
7 users

Fixed In Version: gluster-ansible-features-1.0.5-2
Doc Type: Enhancement
Doc Text:
This enhancement adds support for IPv6 networking. With this release, gluster-ansible can configure IPv6 networking for Red Hat Gluster Storage.
Clone Of: 1688188
Environment:
Last Closed: 2020-08-04 14:50:55 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2020:3314 0 None None None 2020-08-04 14:51:21 UTC

Description SATHEESARAN 2019-03-13 11:14:28 UTC
Description of problem:
-----------------------
To enable IPv6 with Gluster, the glusterd volume file needs to be edited to uncomment "option transport.address-family inet6", and glusterd needs to be restarted. This enables IPv6 support in the RHHI-V infrastructure.
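The manual change described above can be sketched in shell. This is an illustrative sketch only: a temporary file stands in for the real volfile so the commands are safe to run anywhere; on a storage node the target would be /etc/glusterfs/glusterd.vol, and glusterd would need a restart afterwards.

```shell
# Sketch of the manual IPv6 enablement step that gluster-ansible automates.
# A temporary file stands in for /etc/glusterfs/glusterd.vol here; on a
# real node, point volfile at the actual path and run as root.
volfile=$(mktemp)
printf '#   option transport.address-family inet6\n' > "$volfile"

# Uncomment the IPv6 transport option in place.
sed -i 's/^#[[:space:]]*\(option transport\.address-family inet6\)/    \1/' "$volfile"

grep 'address-family' "$volfile"   # →     option transport.address-family inet6

# On a real node, glusterd must then be restarted for the change to take effect:
#   systemctl restart glusterd
```

A role doing this with Ansible would typically use the lineinfile module plus a restart handler rather than raw sed, but the effect on the volfile is the same.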

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-roles-1.0.4-4

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Use IPv6 FQDNs for hostnames

Actual results:
---------------
glusterd is not configured for IPv6

Expected results:
-----------------
glusterd should be configured for IPv6

Comment 3 SATHEESARAN 2019-07-10 05:51:16 UTC
The dependent bug is already ON_QA

Comment 4 SATHEESARAN 2019-07-10 05:51:54 UTC
Mistakenly moved it directly to VERIFIED

Comment 5 SATHEESARAN 2019-07-10 05:52:59 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 ( interim build - glusterfs-6.0-6 ) with ansible 2.8.1-1
with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

glusterd volfile has the required configuration as below:

# cat /etc/glusterfs/glusterd.vol 
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option transport.rdma.listen-port 24008
    option ping-timeout 0
    option event-threads 1
#   option lock-timer 180
    option transport.address-family inet6
#   option base-port 49152
    option max-port  60999
end-volume
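A quick way to confirm the option is active is to check that the inet6 line appears uncommented in the volfile. This is a minimal sketch: it inspects only the file text, not the running daemon, and uses a temporary stand-in for /etc/glusterfs/glusterd.vol.

```shell
# Verify that transport.address-family inet6 is present and not commented out.
# A temp file stands in for /etc/glusterfs/glusterd.vol here.
volfile=$(mktemp)
cat > "$volfile" <<'EOF'
volume management
    option transport.address-family inet6
#   option base-port 49152
end-volume
EOF

# A commented-out line starts with '#', so this pattern matches only the
# active (uncommented) option.
if grep -Eq '^[[:space:]]*option transport\.address-family inet6' "$volfile"; then
    echo "IPv6 transport enabled"
else
    echo "IPv6 transport NOT enabled" >&2
fi
```

With the sample volfile above this prints "IPv6 transport enabled".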

Comment 9 Sahina Bose 2020-07-24 06:20:34 UTC
Prajith, can you please review?

Comment 10 Prajith 2020-07-24 07:01:03 UTC
The doc text looks good to me.

Comment 12 errata-xmlrpc 2020-08-04 14:50:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314

