Description of problem:
-----------------------
To enable IPv6 with Gluster, the glusterd volume file needs to be edited to uncomment "option transport.address-family inet6", and glusterd needs to be restarted. This enables IPv6 support with RHHI-V infrastructure.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-roles-1.0.4-4

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Use IPv6 FQDNs for hostnames

Actual results:
---------------
glusterd is not configured

Expected results:
-----------------
glusterd should be configured
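The manual workaround described above can be sketched as a shell snippet. This is illustrative only: it edits a sample copy created with `mktemp`, while on a real node the file is /etc/glusterfs/glusterd.vol and glusterd must be restarted afterwards (e.g. `systemctl restart glusterd`).

```shell
# Sketch of the manual workaround: uncomment the inet6 option in the glusterd
# volfile. A sample copy is edited here; on a real host, edit
# /etc/glusterfs/glusterd.vol and restart glusterd afterwards.
VOLFILE=$(mktemp)
cat > "$VOLFILE" <<'EOF'
volume management
    type mgmt/glusterd
#   option transport.address-family inet6
end-volume
EOF

# Drop the leading '#' so the option takes effect.
sed -i 's/^#[[:space:]]*\(option transport.address-family inet6\)/    \1/' "$VOLFILE"

grep 'address-family' "$VOLFILE"
```

The same edit is what the gluster-ansible fix automates based on user input.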
PR: https://github.com/gluster/gluster-ansible-features/pull/23 fixes the issue.
This bug is a must-fix for the forthcoming RHHI-V release; providing qa_ack.
The setup is validating the host with the 'dig' command, which always queries the
DNS server. This needs to be changed to 'getent ahosts', as there are chances that
for PoC and other simple customer deployments, DNS records may be replaced with
entries in /etc/hosts.

@Sac,

Can we remove the 'dig' command in the following code path?

Code in: roles/gluster_hci/tasks/glusterd_ipv6.yml

<code>
# Check if the FQDN maps to ipv6, get the AAAA record
- name: Check if given hosts have ipv6 configured
  command:
    args:
      argv:
        - dig
        - +short
        - "{{ item }}"
        - AAAA
  register: v6result
  with_items: "{{ groups['all'] }}"
  failed_when: v6result.stdout_lines|length == 0
  delegate_to: localhost
  run_once: true
</code>
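The difference described above can be seen from a shell: `dig` talks to the DNS server directly, while `getent` resolves through the NSS stack (per /etc/nsswitch.conf) and therefore honours /etc/hosts. `localhost` is used here only as a name guaranteed to be resolvable locally without DNS.

```shell
# getent resolves via NSS, so entries added to /etc/hosts are found even when
# no DNS record exists; dig queries the DNS server and would miss them.
# 'localhost' stands in for a host present only in /etc/hosts.
getent ahosts localhost      # all address families, via NSS
getent ahostsv6 localhost    # IPv6 lookup (AF_INET6)
```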
@sas ack. I will be moving this to pre_tasks in our playbook. This will no longer be part of rhhi role.
(In reply to SATHEESARAN from comment #5)
> The setup is validating the host with 'dig' command which always queries the
> DNS server.
> This needs to be changed with 'getent ahosts' as there are chances for PoC
> and other simple customer deployments, DNS records may be replaced with
> entry in /etc/hosts
>
> @Sac,
>
> Can we remove 'dig' command in the following code path:
>
> Code in: roles/gluster_hci/tasks/glusterd_ipv6.yml
> [code snippet elided; see comment #5]

What is the reason? We use dig here to determine if the network is ipv4 or ipv6.

I think you are confusing this with the `valid hostname' check, where we decided
to use getent instead of dig.

I am moving this to ON_QA.
(In reply to Sachidananda Urs from comment #7)
> What is the reason? We use dig here to determine if the network is ipv4 or
> ipv6.
>
> I think you are confusing this with the `valid hostname' check, where we
> decided to use getent instead of dig.
>
> I am moving this to ON_QA.

Additional note:

In this part of the role, we check if the given host is a valid ipv6 host. If
we go ahead with `getent ahostsv6 <host>', we do not have a deterministic test.

Can you please add a comment with the tests that you run?
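The non-determinism mentioned above likely stems from glibc's behaviour: per the getent man page, `ahostsv6` calls getaddrinfo(3) with the AI_V4MAPPED flag, so an IPv4-only name can still "succeed" and return a v4-mapped address (::ffff:a.b.c.d). A non-empty result therefore does not prove the host has a real AAAA record. `localhost` is illustrative here.

```shell
# `getent ahostsv6` uses AI_V4MAPPED with glibc, so even an IPv4-only name may
# return ::ffff:a.b.c.d instead of failing. Exit status and non-empty output
# alone are thus not a reliable "has IPv6" check.
getent ahostsv6 localhost
```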
@sas, after discussion with Sahina we decided we will remove that check. Since we enable ipv6 in glusterd based on user input, we can get rid of this check.
Patch: https://github.com/gluster/gluster-ansible-features/pull/29 should fix the issue.
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build - glusterfs-6.0-6)
with ansible 2.8.1-1 and:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

The glusterd volfile has the required configuration, as below:

# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option transport.rdma.listen-port 24008
    option ping-timeout 0
    option event-threads 1
#   option lock-timer 180
    option transport.address-family inet6
#   option base-port 49152
    option max-port 60999
end-volume
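The verification above can be reduced to a one-line check that the option appears uncommented in the volfile. A sample file reproducing the relevant lines is used here for illustration; on a real node, point the grep at /etc/glusterfs/glusterd.vol.

```shell
# Quick post-deployment check: the inet6 option must be present and not
# commented out. Sample file used here; real path is /etc/glusterfs/glusterd.vol.
cat > /tmp/glusterd.vol.sample <<'EOF'
volume management
    option transport.address-family inet6
#   option base-port 49152
end-volume
EOF

grep -q '^[[:space:]]*option transport.address-family inet6' /tmp/glusterd.vol.sample \
  && echo "IPv6 transport enabled"
# prints "IPv6 transport enabled"
```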