Description of problem:
When deploying OCP with CRS, the installer fails on the "Verify heketi service" step because, during installation, it changes the iptables rules on the hosts in the [glusterfs] group. According to the code, this should be avoided.

Version-Release number of the following components:
openshift-ansible-3.6.162-1.git.0.50e29bd.el7
ansible-2.2.3.0-1.el7

How reproducible:
100%

Steps to Reproduce:
1. Install OCP with CRS

# cat hosts
[OSEv3:children]
masters
nodes
glusterfs

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=false
openshift_storage_glusterfs_heketi_url=host-8-241-24.host.centralci.eng.rdu2.redhat.com
openshift_storage_glusterfs_heketi_admin_key=redhat

[masters]
host-8-241-25.host.centralci.eng.rdu2.redhat.com openshift_public_hostname=host-8-241-25.host.centralci.eng.rdu2.redhat.com openshift_hostname=host-8-241-25.host.centralci.eng.rdu2.redhat.com

[nodes]
host-8-241-25.host.centralci.eng.rdu2.redhat.com openshift_public_hostname=host-8-241-25.host.centralci.eng.rdu2.redhat.com openshift_hostname=host-8-241-25.host.centralci.eng.rdu2.redhat.com openshift_node_labels="{'role': 'node'}"
host-8-240-252.host.centralci.eng.rdu2.redhat.com openshift_public_hostname=host-8-240-252.host.centralci.eng.rdu2.redhat.com openshift_hostname=host-8-240-252.host.centralci.eng.rdu2.redhat.com openshift_node_labels="{'role': 'node','registry': 'enabled','router': 'enabled'}"

[glusterfs]
host-8-241-24.host.centralci.eng.rdu2.redhat.com glusterfs_devices="['/dev/vsda']"
host-8-241-32.host.centralci.eng.rdu2.redhat.com glusterfs_devices="['/dev/vsda']"
host-8-241-38.host.centralci.eng.rdu2.redhat.com glusterfs_devices="['/dev/vsda']"
...

2.
3.

Actual results:
# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : Verify heketi service] *********************
Friday 21 July 2017  06:31:37 +0000 (0:00:00.058)       0:09:23.970 ***********
fatal: [host-8-241-25.host.centralci.eng.rdu2.redhat.com]: FAILED! => {
    "changed": false,
    "cmd": [
        "heketi-cli",
        "-s",
        "http://host-8-241-24.host.centralci.eng.rdu2.redhat.com:8080",
        "--user",
        "admin",
        "--secret",
        "redhat",
        "cluster",
        "list"
    ],
    "delta": "0:00:00.058231",
    "end": "2017-07-21 02:31:36.998453",
    "failed": true,
    "rc": 255,
    "start": "2017-07-21 02:31:36.940222",
    "warnings": []
}

STDERR:

Error: Get http://host-8-241-24.host.centralci.eng.rdu2.redhat.com:8080/clusters: dial tcp 10.8.241.24:8080: getsockopt: no route to host
...

Expected results:
Installation succeeds.

Additional info:
The installer overwrites the iptables rules on the GlusterFS cluster hosts. According to the code, this should be avoided:

# cat ./playbooks/common/openshift-glusterfs/config.yml
---
- name: Open firewall ports for GlusterFS
  hosts: oo_glusterfs_to_config
  vars:
    os_firewall_allow:
    - service: glusterfs_sshd
      port: "2222/tcp"
    - service: glusterfs_daemon
      port: "24007/tcp"
    - service: glusterfs_management
      port: "24008/tcp"
    - service: glusterfs_bricks
      port: "49152-49251/tcp"
  roles:
  - role: os_firewall
    when:
    - openshift_storage_glusterfs_is_native | default(True)

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [os_firewall : Add iptables allow rules] **********************************
Friday 21 July 2017  06:31:26 +0000 (0:00:10.063)       0:09:13.497 ***********
changed: [host-8-241-24.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'2222/tcp', u'service': u'glusterfs_sshd'}) => {"changed": true, "item": {"port": "2222/tcp", "service": "glusterfs_sshd"}, "output": ["", "Successfully created chain OS_FIREWALL_ALLOW", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-32.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'2222/tcp', u'service': u'glusterfs_sshd'}) => {"changed": true, "item": {"port": "2222/tcp", "service": "glusterfs_sshd"}, "output": ["", "Successfully created chain OS_FIREWALL_ALLOW", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-38.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'2222/tcp', u'service': u'glusterfs_sshd'}) => {"changed": true, "item": {"port": "2222/tcp", "service": "glusterfs_sshd"}, "output": ["", "Successfully created chain OS_FIREWALL_ALLOW", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n", "", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-24.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24007/tcp', u'service': u'glusterfs_daemon'}) => {"changed": true, "item": {"port": "24007/tcp", "service": "glusterfs_daemon"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-32.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24007/tcp', u'service': u'glusterfs_daemon'}) => {"changed": true, "item": {"port": "24007/tcp", "service": "glusterfs_daemon"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-38.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24007/tcp', u'service': u'glusterfs_daemon'}) => {"changed": true, "item": {"port": "24007/tcp", "service": "glusterfs_daemon"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-24.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24008/tcp', u'service': u'glusterfs_management'}) => {"changed": true, "item": {"port": "24008/tcp", "service": "glusterfs_management"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-32.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24008/tcp', u'service': u'glusterfs_management'}) => {"changed": true, "item": {"port": "24008/tcp", "service": "glusterfs_management"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-38.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'24008/tcp', u'service': u'glusterfs_management'}) => {"changed": true, "item": {"port": "24008/tcp", "service": "glusterfs_management"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-24.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'49152-49251/tcp', u'service': u'glusterfs_bricks'}) => {"changed": true, "item": {"port": "49152-49251/tcp", "service": "glusterfs_bricks"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-32.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'49152-49251/tcp', u'service': u'glusterfs_bricks'}) => {"changed": true, "item": {"port": "49152-49251/tcp", "service": "glusterfs_bricks"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
changed: [host-8-241-38.host.centralci.eng.rdu2.redhat.com] => (item={u'port': u'49152-49251/tcp', u'service': u'glusterfs_bricks'}) => {"changed": true, "item": {"port": "49152-49251/tcp", "service": "glusterfs_bricks"}, "output": ["", "iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]\r\n"]}
...
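The "no route to host" from heketi-cli above follows directly from this task: the os_firewall role replaced the iptables configuration on the CRS nodes with one whose OS_FIREWALL_ALLOW chain only accepts the four GlusterFS ports, so connections to heketi's 8080/tcp get rejected. A quick ad-hoc check on an affected node (a sketch only; this playbook is hypothetical and not part of the installer):

# cat show-firewall.yml
---
# Hypothetical ad-hoc check: dump the chain the installer created on the
# [glusterfs] hosts. 8080/tcp (heketi) is absent from the accept rules.
- hosts: glusterfs
  become: true
  tasks:
  - name: List the OS_FIREWALL_ALLOW chain
    command: iptables -n -L OS_FIREWALL_ALLOW
    register: allow_chain
    changed_when: false

  - name: Show the accepted ports
    debug:
      var: allow_chain.stdout_lines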
Seems the condition should cast the value to a boolean, since the INI inventory delivers "false" as a string:

./playbooks/common/openshift-glusterfs/config.yml
    when:
    - openshift_storage_glusterfs_is_native | default(True) | bool
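To illustrate the gotcha, here is a minimal sketch that reproduces the behavior under ansible-2.2.3.0-1.el7, the version in this report. The playbook and its file name are hypothetical, not part of openshift-ansible: inventory values arrive as strings, and in a bare "when:" expression any non-empty string, including "false", is truthy under Jinja2.

# cat truthiness-demo.yml
---
# Hypothetical demo: shows why "| bool" is needed in the when: condition.
- hosts: localhost
  gather_facts: false
  vars:
    # Simulates openshift_storage_glusterfs_is_native=false as read from an
    # INI inventory: the value is the string "false", not the boolean False.
    openshift_storage_glusterfs_is_native: "false"
  tasks:
  - name: Runs even though the variable says "false" (the reported bug)
    debug:
      msg: "os_firewall role would be applied here"
    when: openshift_storage_glusterfs_is_native | default(True)

  - name: Skipped as intended once the value is cast with | bool (the fix)
    debug:
      msg: "never printed"
    when: openshift_storage_glusterfs_is_native | default(True) | bool

Running "ansible-playbook truthiness-demo.yml" prints the first message and skips the second task.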
Fix is upstream here: https://github.com/openshift/openshift-ansible/pull/4826
PR merged.
Verified with openshift-ansible-3.6.169-1.git.0.440d532.el7. The PR is merged, and "openshift_storage_glusterfs_is_native=false" now takes effect.
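For reference, the verification can be spot-checked by repeating the exact call the installer failed on, reusing the heketi URL and admin key from the inventory above (a hedged sketch; the playbook file is hypothetical):

# cat verify-heketi.yml
---
# Hypothetical post-install check: repeat the "Verify heketi service" call
# that failed before the fix. rc=0 means heketi is reachable again.
- hosts: masters
  tasks:
  - name: heketi cluster list must succeed once CRS iptables are untouched
    command: >
      heketi-cli -s http://host-8-241-24.host.centralci.eng.rdu2.redhat.com:8080
      --user admin --secret redhat cluster list
    changed_when: false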
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188