Description of problem:
Install fails when firewalld is used.

Version-Release number of the following components:
openshift-ansible-3.10.0-0.22.0.git.0.b6ec617.el7

How reproducible:
Always

Steps to Reproduce:
1. Set up OCP 3.10 with os_firewall_use_firewalld=true

Actual results:

TASK [openshift_node : Add firewalld allow rules] ******************************
Wednesday 18 April 2018 01:59:03 -0400 (0:00:00.051) 0:00:53.351 *******
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'AnsibleModule' object has no attribute 'fail'
failed: [qe-wmengfw223-node-registry-router-1.0418-ing.qe.rhcloud.com] (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) => {"changed": false, "failed": true, "item": {"port": "10250/tcp", "service": "Kubernetes kubelet"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_kEUDkq/ansible_module_firewalld.py\", line 936, in <module>\n main()\n File \"/tmp/ansible_kEUDkq/ansible_module_firewalld.py\", line 788, in main\n module.fail(msg='firewall is not currently running, unable to perform immediate actions without a running firewall daemon')\nAttributeError: 'AnsibleModule' object has no attribute 'fail'\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'AnsibleModule' object has no attribute 'fail'
failed: [qe-wmengfw223-node-registry-router-2.0418-ing.qe.rhcloud.com] (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) => {"changed": false, "failed": true, "item": {"port": "10250/tcp", "service": "Kubernetes kubelet"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Jqdp6p/ansible_module_firewalld.py\", line 936, in <module>\n main()\n File \"/tmp/ansible_Jqdp6p/ansible_module_firewalld.py\", line 788, in main\n module.fail(msg='firewall is not currently running, unable to perform immediate actions without a running firewall daemon')\nAttributeError: 'AnsibleModule' object has no attribute 'fail'\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'AnsibleModule' object has no attribute 'fail'

Expected results:
Install succeeds
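For reference, the traceback shows two stacked problems: the real failure is that firewalld is not running on the host, but the module reports it with module.fail(), a method AnsibleModule does not have (the standard API is module.fail_json()), so the intended message is masked by an AttributeError. A minimal Python sketch of that code path, assuming only the standard AnsibleModule API (the fw_running flag is a stand-in for the module's real firewalld state query):

    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(argument_spec=dict())
        fw_running = False  # stand-in for the module's real firewalld state check

        if not fw_running:
            # Buggy call from the traceback above (raises AttributeError,
            # because AnsibleModule has no .fail() method):
            #   module.fail(msg='firewall is not currently running, ...')
            # Correct call with the standard AnsibleModule API:
            module.fail_json(msg='firewall is not currently running, unable to '
                                 'perform immediate actions without a running '
                                 'firewall daemon')

    if __name__ == '__main__':
        main()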
Weihua, can you verify that the hosts were fully updated and rebooted prior to running the installer? There's a problem with dbus in 7.5: when we install dnsmasq, dbus is updated and becomes wedged. The only reliable way to address this is to ensure that your hosts are fully up to date and rebooted before you run the installer.
Thanks. How can I tell whether this is the dbus problem or not? Which image should I use? Is the dbus problem present in the qe-rhel-75-20180404 image? Does the dbus problem affect only firewalld and not iptables? I tried the same installation twice; the only difference was os_firewall_use_firewalld=true/false. With true, the install fails; with false, it succeeds.
If `yum update` updates dbus then you know you've got to reboot before completing the installation.
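As a concrete pre-flight check, something along these lines can be run on each host before the installer. This is only a sketch, and the command choices (`yum check-update`, `rpm -q --last`) are my assumptions, not part of openshift-ansible:

    import subprocess

    def dbus_update_pending():
        # 'yum check-update <pkg>' exits 100 when an update is available,
        # 0 when the package is already current.
        rc = subprocess.call(['yum', '-q', 'check-update', 'dbus'])
        return rc == 100

    def rebooted_into_latest_kernel():
        running = subprocess.check_output(['uname', '-r']).decode().strip()
        # 'rpm -q --last kernel' lists installed kernels, newest first.
        newest = subprocess.check_output(
            ['rpm', '-q', '--last', 'kernel']).decode().splitlines()[0]
        return running in newest

    if __name__ == '__main__':
        if dbus_update_pending():
            print('dbus update pending: yum update and reboot before installing')
        elif not rebooted_into_latest_kernel():
            print('host has not been rebooted into its newest installed kernel')
        else:
            print('host looks fully updated and rebooted')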
No dbus in `yum update`.

  Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
            Kernel: Linux 3.10.0-862.el7.x86_64

# rpm -q dbus
dbus-1.10.24-7.el7.x86_64

# systemctl status dbus
● dbus.service - D-Bus System Message Bus
   Loaded: loaded (/usr/lib/systemd/system/dbus.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2018-04-23 11:30:06 EDT; 5min ago
This would occur even with an updated dbus, because we forcibly restart dbus when dnsmasq is being installed. (I'll prepare a fix to revert this.)
To summarize: we require that hosts be fully yum-updated and rebooted prior to running the install. We're not going to restart dbus during the installation playbooks.
PR: https://github.com/openshift/openshift-ansible/pull/8104

The fix is available in openshift-ansible-3.10.0-0.30.0.
Verified fixed with openshift-ansible-3.10.0-0.30.0.

Kernel Version: 3.10.0-862.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816