Bug 1490739

Summary: "Could not find the requested service iptables: host" when scaling up etcd
Product: OpenShift Container Platform
Reporter: Gaoyun Pei <gpei>
Component: Installer
Assignee: Scott Dodson <sdodson>
Status: CLOSED ERRATA
QA Contact: Gaoyun Pei <gpei>
Severity: high
Docs Contact:
Priority: high
Version: 3.7.0
CC: aos-bugs, jokerman, mmccomas
Target Milestone: ---
Target Release: 3.7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The etcd scaleup playbook had an error where it attempted to run commands on hosts other than the host currently being scaled up, resulting in a failure if those other hosts did not yet have certain dependencies met. The playbooks now properly target only the host currently being scaled up.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-28 22:10:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Gaoyun Pei 2017-09-12 07:29:10 UTC
Description of problem:
When scaling up from a single etcd host to a three-member etcd cluster, the scale-up playbook failed while trying to start the iptables service on the second new etcd node.

The playbook tries to start the iptables service on all new_etcd hosts immediately after completing the iptables package installation check on only the first new etcd node.
https://github.com/openshift/openshift-ansible/blob/openshift-ansible-3.7.0-0.125.0/roles/os_firewall/tasks/iptables.yml#L17-L36
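The failure mode is consistent with a service task that is delegated across every host in the play. The following is an illustrative sketch of that pattern, not the exact role code (task names and the `ansible_play_hosts` loop are assumptions based on the linked file): because the task fans out to all play hosts via `delegate_to`, a serial scale-up can reach a host that has not yet had iptables-services installed, producing "Could not find the requested service iptables".

```yaml
# Hypothetical reconstruction of the problematic pattern in
# roles/os_firewall/tasks/iptables.yml (names assumed, not verbatim).
- name: Start and enable iptables service
  systemd:
    name: iptables
    state: started
    enabled: yes
  # run_once + delegate_to fans this task out to every host in the
  # play from the first host, including hosts whose package
  # installation tasks have not run yet.
  run_once: true
  delegate_to: "{{ item }}"
  with_items: "{{ ansible_play_hosts }}"
```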


Version-Release number of the following components:
openshift-ansible-3.7.0-0.125.0.git.0.91043b6.el7.noarch.rpm
ansible-2.3.2.0-2.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Add a new_etcd group as a child of the OSEv3 group, add two new hosts to the new_etcd group in the Ansible inventory file, then run the etcd scale-up playbook:
[new_etcd]
ec2-54-226-88-79.compute-1.amazonaws.com 
ec2-54-208-13-216.compute-1.amazonaws.com 

#ansible-playbook -i ~/host -v /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/scaleup.yml


Actual results:
TASK [os_firewall : Install iptables packages] *********************************
ok: [ec2-54-226-88-79.compute-1.amazonaws.com] => (item=iptables) => {"changed": false, "item": "iptables", "msg": "", "rc": 0, "results": ["iptables-1.4.21-18.el7.x86_64 providing iptables is already installed"]}
changed: [ec2-54-226-88-79.compute-1.amazonaws.com] => (item=iptables-services) => {"changed": true, "item": "iptables-services", "msg": "", "rc": 0, "results": ["Loaded plugins: amazon-id, search-disabled-repos\nResolving Dependencies\n--> Running transaction check\n---> Package iptables-services.x86_64 0:1.4.21-18.el7 will be installed\n--> ... iptables-services.x86_64 0:1.4.21-18.el7                                      \n\nComplete!\n"]}


TASK [os_firewall : Start and enable iptables service] 
changed: [ec2-54-226-88-79.compute-1.amazonaws.com -> ec2-54-226-88-79.compute-1.amazonaws.com] => (item=ec2-54-226-88-79.compute-1.amazonaws.com) => {"changed": true, "enabled": true, "item": "ec2-54-226-88-79.compute-1.amazonaws.com", "name": "iptables", "state": "started", "status": ..}}

failed: [ec2-54-226-88-79.compute-1.amazonaws.com -> ec2-54-208-13-216.compute-1.amazonaws.com] (item=ec2-54-208-13-216.compute-1.amazonaws.com) => {"failed": true, "item": "ec2-54-208-13-216.compute-1.amazonaws.com", "msg": "Could not find the requested service iptables: host"}



Expected results:

Additional info:

Comment 2 Scott Dodson 2017-09-14 02:38:20 UTC
https://github.com/openshift/openshift-ansible/pull/5407 proposed fix
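Per the Doc Text above, the fix makes the playbook act only on the host currently being scaled up. A minimal sketch of the corrected shape, assuming the delegation loop is simply dropped so the task runs on each host in its own turn (this is an illustration of the stated fix, not the literal diff in the PR):

```yaml
# Hedged sketch: with no run_once/delegate_to fan-out, the service is
# started on the current host only, after its own package install
# tasks have completed.
- name: Start and enable iptables service
  systemd:
    name: iptables
    state: started
    enabled: yes
```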

Comment 3 Gaoyun Pei 2017-10-09 08:01:22 UTC
Verified this bug with openshift-ansible-3.7.0-0.144.2.git.0.da1dd6c.el7.noarch.rpm.

When scaling up from a single etcd host to a three-member etcd cluster, the scale-up playbook started the iptables service correctly; the issue no longer occurred.

Comment 7 errata-xmlrpc 2017-11-28 22:10:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188