https://github.com/openshift/openshift-ansible/pull/4724
1) Tested with openshift-ansible-3.4.63-1: issue reproduced; the installer failed at:

RUNNING HANDLER [verify api server] ********************************************

2) Tested with openshift-ansible-3.4.124-1: the installer failed at:

TASK [openshift_manage_node : Wait for Node Registration] **********************
...
FAILED - RETRYING: Wait for Node Registration (2 retries left).
FAILED - RETRYING: Wait for Node Registration (1 retries left).
fatal: [ec2-54-145-219-149.compute-1.amazonaws.com -> ec2-54-209-97-70.compute-1.amazonaws.com]: FAILED! => {"attempts": 50, "changed": false, "cmd": ["oc", "get", "node", "ip-172-18-2-83.ec2.internal", "--config=/tmp/openshift-ansible-sTwU1w/admin.kubeconfig", "-n", "default"], "delta": "0:00:00.535600", "end": "2017-08-09 05:20:01.757457", "failed": true, "rc": 1, "start": "2017-08-09 05:20:01.221857", "stderr": "No resources found.\nError from server: nodes \"ip-172-18-2-83.ec2.internal\" not found", "stderr_lines": ["No resources found.", "Error from server: nodes \"ip-172-18-2-83.ec2.internal\" not found"], "stdout": "", "stdout_lines": []}

No idea why the node for the first master was missing from the `oc get nodes` output (the atomic-openshift-node service appears to be running). The full logs for the scale-up playbooks will be attached later.
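For reference, the failing task is essentially a retry loop that polls `oc get node` until the node registers or the retries (50 here) are exhausted. A minimal shell sketch of that polling pattern, for reproducing the check by hand, is below; the `wait_for` helper name is my own, and the commented-out `oc` invocation just mirrors the command from the log above:

```shell
#!/bin/sh
# wait_for RETRIES DELAY CMD...: run CMD until it succeeds, retrying up to
# RETRIES times with DELAY seconds between attempts. Returns 0 on success,
# 1 if all retries are exhausted (analogous to the playbook's retry/until).
wait_for() {
  retries=$1
  delay=$2
  shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    echo "FAILED - RETRYING ($((retries - i)) retries left)." >&2
    sleep "$delay"
  done
  return 1
}

# Manual equivalent of the "Wait for Node Registration" task (paths/names
# taken from the log above; adjust for your environment):
# wait_for 50 10 oc get node ip-172-18-2-83.ec2.internal \
#   --config=/tmp/openshift-ansible-sTwU1w/admin.kubeconfig -n default
```

Running the helper against `oc get node <name>` from the first master is a quick way to confirm whether the node ever registers, independent of the playbook.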
Retested with openshift-ansible-3.4.124-1.git.0.8bc631d.el7.noarch.rpm: the scale-up playbooks now succeed in both containerized and RPM environments, and S2I builds also work against the new master.
@Gan Thanks! Can this bug be moved to verified status?