Bug 1469230
| Summary: | [3.4] scaleup playbook doesn't consider ca certificate specified in openshift_master_overwrite_certificates | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Andrew Butcher <abutcher> |
| Component: | Installer | Assignee: | Andrew Butcher <abutcher> |
| Status: | CLOSED ERRATA | QA Contact: | Gan Huang <ghuang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.4.1 | CC: | abutcher, aos-bugs, ghuang, jialiu, jkaur, jokerman, mmccomas, tatanaka |
| Target Milestone: | --- | Keywords: | NeedsTestCase |
| Target Release: | 3.4.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, the specified openshift_master_ca_certificate file was not deployed when performing a master scale-up. The scale-up playbooks have been updated to ensure that this certificate is deployed. | | |
| Story Points: | --- | | |
| Clone Of: | 1426677 | Environment: | |
| Last Closed: | 2017-10-02 13:02:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1426677 | | |
| Bug Blocks: | | | |
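For context on the fix described in the Doc Text above: a custom master CA is supplied to the installer through the `openshift_master_ca_certificate` inventory variable, and the bug was that master scale-up did not deploy that CA to newly added masters. The snippet below is an illustrative inventory sketch only; all hostnames and file paths are hypothetical and not taken from this bug report.

```ini
; Illustrative openshift-ansible inventory sketch (hypothetical hosts/paths).
; openshift_master_ca_certificate points the installer at a custom CA
; certificate and key; the scale-up playbooks must deploy it to new masters.
[OSEv3:children]
masters
new_masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
; Custom CA certificate and key (hypothetical paths)
openshift_master_ca_certificate={'certfile': '/path/to/ca.crt', 'keyfile': '/path/to/ca.key'}

[masters]
master1.example.com

; Master being added by the scale-up run
[new_masters]
master2.example.com

[nodes]
master1.example.com
master2.example.com
```

The scale-up is then typically run with the master scale-up playbook, e.g. `ansible-playbook -i inventory playbooks/byo/openshift-master/scaleup.yml` (path as found in openshift-ansible 3.x; verify against your installed version).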
Comment 1
Andrew Butcher
2017-07-10 16:54:57 UTC
1) Test with openshift-ansible-3.4.63-1, issue reproduced, installer failed at:
RUNNING HANDLER [verify api server] ********************************************
2) Test with openshift-ansible-3.4.124-1, installer failed at:
TASK [openshift_manage_node : Wait for Node Registration] **********************
...
FAILED - RETRYING: Wait for Node Registration (2 retries left).
FAILED - RETRYING: Wait for Node Registration (1 retries left).
fatal: [ec2-54-145-219-149.compute-1.amazonaws.com -> ec2-54-209-97-70.compute-1.amazonaws.com]: FAILED! => {"attempts": 50, "changed": false, "cmd": ["oc", "get", "node", "ip-172-18-2-83.ec2.internal", "--config=/tmp/openshift-ansible-sTwU1w/admin.kubeconfig", "-n", "default"], "delta": "0:00:00.535600", "end": "2017-08-09 05:20:01.757457", "failed": true, "rc": 1, "start": "2017-08-09 05:20:01.221857", "stderr": "No resources found.\nError from server: nodes \"ip-172-18-2-83.ec2.internal\" not found", "stderr_lines": ["No resources found.", "Error from server: nodes \"ip-172-18-2-83.ec2.internal\" not found"], "stdout": "", "stdout_lines": []}
No idea why the node of the first master is missing from the `oc get nodes` list (the atomic-openshift-node service appears to be running). The full logs for the scale-up playbooks will be attached later.
Retested with openshift-ansible-3.4.124-1.git.0.8bc631d.el7.noarch.rpm. The scale-up playbooks succeed in both containerized and RPM environments, and the S2I build also works well against the new master. @Gan Thanks! Can this bug be moved to VERIFIED status?