Bug 1600010
| Summary: | [3.9] Can't add container provider automatically | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Gaoyun Pei <gpei> |
| Component: | Installer | Assignee: | Scott Dodson <sdodson> |
| Status: | CLOSED WONTFIX | QA Contact: | Gaoyun Pei <gpei> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.9.0 | CC: | aos-bugs, gpei, jokerman, mmccomas |
| Target Milestone: | --- | Keywords: | Reopened, Triaged |
| Target Release: | 3.9.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-03-01 02:24:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1524102, 1591966 | | |
There appear to be no active cases related to this bug. As such, we're closing this bug in order to focus on bugs that are still tied to active customer cases. Please re-open this bug if you feel it was closed in error or a new active case is attached.

This can still be reproduced with openshift-ansible-3.9.54-1.git.0.8a67eb1.el7.noarch.rpm, and this bug is blocking the verification of BZ#1591966.
Description of problem:

When trying to use playbooks/openshift-management/add_container_provider.yml to add an OCP 3.9 cluster as a container provider, it fails at the initial check.

```
TASK [openshift_management : Ensure we use openshift_master_cluster_public_hostname if it is available] ****************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_management/tasks/add_container_provider.yml:9
skipping: [ec2-54-174-139-112.compute-1.amazonaws.com] => {
    "changed": false,
    "skip_reason": "Conditional result was False",
    "skipped": true
}

TASK [openshift_management : Ensure we default to the first master if openshift_master_cluster_public_hostname is unavailable] *****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_management/tasks/add_container_provider.yml:15
fatal: [ec2-54-174-139-112.compute-1.amazonaws.com]: FAILED! => {
    "failed": true,
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'cluster_hostname'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_management/tasks/add_container_provider.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Ensure we default to the first master if openshift_master_cluster_public_hostname is unavailable\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'cluster_hostname'"
}
	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/openshift-management/add_container_provider.retry
```

Version-Release number of the following components:
openshift-ansible-3.9.33-1.git.56.19ba16e.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. ansible-playbook -i host/39 /usr/share/ansible/openshift-ansible/playbooks/openshift-management/add_container_provider.yml

Actual results:
Please include the entire output from the last TASK line through the end of output if an error is generated

Expected results:

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
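The traceback indicates that the task at line 15 of add_container_provider.yml dereferences a `cluster_hostname` key that is absent from the gathered `openshift.master` facts when `openshift_master_cluster_public_hostname` is not set. As a rough sketch only (the variable and fact names below are illustrative assumptions, not the actual role code), guarding the lookup with Jinja2's `default()` filter would avoid the AnsibleUndefinedVariable:

```yaml
# Hypothetical defensive rewrite of the failing task; the real task lives at
# roles/openshift_management/tasks/add_container_provider.yml:15 and may use
# different variable names. default() substitutes a fallback value instead of
# raising AnsibleUndefinedVariable when the 'cluster_hostname' key is missing.
- name: Ensure we default to the first master if openshift_master_cluster_public_hostname is unavailable
  set_fact:
    l_management_hostname: >-
      {{ hostvars[groups['masters'][0]].openshift.master.cluster_hostname
         | default(groups['masters'][0], true) }}
  when: openshift_master_cluster_public_hostname is not defined
```

This only works around the symptom; the underlying question of why the `cluster_hostname` fact is not populated on a 3.9 cluster would still need to be addressed in the role.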