Bug 1638120

Summary: | openshift_web_console : Apply the web console template file fails
---|---
Product: | OpenShift Container Platform
Reporter: | Omer SEN <omer.sen>
Component: | Networking
Assignee: | Casey Callendrello <cdc>
Status: | CLOSED NEXTRELEASE
QA Contact: | Meng Bo <bmeng>
Severity: | unspecified
Priority: | unspecified
Version: | 3.10.0
CC: | aos-bugs, jokerman, mmccomas, omer.sen, vrutkovs, wmeng
Target Milestone: | ---
Target Release: | 4.1.0
Hardware: | Unspecified
OS: | Unspecified
Last Closed: | 2019-05-15 11:58:57 UTC
Type: | Bug
Description
Omer SEN
2018-10-10 18:37:28 UTC
Created attachment 1492688 [details]
ansible-playbook -vvv output
I did a fresh install; here are the debug messages:

TASK [openshift_web_console : Verify that the console is running] *************
FAILED - RETRYING: Verify that the console is running (60 retries left).
FAILED - RETRYING: Verify that the console is running (59 retries left).
[... identical retries continue down to 41 retries left ...]
fatal: [mini.ose.serra.local]: FAILED! => {"failed": true, "msg": "The conditional check 'console_deployment.results.results[0].status.readyReplicas is defined' failed. The error was: error while evaluating conditional (console_deployment.results.results[0].status.readyReplicas is defined): 'dict object' has no attribute 'results'"}
...ignoring

TASK [openshift_web_console : Check status in the openshift-web-console namespace] *************
fatal: [mini.ose.serra.local]: FAILED! => {"failed": true, "msg": "The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed. The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 21, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n block:\n - name: Check status in the openshift-web-console namespace\n ^ here\n"}
...ignoring

TASK [openshift_web_console : debug] *************
fatal: [mini.ose.serra.local]: FAILED! => {"msg": "The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed. The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 26, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n ignore_errors: true\n - debug:\n ^ here\n"}
        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.retry

PLAY RECAP *********************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0
mini.ose.serra.local       : ok=486  changed=126  unreachable=0    failed=1

INSTALLER STATUS ***************************************************************
Initialization              : Complete (0:00:19)
Health Check                : Complete (0:01:17)
Node Bootstrap Preparation  : Complete (0:00:00)
etcd Install                : Complete (0:00:54)
Master Install              : Complete (0:05:54)
Master Additional Install   : Complete (0:03:20)
Node Join                   : Complete (0:00:57)
Hosted Install              : Complete (0:01:38)
Web Console Install         : In Progress (0:05:01)
        This phase can be restarted by running: playbooks/openshift-web-console/config.yml

Failure summary:

  1. Hosts:    mini.ose.serra.local
     Play:     Web Console
     Task:     openshift_web_console : debug
     Message:  The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed.
               The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'
               The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 26, column 5, but may be elsewhere in the file depending on the exact syntax problem.
               The offending line appears to be:
                 ignore_errors: true
                 - debug:
                   ^ here

> runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
SDN didn't come up.
Please attach the inventory and output of `journalctl -b -l --unit=origin-node`
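As an aside on the Ansible error itself: the check fails because `console_deployment.results.results[0]` is dereferenced even when the registered variable has no `results` list (here, because the console deployment never appeared while the SDN was down). Below is a hedged, illustrative sketch of a more defensive conditional, assuming `console_deployment` is registered by the preceding lookup task; it is not the shipped `start.yml`.

```yaml
# Illustrative only: guard each level with default() so an empty or failed
# lookup does not abort with "'dict object' has no attribute 'results'".
- name: Check status in the openshift-web-console namespace
  debug:
    msg: "Web console deployment has no ready replicas yet"
  when: >-
    console_deployment.results is not defined or
    (console_deployment.results.results | default([]) | length) == 0 or
    (console_deployment.results.results[0].status.readyReplicas | default(0)) == 0
```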
Created attachment 1492879 [details]
requested journalctl output
Output of "journalctl -b -l --unit=origin-node"
Comment on attachment 1492879 [details]
requested journalctl output
This is for a one-node (all-in-one) installation:
mini.ose.serra.local openshift_node_group_name='node-config-all-in-one'
Created attachment 1492882 [details]
requested journalctl output for single master single node
[nodes]
master.os.serra.local openshift_node_group_name='node-config-master-infra'
node1.os.serra.local openshift_node_group_name='node-config-compute'
> Oct 10 21:54:56 mini.ose.serra.local origin-node[4045]: I1010 21:54:56.407318 4045 kubelet.go:1919] SyncLoop (PLEG): "sdn-swcrc_openshift-sdn(bb8c636d-ccce-11e8-87cf-525400307c6b)", event: &pleg.PodLifecycleEvent{ID:"bb8c636d-ccce-11e8-87cf-525400307c6b", Type:"ContainerDied", Data:"5ac36113c4c6df3cdfd8329fa1c04f4cff647fb6897c6d38b31bf2657503d52e"}
The SDN container keeps restarting; moving this to the Networking team.
Look at the logs for the sdn pod (with `oc logs`). That should tell you what's wrong.
The issue was that NetworkManager was disabled. However, pre_deployment.yaml should check for that and exit if NetworkManager is disabled.
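For anyone landing here with the same symptom, a minimal sketch of those two checks follows; the pod name is a placeholder, and the namespace is the `openshift-sdn` one seen in the logs above, so verify against your own cluster.

```
# List the SDN pods and read their logs (namespace from the kubelet log above).
oc get pods -n openshift-sdn -o wide
oc logs -n openshift-sdn <sdn-pod-name>    # e.g. sdn-swcrc in this report

# Check whether NetworkManager is enabled and running on the node;
# here it was disabled, which left the CNI config uninitialized.
systemctl is-enabled NetworkManager
systemctl status NetworkManager
```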