Bug 1638120 - openshift_web_console : Apply the web console template file fails
Summary: openshift_web_console : Apply the web console template file fails
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.1.0
Assignee: Casey Callendrello
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-10 18:37 UTC by Omer SEN
Modified: 2019-05-15 11:58 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-15 11:58:57 UTC
Target Upstream Version:
Embargoed:


Attachments
ansible-playbook -vvv output (5.54 MB, text/plain)
2018-10-10 19:01 UTC, Omer SEN
requested journalctl output (1.65 MB, application/zip)
2018-10-11 11:22 UTC, Omer SEN
requested journalctl output for single master single node (911.31 KB, application/x-xz)
2018-10-11 11:29 UTC, Omer SEN

Description Omer SEN 2018-10-10 18:37:28 UTC
Description of problem: When installing OpenShift with

# ansible-playbook -i <inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

the install fails in the openshift_web_console role: the "Apply the web console template file" task completes, but the console never becomes ready and the play fails at "openshift_web_console : Report console errors".

Version-Release number of the following components:
rpm -q openshift-ansible
openshift-ansible-3.10.51-1.git.0.44a646c.el7.noarch

rpm -q ansible
ansible-2.4.6.0-1.el7ae.noarch

ansible --version
ansible 2.4.6.0

How reproducible: Run

# ansible-playbook -i <inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

as root with a 1-master / 1-node inventory file:


[masters]
master.os.serra.local

[etcd]
master.os.serra.local

# [nodes]
# master.os.serra.local
# node1.os.serra.local
# node-config-all-in-one
[nodes]
master.os.serra.local openshift_node_group_name='node-config-master-infra'
node1.os.serra.local openshift_node_group_name='node-config-compute'


# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd



Steps to Reproduce:
1. Create the inventory file.
2. Run ansible-playbook -i <inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml (after running prerequisites.yml).
3. Observe the error.

Actual results:

===============================================================================

TASK [openshift_web_console : Apply the web console template file] *************
changed: [master.os.serra.local]

TASK [openshift_web_console : Remove temp directory] ***************************
ok: [master.os.serra.local]

TASK [openshift_web_console : Pause for the web console deployment to start] ***
skipping: [master.os.serra.local]

TASK [openshift_web_console : include_tasks] ***********************************
included: /usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml for master.os.serra.local

TASK [openshift_web_console : Verify that the console is running] **************
FAILED - RETRYING: Verify that the console is running (60 retries left).
FAILED - RETRYING: Verify that the console is running (59 retries left).
[... identical retry messages continue, counting down ...]
FAILED - RETRYING: Verify that the console is running (2 retries left).
FAILED - RETRYING: Verify that the console is running (1 retries left).
fatal: [master.os.serra.local]: FAILED! => {"attempts": 60, "changed": false, "failed": true, "results": {"cmd": "/usr/bin/oc get deployment webconsole -o json -n openshift-web-console", "results": [{"apiVersion": "extensions/v1beta1", "kind": "Deployment", "metadata": {"annotations": {"deployment.kubernetes.io/revision": "1", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"openshift-web-console\",\"webconsole\":\"true\"},\"name\":\"webconsole\",\"namespace\":\"openshift-web-console\"},\"spec\":{\"replicas\":1,\"strategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"100%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"app\":\"openshift-web-console\",\"webconsole\":\"true\"},\"name\":\"webconsole\"},\"spec\":{\"containers\":[{\"command\":[\"/usr/bin/origin-web-console\",\"--audit-log-path=-\",\"-v=0\",\"--config=/var/webconsole-config/webconsole-config.yaml\"],\"image\":\"docker.io/openshift/origin-web-console:v3.10.0\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/sh\",\"-c\",\"if [[ ! -f /tmp/webconsole-config.hash ]]; then \\\\\\n  md5sum /var/webconsole-config/webconsole-config.yaml \\u003e /tmp/webconsole-config.hash; \\\\\\nelif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \\\\\\n  echo 'webconsole-config.yaml has changed.'; \\\\\\n  exit 1; \\\\\\nfi \\u0026\\u0026 curl -k -f https://0.0.0.0:8443/console/\"]}},\"name\":\"webconsole\",\"ports\":[{\"containerPort\":8443}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":8443,\"scheme\":\"HTTPS\"}},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/var/serving-cert\",\"name\":\"serving-cert\"},{\"mountPath\":\"/var/webconsole-config\",\"name\":\"webconsole-config\"}]}],\"nodeSelector\":{\"node-role.kubernetes.io/master\":\"true\"},\"serviceAccountName\":\"webconsole\",\"volumes\":[{\"name\":\"serving-cert\",\"secret\":{\"defaultMode\":288,\"secretName\":\"webconsole-serving-cert\"}},{\"configMap\":{\"defaultMode\":288,\"name\":\"webconsole-config\"},\"name\":\"webconsole-config\"}]}}}}\n"}, "creationTimestamp": "2018-10-10T17:47:02Z", "generation": 1, "labels": {"app": "openshift-web-console", "webconsole": "true"}, "name": "webconsole", "namespace": "openshift-web-console", "resourceVersion": "5834", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole", "uid": "7e6039f6-ccb4-11e8-a3b9-525400380b0e"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 2, "selector": {"matchLabels": {"app": "openshift-web-console", "webconsole": "true"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "100%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "openshift-web-console", "webconsole": "true"}, "name": "webconsole"}, "spec": {"containers": [{"command": ["/usr/bin/origin-web-console", "--audit-log-path=-", "-v=0", "--config=/var/webconsole-config/webconsole-config.yaml"], "image": "docker.io/openshift/origin-web-console:v3.10.0", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"exec": {"command": ["/bin/sh", "-c", "if [[ ! 
-f /tmp/webconsole-config.hash ]]; then \\\n  md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \\\nelif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \\\n  echo 'webconsole-config.yaml has changed.'; \\\n  exit 1; \\\nfi && curl -k -f https://0.0.0.0:8443/console/"]}, "failureThreshold": 3, "periodSeconds": 10, "successThreshold": 1, "timeoutSeconds": 1}, "name": "webconsole", "ports": [{"containerPort": 8443, "protocol": "TCP"}], "readinessProbe": {"failureThreshold": 3, "httpGet": {"path": "/healthz", "port": 8443, "scheme": "HTTPS"}, "periodSeconds": 10, "successThreshold": 1, "timeoutSeconds": 1}, "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/var/serving-cert", "name": "serving-cert"}, {"mountPath": "/var/webconsole-config", "name": "webconsole-config"}]}], "dnsPolicy": "ClusterFirst", "nodeSelector": {"node-role.kubernetes.io/master": "true"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "webconsole", "serviceAccountName": "webconsole", "terminationGracePeriodSeconds": 30, "volumes": [{"name": "serving-cert", "secret": {"defaultMode": 288, "secretName": "webconsole-serving-cert"}}, {"configMap": {"defaultMode": 288, "name": "webconsole-config"}, "name": "webconsole-config"}]}}}, "status": {"conditions": [{"lastTransitionTime": "2018-10-10T17:47:02Z", "lastUpdateTime": "2018-10-10T17:47:02Z", "message": "Deployment has minimum availability.", "reason": "MinimumReplicasAvailable", "status": "True", "type": "Available"}, {"lastTransitionTime": "2018-10-10T17:57:03Z", "lastUpdateTime": "2018-10-10T17:57:03Z", "message": "ReplicaSet \"webconsole-55c4d867f\" has timed out progressing.", "reason": "ProgressDeadlineExceeded", "status": "False", "type": "Progressing"}], "observedGeneration": 1, "replicas": 1, "unavailableReplicas": 1, "updatedReplicas": 1}}], "returncode": 0}, "state": "list"}
...ignoring

TASK [openshift_web_console : Check status in the openshift-web-console namespace] ***
changed: [master.os.serra.local]

TASK [openshift_web_console : debug] *******************************************
ok: [master.os.serra.local] => {
    "msg": [
        "In project openshift-web-console on server https://master.os.serra.local:8443", 
        "", 
        "svc/webconsole - 172.30.50.83:443 -> 8443", 
        "  deployment/webconsole deploys docker.io/openshift/origin-web-console:v3.10.0", 
        "    deployment #1 running for 10 minutes - 0/1 pods", 
        "", 
        "View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'."
    ]
}

TASK [openshift_web_console : Get pods in the openshift-web-console namespace] ***
changed: [master.os.serra.local]

TASK [openshift_web_console : debug] *******************************************
ok: [master.os.serra.local] => {
    "msg": [
        "NAME                         READY     STATUS    RESTARTS   AGE       IP        NODE", 
        "webconsole-55c4d867f-t9qw6   0/1       Pending   0          10m       <none>    <none>"
    ]
}

TASK [openshift_web_console : Get events in the openshift-web-console namespace] ***
changed: [master.os.serra.local]

TASK [openshift_web_console : debug] *******************************************
ok: [master.os.serra.local] => {
    "msg": [
        "LAST SEEN   FIRST SEEN   COUNT     NAME                                          KIND         SUBOBJECT   TYPE      REASON              SOURCE                  MESSAGE", 
        "18s         10m          37        webconsole-55c4d867f-t9qw6.155c50721e9f61cd   Pod                      Warning   FailedScheduling    default-scheduler       0/2 nodes are available: 2 node(s) were not ready.", 
        "10m         10m          1         webconsole-55c4d867f.155c50721ea9bff6         ReplicaSet               Normal    SuccessfulCreate    replicaset-controller   Created pod: webconsole-55c4d867f-t9qw6", 
        "10m         10m          1         webconsole.155c5071d5c31064                   Deployment               Normal    ScalingReplicaSet   deployment-controller   Scaled up replica set webconsole-55c4d867f to 1"
    ]
}

TASK [openshift_web_console : Get console pod logs] ****************************
changed: [master.os.serra.local]

TASK [openshift_web_console : debug] *******************************************
ok: [master.os.serra.local] => {
    "msg": []
}

TASK [openshift_web_console : Report console errors] ***************************
fatal: [master.os.serra.local]: FAILED! => {"changed": false, "failed": true, "msg": "Console install failed."}
        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.retry

PLAY RECAP *********************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0   
master.os.serra.local      : ok=491  changed=126  unreachable=0    failed=1   
node1.os.serra.local       : ok=35   changed=2    unreachable=0    failed=0   


INSTALLER STATUS ***************************************************************
Initialization              : Complete (0:00:25)
Health Check                : Complete (0:01:14)
etcd Install                : Complete (0:00:49)
Node Bootstrap Preparation  : Complete (0:00:00)
Master Install              : Complete (0:04:07)
Master Additional Install   : Complete (0:01:45)
Node Join                   : Complete (0:00:14)
Hosted Install              : Complete (0:00:49)
Web Console Install         : In Progress (0:10:44)
        This phase can be restarted by running: playbooks/openshift-web-console/config.yml


Failure summary:


  1. Hosts:    master.os.serra.local
     Play:     Web Console
     Task:     Report console errors
     Message:  Console install failed.




LATEST PACKAGE JUST INSTALLED:
=============================


rpm -qi openshift-ansible-playbooks
Name        : openshift-ansible-playbooks
Version     : 3.10.51
Release     : 1.git.0.44a646c.el7
Architecture: noarch
Install Date: Wed 10 Oct 2018 06:36:43 PM BST
Group       : Unspecified
Size        : 442696
License     : ASL 2.0
Signature   : RSA/SHA1, Thu 27 Sep 2018 10:01:36 AM BST, Key ID c34c5bd42f297ecc
Source RPM  : openshift-ansible-3.10.51-1.git.0.44a646c.el7.src.rpm
Build Date  : Wed 26 Sep 2018 05:56:24 PM BST
Build Host  : c1be.rdu2.centos.org
Relocations : (not relocatable)
Packager    : CBS <cbs>
Vendor      : CentOS
URL         : https://github.com/openshift/openshift-ansible
Summary     : Openshift and Atomic Enterprise Ansible Playbooks
Description :
Openshift and Atomic Enterprise Ansible Playbooks.


Expected results:

The installation completes successfully.


Additional info:
/var/log/messages:

-a3b9-525400380b0e)
Oct 10 19:34:48 master origin-node: E1010 19:34:48.533033    1333 pod_workers.go:186] Error syncing pod 175d8bdd-ccb4-11e8-a3b9-525400380b0e ("sdn-m7mbs_openshift-sdn(175d8bdd-ccb4-11e8-a3b9-525400380b0e)"), skipping: failed to "StartContainer" for "sdn" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=sdn pod=sdn-m7mbs_openshift-sdn(175d8bdd-ccb4-11e8-a3b9-525400380b0e)"
Oct 10 19:34:52 master origin-node: W1010 19:34:52.325319    1333 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Oct 10 19:34:52 master origin-node: E1010 19:34:52.325859    1333 kubelet.go:2143] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
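
The kubelet messages above are the real clue: the SDN pod is crash-looping and never writes a CNI configuration, which is why scheduling later reports "0/2 nodes are available: 2 node(s) were not ready" and the webconsole pod stays Pending. A quick way to confirm this state (a sketch; the pod name is the one from this report and will differ elsewhere):

oc get nodes                          # nodes stay NotReady while the CNI config is missing
oc -n openshift-sdn get pods -o wide
ls /etc/cni/net.d                     # remains empty until the SDN pod comes up and writes its config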



Please attach logs from ansible-playbook with the -vvv flag

Comment 1 Omer SEN 2018-10-10 19:01:16 UTC
Created attachment 1492688 [details]
ansible-playbook -vvv output

Comment 2 Omer SEN 2018-10-11 07:11:38 UTC
I did a fresh install; here are the debug messages:

====================================================

TASK [openshift_web_console : Verify that the console is running] ************************************************************************************************************************
FAILED - RETRYING: Verify that the console is running (60 retries left).
[... identical retry messages continue, counting down ...]
FAILED - RETRYING: Verify that the console is running (41 retries left).
fatal: [mini.ose.serra.local]: FAILED! => {"failed": true, "msg": "The conditional check 'console_deployment.results.results[0].status.readyReplicas is defined' failed. The error was: error while evaluating conditional (console_deployment.results.results[0].status.readyReplicas is defined): 'dict object' has no attribute 'results'"}
...ignoring

TASK [openshift_web_console : Check status in the openshift-web-console namespace] *******************************************************************************************************
fatal: [mini.ose.serra.local]: FAILED! => {"failed": true, "msg": "The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed. The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 21, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  block:\n  - name: Check status in the openshift-web-console namespace\n    ^ here\n"}
...ignoring

TASK [openshift_web_console : debug] *****************************************************************************************************************************************************
fatal: [mini.ose.serra.local]: FAILED! => {"msg": "The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed. The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 26, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n    ignore_errors: true\n  - debug:\n    ^ here\n"}
        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.retry

PLAY RECAP *******************************************************************************************************************************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0   
mini.ose.serra.local       : ok=486  changed=126  unreachable=0    failed=1   


INSTALLER STATUS *************************************************************************************************************************************************************************
Initialization              : Complete (0:00:19)
Health Check                : Complete (0:01:17)
Node Bootstrap Preparation  : Complete (0:00:00)
etcd Install                : Complete (0:00:54)
Master Install              : Complete (0:05:54)
Master Additional Install   : Complete (0:03:20)
Node Join                   : Complete (0:00:57)
Hosted Install              : Complete (0:01:38)
Web Console Install         : In Progress (0:05:01)
        This phase can be restarted by running: playbooks/openshift-web-console/config.yml


Failure summary:


  1. Hosts:    mini.ose.serra.local
     Play:     Web Console
     Task:     openshift_web_console : debug
     Message:  The conditional check '(console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)' failed. The error was: error while evaluating conditional ((console_deployment.results.results[0].status.readyReplicas is not defined) or (console_deployment.results.results[0].status.readyReplicas == 0)): 'dict object' has no attribute 'results'
               
               The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_web_console/tasks/start.yml': line 26, column 5, but may
               be elsewhere in the file depending on the exact syntax problem.
               
               The offending line appears to be:
               
                   ignore_errors: true
                 - debug:
                   ^ here
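
Note the secondary issue in this run: once "oc get deployment webconsole" stops returning the expected nested results, the retry conditional in start.yml dereferences console_deployment.results.results[0] and raises "'dict object' has no attribute 'results'" instead of simply retrying. The value that conditional is testing can be checked by hand (a sketch, assuming the same namespace and deployment name as in the output above):

oc -n openshift-web-console get deployment webconsole -o jsonpath='{.status.readyReplicas}'
oc -n openshift-web-console rollout status deployment/webconsole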

Comment 3 Vadim Rutkovsky 2018-10-11 11:16:26 UTC
>runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

SDN didn't come up.

Please attach the inventory and output of `journalctl -b -l --unit=origin-node`

Comment 4 Omer SEN 2018-10-11 11:22:27 UTC
Created attachment 1492879 [details]
requested journalctl output

Output of "journalctl -b -l --unit=origin-node"

Comment 5 Omer SEN 2018-10-11 11:25:58 UTC
Comment on attachment 1492879 [details]
requested journalctl output

This is for the one-node (all-in-one) installation:

mini.ose.serra.local openshift_node_group_name='node-config-all-in-one'

Comment 6 Omer SEN 2018-10-11 11:29:16 UTC
Created attachment 1492882 [details]
requested journalctl output for single master single node

[nodes]
master.os.serra.local openshift_node_group_name='node-config-master-infra'
node1.os.serra.local openshift_node_group_name='node-config-compute'

Comment 7 Vadim Rutkovsky 2018-10-11 11:43:47 UTC
>Oct 10 21:54:56 mini.ose.serra.local origin-node[4045]: I1010 21:54:56.407318    4045 kubelet.go:1919] SyncLoop (PLEG): "sdn-swcrc_openshift-sdn(bb8c636d-ccce-11e8-87cf-525400307c6b)", event: &pleg.PodLifecycleEvent{ID:"bb8c636d-ccce-11e8-87cf-525400307c6b", Type:"ContainerDied", Data:"5ac36113c4c6df3cdfd8329fa1c04f4cff647fb6897c6d38b31bf2657503d52e"}

The SDN container keeps restarting; moving this to the Networking team.

Comment 8 Casey Callendrello 2018-10-12 13:52:41 UTC
Look at the logs for the sdn pod (with oc logs). That should tell you what's wrong.
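
For example (a sketch; the pod name below is taken from the earlier kubelet messages and will differ on other clusters):

oc -n openshift-sdn get pods
oc -n openshift-sdn logs sdn-m7mbs -c sdn --previous   # --previous shows the output of the crashed container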

Comment 9 Omer SEN 2018-11-05 12:15:54 UTC
The issue was that NetworkManager was disabled. But pre_deployment.yaml should have a check for that and exit if NetworkManager is disabled.
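
For reference, a manual check that can be run on each host before the playbooks (a sketch; per this comment, NetworkManager has to be enabled and running):

systemctl is-enabled NetworkManager
systemctl is-active NetworkManager
systemctl enable --now NetworkManager   # only if it turned out to be disabled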

