Bug 1564809 - Install failed due to SDN pods crashing
Summary: Install failed due to SDN pods crashing
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Scott Dodson
QA Contact: Weihua Meng
URL:
Whiteboard:
Duplicates: 1565494 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-04-08 02:17 UTC by Weihua Meng
Modified: 2018-04-13 12:34 UTC (History)
8 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-13 12:34:08 UTC
Target Upstream Version:



Description Weihua Meng 2018-04-08 02:17:57 UTC
Description of problem:
Install failed due to the sdn pods crashing:

F0408 02:00:35.283346       1 start_node.go:162] open /etc/origin/node/node-config.yaml: no such file or directory

Version-Release number of the following components:
openshift-ansible-3.10.0-0.16.0.git.0.8925606.el7.noarch.rpm
openshift v3.10.0-0.16.0

How reproducible:
Always

Steps to Reproduce:
1. install ocp 3.10
2. check cluster status

Actual results:
1. Install failed.
2.
# oc version
oc v3.10.0-0.16.0
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://shared-wmengrpm310-master-etcd-1:8443
openshift v3.10.0-0.14.0
kubernetes v1.9.1+a0ce1bc657
[root@shared-wmengrpm310-master-etcd-1 ~]# oc get nodes
NAME                               STATUS     ROLES     AGE       VERSION
shared-wmengrpm310-master-etcd-1   NotReady   master    23m       v1.9.1+a0ce1bc657
shared-wmengrpm310-nrr-1           NotReady   compute   16m       v1.9.1+a0ce1bc657
shared-wmengrpm310-nrr-2           NotReady   compute   16m       v1.9.1+a0ce1bc657

# oc get pods --all-namespaces
NAMESPACE               NAME                                                  READY     STATUS             RESTARTS   AGE
default                 docker-registry-1-deploy                              0/1       Pending            0          16m
default                 registry-console-1-deploy                             0/1       Pending            0          16m
default                 router-1-deploy                                       0/1       Pending            0          16m
kube-system             master-api-shared-wmengrpm310-master-etcd-1           1/1       Running            0          23m
kube-system             master-controllers-shared-wmengrpm310-master-etcd-1   1/1       Running            0          23m
kube-system             master-etcd-shared-wmengrpm310-master-etcd-1          1/1       Running            0          24m
openshift-node          sync-9wzlh                                            1/1       Running            0          23m
openshift-node          sync-djgzd                                            1/1       Running            0          17m
openshift-node          sync-mcjtn                                            1/1       Running            0          17m
openshift-sdn           ovs-2ftfl                                             1/1       Running            0          17m
openshift-sdn           ovs-4nt9s                                             1/1       Running            0          17m
openshift-sdn           ovs-xcxwh                                             1/1       Running            0          23m
openshift-sdn           sdn-cfpcg                                             0/1       CrashLoopBackOff   8          17m
openshift-sdn           sdn-d66pc                                             0/1       CrashLoopBackOff   12         23m
openshift-sdn           sdn-jlw9r                                             0/1       CrashLoopBackOff   8          17m
openshift-web-console   webconsole-6bd4c96bf5-nmw7l                           0/1       Pending            0          16m

[root@shared-wmengrpm310-master-etcd-1 ~]# oc logs sdn-cfpcg -n openshift-sdn
User "sa" set.
Context "default-context" modified.
I0408 02:00:35.283212       1 start_node.go:310] Reading node configuration from /etc/origin/node/node-config.yaml
F0408 02:00:35.283346       1 start_node.go:162] open /etc/origin/node/node-config.yaml: no such file or directory
 
[root@shared-wmengrpm310-nrr-1 ~]# ll /etc/origin/node
total 20
-rwx------. 1 root root 2781 Apr  7 21:02 bootstrap.kubeconfig
-rw-------. 1 root root 1592 Apr  7 20:56 bootstrap-node-config.yaml
drwxr-xr-x. 2 root root  138 Apr  7 21:07 certificates
-rw-r--r--. 1 root root 1070 Apr  7 21:02 client-ca.crt
-rw-------. 1 root root 1935 Apr  7 21:07 node.kubeconfig
drwxr-xr-x. 2 root root    6 Apr  7 20:53 pods
-rw-------. 1 root root   27 Apr  7 20:52 resolv.conf
[root@shared-wmengrpm310-nrr-1 ~]# 

Expected results:
Install succeeds

Comment 1 DeShuai Ma 2018-04-08 16:26:49 UTC
We now use kubelet dynamic configuration, but the file /etc/origin/node/node-config.yaml is not being generated by the openshift-node sync pod.

1. The log of the sync pod in openshift-node is empty, which makes it difficult to debug why the node-config file is not generated.
[root@ip-172-18-14-207 node]# oc get po -n openshift-node
NAME         READY     STATUS    RESTARTS   AGE
sync-b4bgl   1/1       Running   0          5m
sync-gv4bm   1/1       Running   0          21m
[root@ip-172-18-14-207 node]# oc logs sync-b4bgl
[root@ip-172-18-14-207 node]# 

2. As a workaround to get the sdn pods running, we can create the file with:
oc extract --config=/etc/origin/node/node.kubeconfig "cm/${BOOTSTRAP_CONFIG_NAME}" -n openshift-node --to=/etc/origin/node --confirm
The available values for "cm/${BOOTSTRAP_CONFIG_NAME}" are:
[root@ip-172-18-14-207 node]# oc get configmap
NAME                  DATA      AGE
node-config-compute   1         1h
node-config-infra     1         1h
node-config-master    1         1h
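The workaround above can be sketched as a small dry-run script (hypothetical helper, not part of openshift-ansible; the ConfigMap name, node-config-compute here, is an assumption and must be chosen to match the node's group):

```shell
#!/bin/sh
# Sketch of the workaround from comment 1. BOOTSTRAP_CONFIG_NAME must
# match one of the ConfigMaps in the openshift-node namespace;
# node-config-compute is assumed here for a compute node.
BOOTSTRAP_CONFIG_NAME="node-config-compute"

# Build the oc extract invocation that writes node-config.yaml into
# /etc/origin/node, authenticating with the node kubeconfig already on
# disk. The command is printed rather than executed so it can be
# reviewed before being run on each NotReady node.
CMD="oc extract cm/${BOOTSTRAP_CONFIG_NAME} \
--config=/etc/origin/node/node.kubeconfig \
-n openshift-node --to=/etc/origin/node --confirm"
echo "$CMD"
```

Running the printed command directly on each NotReady node creates node-config.yaml and lets the sdn pod start.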

Comment 2 DeShuai Ma 2018-04-09 08:07:06 UTC
In sync.yaml, "BOOTSTRAP_CONFIG_NAME" is defined in "/etc/sysconfig/atomic-openshift-node" rather than "/etc/sysconfig/origin-node".

Comment 3 Johnny Liu 2018-04-09 11:20:30 UTC
This is blocking a lot of installations; the user has to intervene during the installation and run the above workaround command manually before the installer exhausts its retries.

So I added the TestBlocker keyword back to request a fix for this bug ASAP.

Comment 4 Johnny Liu 2018-04-10 06:03:07 UTC
(In reply to DeShuai Ma from comment #2)
> In sync.yaml, The "BOOTSTRAP_CONFIG_NAME" define in
> "/etc/sysconfig/atomic-openshift-node" other than
> "/etc/sysconfig/origin-node"

It is not enough to change /etc/sysconfig/origin-node to /etc/sysconfig/atomic-openshift-node in roles/openshift_node_group/files/sync.yaml; we also have to mount /etc/sysconfig/atomic-openshift-node into the sync pod.
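The extra mount described above could look roughly like the following DaemonSet pod-spec fragment (a hypothetical sketch, not the actual openshift-ansible patch; the volume name host-sysconfig-node is illustrative):

```yaml
# Sketch: expose the host file that defines BOOTSTRAP_CONFIG_NAME
# inside the sync pod so sync.yaml can source it.
spec:
  containers:
  - name: sync
    volumeMounts:
    - name: host-sysconfig-node          # illustrative name
      mountPath: /etc/sysconfig/atomic-openshift-node
      readOnly: true
  volumes:
  - name: host-sysconfig-node
    hostPath:
      path: /etc/sysconfig/atomic-openshift-node
```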

Comment 5 Gan Huang 2018-04-10 09:18:04 UTC
*** Bug 1565494 has been marked as a duplicate of this bug. ***

Comment 6 Weihua Meng 2018-04-12 08:27:10 UTC
The sdn pods are running now, and the nodes are in Ready status.

openshift-ansible-3.10.0-0.20.0.git.0.37bab0f.el7.noarch.rpm
# openshift version
openshift v3.10.0-0.20.0
kubernetes v1.10.0+b81c8f8
etcd 3.2.16


# oc get pods -n openshift-sdn
NAME        READY     STATUS    RESTARTS   AGE
ovs-57pp4   1/1       Running   0          14m
ovs-6fvt9   1/1       Running   0          14m
ovs-n2t8x   1/1       Running   0          20m
sdn-bxbs6   1/1       Running   0          14m
sdn-mp7vc   1/1       Running   0          20m
sdn-pdvnh   1/1       Running   0          14m

# oc get nodes
NAME                              STATUS    ROLES     AGE       VERSION
qe-wmeng20r75n1al-master-etcd-1   Ready     master    33m       v1.10.0+b81c8f8
qe-wmeng20r75n1al-nrr-1           Ready     compute   27m       v1.10.0+b81c8f8
qe-wmeng20r75n1al-nrr-2           Ready     compute   27m       v1.10.0+b81c8f8

Comment 7 Weihua Meng 2018-04-13 00:39:55 UTC
Fixed.

openshift-ansible-3.10.0-0.20.0.git.0.37bab0f.el7.noarch.rpm
# openshift version
openshift v3.10.0-0.20.0
kubernetes v1.10.0+b81c8f8
etcd 3.2.16

