Bug 1923978 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool [NEEDINFO]
Keywords:
Status: CLOSED DUPLICATE of bug 1923979
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.8.0
Assignee: Petr Horáček
QA Contact: Meni Yakove
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-02-02 11:11 UTC by Yossi Segev
Modified: 2021-02-14 09:57 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-14 09:57:28 UTC
Target Upstream Version:
Embargoed:
phoracek: needinfo?


Attachments
nmstate-handler.log (8.69 KB, text/plain)
2021-02-02 11:11 UTC, Yossi Segev

Description Yossi Segev 2021-02-02 11:11:11 UTC
Created attachment 1754341 [details]
nmstate-handler.log

Description of problem:
When configuring a bridge device using the ip (netlink) tool, the nmstate-handler pod on the node enters CrashLoopBackOff.


Version-Release number of selected component (if applicable):
OCP Version: 4.7.0-fc.4
Kubernetes Version: v1.20.0+f0a2ec9
CNV Version: 2.6.0
nmstate version: nmstate-0.3.4-17.el8_3.noarch


How reproducible:
Always


Steps to Reproduce:
1. In the cluster, log in to one of the worker nodes:

[cnv-qe-jenkins@network02-khphv-executor ~]$ oc get nodes -l node-role.kubernetes.io/worker
NAME                             STATUS   ROLES    AGE    VERSION
network02-khphv-worker-0-4zmxs   Ready    worker   2d4h   v1.20.0+d9c52cc
network02-khphv-worker-0-7q8sr   Ready    worker   2d4h   v1.20.0+d9c52cc
network02-khphv-worker-0-cmqsb   Ready    worker   2d4h   v1.20.0+d9c52cc
[cnv-qe-jenkins@network02-khphv-executor ~]$ 
[cnv-qe-jenkins@network02-khphv-executor ~]$ oc debug node/network02-khphv-worker-0-cmqsb
Starting pod/network02-khphv-worker-0-cmqsb-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.3.248
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# 

2. Add a bridge device using the ip tool:
sh-4.4# ip link add name br-test type bridge
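
For reference, a quick way to verify the bridge was created, and to clean it up after reproducing (the delete command is not part of the original reproducer, just an assumed cleanup step):
sh-4.4# ip link show br-test                  # confirm the bridge device exists on the node
sh-4.4# ip link delete br-test type bridge    # optional: remove the test bridge once done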


Actual results:
The nmstate-handler pod on the node enters CrashLoopBackOff state:
[cnv-qe-jenkins@network02-khphv-executor ~]$ oc get pod -n openshift-cnv -l component=kubernetes-nmstate-handler  -o wide
NAME                    READY   STATUS             RESTARTS   AGE     IP              NODE                             NOMINATED NODE   READINESS GATES
nmstate-handler-2s6ff   1/1     Running            0          2d3h    192.168.2.23    network02-khphv-master-2         <none>           <none>
nmstate-handler-5g4jj   1/1     Running            0          2d3h    192.168.0.132   network02-khphv-master-1         <none>           <none>
nmstate-handler-9c8hs   1/1     Running            1          2d3h    192.168.0.14    network02-khphv-worker-0-7q8sr   <none>           <none>
nmstate-handler-ggfn7   1/1     Running            1          2d3h    192.168.2.46    network02-khphv-worker-0-4zmxs   <none>           <none>
nmstate-handler-r2kcd   1/1     Running            0          2d3h    192.168.1.146   network02-khphv-master-0         <none>           <none>
nmstate-handler-vt2ck   0/1     CrashLoopBackOff   18         3h42m   192.168.3.248   network02-khphv-worker-0-cmqsb   <none>           <none>
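
The attached nmstate-handler.log was taken from the crashing pod. A sketch of how equivalent crash data can be collected (pod name taken from the listing above; --previous is assumed here to capture the log of the last crashed container):
[cnv-qe-jenkins@network02-khphv-executor ~]$ oc logs -n openshift-cnv nmstate-handler-vt2ck --previous > nmstate-handler.log
[cnv-qe-jenkins@network02-khphv-executor ~]$ oc describe pod -n openshift-cnv nmstate-handler-vt2ck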


Additional info:
1. nmstate-handler log and journalctl (from the node) are attached.
2. Workaround: restart the crashed pod by deleting it:
[cnv-qe-jenkins@network02-khphv-executor yossi]$ oc delete pod -n openshift-cnv nmstate-handler-vt2ck
pod "nmstate-handler-vt2ck" deleted

Comment 1 Petr Horáček 2021-02-11 13:25:05 UTC
@ysegev I recall we were working on a hotfix for this. Have we left this BZ behind? If it was fixed, I'd move it to ON QA.

Comment 2 Yossi Segev 2021-02-14 09:57:28 UTC

*** This bug has been marked as a duplicate of bug 1923979 ***

