Description of problem:

This happens on OCP 4.5.16 with CNV 2.4.2 -- the customer has a support agreement / exception for that specific version. See below for reproducer steps.

Version-Release number of selected component (if applicable): OCP 4.5.16, CNV 2.4.2

How reproducible: Always, with an interface name that the YAML parser interprets as a float64.

Steps to Reproduce:
1. Install CNV 2.4.2 on OCP 4.5.16 (see reproducer below).
2. On a worker node, create an interface whose name parses as a float64, e.g. `ip link add 60e+02 type dummy`.
3. Delete the nmstate-handler pod on that node so the NodeNetworkState is repopulated.

Actual results: The nmstate-handler pod crash-loops with `panic: interface conversion: interface {} is float64, not string`.

Expected results: The interface name is reported as a string and the pod keeps running.

Additional info: Full reproducer and stack trace analysis below.

reproducer:
=============================================
~~~
[akaris@linux cnv]$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.16    True        False         12m     Cluster version is 4.5.16
[akaris@linux cnv]$ cat cnv.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v2.4.2
  channel: "2.4"
  installPlanApproval: Manual
[akaris@linux cnv]$ oc apply -f cnv.yaml
~~~

Edit and approve the 2.4.2 installplan:

~~~
[akaris@linux cnv]$ oc get installplan
NAME            CSV                                       APPROVAL   APPROVED
install-8nl6r   kubevirt-hyperconverged-operator.v2.4.2   Manual     true
install-zbqpc   kubevirt-hyperconverged-operator.v2.4.3   Manual     false
[akaris@linux cnv]$ cat hyper.yaml
apiVersion: hco.kubevirt.io/v1alpha1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  BareMetalPlatform: false
[akaris@linux cnv]$ oc apply -f hyper.yaml
~~~
=============================================

Once that's done, I can reproduce the nmstate pod issue. The trick lies in using an interface name that can be interpreted as a float64 but that will not be automatically quoted by the kubernetes parser (at least that's my interpretation).
Preparation - install iproute inside a toolbox and create virtual interfaces on a worker node:

~~~
[akaris@linux cnv]$ oc debug node/ip-10-0-154-20.eu-west-1.compute.internal
Starting pod/ip-10-0-154-20eu-west-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.154.20
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# toolbox
Trying to pull registry.redhat.io/rhel8/support-tools...
Getting image source signatures
Copying blob cca21acb641a done
Copying blob 5ee83610639d done
Copying blob d9e72d058dc5 done
Copying config be1f7079a9 done
Writing manifest to image destination
Storing signatures
be1f7079a938a4ab5c1f8b4c7d2dc82b8c60598bb1e248438ced576829f96389
Spawning a container 'toolbox-' with image 'registry.redhat.io/rhel8/support-tools'
Detected RUN label in the container image. Using that as the default...
command: podman run -it --name toolbox- --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=toolbox- -e IMAGE=registry.redhat.io/rhel8/support-tools:latest -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host registry.redhat.io/rhel8/support-tools:latest
[root@ip-10-0-154-20 /]#
[root@ip-10-0-154-20 /]# ip link
bash: ip: command not found
[root@ip-10-0-154-20 /]# yum install iproute -y
Updating Subscription Management repositories.
Unable to read consumer identity
Subscription Manager is operating in container mode.
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Red Hat Universal Base Image 8 (RPMs) - BaseOS             929 kB/s | 772 kB  00:00
Red Hat Universal Base Image 8 (RPMs) - AppStream           16 MB/s | 4.9 MB  00:00
Red Hat Universal Base Image 8 (RPMs) - CodeReady Builder   87 kB/s |  13 kB  00:00
Dependencies resolved.
================================================================================
 Package     Architecture   Version        Repository     Size
================================================================================
Installing:
 iproute     x86_64         5.3.0-5.el8    ubi-8-baseos   665 k
Installing dependencies:
 libmnl      x86_64         1.0.4-6.el8    ubi-8-baseos    30 k

Transaction Summary
================================================================================
Install  2 Packages

Total download size: 696 k
Installed size: 1.9 M
Downloading Packages:
(1/2): libmnl-1.0.4-6.el8.x86_64.rpm    364 kB/s |  30 kB  00:00
(2/2): iproute-5.3.0-5.el8.x86_64.rpm   5.3 MB/s | 665 kB  00:00
--------------------------------------------------------------------------------
Total                                   5.4 MB/s | 696 kB  00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                              1/1
  Installing       : libmnl-1.0.4-6.el8.x86_64    1/2
  Running scriptlet: libmnl-1.0.4-6.el8.x86_64    1/2
  Installing       : iproute-5.3.0-5.el8.x86_64   2/2
  Running scriptlet: iproute-5.3.0-5.el8.x86_64   2/2
  Verifying        : libmnl-1.0.4-6.el8.x86_64    1/2
  Verifying        : iproute-5.3.0-5.el8.x86_64   2/2
Installed products updated.

Installed:
  iproute-5.3.0-5.el8.x86_64  libmnl-1.0.4-6.el8.x86_64

Complete!
~~~

For example, 9999999999 or 1111111111111.1 will be quoted correctly and will not reproduce the issue:

~~~
[root@ip-10-0-154-20 /]# ip link add 9999999999 type dummy
[root@ip-10-0-154-20 /]# ip link ls dev 9999999999
62: 9999999999: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d6:c5:3a:a6:ca:c7 brd ff:ff:ff:ff:ff:ff
[root@ip-10-0-154-20 /]# ip link ls dev 187e15e9860b329
41: 187e15e9860b329@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue master ovs-system state UP mode DEFAULT group default
    link/ether 32:ae:0d:64:5f:48 brd ff:ff:ff:ff:ff:ff link-netns 06e2e636-db29-4869-9b16-a0820be3a1a4
[root@ip-10-0-154-20 /]# ip link add 1111111111111.1 type veth peer 2222222222222.2
~~~

Delete the nmstate pod to repopulate the nns CRD:

~~~
oc delete pod nmstate-handler-xs692
~~~

These will be correctly quoted and the nns will be created - the pod will not crash:

~~~
[akaris@linux cnv]$ oc get nns ip-10-0-154-20.eu-west-1.compute.internal -o yaml | grep name
          f:name: {}
  name: ip-10-0-154-20.eu-west-1.compute.internal
    name: ip-10-0-154-20.eu-west-1.compute.internal
    - name: "1111111111111.1"
    - name: 187e15e9860b329
    - name: 193c943317dbfc9
    - name: "2222222222222.2"
    - name: 269419b1e64320a
    - name: 27b6490791282c7
    - name: 2914b714aa5ebbd
    - name: 32178eafa95aeea
    - name: 489ffd44abee7ea
    - name: 50c9e0168038261
    - name: 5bd487dc7312dc3
    - name: 8db85e8da9bd00c
    - name: 9647a3f85ae9cc3
    - name: "9999999999"
    - name: 9e09e0053f0c541
    - name: br-int
    - name: br-local
    - name: c9eadb630efa062
    - name: cfe3b6bd93f3fe3
    - name: d1f43b7c0f0f61d
    - name: de72896b9df785d
    - name: e642867142e7d39
    - name: ens3
    - name: f6c91e659cddfe8
    - name: genev_sys_6081
    - name: lo
    - name: ovn-k8s-gw0
    - name: ovn-k8s-mp0
~~~

However, 60e+02 is a float64, will be parsed as such, and reproduces the issue:

~~~
[root@ip-10-0-154-20 /]# ip link add 60e+02 type dummy
[root@ip-10-0-154-20 /]# ip link ls | grep 60
5: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 76:4c:b7:c2:54:62 brd ff:ff:ff:ff:ff:ff link-netns 686a7daa-d336-4003-b8c2-848ec063760d
41: 187e15e9860b329@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue master ovs-system state UP mode DEFAULT group default
60: 5bd487dc7312dc3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue master ovs-system state UP mode DEFAULT group default
65: 60e+02: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
[root@ip-10-0-154-20 /]#
~~~

~~~
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-xs692   1/1   Running   1   2m16s   10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc delete pod nmstate-handler-xs692
pod "nmstate-handler-xs692" deleted
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-gm6bw   1/1   Running   0   3s      10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-gm6bw   0/1   Error     2   31s     10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc logs nmstate-handler-gm6bw
{"level":"info","ts":1612896873.3406088,"logger":"cmd","msg":"Operator Version: 0.21.0"}
{"level":"info","ts":1612896873.345091,"logger":"cmd","msg":"Go Version: go1.13.15"}
{"level":"info","ts":1612896873.3479779,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1612896873.3480678,"logger":"cmd","msg":"Version of operator-sdk: v0.15.1"}
{"level":"info","ts":1612896873.3481107,"logger":"cmd","msg":"Try to take exclusive lock on file: /var/k8s_nmstate/handler_lock"}
{"level":"info","ts":1612896873.3485055,"logger":"cmd","msg":"Successfully took nmstate exclusive lock"}
{"level":"info","ts":1612896876.0043983,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612896876.004686,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1612896876.0049677,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1612896876.0053968,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nodenetworkconfigurationpolicy-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1612896876.0057232,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nodenetworkconfigurationpolicy-controller"}
{"level":"info","ts":1612896876.0070198,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"node-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1612896876.007319,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"node-controller"}
{"level":"info","ts":1612896876.0058627,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612896876.1059654,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nodenetworkconfigurationpolicy-controller","worker count":1}
{"level":"info","ts":1612896876.1091166,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"node-controller","worker count":1}
E0209 18:54:36.725557       1 runtime.go:78] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x14ca120), concrete:(*runtime._type)(0x1460040), asserted:(*runtime._type)(0x1483a00), missingMethod:""} (interface conversion: interface {} is float64, not string)
goroutine 331 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x15050c0, 0xc0006e8300)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x15050c0, 0xc0006e8300)
    /usr/lib/golang/src/runtime/panic.go:679 +0x1b2
github.com/nmstate/kubernetes-nmstate/pkg/helper.filterOut(0xc0007a0800, 0x154b, 0x1800, 0x7f753b2d21d0, 0xc000445900, 0x1800, 0x0, 0x18ee540, 0xc00003e0a8, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:211 +0x51a
github.com/nmstate/kubernetes-nmstate/pkg/helper.UpdateCurrentState(0x1907400, 0xc000295f20, 0xc0004211e0, 0x0, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:113 +0xc1
github.com/nmstate/kubernetes-nmstate/pkg/helper.CreateOrUpdateNodeNetworkState(0x1907400, 0xc000295f20, 0xc0002bf800, 0x0, 0x0, 0xc000490420, 0x29, 0x18c2540, 0xc0002bf800)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:103 +0x1d1
github.com/nmstate/kubernetes-nmstate/pkg/controller/node.(*ReconcileNode).Reconcile(0xc0002b51c0, 0x0, 0x0, 0xc000490420, 0x29, 0xc00074fcd8, 0xc000440240, 0xc0004401b8, 0x18c6560)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/controller/node/node_controller.go:110 +0x322
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00076ed80, 0x154b560, 0xc0006c0700, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00076ed80, 0x1169000)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00076ed80)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0006b6620)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b6620, 0x3b9aca00, 0x0, 0xc0001af701, 0xc000604660)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc0006b6620, 0x3b9aca00, 0xc000604660)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x328
panic: interface conversion: interface {} is float64, not string [recovered]
    panic: interface conversion: interface {} is float64, not string

goroutine 331 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x15050c0, 0xc0006e8300)
    /usr/lib/golang/src/runtime/panic.go:679 +0x1b2
github.com/nmstate/kubernetes-nmstate/pkg/helper.filterOut(0xc0007a0800, 0x154b, 0x1800, 0x7f753b2d21d0, 0xc000445900, 0x1800, 0x0, 0x18ee540, 0xc00003e0a8, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:211 +0x51a
github.com/nmstate/kubernetes-nmstate/pkg/helper.UpdateCurrentState(0x1907400, 0xc000295f20, 0xc0004211e0, 0x0, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:113 +0xc1
github.com/nmstate/kubernetes-nmstate/pkg/helper.CreateOrUpdateNodeNetworkState(0x1907400, 0xc000295f20, 0xc0002bf800, 0x0, 0x0, 0xc000490420, 0x29, 0x18c2540, 0xc0002bf800)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:103 +0x1d1
github.com/nmstate/kubernetes-nmstate/pkg/controller/node.(*ReconcileNode).Reconcile(0xc0002b51c0, 0x0, 0x0, 0xc000490420, 0x29, 0xc00074fcd8, 0xc000440240, 0xc0004401b8, 0x18c6560)
    /go/src/github.com/nmstate/kubernetes-nmstate/pkg/controller/node/node_controller.go:110 +0x322
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00076ed80, 0x154b560, 0xc0006c0700, 0x0)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00076ed80, 0x1169000)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00076ed80)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0006b6620)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b6620, 0x3b9aca00, 0x0, 0xc0001af701, 0xc000604660)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc0006b6620, 0x3b9aca00, 0xc000604660)
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x328
[akaris@linux cnv]$
~~~
The stacktrace interpreted:

https://github.com/nmstate/kubernetes-nmstate/blob/release-0.31/pkg/helper/client.go#L103
  |
  v
https://github.com/nmstate/kubernetes-nmstate/blob/release-0.31/pkg/controller/node/node_controller.go#L108
  |
  v
https://github.com/nmstate/kubernetes-nmstate/blob/8de6a0202b5d4a7313abed22ac13a70b0e9761b6/pkg/helper/client.go#L113

~~~
2021-02-09T15:05:56.820123631Z github.com/nmstate/kubernetes-nmstate/pkg/helper.UpdateCurrentState(0x1907400, 0xc00079cc00, 0xc0008a8f20, 0x0, 0x0)
2021-02-09T15:05:56.820138258Z     /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:113 +0xc1
~~~

~~~
func UpdateCurrentState(client client.Client, nodeNetworkState *nmstatev1alpha1.NodeNetworkState) error {
	observedStateRaw, err := nmstatectl.Show()
	if err != nil {
		return errors.Wrap(err, "error running nmstatectl show")
	}

	observedState := nmstatev1alpha1.State{Raw: []byte(observedStateRaw)}

	stateToReport, err := filterOut(observedState, interfacesFilterGlob)
	if err != nil {
		fmt.Printf("failed filtering out interfaces from NodeNetworkState, keeping orignal content, please fix the glob: %v", err)
		stateToReport = observedState
	}

	nodeNetworkState.Status.CurrentState = stateToReport
	nodeNetworkState.Status.LastSuccessfulUpdateTime = metav1.Time{Time: time.Now()}

	err = client.Status().Update(context.Background(), nodeNetworkState)
	if err != nil {
		// Request object not found, could have been deleted after reconcile request.
		if !apierrors.IsNotFound(err) {
			return errors.Wrap(err, "Request object not found, could have been deleted after reconcile request")
		}
	}

	return nil
}
~~~
  |
  v
https://github.com/nmstate/kubernetes-nmstate/blob/8de6a0202b5d4a7313abed22ac13a70b0e9761b6/pkg/helper/client.go#L211

~~~
	for _, iface := range interfaces.([]interface{}) {
		name := iface.(map[string]interface{})["name"]
		if !interfacesFilterGlob.Match(name.(string)) {
			filteredInterfaces = append(filteredInterfaces, iface)
		}
	}
~~~

name.(string) is a type assertion: https://tour.golang.org/methods/15 ---> "If i does not hold a T, the statement will trigger a panic."

So, if name does not hold a string, this will panic:

~~~
2021-02-09T15:05:56.820104175Z github.com/nmstate/kubernetes-nmstate/pkg/helper.filterOut(0xc00109c800, 0x418f, 0x4800, 0x7fb2fb343030, 0xc00024ced0, 0x4800, 0x0, 0x18ee540, 0xc000178060, 0x0)
2021-02-09T15:05:56.820115692Z     /go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:211 +0x51a
~~~

Indeed we panic here with:

~~~
2021-02-09T15:05:56.82007285Z panic: interface conversion: interface {} is float64, not string [recovered]
2021-02-09T15:05:56.82007285Z     panic: interface conversion: interface {} is float64, not string
~~~
  |
  v
~~~
2021-02-09T15:05:56.82007285Z k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
2021-02-09T15:05:56.82007285Z     /go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
2021-02-09T15:05:56.820104175Z panic(0x15050c0, 0xc000a47530)
2021-02-09T15:05:56.820104175Z     /usr/lib/golang/src/runtime/panic.go:679 +0x1b2
~~~

------------------------------------------------------------------------

~~~
func filterOut(currentState nmstatev1alpha1.State, interfacesFilterGlob glob.Glob) (nmstatev1alpha1.State, error) {
	if interfacesFilterGlob.Match("") {
		return currentState, nil
	}
	var state map[string]interface{}
	err := yaml.Unmarshal(currentState.Raw, &state)
	if err != nil {
		return currentState, err
	}
	interfaces := state["interfaces"]
	var filteredInterfaces []interface{}

	for _, iface := range interfaces.([]interface{}) {
		name := iface.(map[string]interface{})["name"]
		if !interfacesFilterGlob.Match(name.(string)) {
			filteredInterfaces = append(filteredInterfaces, iface)
		}
	}

	state["interfaces"] = filteredInterfaces
	filteredState, err := yaml.Marshal(state)
	if err != nil {
		return currentState, err
	}

	return nmstatev1alpha1.State{Raw: filteredState}, nil
}
~~~

We panic while cycling through state["interfaces"]: one of the interfaces' ["name"] fields is a float64 and not a string, so it fails the type assertion. We get that currentState from `nmstatectl show` in UpdateCurrentState, and the idea is to filter out interfaces that match the glob. That is where we panic, because the name of the interface is reported as a float64. In detail:

- observedStateRaw, err := nmstatectl.Show() returns a string (https://github.com/nmstate/kubernetes-nmstate/blob/8de6a0202b5d4a7313abed22ac13a70b0e9761b6/pkg/nmstatectl/nmstatectl.go#L55)
- we convert that string into a []byte: observedState := nmstatev1alpha1.State{Raw: []byte(observedStateRaw)}
- we unmarshal with yaml.Unmarshal(currentState.Raw, &state) into a map[string]interface{}
- we assert that state["interfaces"] is a []interface{} and that each entry is a map[string]interface{} with a "name" key
- the value of that "name" key must be a string for name.(string) to succeed - but here it is clearly a float64
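The float64 typing is easy to demonstrate outside the operator. sigs.k8s.io/yaml works by converting YAML to JSON and decoding with encoding/json, so a plain encoding/json round trip shows the same behavior; the state document below is a hypothetical minimal example, not real nmstatectl output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dynamicNameTypes unmarshals a state document the way filterOut does and
// reports the dynamic Go type of each interface "name" value.
func dynamicNameTypes(raw []byte) ([]string, error) {
	var state map[string]interface{}
	if err := json.Unmarshal(raw, &state); err != nil {
		return nil, err
	}
	types := []string{}
	for _, iface := range state["interfaces"].([]interface{}) {
		// This is where the operator does name.(string); for the second
		// entry below that unchecked assertion would panic, because the
		// decoder typed the value as float64.
		name := iface.(map[string]interface{})["name"]
		types = append(types, fmt.Sprintf("%T", name))
	}
	return types, nil
}

func main() {
	raw := []byte(`{"interfaces":[{"name":"ens3"},{"name":60e+02}]}`)
	types, err := dynamicNameTypes(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(types) // prints: [string float64]
}
```

The comma-ok form of the assertion (`s, ok := name.(string)`) would reveal the mismatch without panicking, which is exactly what the crashing code path does not do.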
This means that funky but valid interface names such as 60e+02, which YAML parses as a float64 (see the reproducer above), will lead to a crash.
This changed in upstream master, but the name.(string) assertion is still in place, so this panic might still get triggered:

https://github.com/nmstate/kubernetes-nmstate/blob/master/pkg/helper/client.go#L91

~~~
func CreateOrUpdateNodeNetworkState(client client.Client, node *corev1.Node, namespace client.ObjectKey, observedState shared.State) error {
	nnsInstance := &nmstatev1beta1.NodeNetworkState{}
	err := client.Get(context.TODO(), namespace, nnsInstance)
	if err != nil {
		if !apierrors.IsNotFound(err) {
			return errors.Wrap(err, "Failed to get nmstate")
		} else {
			nnsInstance, err = InitializeNodeNetworkState(client, node)
			if err != nil {
				return err
			}
		}
	}

	return UpdateCurrentState(client, nnsInstance, observedState)
}

func UpdateCurrentState(client client.Client, nodeNetworkState *nmstatev1beta1.NodeNetworkState, observedState shared.State) error {
	if observedState.String() == nodeNetworkState.Status.CurrentState.String() {
		log.Info("Skipping NodeNetworkState update, node network configuration not changed")
		return nil
	}

	nodeNetworkState.Status.CurrentState = observedState
	nodeNetworkState.Status.LastSuccessfulUpdateTime = metav1.Time{Time: time.Now()}

	err := client.Status().Update(context.Background(), nodeNetworkState)
	if err != nil {
		if apierrors.IsNotFound(err) {
			return errors.Wrap(err, "Request object not found, could have been deleted after reconcile request")
		} else {
			return errors.Wrap(err, "Error updating nodeNetworkState")
		}
	}

	return nil
}
~~~

https://github.com/nmstate/kubernetes-nmstate/blob/47251a5248c6fb6c3fc6ff49bbbd797d984d54dd/controllers/nodenetworkstate_controller.go#L68

~~~
	currentStateRaw, err := nmstatectl.Show()
	if err != nil {
		// We cannot call nmstatectl show let's reconcile again
		return ctrl.Result{}, err
	}

	currentState, err := state.FilterOut(shared.NewState(currentStateRaw))
	if err != nil {
		return ctrl.Result{}, err
	}

	nmstate.CreateOrUpdateNodeNetworkState(r.Client, node, request.NamespacedName, currentState)
	if err != nil {
		err = errors.Wrap(err, "error at node reconcile creating NodeNetworkStateNetworkState")
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
~~~

But following down the rabbit hole, that logic still seems to be the same:

https://github.com/nmstate/kubernetes-nmstate/blob/master/pkg/state/filter.go

~~~
func filterOutInterfaces(state map[string]interface{}, interfacesFilterGlob glob.Glob) {
	interfaces := state["interfaces"]
	filteredInterfaces := []interface{}{}

	for _, iface := range interfaces.([]interface{}) {
		name := iface.(map[string]interface{})["name"]
		if !interfacesFilterGlob.Match(name.(string)) {
			filterOutDynamicAttributes(iface.(map[string]interface{}))
			filteredInterfaces = append(filteredInterfaces, iface)
		}
	}
	state["interfaces"] = filteredInterfaces
}

func filterOut(currentState shared.State, interfacesFilterGlob glob.Glob) (shared.State, error) {
	var state map[string]interface{}
	err := yaml.Unmarshal(currentState.Raw, &state)
	if err != nil {
		return currentState, err
	}
	filterOutInterfaces(state, interfacesFilterGlob)
	filterOutRoutes("running", state, interfacesFilterGlob)
	filterOutRoutes("config", state, interfacesFilterGlob)
	filteredState, err := yaml.Marshal(state)
	if err != nil {
		return currentState, err
	}
	return shared.NewState(string(filteredState)), nil
}
~~~
The problem is that upon unmarshalling, we give the `yaml` package (yaml "sigs.k8s.io/yaml") free rein over how it interprets the data that is passed to it, due to the interface{}:

~~~
	var state map[string]interface{}
	err := yaml.Unmarshal(currentState.Raw, &state)
~~~

But we then boldly expect that the name was interpreted as a string:

~~~
func filterOutInterfaces(state map[string]interface{}, interfacesFilterGlob glob.Glob) {
	interfaces := state["interfaces"]
	filteredInterfaces := []interface{}{}

	for _, iface := range interfaces.([]interface{}) {
		name := iface.(map[string]interface{})["name"]
		if !interfacesFilterGlob.Match(name.(string)) {
			filterOutDynamicAttributes(iface.(map[string]interface{}))
~~~
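One defensive way out, sketched below, is to use the comma-ok form of the assertion and render non-string names back to text instead of panicking. This is not the project's actual patch: the `match` predicate stands in for the glob.Glob used upstream, and encoding/json stands in for sigs.k8s.io/yaml (which converts YAML to JSON before decoding anyway):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// safeFilterOut filters interfaces like filterOut does, but tolerates
// "name" values that the decoder typed as something other than string,
// e.g. a float64 for an interface literally named 60e+02.
func safeFilterOut(raw []byte, match func(string) bool) ([]byte, error) {
	var state map[string]interface{}
	if err := json.Unmarshal(raw, &state); err != nil {
		return raw, err
	}
	ifaces, ok := state["interfaces"].([]interface{})
	if !ok {
		return raw, fmt.Errorf("state has no interfaces list")
	}
	filtered := []interface{}{}
	for _, iface := range ifaces {
		m, ok := iface.(map[string]interface{})
		if !ok {
			continue
		}
		name, ok := m["name"].(string)
		if !ok {
			// The comma-ok assertion failed: render the value (e.g. a
			// float64) as text instead of crashing the handler.
			name = fmt.Sprintf("%v", m["name"])
		}
		if !match(name) {
			filtered = append(filtered, iface)
		}
	}
	state["interfaces"] = filtered
	return json.Marshal(state)
}

func main() {
	raw := []byte(`{"interfaces":[{"name":"veth1"},{"name":60e+02}]}`)
	out, err := safeFilterOut(raw, func(name string) bool { return name == "veth1" })
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints: {"interfaces":[{"name":6000}]}
}
```

Note that even with the panic avoided, the rendered name ("6000") no longer matches the on-host interface name ("60e+02"); the real fix needs the YAML layer to keep such names as strings in the first place.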
We literally hit this in production at the customer site. The customer has veth interfaces with valid names that can also be interpreted as float64:

~~~
[akaris@supportshell -]$ grep name 0050-nmstate-test.txt | egrep 'name: [0-9]+e[0-9]+'
    - name: 040e0d0e654649b
    - name: 0e7480436552138   <-------------- that looks like a valid float to me
    - name: 78e7c0f52ada0a8
    - name: 7e6e370940968ba
~~~

Test in my lab:

~~~
[root@ip-10-0-154-20 /]# ip link add 0e7480436552138 type dummy
[root@ip-10-0-154-20 /]#
~~~

~~~
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-8l6bl   1/1   Running   0   3m52s   10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc delete pod nmstate-handler-8l6bl
pod "nmstate-handler-8l6bl" deleted
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-kfb9s   1/1   Running   0   3s      10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-kfb9s   1/1   Running   0   5s      10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc get pods -o wide | grep ip-10-0-154-20 | grep nmstate
nmstate-handler-kfb9s   0/1   Error     0   6s      10.0.154.20   ip-10-0-154-20.eu-west-1.compute.internal   <none>   <none>
[akaris@linux cnv]$ oc logs nmstate-handler-kfb9s
{"level":"info","ts":1612899198.9794462,"logger":"cmd","msg":"Operator Version: 0.21.0"}
{"level":"info","ts":1612899198.9796498,"logger":"cmd","msg":"Go Version: go1.13.15"}
{"level":"info","ts":1612899198.9796731,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1612899198.9796817,"logger":"cmd","msg":"Version of operator-sdk: v0.15.1"}
{"level":"info","ts":1612899198.9796915,"logger":"cmd","msg":"Try to take exclusive lock on file: /var/k8s_nmstate/handler_lock"}
{"level":"info","ts":1612899198.9801002,"logger":"cmd","msg":"Successfully took nmstate exclusive lock"}
~~~
{"level":"info","ts":1612899201.637689,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612899201.6385236,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1612899201.6387656,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1612899201.6390276,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612899201.6390562,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nodenetworkconfigurationpolicy-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1612899201.6391609,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"node-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1612899201.7395442,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nodenetworkconfigurationpolicy-controller"}
{"level":"info","ts":1612899201.7395756,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nodenetworkconfigurationpolicy-controller","worker count":1}
{"level":"info","ts":1612899201.7400901,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"node-controller"}
{"level":"info","ts":1612899201.740117,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"node-controller","worker count":1}
E0209 19:33:22.394992 1 runtime.go:78] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x14ca120), concrete:(*runtime._type)(0x1460040), asserted:(*runtime._type)(0x1483a00), missingMethod:""} (interface conversion: interface {} is float64, not string)
goroutine 310 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x15050c0, 0xc00010af90)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x15050c0, 0xc00010af90)
	/usr/lib/golang/src/runtime/panic.go:679 +0x1b2
github.com/nmstate/kubernetes-nmstate/pkg/helper.filterOut(0xc0003c1800, 0x1554, 0x1800, 0x7ff889f774a0, 0xc00010dbb0, 0x1800, 0x0, 0x18ee540, 0xc00003e0a8, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:211 +0x51a
github.com/nmstate/kubernetes-nmstate/pkg/helper.UpdateCurrentState(0x1907400, 0xc000219e90, 0xc0004f0420, 0x0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:113 +0xc1
github.com/nmstate/kubernetes-nmstate/pkg/helper.CreateOrUpdateNodeNetworkState(0x1907400, 0xc000219e90, 0xc000016300, 0x0, 0x0, 0xc00048f290, 0x29, 0x18c2540, 0xc000016300)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:103 +0x1d1
github.com/nmstate/kubernetes-nmstate/pkg/controller/node.(*ReconcileNode).Reconcile(0xc000100120, 0x0, 0x0, 0xc00048f290, 0x29, 0xc000624cd8, 0xc0000e4240, 0xc0000e41b8, 0x18c6560)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/controller/node/node_controller.go:110 +0x322
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000180000, 0x154b560, 0xc0001003c0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000180000, 0x100)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000180000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000297920)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000297920, 0x3b9aca00, 0x0, 0xc000297701, 0xc000320000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000297920, 0x3b9aca00, 0xc000320000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x328
panic: interface conversion: interface {} is float64, not string [recovered]
	panic: interface conversion: interface {} is float64, not string

goroutine 310 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x15050c0, 0xc00010af90)
	/usr/lib/golang/src/runtime/panic.go:679 +0x1b2
github.com/nmstate/kubernetes-nmstate/pkg/helper.filterOut(0xc0003c1800, 0x1554, 0x1800, 0x7ff889f774a0, 0xc00010dbb0, 0x1800, 0x0, 0x18ee540, 0xc00003e0a8, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:211 +0x51a
github.com/nmstate/kubernetes-nmstate/pkg/helper.UpdateCurrentState(0x1907400, 0xc000219e90, 0xc0004f0420, 0x0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:113 +0xc1
github.com/nmstate/kubernetes-nmstate/pkg/helper.CreateOrUpdateNodeNetworkState(0x1907400, 0xc000219e90, 0xc000016300, 0x0, 0x0, 0xc00048f290, 0x29, 0x18c2540, 0xc000016300)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/helper/client.go:103 +0x1d1
github.com/nmstate/kubernetes-nmstate/pkg/controller/node.(*ReconcileNode).Reconcile(0xc000100120, 0x0, 0x0, 0xc00048f290, 0x29, 0xc000624cd8, 0xc0000e4240, 0xc0000e41b8, 0x18c6560)
	/go/src/github.com/nmstate/kubernetes-nmstate/pkg/controller/node/node_controller.go:110 +0x322
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000180000, 0x154b560, 0xc0001003c0, 0x0)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000180000, 0x100)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000180000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000297920)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000297920, 0x3b9aca00, 0x0, 0xc000297701, 0xc000320000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000297920, 0x3b9aca00, 0xc000320000)
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/nmstate/kubernetes-nmstate/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x328
[akaris@linux cnv]$
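The trace above points at an unchecked type assertion in `filterOut` (client.go:211): the deserializer hands back the unquoted interface name (e.g. `60e+02`) as a `float64` inside an `interface{}`, and asserting `.(string)` on it panics. A minimal sketch of the non-panicking comma-ok pattern, using `encoding/json` as a stand-in parser; the function and field names here are illustrative, not the actual kubernetes-nmstate code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// interfaceName returns the "name" field as a string. If the parser produced
// a non-string type (e.g. float64 for an unquoted scalar such as 60e+02),
// the comma-ok assertion fails gracefully instead of panicking, and we fall
// back to formatting the value.
func interfaceName(iface map[string]interface{}) string {
	name, ok := iface["name"].(string) // comma-ok form: no panic on mismatch
	if !ok {
		return fmt.Sprintf("%v", iface["name"])
	}
	return name
}

func main() {
	var parsed map[string]interface{}
	// An interface named 60e+02 round-trips through the parser as a number:
	if err := json.Unmarshal([]byte(`{"name": 60e+02}`), &parsed); err != nil {
		panic(err)
	}
	fmt.Println(interfaceName(parsed)) // prints "6000" instead of panicking
}
```

The single-value form `v := x.(string)` panics on mismatch, while the two-value form `v, ok := x.(string)` never does, which is why it is the usual defensive choice in code that walks parser output.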
Wow! Thanks for the detailed report, it's really helpful. I will check when we are going to release another 2.4.z. At the very least, we should not panic when we fail to filter. We still have to determine whether this should be handled in nmstatectl itself, in the YAML parser, or only in our filtering code.
@Andreas, would you please help me identify the priority of this? Are you fine using the workaround until 4.8, or does this require a backport all the way to 2.4? Or are you moving to 2.5 soon?
Verified the scenario on OCP/CNV 4.8 (cluster-network-addons-operator version: v4.8.0-13).

Scenario tested:

1. Debug the worker node: `oc debug node/onash-48-lmltb-worker-0-c887v`

2. Add an interface named "60e+02", which parses as a float64:
~~~
sh-4.4# ip link add 60e+02 type dummy
sh-4.4# ip link ls | grep 60
27: vethf608d455@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1392 qdisc noqueue master ovs-system state UP mode DEFAULT group default
35: veth4c01d060@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1392 qdisc noqueue master ovs-system state UP mode DEFAULT group default
40: veth60b03854@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1392 qdisc noqueue master ovs-system state UP mode DEFAULT group default
46: veth27e260a2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1392 qdisc noqueue master ovs-system state UP mode DEFAULT group default
50: 60e+02: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
~~~

3. Get the nmstate pod on that node:
~~~
[cnv-qe-jenkins@localhost ~]$ oc get pods -A -o wide | grep 192.168.0.40 | grep nmstate
openshift-cnv   nmstate-handler-25dtn   1/1   Running   1   46m   192.168.0.40   onash-48-lmltb-worker-0-c887v   <none>   <none>
~~~

4. Delete the pod:
~~~
[cnv-qe-jenkins@localhost ~]$ oc delete pod -n openshift-cnv nmstate-handler-25dtn
pod "nmstate-handler-25dtn" deleted
~~~

5. Verify that the replacement pod DOES NOT crash as described above and is running:
~~~
[cnv-qe-jenkins@localhost ~]$ oc get pods -A -o wide | grep 192.168.0.40 | grep nmstate
openshift-cnv   nmstate-handler-c8z6v   1/1   Running   0   10m   192.168.0.40   onash-48-lmltb-worker-0-c887v   <none>   <none>
~~~

6. Check the logs as well (no `panic`):
~~~
[cnv-qe-jenkins@localhost ~]$ oc logs -n openshift-cnv nmstate-handler-c8z6v
{"level":"info","ts":1618912712.8501925,"logger":"setup","msg":"Try to take exclusive lock on file: /var/k8s_nmstate/handler_lock"}
{"level":"info","ts":1618912712.850407,"logger":"setup","msg":"Successfully took nmstate exclusive lock"}
I0420 09:58:33.901614 1 request.go:655] Throttling request took 1.040827217s, request: GET:https://172.30.0.1:443/apis/upload.cdi.kubevirt.io/v1beta1?timeout=32s
{"level":"info","ts":1618912717.090017,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.09006,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0900927,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0900989,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901105,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901153,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901191,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901227,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.09015,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901594,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1618912717.0901868,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1618912717.0914314,"logger":"controller-runtime.manager.controller.node","msg":"Starting EventSource","reconciler group":"","reconciler kind":"Node","source":"kind source: /, Kind="}
{"level":"info","ts":1618912717.0915864,"logger":"controller-runtime.manager.controller.node","msg":"Starting EventSource","reconciler group":"","reconciler kind":"Node","source":"kind source: /, Kind="}
{"level":"info","ts":1618912717.0916562,"logger":"controller-runtime.manager.controller.nodenetworkstate","msg":"Starting EventSource","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkState","source":"kind source: /, Kind="}
{"level":"info","ts":1618912717.091704,"logger":"controller-runtime.manager.controller.nodenetworkconfigurationpolicy","msg":"Starting EventSource","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkConfigurationPolicy","source":"kind source: /, Kind="}
{"level":"info","ts":1618912717.1944938,"logger":"controller-runtime.manager.controller.nodenetworkconfigurationpolicy","msg":"Starting Controller","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkConfigurationPolicy"}
{"level":"info","ts":1618912717.1945503,"logger":"controller-runtime.manager.controller.node","msg":"Starting EventSource","reconciler group":"","reconciler kind":"Node","source":"kind source: /, Kind="}
{"level":"info","ts":1618912717.1945732,"logger":"controller-runtime.manager.controller.nodenetworkconfigurationpolicy","msg":"Starting workers","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkConfigurationPolicy","worker count":1}
{"level":"info","ts":1618912717.1944983,"logger":"controller-runtime.manager.controller.node","msg":"Starting Controller","reconciler group":"","reconciler kind":"Node"}
{"level":"info","ts":1618912717.1946151,"logger":"controller-runtime.manager.controller.node","msg":"Starting Controller","reconciler group":"","reconciler kind":"Node"}
{"level":"info","ts":1618912717.1946301,"logger":"controller-runtime.manager.controller.node","msg":"Starting workers","reconciler group":"","reconciler kind":"Node","worker count":1}
{"level":"info","ts":1618912717.1946223,"logger":"controller-runtime.manager.controller.node","msg":"Starting workers","reconciler group":"","reconciler kind":"Node","worker count":1}
{"level":"info","ts":1618912717.194498,"logger":"controller-runtime.manager.controller.nodenetworkstate","msg":"Starting Controller","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkState"}
{"level":"info","ts":1618912717.1947732,"logger":"controller-runtime.manager.controller.nodenetworkstate","msg":"Starting workers","reconciler group":"nmstate.io","reconciler kind":"NodeNetworkState","worker count":1}
{"level":"info","ts":1618912717.8502464,"logger":"controllers.Node","msg":"Network configuration changed, updating NodeNetworkState"}
{"level":"info","ts":1618912717.8504934,"logger":"client","msg":"Skipping NodeNetworkState update, node network configuration not changed"}
{"level":"info","ts":1618913027.629953,"logger":"controllers.Node","msg":"Network configuration changed, updating NodeNetworkState"}
{"level":"info","ts":1618913218.5921633,"logger":"controllers.Node","msg":"Network configuration changed, updating NodeNetworkState"}
[cnv-qe-jenkins@localhost ~]$
~~~
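For reference, the root cause is easy to reproduce outside the cluster: Go's generic decoders map an unquoted scalar such as `60e+02` to `float64`. The sketch below (an assumption-level illustration using the stdlib `encoding/json` as a stand-in for the YAML path; the nmstate code itself uses a YAML library) shows the default behavior, plus `json.Decoder.UseNumber`, which keeps such scalars as string-backed `json.Number` values:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// decodeNameType decodes the JSON document and reports the Go type that the
// "name" field ends up with; useNumber toggles json.Decoder.UseNumber.
func decodeNameType(raw []byte, useNumber bool) string {
	dec := json.NewDecoder(bytes.NewReader(raw))
	if useNumber {
		// Numbers are kept as json.Number (a string type) instead of float64.
		dec.UseNumber()
	}
	var v map[string]interface{}
	if err := dec.Decode(&v); err != nil {
		return "error: " + err.Error()
	}
	return fmt.Sprintf("%T", v["name"])
}

func main() {
	raw := []byte(`{"name": 60e+02}`)
	fmt.Println(decodeNameType(raw, false)) // float64
	fmt.Println(decodeNameType(raw, true))  // json.Number
}
```

With `UseNumber`, a later consumer can still get a textual interface name via `string(v["name"].(json.Number))`, which is one way a parser-side fix could avoid the float64/string mismatch entirely.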
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2920