Description of problem:
nmstate fails to set state and reports no status, causing all QE test runs to fail.

Version-Release number of selected component (if applicable):
CNV 2.6

How reproducible:
Always

Steps to Reproduce:
1. Connect to a 2.6 cluster.
2. Run: `oc get nns <node_name> -o yaml`
3. The status is empty.

Actual results:
nmstate doesn't set states.

Expected results:
nmstate runs successfully and the NNS status is populated.

Additional info:
Logs: http://pastebin.test.redhat.com/919550
If needed, we have a 2.6 cluster available.
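The empty-status symptom from the steps above can be checked mechanically. A minimal sketch, assuming the `.status` field has already been fetched with `oc get nns <node_name> -o jsonpath={.status}` (the helper name `check_nns_status` is hypothetical, not part of any tooling mentioned here):

```shell
# Hypothetical helper: report whether an NNS object's .status is empty,
# which is the failure described in this bug.
check_nns_status() {
  if [ -z "$1" ]; then
    echo "BUG: NNS status is empty"
  else
    echo "OK: NNS status present"
  fi
}

# On a cluster you would feed it the real jsonpath output, e.g.:
#   check_nns_status "$(oc get nns <node_name> -o jsonpath={.status})"
# The failure case reproduced here is an empty string:
check_nns_status ""
```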
We have to backport the nmstate pin from CNV 2.5, since OCP 4.7 still ships NetworkManager 1.22 on the nodes: https://code.engineering.redhat.com/gerrit/#/c/218561/
Running the upstream v0.33.0 version, which is built against NetworkManager 1.22, works fine. To try it from one of the nodes, run:

sudo podman run --network=host --privileged -it --volume /run/dbus/system_bus_socket:/run/dbus/system_bus_socket --entrypoint nmstatectl quay.io/nmstate/kubernetes-nmstate-handler:v0.33.0 show
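Since the incompatibility hinges on the NetworkManager 1.22 series being present on the nodes, a quick version check helps decide whether the pinned nmstate build applies. A minimal sketch (the helper `nm_needs_pin` and the 1.22-only rule are assumptions drawn from the backport comment above, not an official check):

```shell
# Hypothetical helper: decide from a NetworkManager version string whether
# the node is on the 1.22 series that requires the pinned nmstate build.
nm_needs_pin() {
  case "$1" in
    1.22.*) echo "needs-pin" ;;
    *)      echo "ok" ;;
  esac
}

# On a node you would feed it the real version, e.g.:
#   nm_needs_pin "$(nmcli --version | awk '{print $NF}')"
nm_needs_pin "1.22.8"
```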
Verified in CNV 2.6.0 with CNAO v2.6.0-5.

1. Verify the status exists in the NNS:

$ oc get nns myakove-hhsbc-worker-0-99vpj -o jsonpath={.status}
{"currentState":{"dns-resolver":{"config":{"search":[],"server":[]},"running":{"search":["openstacklocal"],"server":["192.168.0.11","192.168.0.10"]}},"interfaces":[{"bridge":{},"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"D2:C9:DD:6D:60:43","mtu":1400,"name":"br0","state":"down","type":"ovs-interface"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"FA:16:3E:8F:3B:93","mtu":1450,"name":"ens10","state":"down","type":"ethernet"},{"ipv4":{"address":[{"ip":"192.168.1.244","prefix-length":18},{"ip":"192.168.0.7","prefix-length":32}],"auto-dns":true,"auto-gateway":true,"auto-routes":true,"dhcp":true,"enabled":true},"ipv6":{"address":[{"ip":"fe80::f816:3eff:fe24:9bdd","prefix-length":64}],"auto-dns":true,"auto-gateway":true,"auto-routes":true,"autoconf":true,"dhcp":true,"enabled":true},"lldp":{"enabled":false},"mac-address":"FA:16:3E:24:9B:DD","mtu":1450,"name":"ens3","state":"up","type":"ethernet"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"FA:16:3E:2A:3E:03","mtu":1450,"name":"ens8","state":"down","type":"ethernet"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"FA:16:3E:72:8A:21","mtu":1450,"name":"ens9","state":"down","type":"ethernet"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mtu":65536,"name":"lo","state":"down","type":"unknown"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"B2:8D:BA:50:FC:55","mtu":1400,"name":"tun0","state":"down","type":"ovs-interface"},{"ipv4":{"enabled":false},"ipv6":{"enabled":false},"lldp":{"enabled":false},"mac-address":"F2:2C:FE:2A:C6:1E","mtu":65000,"name":"vxlan_sys_4789","state":"down","type":"vxlan","vxlan":{"base-iface":"","destination-port":4789,"id":0,"remote":""}}],"route-rules":{"config":[]},"routes":{"config":[],"running":[{"destination":"0.0.0.0/0","metric":101,"next-hop-address":"192.168.0.1","next-hop-interface":"ens3","table-id":254},{"destination":"169.254.169.254/32","metric":101,"next-hop-address":"192.168.0.10","next-hop-interface":"ens3","table-id":254},{"destination":"192.168.0.0/18","metric":101,"next-hop-address":"","next-hop-interface":"ens3","table-id":254},{"destination":"fe80::/64","metric":101,"next-hop-address":"","next-hop-interface":"ens3","table-id":254},{"destination":"ff00::/8","metric":256,"next-hop-address":"","next-hop-interface":"ens3","table-id":255}]}},"lastSuccessfulUpdateTime":"2020-12-01T10:57:08Z"}

2. Verify the fixed version, according to the "Fixed In Version" field in the bug report, is the one installed in the cluster:
a. Fixed In Version: cluster-network-addons-operator-container-v2.6.0-5
b. Go to brew and search for the build of this CNAO version. Found: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1396432
c. Note the sha of the CNAO build: registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cluster-network-addons-operator@sha256:341e4fad7b245cf3cdc3df6cb0530578c11a05c353e907444aa9250768ec9d13
d. In the cluster, look up the sha of the installed CNAO:

$ oc get deployment -n openshift-cnv cluster-network-addons-operator -o wide
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                        IMAGES                                                                                                                                                       SELECTOR
cluster-network-addons-operator   1/1     1            1           88m   cluster-network-addons-operator   registry.redhat.io/container-native-virtualization/cluster-network-addons-operator@sha256:341e4fad7b245cf3cdc3df6cb0530578c11a05c353e907444aa9250768ec9d13   name=cluster-network-addons-operator

The sha values in steps c and d match.
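The sha comparison in step 2 can be sketched as a small script. This is a minimal sketch, assuming the two digests have already been captured by hand: `BREW_SHA` from the brew build info (step c) and `DEPLOYED` from the `oc get deployment ... -o wide` output (step d); both values below are the ones quoted above.

```shell
# Digest of the fixed CNAO build, from the brew build info (step c):
BREW_SHA="sha256:341e4fad7b245cf3cdc3df6cb0530578c11a05c353e907444aa9250768ec9d13"
# Image reference of the CNAO actually deployed in the cluster (step d):
DEPLOYED="registry.redhat.io/container-native-virtualization/cluster-network-addons-operator@sha256:341e4fad7b245cf3cdc3df6cb0530578c11a05c353e907444aa9250768ec9d13"

# Strip everything up to and including the '@' to isolate the deployed digest,
# then compare it to the brew digest.
if [ "${DEPLOYED##*@}" = "$BREW_SHA" ]; then
  echo "sha match: fixed CNAO build is deployed"
else
  echo "sha mismatch: cluster is not running the fixed build"
fi
```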
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 2.6.0 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0799