ovn-kubernetes changed some OVS port names, so we need to update /etc/NetworkManager/conf.d/sdn.conf to match.
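For reference, this is the sdn.conf stanza as captured on the nodes in the verification transcript below; the exact interface list may vary between releases. It tells NetworkManager to leave the renamed ovn-k8s-* interfaces unmanaged:

```
# ignore known SDN-managed devices
[device]
match-device=interface-name:br-int;interface-name:br-local;interface-name:br-nexthop,interface-name:ovn-k8s-*,interface-name:k8s-*;interface-name:tun0;interface-name:br0;driver:veth
managed=0
```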
Tested and verified in 4.5.0-0.nightly-2020-04-16-063730:

[weliang@weliang ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-04-16-063730   True        False         18m     Cluster version is 4.5.0-0.nightly-2020-04-16-063730

[weliang@weliang ~]$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-128-168.us-east-2.compute.internal   Ready    worker   32m   v1.18.0-rc.1
ip-10-0-143-157.us-east-2.compute.internal   Ready    master   43m   v1.18.0-rc.1
ip-10-0-146-84.us-east-2.compute.internal    Ready    master   43m   v1.18.0-rc.1
ip-10-0-151-22.us-east-2.compute.internal    Ready    worker   32m   v1.18.0-rc.1
ip-10-0-167-234.us-east-2.compute.internal   Ready    master   43m   v1.18.0-rc.1
ip-10-0-167-40.us-east-2.compute.internal    Ready    worker   32m   v1.18.0-rc.1

[weliang@weliang ~]$ oc debug node/ip-10-0-128-168.us-east-2.compute.internal
Starting pod/ip-10-0-128-168us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
chroot /host
Pod IP: 10.0.128.168
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u kubelet | grep ovn-k8s-gw0
sh-4.4# journalctl -u kubelet | grep ovn-k8s-mp0
sh-4.4# cat /etc/NetworkManager/conf.d/sdn.conf
# ignore known SDN-managed devices
[device]
match-device=interface-name:br-int;interface-name:br-local;interface-name:br-nexthop,interface-name:ovn-k8s-*,interface-name:k8s-*;interface-name:tun0;interface-name:br0;driver:veth
managed=0
sh-4.4# exit
exit
sh-4.2# exit
exit
Removing debug pod ...

[weliang@weliang ~]$ oc debug node/ip-10-0-143-157.us-east-2.compute.internal
Starting pod/ip-10-0-143-157us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.143.157
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u kubelet | grep ovn-k8s
sh-4.4# cat /etc/NetworkManager/conf.d/sdn.conf
# ignore known SDN-managed devices
[device]
match-device=interface-name:br-int;interface-name:br-local;interface-name:br-nexthop,interface-name:ovn-k8s-*,interface-name:k8s-*;interface-name:tun0;interface-name:br0;driver:veth
managed=0
sh-4.4#

[weliang@weliang ~]$ oc rsh ovnkube-master-4mnj4
Defaulting container name to northd.
Use 'oc describe pod/ovnkube-master-4mnj4 -n openshift-ovn-kubernetes' to see all of the containers in this pod.
sh-4.2# ip add show ovn-k8s-gw0
7: ovn-k8s-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:a9:fe:21:01 brd ff:ff:ff:ff:ff:ff
    inet 169.254.33.1/24 brd 169.254.33.255 scope global ovn-k8s-gw0
       valid_lft forever preferred_lft forever
sh-4.2# ip add show ovn-k8s-mp0
8: ovn-k8s-mp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 5e:33:53:cb:db:20 brd ff:ff:ff:ff:ff:ff
    inet 10.129.0.2/23 brd 10.129.1.255 scope global ovn-k8s-mp0
       valid_lft forever preferred_lft forever
sh-4.2# exit
exit

[weliang@weliang ~]$ oc rsh ovnkube-node-stkvj
Defaulting container name to ovn-controller.
Use 'oc describe pod/ovnkube-node-stkvj -n openshift-ovn-kubernetes' to see all of the containers in this pod.
sh-4.2# ip add show ovn-k8s-gw0
7: ovn-k8s-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:a9:fe:21:01 brd ff:ff:ff:ff:ff:ff
    inet 169.254.33.1/24 brd 169.254.33.255 scope global ovn-k8s-gw0
       valid_lft forever preferred_lft forever
sh-4.2# ip add show ovn-k8s-mp0
8: ovn-k8s-mp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether da:98:49:66:f4:42 brd ff:ff:ff:ff:ff:ff
    inet 10.128.2.2/23 brd 10.128.3.255 scope global ovn-k8s-mp0
       valid_lft forever preferred_lft forever
sh-4.2#
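As a side note, NetworkManager's interface-name device specs accept shell-style glob patterns, so the single `ovn-k8s-*` entry covers both renamed ports. A small local sketch (hypothetical helper, not part of the original verification) demonstrating the same glob semantics:

```shell
# Hypothetical helper: returns success when an interface name is covered by
# the ovn-k8s-* glob used in sdn.conf's match-device line. Uses POSIX shell
# case patterns, which follow the same wildcard matching rules.
matches_ovn_glob() {
  case "$1" in
    ovn-k8s-*) return 0 ;;
    *) return 1 ;;
  esac
}

# Both renamed ports from the verification above match the glob.
for ifname in ovn-k8s-gw0 ovn-k8s-mp0; do
  if matches_ovn_glob "$ifname"; then
    echo "$ifname is covered by ovn-k8s-*"
  fi
done
```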
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409