Bug 1820823

Summary: add new ovn-kubernetes port names to /etc/NetworkManager/conf.d/sdn.conf
Product: OpenShift Container Platform
Reporter: Dan Williams <dcbw>
Component: Machine Config Operator
Assignee: Dan Williams <dcbw>
Status: CLOSED ERRATA
QA Contact: Michael Nguyen <mnguyen>
Severity: medium
Priority: medium
Version: 4.4
CC: amurdaca, kgarriso, mnguyen, smilner
Target Milestone: ---
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Clone Of: 1820822
Bug Depends On: 1820822
Last Closed: 2020-05-04 11:48:25 UTC

Description Dan Williams 2020-04-04 01:52:46 UTC
+++ This bug was initially created as a clone of Bug #1820822 +++

ovn-kubernetes changed some OVS port names, so the interface-name patterns in /etc/NetworkManager/conf.d/sdn.conf need to be updated to match; otherwise NetworkManager would no longer treat the renamed interfaces as unmanaged SDN devices.
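The new entries in the match-device specification are shell-style globs (`ovn-k8s-*`, `k8s-*`). As a minimal sketch of how those patterns cover the renamed ports seen in the verification transcript below — using Python's `fnmatch` purely as an illustration, not as a model of NetworkManager's own parser:

```python
from fnmatch import fnmatch

# Interface-name globs from the updated sdn.conf match-device line.
GLOBS = ["br-int", "br-local", "br-nexthop", "ovn-k8s-*", "k8s-*",
         "tun0", "br0"]

def is_ignored(ifname):
    """True if any sdn.conf interface-name glob matches the name."""
    return any(fnmatch(ifname, g) for g in GLOBS)

# The renamed ovn-kubernetes ports on the nodes are now covered,
# while ordinary NICs are not:
for name in ("ovn-k8s-gw0", "ovn-k8s-mp0", "eth0"):
    print(name, is_ignored(name))
```

With the old port names absent from the file, `is_ignored("ovn-k8s-gw0")` would have been False and NetworkManager would have tried to manage the device — which is exactly what the updated patterns prevent.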

Comment 4 Michael Nguyen 2020-04-22 20:42:52 UTC
Verified on 4.4.0-0.nightly-2020-04-22-135638
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-04-22-135638   True        False         8m55s   Cluster version is 4.4.0-0.nightly-2020-04-22-135638
$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-138-52.us-west-2.compute.internal    Ready    master   20m   v1.17.1
ip-10-0-140-196.us-west-2.compute.internal   Ready    worker   13m   v1.17.1
ip-10-0-141-84.us-west-2.compute.internal    Ready    worker   13m   v1.17.1
ip-10-0-143-113.us-west-2.compute.internal   Ready    master   20m   v1.17.1
ip-10-0-145-1.us-west-2.compute.internal     Ready    worker   13m   v1.17.1
ip-10-0-156-231.us-west-2.compute.internal   Ready    master   20m   v1.17.1
$ oc debug node/ip-10-0-138-52.us-west-2.compute.internal
Starting pod/ip-10-0-138-52us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u kubelet | grep ovn-k8s
sh-4.4# cat /etc/NetworkManager/conf.d/sdn.conf 
# ignore known SDN-managed devices
[device]
match-device=interface-name:br-int;interface-name:br-local;interface-name:br-nexthop,interface-name:ovn-k8s-*,interface-name:k8s-*;interface-name:tun0;interface-name:br0;driver:veth
managed=0
sh-4.4# exit
exit
sh-4.2# exit
exit

Removing debug pod ...
$ oc debug node/ip-10-0-140-196.us-west-2.compute.internal
Starting pod/ip-10-0-140-196us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u kubelet | grep ovn-k8s
sh-4.4# cat /etc/NetworkManager/conf.d/sdn.conf
# ignore known SDN-managed devices
[device]
match-device=interface-name:br-int;interface-name:br-local;interface-name:br-nexthop,interface-name:ovn-k8s-*,interface-name:k8s-*;interface-name:tun0;interface-name:br0;driver:veth
managed=0
sh-4.4# exit
exit
sh-4.2# exit
exit

Removing debug pod ...
$ oc -n openshift-ovn-kubernetes get pods
NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-24nkz   4/4     Running   1          22m
ovnkube-master-5z9sn   4/4     Running   0          22m
ovnkube-master-xcpkm   4/4     Running   0          22m
ovnkube-node-csjjf     2/2     Running   0          15m
ovnkube-node-hs9bd     2/2     Running   0          15m
ovnkube-node-q2896     2/2     Running   0          15m
ovnkube-node-t7bcb     2/2     Running   0          22m
ovnkube-node-tqvq8     2/2     Running   0          22m
ovnkube-node-zbplj     2/2     Running   0          22m
ovs-node-5cqq6         1/1     Running   0          15m
ovs-node-68vhl         1/1     Running   0          22m
ovs-node-6h9xz         1/1     Running   0          15m
ovs-node-k2clk         1/1     Running   0          22m
ovs-node-x7vcs         1/1     Running   0          22m
ovs-node-xjxjk         1/1     Running   0          15m
$ oc -n openshift-ovn-kubernetes rsh ovnkube-master-24nkz
Defaulting container name to northd.
Use 'oc describe pod/ovnkube-master-24nkz -n openshift-ovn-kubernetes' to see all of the containers in this pod.
sh-4.2# ip add show ovn-k8s-gw0
7: ovn-k8s-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:a9:fe:21:01 brd ff:ff:ff:ff:ff:ff
    inet 169.254.33.1/24 brd 169.254.33.255 scope global ovn-k8s-gw0
       valid_lft forever preferred_lft forever
sh-4.2# ip add show ovn-k8s-mp0
8: ovn-k8s-mp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 82:92:ec:b5:dc:8d brd ff:ff:ff:ff:ff:ff
    inet 10.128.4.2/23 brd 10.128.5.255 scope global ovn-k8s-mp0
       valid_lft forever preferred_lft forever
sh-4.2# exit
exit
$ oc -n openshift-ovn-kubernetes rsh ovnkube-node-hs9bd
Defaulting container name to ovn-controller.
Use 'oc describe pod/ovnkube-node-hs9bd -n openshift-ovn-kubernetes' to see all of the containers in this pod.
sh-4.2# ip add show ovn-k8s-gw0
7: ovn-k8s-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:a9:fe:21:01 brd ff:ff:ff:ff:ff:ff
    inet 169.254.33.1/24 brd 169.254.33.255 scope global ovn-k8s-gw0
       valid_lft forever preferred_lft forever
sh-4.2# ip add show ovn-k8s-mp0
8: ovn-k8s-mp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 62:85:42:68:5f:77 brd ff:ff:ff:ff:ff:ff
    inet 10.128.10.2/23 brd 10.128.11.255 scope global ovn-k8s-mp0
       valid_lft forever preferred_lft forever
sh-4.2# exit
exit
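As a sanity check on the addresses in the transcripts above: 169.254.33.1 on ovn-k8s-gw0 falls in the IPv4 link-local range, and the broadcast addresses `ip` printed for the /23 ovn-k8s-mp0 subnets are consistent. A quick check with Python's `ipaddress` module (illustration only):

```python
import ipaddress

# ovn-k8s-gw0 carries the same link-local gateway address in both pods.
gw = ipaddress.ip_interface("169.254.33.1/24")
print(gw.ip.is_link_local)           # True
print(gw.network.broadcast_address)  # 169.254.33.255

# ovn-k8s-mp0 management-port addresses from the two nodes.
for cidr in ("10.128.4.2/23", "10.128.10.2/23"):
    iface = ipaddress.ip_interface(cidr)
    print(iface.ip, "brd", iface.network.broadcast_address)
```

The computed broadcast addresses (10.128.5.255 and 10.128.11.255) match the `brd` values shown by `ip add show ovn-k8s-mp0` on each node.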

Comment 6 errata-xmlrpc 2020-05-04 11:48:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581