Bug 1650230

Summary: Octavia o-hm0 interface disappears after reboot
Product: Red Hat OpenStack
Reporter: Brent Eagles <beagles>
Component: openstack-tripleo-heat-templates
Assignee: Brent Eagles <beagles>
Status: CLOSED ERRATA
QA Contact: Gurenko Alex <agurenko>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 14.0 (Rocky)
CC: agurenko, astafeye, bcafarel, beagles, cgoncalves, dbecker, lars, mburns, morazi, nyechiel
Target Milestone: rc
Keywords: Triaged, ZStream
Target Release: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-tripleo-heat-templates-9.0.1-0.20181013060899.el7ost
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 1649923
Environment:
Last Closed: 2019-01-11 11:54:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1649923
Bug Blocks:

Description Brent Eagles 2018-11-15 16:04:02 UTC
+++ This bug was initially created as a clone of Bug #1649923 +++

While working on #1649565, I rebooted our controller. When the system came back up, the interface o-hm0 (used by Octavia) was missing. Looking at the output of "journalctl -b", it looks like it was there initially:

Nov 14 14:17:49 neu-3-39-control3.kzn.moc NetworkManager[9405]: <info>  [1542223069.9059] ifcfg-rh: new connection /etc/sysconfig/network-scripts/ifcfg-o-hm0 (a9a0628a-8235-31ff-8408-fc1d34ddd140,"System o-hm0")
Nov 14 14:17:49 neu-3-39-control3.kzn.moc NetworkManager[9405]: <info>  [1542223069.9059] ifcfg-rh: Ignoring connection /etc/sysconfig/network-scripts/ifcfg-o-hm0 (a9a0628a-8235-31ff-8408-fc1d34ddd140,"System o-hm0") due to NM_CONTROLLED=no. Unmanaged: interface-name:o-hm0.
Nov 14 14:17:57 neu-3-39-control3.kzn.moc ovs-vsctl[10186]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-int o-hm0 -- add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=fa:16:3e:a3:75:df -- set Interface o-hm0 external-ids:iface-id=f79518df-4d78-47c7-99fc-ef83345afe45 -- set Interface o-hm0 external-ids:skip_cleanup=true -- set Interface o-hm0 "mac=\"fa:16:3e:a3:75:df\"" -- set Interface o-hm0 other-config:hwaddr=fa:16:3e:a3:75:df
Nov 14 14:17:57 neu-3-39-control3.kzn.moc NetworkManager[9405]: <info>  [1542223077.0365] manager: (o-hm0): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Nov 14 14:17:57 neu-3-39-control3.kzn.moc kernel: device o-hm0 entered promiscuous mode
Nov 14 14:17:57 neu-3-39-control3.kzn.moc NetworkManager[9405]: <info>  [1542223077.0668] device (o-hm0): carrier: link connected
Nov 14 14:18:00 neu-3-39-control3.kzn.moc ntpd[9331]: Listen normally on 5 o-hm0 fe80::f816:3eff:fea3:75df UDP 123
Nov 14 14:18:01 neu-3-39-control3.kzn.moc network[9894]: Bringing up interface o-hm0:  [  OK  ]
Nov 14 14:18:02 neu-3-39-control3.kzn.moc ntpd[9331]: Listen normally on 6 o-hm0 172.24.0.4 UDP 123
Nov 14 14:18:22 neu-3-39-control3.kzn.moc kernel: device o-hm0 left promiscuous mode
Nov 14 14:18:24 neu-3-39-control3.kzn.moc ntpd[9331]: Deleting interface #6 o-hm0, 172.24.0.4#123, interface stats: received=0, sent=0, dropped=0, active_time=22 secs
Nov 14 14:18:24 neu-3-39-control3.kzn.moc ntpd[9331]: Deleting interface #5 o-hm0, fe80::f816:3eff:fea3:75df#123, interface stats: received=0, sent=0, dropped=0, active_time=24 secs

...but then it was removed. beagles thinks ovs-cleanup is the culprit.

I was able to 'ifup o-hm0' to restore the interface.

--- Additional comment from Brent Eagles on 2018-11-14 15:09:03 EST ---

This is because the neutron-ovs-cleanup script that we implemented as a fix for the missing cleanup services destroys the o-hm0 interface after networking is started. Attempting to fix by fine-tuning the ordering.
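Note that the ovs-vsctl call in the journal above already tags o-hm0 with external-ids:skip_cleanup=true, so the remaining problem was purely unit ordering. As an illustration of the kind of "fine-tuning the ordering" described here, a systemd drop-in along the following lines would keep the cleanup unit from running after the network service has created o-hm0. This is a hypothetical sketch, not the actual tripleo-heat-templates patch, and the unit names are assumptions:

```ini
# Hypothetical drop-in, e.g. at
# /etc/systemd/system/neutron-ovs-cleanup.service.d/ordering.conf
[Unit]
# Finish cleanup before the legacy network service brings up
# interfaces such as o-hm0, so cleanup cannot delete them afterwards.
Before=network.service
# But start only after Open vSwitch itself is up, so the
# del-port/cleanup calls against br-int can succeed.
After=openvswitch.service
```

With an ordering like this, systemd serializes the units so that ifup o-hm0 runs only once cleanup has completed, instead of racing with it.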

Comment 14 errata-xmlrpc 2019-01-11 11:54:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0045