Bug 1247094

Summary: vif details
Product: Red Hat OpenStack
Reporter: Pablo Caruana <pcaruana>
Component: openstack-neutron
Assignee: lpeer <lpeer>
Status: CLOSED DUPLICATE
QA Contact: Ofer Blaut <oblaut>
Severity: high
Priority: high
Version: 5.0 (RHEL 6)
CC: amuller, chrisw, nyechiel, yeylon
Target Release: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-12-16 21:25:48 UTC

Description Pablo Caruana 2015-07-27 10:30:16 UTC
Description of problem:
Neutron ports are broken after an OSP 4 to OSP 5 (Icehouse) upgrade.

Here the vif_details field contains only {}. It appears that the Neutron ports do not have any 'vif_details' set; without it, the Neutron agent creates plain tap ports and Nova writes libvirt.xml files without bridge information. Instances cannot start: the virtual machines fail to boot because the qbr* Linux bridges have disappeared and neither Nova nor Neutron tries to recreate them. A workaround (which is unfeasible for us) was to migrate the virtual machines to a different compute node, which causes their entire state (including ephemeral disks) to be recreated.

If a new instance is created, its port does have this information, and such instances can be restarted and migrated.
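
For reference, the empty vif_details can be checked directly on an affected port. The commands below are only an illustration (the instance UUID and port ID are placeholders, and the binding:* fields are visible to admin users):

  # List the Neutron ports attached to an affected instance
  nova interface-list <instance-uuid>

  # Show the port; on a broken port, binding:vif_details comes back as {}
  neutron port-show <port-id> | grep binding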


Another thing worth mentioning regarding the Neutron agents: before the upgrade, this environment was running 2 DHCP agents per network, one on each controller, each registering itself using the host FQDN. After the upgrade, we switched to running just 1 DHCP agent per network, on the active controller node, registering itself with a pseudo host name, just like the other Neutron agents.
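
To double-check the agent layout described above, the standard agent commands can be used; this is just a sketch, with the network name as a placeholder:

  # List agents and the host name each one registered with
  neutron agent-list | grep -i dhcp

  # Show which DHCP agent(s) currently host a given network
  neutron dhcp-agent-list-hosting-net <network>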

How reproducible:

/etc/nova/nova.conf contains:

vif_plugging_is_fatal=false

vif_plugging_timeout=300

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
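
With vif_plugging_is_fatal=false, Nova only waits up to vif_plugging_timeout seconds for the Neutron vif-plugged notification and then continues instead of failing the boot, so a missing vif_details does not surface as a hard error. A minimal sketch for confirming the settings are identical on every compute node (host names are hypothetical):

  for host in compute-0 compute-1; do
      ssh "$host" "grep '^vif_' /etc/nova/nova.conf"
  done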

Upgraded packages:

openstack-neutron-2014.1.4-1.el6ost.noarch                  Fri Jul 24 10:51:01 2015
openstack-neutron-ml2-2014.1.4-1.el6ost.noarch              Fri Jul 24 10:51:13 2015
openstack-neutron-openvswitch-2014.1.4-1.el6ost.noarch      Fri Jul 24 10:51:01 2015
openstack-nova-api-2014.1.4-4.el6ost.noarch                 Fri Jul 24 09:40:44 2015
openstack-nova-cert-2014.1.4-4.el6ost.noarch                Fri Jul 24 09:40:44 2015
openstack-nova-common-2014.1.4-4.el6ost.noarch              Fri Jul 24 09:40:42 2015
openstack-nova-conductor-2014.1.4-4.el6ost.noarch           Fri Jul 24 09:40:44 2015
openstack-nova-console-2014.1.4-4.el6ost.noarch             Fri Jul 24 09:40:42 2015
openstack-nova-novncproxy-2014.1.4-4.el6ost.noarch          Fri Jul 24 09:40:42 2015
openstack-nova-scheduler-2014.1.4-4.el6ost.noarch           Fri Jul 24 09:40:43 2015
python-neutron-2014.1.4-1.el6ost.noarch                     Fri Jul 24 10:51:00 2015
python-neutronclient-2.3.4-3.el6ost.noarch                  Fri Jul 24 10:50:58 2015
python-nova-2014.1.4-4.el6ost.noarch                        Fri Jul 24 09:40:41 2015
python-novaclient-2.17.0-4.el6ost.noarch                    Fri Jul 24 09:40:39 2015
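
The listing above looks like the output of rpm sorted by install time; assuming it was captured that way, it can be regenerated on the upgraded node with:

  rpm -qa --last | grep -E 'openstack-(nova|neutron)|python-(nova|neutron)'
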
Steps to Reproduce:
1.
2.
3.

Actual results:
Inconsistent state, with the DHCP agent setup partially broken.

Expected results:
Working Neutron components without workarounds.

Comment 3 Assaf Muller 2015-12-16 21:25:48 UTC

*** This bug has been marked as a duplicate of bug 1247095 ***