Bug 1247095 - vif details
Status: CLOSED DUPLICATE of bug 1247096
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 5.0 (RHEL 6)
Severity: high
Target Release: 8.0 (Liberty)
Assigned To: lpeer
QA Contact: Ofer Blaut
Duplicates: 1247094
Reported: 2015-07-27 06:30 EDT by Pablo Caruana
Modified: 2016-04-27 00:46 EDT
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-12-16 16:30:25 EST
Attachments: None

Description Pablo Caruana 2015-07-27 06:30:46 EDT
Description of problem:
Neutron ports are broken after an OSP 4 to OSP 5 (Icehouse) upgrade.

Here the vif_details contains only {}. The Neutron ports do not have any 'vif_details' set; without it, the Neutron agent creates only tap ports, and Nova generates libvirt.xml files without bridge information. Instances cannot start: the qbr* Linux bridges had disappeared, and neither Nova nor Neutron tried to recreate them. A workaround (which is unfeasible for us) was to migrate the virtual machines to a different compute node, which causes their entire state (including ephemeral disks) to be recreated.

If a new instance is created, its port has this information. Such instances can be restarted and migrated.
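The symptom above can be checked mechanically by scanning port data for an empty binding:vif_details. A minimal sketch, assuming port dicts shaped like the Neutron ports API output (in a real check the list would come from python-neutronclient; the sample data and helper name here are hypothetical):

```python
# Hedged sketch: flag Neutron ports whose binding:vif_details is empty or
# missing, as seen on ports that survived the OSP 4 -> OSP 5 upgrade.
# `sample` is hypothetical data in the shape of the Neutron ports API.

def ports_missing_vif_details(ports):
    """Return IDs of ports with an empty or missing binding:vif_details."""
    return [p["id"] for p in ports if not p.get("binding:vif_details")]

sample = [
    # Upgraded port: vif_details was lost, so only a tap port gets created.
    {"id": "old-port", "binding:vif_details": {}},
    # Freshly created port: carries the bridge/plugging information.
    {"id": "new-port",
     "binding:vif_details": {"port_filter": True, "ovs_hybrid_plug": True}},
]

print(ports_missing_vif_details(sample))  # → ['old-port']
```

Ports reported by such a check would match the broken behavior described: tap-only ports and libvirt.xml files with no bridge information.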


Another thing worth mentioning regarding the Neutron agents: before the upgrade, this test bed environment was running 2 DHCP agents per network, one on each controller, each registering itself using the host FQDN. After the upgrade, we switched to running just 1 DHCP agent per network, on the active controller node, registering itself with a pseudo host name, the same as the other Neutron agents.

How reproducible:

Nova contains

/etc/nova/nova.conf 
vif_plugging_is_fatal=false

vif_plugging_timeout=300

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Upgraded packages:

openstack-neutron-2014.1.4-1.el6ost.noarch                  Fri Jul 24 10:51:01 2015
openstack-neutron-ml2-2014.1.4-1.el6ost.noarch              Fri Jul 24 10:51:13 2015
openstack-neutron-openvswitch-2014.1.4-1.el6ost.noarch      Fri Jul 24 10:51:01 2015
openstack-nova-api-2014.1.4-4.el6ost.noarch                 Fri Jul 24 09:40:44 2015
openstack-nova-cert-2014.1.4-4.el6ost.noarch                Fri Jul 24 09:40:44 2015
openstack-nova-common-2014.1.4-4.el6ost.noarch              Fri Jul 24 09:40:42 2015
openstack-nova-conductor-2014.1.4-4.el6ost.noarch           Fri Jul 24 09:40:44 2015
openstack-nova-console-2014.1.4-4.el6ost.noarch             Fri Jul 24 09:40:42 2015
openstack-nova-novncproxy-2014.1.4-4.el6ost.noarch          Fri Jul 24 09:40:42 2015
openstack-nova-scheduler-2014.1.4-4.el6ost.noarch           Fri Jul 24 09:40:43 2015
python-neutron-2014.1.4-1.el6ost.noarch                     Fri Jul 24 10:51:00 2015
python-neutronclient-2.3.4-3.el6ost.noarch                  Fri Jul 24 10:50:58 2015
python-nova-2014.1.4-4.el6ost.noarch                        Fri Jul 24 09:40:41 2015
python-novaclient-2.17.0-4.el6ost.noarch                    Fri Jul 24 09:40:39 2015
Steps to Reproduce:
1.
2.
3.

Actual results:
Inconsistent state; parts of the deployment, including the DHCP agent, are broken.

Expected results:
Working Neutron components, without workarounds.
Comment 3 Assaf Muller 2015-12-16 16:25:48 EST
*** Bug 1247094 has been marked as a duplicate of this bug. ***
Comment 4 Assaf Muller 2015-12-16 16:30:25 EST

*** This bug has been marked as a duplicate of bug 1247096 ***