Bug 1768558

Summary: Instances fail to spawn because of failed to allocate network, not rescheduling error
Product: Red Hat OpenStack
Reporter: David Hill <dhill>
Component: openstack-neutron
Assignee: Bernard Cafarelli <bcafarel>
Status: CLOSED CURRENTRELEASE
QA Contact: Alex Katz <akatz>
Severity: urgent
Priority: urgent
Version: 13.0 (Queens)
CC: amuller, apevec, bcafarel, chrisw, ekuris, jhardee, lhh, mgarciac, mporrato, msecaur, njohnston, schhabdi, scohen
Target Milestone: z13
Keywords: Triaged, ZStream
Target Release: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Fixed In Version: openstack-neutron-12.1.1-9.el7ost
Doc Type: If docs needed, set a value
Last Closed: 2020-10-12 14:03:54 UTC
Type: Bug

Description David Hill 2019-11-04 17:33:36 UTC
Description of problem:
Instances fail to spawn because of failed to allocate network, not rescheduling error:
~~~
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [req-7d784cfb-eda8-471c-ba86-02258064ec1c 529b3e899c2c9a5ec4dec54df06360863a17de903eef96f4eea6cec870bf6f5d 93f5602a385b4b3ba27af460fb562be8 - 62cf1b5ec006489db99e2b0ebfb55f57 62cf1b5ec006489db99e2b0ebfb55f57] [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Failed to allocate network(s): VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Traceback (most recent call last):
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2053, in _build_and_run_instance
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     block_device_info=block_device_info)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3138, in spawn
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     destroy_disks_on_failure=True)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5660, in _create_domain_and_network
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     raise exception.VirtualInterfaceCreateException()
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]
~~~
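
The traceback shows Nova's libvirt driver giving up in _create_domain_and_network while waiting for the network-vif-plugged event from Neutron. For reference only (this is not the fix for this bug), the wait is controlled by two standard Nova options; a minimal nova.conf sketch with the upstream default values:
~~~
# nova.conf on the compute node -- sketch for reference only; the values shown
# are the upstream defaults, not a recommended workaround for this bug.
[DEFAULT]
# Abort the boot with VirtualInterfaceCreateException if the
# network-vif-plugged event never arrives from Neutron.
vif_plugging_is_fatal = True
# Seconds to wait for the network-vif-plugged event before giving up.
vif_plugging_timeout = 300
~~~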

Version-Release number of selected component (if applicable):
registry.access.redhat.com/rhosp13/openstack-neutron-l3-agent:13.0-93
registry.access.redhat.com/rhosp13/openstack-neutron-metadata-agent:13.0-95
registry.access.redhat.com/rhosp13/openstack-neutron-openvswitch-agent:13.0-93


How reproducible:
Randomly

Steps to Reproduce:
1. No clear reproducer: continuously create large numbers of VMs and the failure is eventually hit (a rough reproduction sketch follows this list)
2.
3.
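
Since there is no deterministic reproducer, a churn loop such as the following can be left running until an instance lands in ERROR with "Failed to allocate the network(s)". This is only a sketch; the image, flavor and network names are placeholders, not from the original report:
~~~
# Rough reproduction sketch -- image/flavor/network names are placeholders.
for i in $(seq 1 100); do
    openstack server create --image cirros --flavor m1.tiny \
        --network private --wait "repro-vm-$i" || break
    openstack server delete --wait "repro-vm-$i"
done
~~~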

Actual results:
VM creation fails once the vif plug timeout (Nova's vif_plugging_timeout) is reached

Expected results:
VMs should spawn successfully without having to raise the vif plug timeout above its default value

Additional info:
Appears to occur when CPU load is high on the host where ovs-vswitchd is running, but it could also be caused by ovs-vswitchd itself
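
A hedged way to check the suspected correlation with ovs-vswitchd load on the affected compute node (standard tools only; the grep pattern is the usual Open vSwitch warning for a stalled main loop):
~~~
# Diagnostic sketch, not from the original report.
# CPU usage of ovs-vswitchd on the compute node:
top -b -n 1 -p "$(pidof ovs-vswitchd)"

# Open vSwitch warns when its main loop is blocked for too long, a common
# symptom of the daemon being starved of CPU:
grep "Unreasonably long" /var/log/openvswitch/ovs-vswitchd.log
~~~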

Comment 10 Bernard Cafarelli 2019-11-25 12:26:00 UTC
*** Bug 1775321 has been marked as a duplicate of this bug. ***

Comment 16 Lon Hohberger 2020-03-11 10:35:42 UTC
According to our records, this should be resolved by python-neutron-lib-1.13.0-2.el7ost.  This build is available now.