Bug 1768558 - Instances fail to spawn because of failed to allocate network, not rescheduling error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z13
Target Release: 13.0 (Queens)
Assignee: Bernard Cafarelli
QA Contact: Alex Katz
URL:
Whiteboard:
Duplicates: 1775321
Depends On:
Blocks:
 
Reported: 2019-11-04 17:33 UTC by David Hill
Modified: 2023-10-06 18:44 UTC
CC: 13 users

Fixed In Version: openstack-neutron-12.1.1-9.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-12 14:03:54 UTC
Target Upstream Version:
Embargoed:




Links
System ID | Private | Priority | Status | Summary | Last Updated
OpenStack gerrit 693412 | 0 | None | ABANDONED | Add setproctitle support to the workers module | 2021-09-27 07:55:20 UTC
OpenStack gerrit 693415 | 0 | None | ABANDONED | Change process name of neutron-server to match worker role | 2021-09-27 07:55:20 UTC
Red Hat Issue Tracker OSP-28286 | 0 | None | None | None | 2023-09-07 20:58:09 UTC
Red Hat Knowledge Base (Solution) 4551201 | 0 | None | None | None | 2019-11-04 17:37:38 UTC

Description David Hill 2019-11-04 17:33:36 UTC
Description of problem:
Instances fail to spawn with a "Failed to allocate network(s): Virtual Interface creation failed" error and are not rescheduled:
~~~
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [req-7d784cfb-eda8-471c-ba86-02258064ec1c 529b3e899c2c9a5ec4dec54df06360863a17de903eef96f4eea6cec870bf6f5d 93f5602a385b4b3ba27af460fb562be8 - 62cf1b5ec006489db99e2b0ebfb55f57 62cf1b5ec006489db99e2b0ebfb55f57] [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Failed to allocate network(s): VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Traceback (most recent call last):
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2053, in _build_and_run_instance
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     block_device_info=block_device_info)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3138, in spawn
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     destroy_disks_on_failure=True)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5660, in _create_domain_and_network
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     raise exception.VirtualInterfaceCreateException()
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]
~~~
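
For context, nova-compute waits for a network-vif-plugged notification from Neutron before finishing _create_domain_and_network(); when that event does not arrive in time, the VirtualInterfaceCreateException above is raised. A quick way to see which side stalled is to correlate the event in both services' logs. This is only a diagnostic sketch: the log paths assume a default containerized OSP 13 deployment and the grep patterns are approximate, since the exact log strings vary between releases.
~~~
# On the affected compute node: did nova ever receive the vif-plugged event
# for this instance, or did it time out waiting for it?
grep "network-vif-plugged" /var/log/containers/nova/nova-compute.log | grep 2f90567d-a9f6-4174-95f4-ecf4f897cfcf

# On the controllers: did neutron-server send the event to nova, and when?
grep "2f90567d-a9f6-4174-95f4-ecf4f897cfcf" /var/log/containers/neutron/server.log | grep -i "nova event"
~~~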

Version-Release number of selected component (if applicable):
registry.access.redhat.com/rhosp13/openstack-neutron-l3-agent:13.0-93
registry.access.redhat.com/rhosp13/openstack-neutron-metadata-agent:13.0-95
registry.access.redhat.com/rhosp13/openstack-neutron-openvswitch-agent:13.0-93


How reproducible:
Randomly

Steps to Reproduce:
1. There are no exact steps: constantly create a large number of VMs and the issue is eventually hit (see the sketch after this list).
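
Since there is no deterministic reproducer, the snippet below is just one way to generate the constant VM churn described above. The flavor, image, and network names (m1.small, cirros, private) are placeholders for whatever exists in the environment.
~~~
# Hypothetical load generator: spawn servers in batches until the failure appears.
# Flavor/image/network names are placeholders, not part of this report.
for i in $(seq 1 200); do
  openstack server create --flavor m1.small --image cirros \
      --network private --wait "vif-load-test-${i}" || echo "server ${i} failed"
done
~~~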

Actual results:
VM creation fails once vif_plug_timeout expires.

Expected results:
Instances should spawn successfully without having to increase the default vif_plug_timeout.
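
The point of this report is that the timeout should not need to be touched; for completeness, the interim mitigation typically applied while the root cause is investigated is to raise nova's [DEFAULT]/vif_plugging_timeout (referred to as vif_plug_timeout above) on the compute nodes. A rough sketch only, assuming a containerized OSP 13 compute node, the default puppet-generated config path, and that crudini is available; the nova_compute container name may differ.
~~~
# Workaround only, not a fix: raise nova-compute's VIF plug timeout (default 300s).
NOVA_CONF=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
crudini --set "$NOVA_CONF" DEFAULT vif_plugging_timeout 600
docker restart nova_compute   # container name assumed; adjust for the deployment
~~~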

Additional info:
The issue appears to occur when CPU load is high on the host where ovs-vswitchd is running, but it could also be caused by ovs-vswitchd itself.
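
Since the suspicion is CPU pressure around ovs-vswitchd, a first check during a failure window can be as simple as the sketch below (diagnostic only; sampling intervals are arbitrary, pidstat requires sysstat, and the OVS log path assumes the host-level default).
~~~
# Sample ovs-vswitchd CPU usage and look for main-loop slowness while VMs are booting.
OVS_PID=$(pidof ovs-vswitchd)
pidstat -p "$OVS_PID" 5 12              # ~1 minute of 5-second samples
ovs-appctl coverage/show | head -n 20   # rough view of how busy the daemon is
grep -i "unreasonably long" /var/log/openvswitch/ovs-vswitchd.log | tail
~~~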

Comment 10 Bernard Cafarelli 2019-11-25 12:26:00 UTC
*** Bug 1775321 has been marked as a duplicate of this bug. ***

Comment 16 Lon Hohberger 2020-03-11 10:35:42 UTC
According to our records, this should be resolved by python-neutron-lib-1.13.0-2.el7ost.  This build is available now.
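
A quick way to confirm whether a node already carries the fixed builds is a package query inside the neutron-server container; this is a sketch, and the neutron_api container name is the OSP 13 default rather than something stated in this bug.
~~~
# On a controller: check the package versions inside the neutron-server container.
docker exec neutron_api rpm -q python-neutron-lib openstack-neutron
~~~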

