Description of problem:
Instances fail to spawn with a "Failed to allocate network(s), not rescheduling" error:
~~~
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [req-7d784cfb-eda8-471c-ba86-02258064ec1c 529b3e899c2c9a5ec4dec54df06360863a17de903eef96f4eea6cec870bf6f5d 93f5602a385b4b3ba27af460fb562be8 - 62cf1b5ec006489db99e2b0ebfb55f57 62cf1b5ec006489db99e2b0ebfb55f57] [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Failed to allocate network(s): VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] Traceback (most recent call last):
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2053, in _build_and_run_instance
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     block_device_info=block_device_info)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3138, in spawn
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     destroy_disks_on_failure=True)
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5660, in _create_domain_and_network
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]     raise exception.VirtualInterfaceCreateException()
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf] VirtualInterfaceCreateException: Virtual Interface creation failed
2019-10-22 00:10:59.368 1 ERROR nova.compute.manager [instance: 2f90567d-a9f6-4174-95f4-ecf4f897cfcf]
~~~
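For context, nova waits for a "network-vif-plugged" external event from Neutron before resuming the guest; if the event does not arrive within vif_plugging_timeout and vif_plugging_is_fatal is true, the build is failed with the exception above. The following is a simplified illustration of that wait-and-timeout behaviour only, not nova's actual code:
~~~
# Simplified sketch, not the real nova implementation: the compute service
# waits for the "network-vif-plugged" notification from Neutron and gives up
# after vif_plugging_timeout seconds, producing the traceback shown above.
import threading


class VirtualInterfaceCreateException(Exception):
    """Mirrors the name of the nova exception seen in the log."""


def wait_for_vif_plugged(vif_plugged: threading.Event,
                         vif_plugging_timeout: float = 300.0,
                         vif_plugging_is_fatal: bool = True) -> None:
    # Neutron's OVS agent wires the port and nova receives the
    # network-vif-plugged event, which would set `vif_plugged`.
    # If ovs-vswitchd is starved for CPU, the event arrives late or never.
    if not vif_plugged.wait(timeout=vif_plugging_timeout):
        if vif_plugging_is_fatal:
            raise VirtualInterfaceCreateException("Virtual Interface creation failed")
~~~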
Version-Release number of selected component (if applicable):
registry.access.redhat.com/rhosp13/openstack-neutron-l3-agent:13.0-93
registry.access.redhat.com/rhosp13/openstack-neutron-metadata-agent:13.0-95
registry.access.redhat.com/rhosp13/openstack-neutron-openvswitch-agent:13.0-93
How reproducible:
Randomly
Steps to Reproduce:
1. There are no exact steps; continuously create a large number of VMs and the issue is eventually hit (see the sketch after this list).
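A hypothetical reproduction sketch using the openstacksdk Python client is shown below; the cloud name, image/flavor/network IDs and the instance count are placeholders to adjust for the environment under test:
~~~
# Hypothetical reproduction sketch: keep creating servers until some spawns
# fail with VirtualInterfaceCreateException under load.
import openstack

conn = openstack.connect(cloud="overcloud")  # clouds.yaml entry (placeholder)

IMAGE_ID = "<image-uuid>"      # placeholder
FLAVOR_ID = "<flavor-uuid>"    # placeholder
NETWORK_ID = "<network-uuid>"  # placeholder

for i in range(200):
    # Each request eventually triggers VIF plugging on the compute host;
    # sustained load on ovs-vswitchd there is what seems to expose the bug.
    conn.compute.create_server(
        name="vif-stress-{}".format(i),
        image_id=IMAGE_ID,
        flavor_id=FLAVOR_ID,
        networks=[{"uuid": NETWORK_ID}],
    )
~~~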
Actual results:
VM creation fails once vif_plugging_timeout is reached.
Expected results:
Instances spawn successfully without having to increase the default vif_plugging_timeout value.
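For reference, these are the Nova options that govern this behaviour; the values shown are the upstream defaults, which per the expected result above should not need to be raised:
~~~
[DEFAULT]
# Nova waits this many seconds for the network-vif-plugged event from Neutron.
vif_plugging_timeout = 300
# When true, a timeout waiting for the event fails the instance build.
vif_plugging_is_fatal = True
~~~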
Additional info:
Appears to occur when CPU load is high on the host where ovs-vswitchd is running, but it could also be caused by ovs-vswitchd itself.
Comment 10 Bernard Cafarelli
2019-11-25 12:26:00 UTC
*** Bug 1775321 has been marked as a duplicate of this bug. ***