Issue happened in the Tobiko job http://rhos-ci-logs.lab.eng.tlv2.redhat.com/logs/rcj/gate-sanity-16.2_director-rhel-virthost-3cont_2comp-ipv4-geneve-tobiko_faults/593/test_results/tobiko_gate_1/tobiko_gate_1_05_faults_faults.html in test tobiko/tests/faults/ha/test_cloud_recovery.py::DisruptTripleoNodesTest::test_controllers_shutdown. The instance spawned on compute-0 (4b44830b-9076-4dc2-b59b-64c7df9b15db) ended up in the ERROR state due to a "Failed to allocate network" error. I checked the logs there and here is what I found:

* nova updated the port to set the binding at:
2022-11-16 04:32:37.794 8 DEBUG nova.network.neutronv2.api [req-cd718cda-9c60-4c6b-9a15-58f6d260c315 f63c032bc518439f849b34c073165c99 870de0ea24d7479794a27f0a3bc627d7 - default default] [instance: 4b44830b-9076-4dc2-b59b-64c7df9b15db] Successfully updated port: 359603b7-c5eb-4450-ba2e-2422560f9b31 _update_port /usr/lib/python3.6/site-packages/nova/network/neutronv2/api.py:516

* on the neutron side the port was then bound to compute-0:
2022-11-16 04:32:37.422 16 DEBUG neutron.plugins.ml2.managers [req-73add9d3-31a7-4bd4-a25a-4b622ab6d91a 723a96511ed64862ab5c4cffae71f578 ded225165efe418f9b3741d2ad2e6a39 - default default] Bound port: 359603b7-c5eb-4450-ba2e-2422560f9b31, host: compute-0.redhat.local, vif_type: ovs, vif_details: {"port_filter": true}, binding_levels: [{'bound_driver': 'ovn', 'bound_segment': {'id': 'e4eee4ba-f121-4a3c-912e-4b42656557c7', 'network_type': 'geneve', 'physical_network': None, 'segmentation_id': 890, 'network_id': '92e4e51f-5874-433d-9e6b-347294018254'}}] _bind_port_level /usr/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py:937

* nova created the tap interface at:
2022-11-16 04:32:38.894 8 INFO os_vif [req-cd718cda-9c60-4c6b-9a15-58f6d260c315 f63c032bc518439f849b34c073165c99 870de0ea24d7479794a27f0a3bc627d7 - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:12:bd,bridge_name='br-int',has_traffic_filtering=True,id=359603b7-c5eb-4450-ba2e-2422560f9b31,network=Network(92e4e51f-5874-433d-9e6b-347294018254),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap359603b7-c5')

* and in the ovn-controller logs:
2022-11-16T04:32:39.024Z|02004|binding|INFO|Claiming lport 359603b7-c5eb-4450-ba2e-2422560f9b31 for this chassis.
2022-11-16T04:32:39.024Z|02005|binding|INFO|359603b7-c5eb-4450-ba2e-2422560f9b31: Claiming fa:16:3e:3d:12:bd 10.100.45.5 2001:db8:0:2d20:f816:3eff:fe3d:12bd
2022-11-16T04:32:39.024Z|02006|binding|INFO|359603b7-c5eb-4450-ba2e-2422560f9b31: Claiming unknown
2022-11-16T04:32:39.052Z|02007|binding|INFO|Setting lport 359603b7-c5eb-4450-ba2e-2422560f9b31 ovn-installed in OVS
2022-11-16T04:32:39.052Z|02008|binding|INFO|Setting lport 359603b7-c5eb-4450-ba2e-2422560f9b31 up in Southbound

Up to this point everything looks fine to me, but there is no record in the neutron logs, on any of the controllers, that this port went UP. Because of that, neutron never sends the notification to nova and the instance is finally set to the ERROR state (see the sketch below for the handshake that breaks here).
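To make the missing step explicit, here is a minimal, illustrative Python sketch of the plug/notify handshake described above. The function and parameter names (wait_for_vif_plugged_event, on_port_binding_up, nova_notifier, ...) are hypothetical and only stand in for the real logic in nova.compute.manager and the neutron ML2/OVN mechanism driver; the point is that the neutron-side step never ran in this reproduction, so nova's wait timed out.

import time


class VifPluggingTimeout(Exception):
    """Raised when neutron never reports the port as UP."""


def wait_for_vif_plugged_event(port_id, received_events, timeout=300):
    # Nova side (conceptual): after plugging the tap device with os-vif,
    # nova waits for the network-vif-plugged external event that neutron
    # sends once the port goes UP.  If it never arrives and vif plugging
    # failures are fatal, the build is aborted with "Failed to allocate
    # network" and the instance goes to ERROR.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ('network-vif-plugged', port_id) in received_events:
            return
        time.sleep(1)
    raise VifPluggingTimeout(port_id)


def on_port_binding_up(port_id, neutron_db, nova_notifier):
    # Neutron side (conceptual): the ML2/OVN driver reacts to the OVN SB
    # Port_Binding "up" change, sets the port status to ACTIVE and notifies
    # nova.  In this bug the SB change was visible in the ovn-controller
    # logs, but the neutron server never processed it, so this step never
    # ran and no event reached nova.
    neutron_db.set_port_status(port_id, 'ACTIVE')
    nova_notifier.send_events([{'name': 'network-vif-plugged',
                                'tag': port_id}])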
May I request that a brief description/comment be made public for this issue? The other linked BZ is also private.
Hello Sufiyan: the error reported here is the same as in [1], and was fixed in the public code repository in [2]. Regards.

[1] https://bugs.launchpad.net/neutron/+bug/1983530
[2] https://review.opendev.org/c/openstack/neutron/+/851997
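For anyone verifying the fix on a reproducer environment, one quick check is whether the affected port leaves the DOWN state after the VM is plugged. A small sketch using openstacksdk; the clouds.yaml entry name "overcloud" is an assumption, adjust it to your environment:

# Illustrative status check, not part of the fix itself.
import sys
import openstack

conn = openstack.connect(cloud='overcloud')
port = conn.network.get_port('359603b7-c5eb-4450-ba2e-2422560f9b31')
# With the patch referenced in [2] the port should transition to ACTIVE once
# ovn-controller reports it up; in the failing runs it stayed DOWN and nova
# never received the network-vif-plugged event.
print(port.status)
sys.exit(0 if port.status == 'ACTIVE' else 1)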
*** Bug 2177931 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.2.5 (Train) bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:1763