Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1749473

Summary: Unable to delete an instance | Conflict: Port [port-id] is currently a parent port for trunk [trunk-id]
Product: Red Hat OpenStack
Reporter: Cristian Muresanu <cmuresan>
Component: openstack-neutron
Assignee: Nate Johnston <njohnston>
Status: CLOSED DUPLICATE
QA Contact: Candido Campos <ccamposr>
Severity: medium
Priority: medium
Version: 13.0 (Queens)
CC: amuller, chrisw, cmuresan, ffernand, fwissing, njohnston, ppitonak, ravsingh, scohen
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Linux
Last Closed: 2020-03-12 13:32:32 UTC
Type: Bug

Description Cristian Muresanu 2019-09-05 17:39:11 UTC
Description of problem:

Attempting to delete the instance fails:
> openstack server delete 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73

It seems the instance cannot be deleted because its port is a parent port for a trunk, and the trunk cannot be deleted because it is in use.

./var/log/containers/nova/nova-compute.log:
2019-08-27 18:44:32.047 1 ERROR oslo_messaging.rpc.server [req-b88046b6-6af8-40dd-924c-6648f8ee03a3 a522230b4e9f4c6791d3b2e8adf3cad8 8a159a04990641cd8a49674276c57a8a - default default] Exception during message handling: Conflict: Port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4 is currently a parent port for trunk 5a994ada-671c-4238-bbe0-48d757354b3f.

Trying to remove the trunk:
> openstack network trunk delete 5a994ada-671c-4238-bbe0-48d757354b3f
Failed to delete trunk with name or ID '5a994ada-671c-4238-bbe0-48d757354b3f': Trunk 5a994ada-671c-4238-bbe0-48d757354b3f is currently in use.
Neutron server returns request_ids: ['req-625b801f-acbc-42b9-86b1-afb61a285d71']
1 of 1 trunks failed to delete.

Trying to recreate the trunk in a disabled state also fails:
> openstack network trunk create --disable 5a994ada-671c-4238-bbe0-48d757354b3f --parent-port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4
Port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4 is currently in use and is not eligible for use as a parent port.
Neutron server returns request_ids: ['req-d94c6fe7-3b12-4374-a23d-7b57a56587e7']
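For context, the conflicting relationship can be inspected directly. A hedged sketch using the IDs from this report; it assumes the `trunk_details` column on `openstack port show` and the trunk columns shown below are available in the OSP 13 (Queens) client:

```shell
#!/bin/sh
# IDs taken from this report; substitute your own.
PORT=d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4
TRUNK=5a994ada-671c-4238-bbe0-48d757354b3f

# Confirm the port really is the trunk's parent port...
openstack port show "$PORT" -c trunk_details

# ...and see which subports keep the trunk "in use".
openstack network trunk show "$TRUNK" -c port_id -c sub_ports -c status
```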

Additional info:

2019-07-10 19:39:53.039 1 ERROR nova.compute.manager [req-44bbda19-e954-46ce-8134-a300d96a88c5 0abfff6b9ebb439193db457dbbb23c7a 8a159a04990641cd8a49674276c57a8a - default default] [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Failed to deallocate network for instance. Error: Port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4 is currently a parent port for trunk 5a994ada-671c-4238-bbe0-48d757354b3f.
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [req-44bbda19-e954-46ce-8134-a300d96a88c5 0abfff6b9ebb439193db457dbbb23c7a 8a159a04990641cd8a49674276c57a8a - default default] [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Setting instance vm_state to ERROR: Conflict: Port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4 is currently a parent port for trunk 5a994ada-671c-4238-bbe0-48d757354b3f.
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Traceback (most recent call last):
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2561, in do_terminate_instance
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._delete_instance(context, instance, bdms)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     rv = f(*args, **kwargs)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2490, in _delete_instance
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._shutdown_instance(context, instance, bdms)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2385, in _shutdown_instance
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._try_deallocate_network(context, instance, requested_networks)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2307, in _try_deallocate_network
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._set_instance_obj_error_state(context, instance)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self.force_reraise()
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     six.reraise(self.type_, self.value, self.tb)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2302, in _try_deallocate_network
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._deallocate_network(context, instance, requested_networks)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1678, in _deallocate_network
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     context, instance, requested_networks=requested_networks)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1273, in deallocate_for_instance
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     self._delete_ports(neutron, instance, ports, raise_if_fail=True)
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1247, in _delete_ports
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73]     raise exceptions[0]
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Conflict: Port d3d7a1d1-6179-4fa4-99e0-fed8553a7ad4 is currently a parent port for trunk 5a994ada-671c-4238-bbe0-48d757354b3f.
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Neutron server returns request_ids: ['req-6ef96063-09b9-4c76-bf47-bd9eb6eda0d1']
2019-07-10 19:39:53.156 1 ERROR nova.compute.manager [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] 
2019-07-10 19:39:53.270 1 DEBUG oslo_concurrency.lockutils [req-44bbda19-e954-46ce-8134-a300d96a88c5 0abfff6b9ebb439193db457dbbb23c7a 8a159a04990641cd8a49674276c57a8a - default default] Lock "5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73" released by "nova.compute.manager.do_terminate_instance" :: held 1.027s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-07-10 19:39:53.271 1 DEBUG oslo_concurrency.lockutils [-] Lock "5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73" acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 0.688s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2019-07-10 19:39:53.271 1 INFO nova.compute.manager [-] [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] During sync_power_state the instance has a pending task (deleting). Skip.
2019-07-10 19:39:53.271 1 DEBUG oslo_concurrency.lockutils [-] Lock "5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73" released by "nova.compute.manager.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-07-10 19:39:53.502 1 INFO nova.compute.manager [req-44bbda19-e954-46ce-8134-a300d96a88c5 0abfff6b9ebb439193db457dbbb23c7a 8a159a04990641cd8a49674276c57a8a - default default] [instance: 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73] Successfully reverted task state from deleting on failure for instance.
2019-07-10 19:40:50.494 1 DEBUG nova.compute.resource_tracker [req-dac91077-e161-4fd8-991a-cea05e7e9d81 - - - - -] Instance 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73 actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 60}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1252

Comment 3 Cristian Muresanu 2019-12-06 03:47:31 UTC
Hi, we're wondering if there is any chance this issue will be resolved in the osp13z9 update?
Thanks,
Cristian

Comment 4 Freddy Wissing 2019-12-19 14:40:04 UTC
We realize this isn't high priority, but is there any estimate of when this could be looked at?

Thank you,

/Freddy

Comment 6 Nate Johnston 2020-01-24 22:46:33 UTC
Freddy and Cristian,

Can you check on something for me?  First, please paste the output of "openstack network trunk show <trunk-id>", where the trunk-id is the one named in the error message (the trunk whose parent port belongs to the instance that cannot be deleted).

Second, if you see attached subports in the output, please loop over them and run "openstack network trunk unset --subport <subport-id> <trunk-id>" for each.  Then see if the instance can be deleted.

Thanks!
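The unset-each-subport loop described above can be sketched in shell. This is a hedged sketch, not verified against OSP 13: it assumes the trunk ID from the error message, that `openstack network trunk show` supports `-f json`, that the `sub_ports` entries carry a `port_id` key (as in upstream Queens-era output), and that `python3` is available for JSON parsing.

```shell
#!/bin/sh
# Trunk ID taken from the error message in this report; substitute your own.
TRUNK=5a994ada-671c-4238-bbe0-48d757354b3f

# Extract the subport IDs from the trunk's JSON description.
# (Assumes sub_ports is a list of objects, each with a "port_id" key.)
SUBPORTS=$(openstack network trunk show "$TRUNK" -f json -c sub_ports |
    python3 -c 'import json, sys
for s in json.load(sys.stdin)["sub_ports"]:
    print(s["port_id"])')

# Detach each subport from the trunk, then retry the instance delete.
for SUBPORT in $SUBPORTS; do
    openstack network trunk unset --subport "$SUBPORT" "$TRUNK"
done
openstack server delete 5bb9b8ea-831d-4cf4-9e3f-dfeaf1b70c73
```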