Bug 1460286

Summary: Failed to delete network because a port is still in use after instance deleted
Product: Red Hat OpenStack
Reporter: Yuri Obshansky <yobshans>
Component: openstack-nova
Assignee: Eoghan Glynn <eglynn>
Status: CLOSED NOTABUG
QA Contact: Joe H. Rahme <jhakimra>
Severity: high
Priority: unspecified
Version: 11.0 (Ocata)
CC: berrange, dasmith, eglynn, kchamart, sbauza, sferdjao, sgordon, srevivo, stephenfin, vromanso
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2017-06-16 14:18:03 UTC
Attachments:
- neutron server log from controller
- horizon log from controller
- nova log from compute
- Horizon screenshot of error when deleting the network
- Horizon screenshot of ports

Description Yuri Obshansky 2017-06-09 15:09:35 UTC
Description of problem:
This issue is a little complicated to describe, but the bottom line is:
I cannot delete a network; Horizon raises an error in the dashboard:
Error: Failed to delete network 030b8fdd-63a4-401b-a0f4-45e47cc84df2
Error: Unable to delete network: perf-18-net
(see screenshot and neutron-server.log)
This happens because one or more ports are still in use.
I can see that the network has 3 ports (network:dhcp)
and 1 or more detached ports with Status: DOWN and Admin State: UP,
left over after the instance was deleted.
In other words: the instance was deleted, but one or more of its ports stayed attached to the network, which prevents deleting the network.
Once I delete the leftover port, I can delete the network as well.
I investigated which instance was connected to that network: instance ID b51937fd-8da0-4c25-8665-24b3c2bad1bc (see attached nova.log).
Network Name: perf-18-net
Network ID: 030b8fdd-63a4-401b-a0f4-45e47cc84df2
Port ID: 3ecde99a-8a26-44b1-8cf0-67e32b575b9f
Subnet ID: 22c49121-c539-46b9-9b9a-24f3a5459345
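
For reference, the cleanup I did by hand in Horizon could also be scripted. A minimal sketch using openstacksdk (the SDK, the cloud name 'perf', and the hard-coded network name are assumptions for illustration, not part of this report):

import openstack

conn = openstack.connect(cloud='perf')  # assumed cloud name from clouds.yaml

net = conn.network.find_network('perf-18-net')

# Delete ports left behind by the deleted instance: anything not owned by
# a network service (DHCP/router) that reports status DOWN is a candidate.
for port in conn.network.ports(network_id=net.id):
    if (port.device_owner or '').startswith('network:'):
        continue  # network:dhcp ports do not block network deletion
    if port.status == 'DOWN':
        print('deleting leftover port %s (device_id=%s)' % (port.id, port.device_id))
        conn.network.delete_port(port)

# With the stale ports gone, the network delete succeeds.
conn.network.delete_network(net)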

Horizon/Neutron errors when deleting the network while the port remains:
Neutron server returns request_ids: ['req-7e5d19b1-4909-49b8-b5cc-9658b4b74efb']
2017-06-09 12:29:41,245 22876 WARNING horizon.tables.actions Action (u'deleted network', u'perf-18-net') Failed for 
2017-06-09 12:30:49,057 22875 INFO openstack_dashboard.dashboards.admin.networks.tables Failed to delete network 030b8fdd-63a4-401b-a0f4-45e47cc84df2
2017-06-09 12:30:49,058 22875 WARNING horizon.exceptions Recoverable error: Unable to complete operation on network 030b8fdd-63a4-401b-a0f4-45e47cc84df2. There are one or more ports still in use on the network.

Horizon messages when the port is deleted first and the network afterwards:
2017-06-09 14:26:52,377 22875 INFO horizon.tables.actions Deleted Port: "3ecde99a-8a26-44b1-8cf0-67e32b575b9f"
2017-06-09 14:27:34,683 22875 INFO horizon.tables.actions Deleted Network: "perf-18-net"

This issue occurred many times during my performance test runs, and the networks that could not be deleted (more than 250 of them) degraded overall OpenStack performance.


Version-Release number of selected component (if applicable):
rhos-release 11   -p 2017-05-30.1
openstack-nova-novncproxy-15.0.3-3.el7ost.noarch
python-nova-15.0.3-3.el7ost.noarch
openstack-nova-cert-15.0.3-3.el7ost.noarch
openstack-nova-migration-15.0.3-3.el7ost.noarch
openstack-nova-api-15.0.3-3.el7ost.noarch
python-novaclient-7.1.0-1.el7ost.noarch
puppet-nova-10.4.0-5.el7ost.noarch
openstack-nova-compute-15.0.3-3.el7ost.noarch
openstack-nova-scheduler-15.0.3-3.el7ost.noarch
openstack-nova-console-15.0.3-3.el7ost.noarch
openstack-nova-common-15.0.3-3.el7ost.noarch
openstack-nova-conductor-15.0.3-3.el7ost.noarch
openstack-nova-placement-api-15.0.3-3.el7ost.noarch

How reproducible:
Frequently under load; it occurred more than 250 times during performance testing.

Steps to Reproduce:
1. Create a network and boot an instance attached to it.
2. Delete the instance.
3. Try to delete the network from Horizon.

Actual results:
The network delete fails with "There are one or more ports still in use on the network"; a detached port with Status: DOWN remains on the network.

Expected results:
Deleting the instance removes its ports, so the network can be deleted afterwards.

Additional info:
See the attached logs and screenshots.

Comment 1 Yuri Obshansky 2017-06-09 15:10:28 UTC
python-neutron-10.0.1-1.el7ost.noarch
python-neutron-lbaas-10.0.0-8.el7ost.noarch
openstack-neutron-metering-agent-10.0.1-1.el7ost.noarch
python-neutronclient-6.1.0-1.el7ost.noarch
openstack-neutron-ml2-10.0.1-1.el7ost.noarch
openstack-neutron-10.0.1-1.el7ost.noarch
openstack-neutron-sriov-nic-agent-10.0.1-1.el7ost.noarch
puppet-neutron-10.3.0-2.el7ost.noarch
openstack-neutron-common-10.0.1-1.el7ost.noarch
openstack-neutron-openvswitch-10.0.1-1.el7ost.noarch
python-neutron-lib-1.1.0-1.el7ost.noarch
openstack-neutron-lbaas-10.0.0-8.el7ost.noarch

Comment 2 Yuri Obshansky 2017-06-09 15:11:14 UTC
Created attachment 1286442 [details]
neutron server log from controller

Comment 3 Yuri Obshansky 2017-06-09 15:11:53 UTC
Created attachment 1286443 [details]
horizon log from controller

Comment 4 Yuri Obshansky 2017-06-09 15:12:34 UTC
Created attachment 1286444 [details]
nova log from compute

Comment 5 Yuri Obshansky 2017-06-09 15:14:44 UTC
Created attachment 1286445 [details]
Horizon screenshot of error when delete Network

Comment 6 Yuri Obshansky 2017-06-09 15:15:27 UTC
Created attachment 1286446 [details]
Horizon screenshot of ports

Comment 7 Stephen Finucane 2017-06-16 14:17:41 UTC
To be honest, this looks like it's working as expected. Detaching interfaces from an instance happens asynchronously and requires collaboration between the guest and the host to complete. The detach may never complete if the guest is unable or unwilling to detach the interface, and there is no way for the host to force it. As a result, the interfaces are still present, and neutron is correctly stating that it cannot delete the network.

If you want to delete a network, the only guaranteed way to do so is to delete all instances using that network.
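
For example (a sketch, not a tested recipe; openstacksdk and the names below are assumptions): delete every instance that still has a port on the network, wait for the deletes to complete, and only then delete the network. Deleting a server tears its ports down on the host side, so no guest cooperation is needed, unlike a hot interface detach.

import openstack

conn = openstack.connect(cloud='perf')  # assumed cloud name
net = conn.network.find_network('perf-18-net')

# Find the instances still attached to the network via their ports.
server_ids = {
    port.device_id
    for port in conn.network.ports(network_id=net.id)
    if (port.device_owner or '').startswith('compute:') and port.device_id
}

for sid in server_ids:
    server = conn.compute.get_server(sid)
    conn.compute.delete_server(server)
    conn.compute.wait_for_delete(server)  # block until the instance is gone

conn.network.delete_network(net)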

I'm going to close this as NOTABUG. If I've missed something, feel free to reopen.

Comment 8 Yuri Obshansky 2017-06-16 15:47:47 UTC
Yes, I agree with you.
Actually, I wanted to draw attention to the fact that deleting the instance did not delete its ports. That is probably covered by another bug:
Bug 1459687 - Failed to delete Instance, it's state changed to Error under load test.
So let's close this one.
Regards,
Yuri