Description of problem:
When creating networks and then deleting them, I expected OVN to clean up the corresponding logical switches. As you can see in the outputs below, logical switches remain for networks that no longer exist.

(overcloud) [root@controller-0 ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 4cee7e2a-4564-453c-b48b-f73511716ad4 | net-64-2 | 9f899267-b028-4a2d-b9b3-a246b40887d2 |
| 5fedebc6-172b-4e81-a9da-594c1c9c2770 | nova     | e73c088f-810c-4c1a-8fd9-6563e8740a8d |
+--------------------------------------+----------+--------------------------------------+

(overcloud) [root@controller-0 ~]# docker exec -it ebbf1c698ebc /bin/bash
()[root@controller-0 /]# ovn-nbctl lr-list
()[root@controller-0 /]# ovn-nbctl show
switch 7971f648-8996-4afd-a8b9-d9edf6885f1b (neutron-db369b75-e0ef-470b-a190-40580d6dc3df) (aka net-64-2)
    port 1a7e0c50-8ec4-4ef1-8ac7-d5ae29d458b1
        type: localport
        addresses: ["fa:16:3e:e1:15:8e 10.0.2.2"]
switch 4d72eca9-c090-4ae2-8420-fb68c7bc66c9 (neutron-e33954a3-cbb8-4ad1-8743-d4f34418b2ca) (aka net-64-3)
    port 6fc905c4-f749-45d0-b41e-c4b8952b4a8a
        type: localport
        addresses: ["fa:16:3e:57:f6:69"]
switch 501a6c1e-8c91-4155-bf59-79ecbdde4506 (neutron-4cee7e2a-4564-453c-b48b-f73511716ad4) (aka net-64-2)
    port 79bb795d-6c5b-4b63-bb05-06cd83113d6f
        type: localport
        addresses: ["fa:16:3e:1d:1e:86"]
switch af21e202-021c-4606-8f0e-da70fc44eeac (neutron-c85e6cb0-66c7-4e24-a810-30bf4776e9b0) (aka net-64-2)
    [output truncated]

Version-Release number of selected component (if applicable):
OSP13-OVN_HA

(overcloud) [root@controller-0 ~]# rpm -qa | grep ovn
python-networking-ovn-4.0.0-0.20180220131809.329d6d8.el7ost.noarch
novnc-0.6.1-1.el7ost.noarch
openvswitch-ovn-central-2.9.0-3.el7fdp.x86_64
openvswitch-ovn-host-2.9.0-3.el7fdp.x86_64
puppet-ovn-12.3.1-0.20180221062110.4b16f7c.el7ost.noarch
openstack-nova-novncproxy-17.0.0-0.20180223162252.a4a53bf.el7ost.noarch
openvswitch-ovn-common-2.9.0-3.el7fdp.x86_64
python-networking-ovn-metadata-agent-4.0.0-0.20180220131809.329d6d8.el7ost.noarch

(overcloud) [root@controller-0 ~]# rpm -qa | grep openvs
openvswitch-2.9.0-3.el7fdp.x86_64
openvswitch-ovn-central-2.9.0-3.el7fdp.x86_64
openstack-neutron-openvswitch-12.0.0-0.20180222093622.abb60c6.el7ost.noarch
python-openvswitch-2.9.0-3.el7fdp.noarch
openvswitch-ovn-host-2.9.0-3.el7fdp.x86_64
openvswitch-ovn-common-2.9.0-3.el7fdp.x86_64
(overcloud) [root@controller-0 ~]#

Steps to Reproduce:
1. Create a few networks and remove them.
2. Verify that they were removed from the OVN northbound database (ovn-nbctl show).

Actual results:
Stale logical switches remain in the OVN northbound database.

Expected results:
Deleting a Neutron network also removes its logical switch from OVN.

Additional info:
Logs: https://drive.google.com/open?id=1pp-Yaq3N6YPu8B5dRvBmux8gXYcxvd2g
I noticed that even the ports are not cleaned from the OVN DB:

(overcloud) [root@controller-0 ~]# ovn-nbctl --db=tcp:172.17.1.15:6641 show
switch 31990f00-c41e-466e-9070-bf3760b58926 (neutron-7b8f0751-6907-408a-8997-89747009fd09) (aka net-64-2)
    port 6a9c85b2-8a8e-470b-b50f-7ae7c3380b03
        type: localport
        addresses: ["fa:16:3e:85:ae:47 10.0.2.2"]
    port a0cc0b12-70d5-46c9-8e00-e76e970c711f
        addresses: ["fa:16:3e:42:d6:89 10.0.2.8"]
    port 580a8d2c-eaa0-48f0-a7e8-8c379abb8b29
        type: router
        router-port: lrp-580a8d2c-eaa0-48f0-a7e8-8c379abb8b29
switch 7bb30649-71dc-405f-9220-37f7f80f855f (neutron-88236779-29ef-46aa-bc6b-80d8f0f15b45) (aka nova)
    port 2ae28cbb-8ced-4158-ac3a-7f43cf520ee7
        type: localport
        addresses: ["fa:16:3e:18:b4:cd"]
    port 6042c7e2-79b3-4925-b606-b86c6dc1e824
        type: router
        router-port: lrp-6042c7e2-79b3-4925-b606-b86c6dc1e824
    port 284190ed-ff6a-438b-b9ee-a843f13edbd6
        type: router
        router-port: lrp-284190ed-ff6a-438b-b9ee-a843f13edbd6
    port provnet-88236779-29ef-46aa-bc6b-80d8f0f15b45
        type: localnet
        addresses: ["unknown"]
switch 26f1fe62-b330-47a6-8527-0d098a2239ac (neutron-6484b473-5e68-440e-9d90-a53e42fe9dc2) (aka net-64-3)
    port 783de96f-ed69-4d3f-83a3-afa2560a7e02
        type: router
        router-port: lrp-783de96f-ed69-4d3f-83a3-afa2560a7e02
    port d12c0cd5-b818-484a-ac0f-70222b15b0cd
        addresses: ["fa:16:3e:cb:69:c1 10.0.3.9"]
    port 53afc813-7488-47fe-ba2d-9047577e9ce3
        addresses: ["fa:16:3e:33:c2:e6 10.0.3.10"]
    port accbd0cb-be25-4f96-8e5b-59e3f473871d
        type: localport
        addresses: ["fa:16:3e:04:03:20 10.0.3.2"]
router ed8829a4-4206-4410-983d-df2e88790121 (neutron-9b83b3ff-e802-4e2a-8c36-1918b6355c7a) (aka Router_eNet_2)
    port lrp-6042c7e2-79b3-4925-b606-b86c6dc1e824
        mac: "fa:16:3e:0a:22:a5"
        networks: ["10.0.0.220/24"]
        gateway chassis: [113644ed-b3c6-47f2-9488-984d37936c97 a34f57de-09d3-4c1f-b56b-270eb850537a 942750fc-cec5-4a9f-aeb5-6dfddf9be3be]
router 0769eb6f-60ed-451a-af57-8ea56c257fda (neutron-cb989bd4-f821-46b4-b556-b499dd64d5c7) (aka Router_eNet)
    port lrp-284190ed-ff6a-438b-b9ee-a843f13edbd6
        mac: "fa:16:3e:53:26:19"
        networks: ["10.0.0.214/24"]
        gateway chassis: [a34f57de-09d3-4c1f-b56b-270eb850537a 113644ed-b3c6-47f2-9488-984d37936c97 942750fc-cec5-4a9f-aeb5-6dfddf9be3be]
    port lrp-783de96f-ed69-4d3f-83a3-afa2560a7e02
        mac: "fa:16:3e:0c:8e:28"
        networks: ["10.0.3.1/24"]
    port lrp-580a8d2c-eaa0-48f0-a7e8-8c379abb8b29
        mac: "fa:16:3e:c3:0a:b0"
        networks: ["10.0.2.1/24"]
    nat 1801d558-fe18-4015-96c7-6998160c64f5
        external ip: "10.0.0.218"
        logical ip: "10.0.3.9"
        type: "dnat_and_snat"
    nat 46c19fad-c450-490f-8255-66bb3c1f715f
        external ip: "10.0.0.214"
        logical ip: "10.0.2.0/24"
        type: "snat"
    nat b81c0ac9-6e19-4beb-88aa-3c1e120fe680
        external ip: "10.0.0.215"
        logical ip: "10.0.2.8"
        type: "dnat_and_snat"
    nat dce146ff-354b-4340-9607-49ee78d33be9
        external ip: "10.0.0.214"
        logical ip: "10.0.3.0/24"
        type: "snat"

(overcloud) [root@controller-0 ~]# openstack port list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                        | Status |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+
| 284190ed-ff6a-438b-b9ee-a843f13edbd6 |      | fa:16:3e:53:26:19 | ip_address='10.0.0.214', subnet_id='96e6e38a-c1d8-4cc5-a7fa-5794dc907dd1' | DOWN   |
| 2ae28cbb-8ced-4158-ac3a-7f43cf520ee7 |      | fa:16:3e:18:b4:cd |                                                                           | DOWN   |
| 32316c03-b870-4c7f-964f-5f58dd7ee977 |      | fa:16:3e:9d:b6:54 | ip_address='10.0.0.215', subnet_id='96e6e38a-c1d8-4cc5-a7fa-5794dc907dd1' | N/A    |
| 4500a4c9-8b05-444d-9bed-dc048f71cf67 |      | fa:16:3e:1d:58:10 | ip_address='10.0.0.218', subnet_id='96e6e38a-c1d8-4cc5-a7fa-5794dc907dd1' | N/A    |
| 580a8d2c-eaa0-48f0-a7e8-8c379abb8b29 |      | fa:16:3e:c3:0a:b0 | ip_address='10.0.2.1', subnet_id='edc531c7-0177-48d1-b20b-c989f746c1bb'   | DOWN   |
| 6042c7e2-79b3-4925-b606-b86c6dc1e824 |      | fa:16:3e:0a:22:a5 | ip_address='10.0.0.220', subnet_id='96e6e38a-c1d8-4cc5-a7fa-5794dc907dd1' | DOWN   |
| 6a9c85b2-8a8e-470b-b50f-7ae7c3380b03 |      | fa:16:3e:85:ae:47 | ip_address='10.0.2.2', subnet_id='edc531c7-0177-48d1-b20b-c989f746c1bb'   | DOWN   |
| 783de96f-ed69-4d3f-83a3-afa2560a7e02 |      | fa:16:3e:0c:8e:28 | ip_address='10.0.3.1', subnet_id='d9c73540-37f6-4401-a428-1bc961e8bcc4'   | DOWN   |
| a0cc0b12-70d5-46c9-8e00-e76e970c711f |      | fa:16:3e:42:d6:89 | ip_address='10.0.2.8', subnet_id='edc531c7-0177-48d1-b20b-c989f746c1bb'   | ACTIVE |
| accbd0cb-be25-4f96-8e5b-59e3f473871d |      | fa:16:3e:04:03:20 | ip_address='10.0.3.2', subnet_id='d9c73540-37f6-4401-a428-1bc961e8bcc4'   | DOWN   |
| d12c0cd5-b818-484a-ac0f-70222b15b0cd |      | fa:16:3e:cb:69:c1 | ip_address='10.0.3.9', subnet_id='d9c73540-37f6-4401-a428-1bc961e8bcc4'   | ACTIVE |
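In these outputs the OVN logical switch port names match the Neutron port UUIDs, so leftover ports can be spotted by diffing the two ID sets. The following is only an illustrative diagnostic sketch (not part of networking-ovn); it assumes the `ovn-nbctl show` text format shown above, and deliberately skips router ports (`lrp-*`) and provider-network ports (`provnet-*`), which are named differently:

```python
import re

# Matches lines like "    port 6a9c85b2-8a8e-470b-b50f-7ae7c3380b03";
# lrp-*/provnet-* ports do not match the bare-UUID pattern.
PORT_RE = re.compile(
    r"^\s*port ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})\s*$",
    re.MULTILINE,
)

def find_stale_ports(nbctl_show_output, neutron_port_ids):
    """Return OVN logical switch port names with no matching Neutron port.

    `nbctl_show_output` is the text of `ovn-nbctl show`; `neutron_port_ids`
    is the list of port IDs from `openstack port list`.
    """
    ovn_ports = set(PORT_RE.findall(nbctl_show_output))
    return sorted(ovn_ports - set(neutron_port_ids))
```

Feeding it the captured `ovn-nbctl show` text and the ID column of `openstack port list` would list exactly the orphaned logical switch ports.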
You mean that after creating a network and some ports with openstack (not ovn-nbctl) and then deleting them, the resources are not cleaned up from the backend (OVN)?

Could you please post some reproduction steps? I have tried simple commands and it appears to work:

$ openstack network create test_net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-04-02T10:59:05Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 44aa72a2-5413-4f96-8b67-76111230f301 |

$ sudo ovn-nbctl ls-list | grep 44aa72a2
8b1bf65d-8509-477b-a086-44d3cec6137f (neutron-44aa72a2-5413-4f96-8b67-76111230f301)

$ openstack network delete 44aa72a2-5413-4f96-8b67-76111230f301

$ sudo ovn-nbctl ls-list | grep 44aa72a2
$

If you could please reproduce it and indicate the steps, it'd be really helpful. How reproducible is it?

Thanks,
Daniel
(In reply to Daniel Alvarez Sanchez from comment #4)
> You mean that after creating a network and some ports with openstack (not
> ovn-nbctl) and then deleting them would not clean the resources from the
> backend (OVN)?
>
> Could you please post some reproduction steps? I have tried simple commands
> and it appears to work:
>
> $ openstack network create test_net
> [...]
> | id                        | 44aa72a2-5413-4f96-8b67-76111230f301 |
>
> $ sudo ovn-nbctl ls-list | grep 44aa72a2
> 8b1bf65d-8509-477b-a086-44d3cec6137f
> (neutron-44aa72a2-5413-4f96-8b67-76111230f301)
>
> $ openstack network delete 44aa72a2-5413-4f96-8b67-76111230f301
>
> $ sudo ovn-nbctl ls-list | grep 44aa72a2
> $
>
> If you could please reproduce it and indicate the steps, it'd be really
> helpful.
> How reproducible is it?
>
> Thanks,
> Daniel

Yes, this is exactly what I meant; you can see the output from my setup. Numan saw the issue on my setup.

Steps to Reproduce:
1. Create a few networks.
2. I found it after I restarted the ovn-northd docker container (https://bugzilla.redhat.com/show_bug.cgi?id=1560892).
3. Create a few instances, 10 for example (a few of the instances end up in error state due to lack of resources).
4. Remove all instances and networks.
5. Verify that they were removed from the ovn-nbctl output. I saw that the switches and ports were not deleted.
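The verification step above can be sketched as a small script: OVN names each logical switch `neutron-<network-uuid>` (visible in the outputs in this report), so any switch whose embedded UUID is not in `openstack network list` is stale. This is only an illustrative check under that naming assumption, not the project's official tooling (networking-ovn's `neutron-ovn-db-sync-util`, where available, is the supported way to repair such inconsistencies):

```python
def find_stale_switches(ovn_switch_names, neutron_network_ids):
    """Return OVN logical switch names whose neutron-<uuid> suffix does not
    correspond to an existing Neutron network.

    `ovn_switch_names` comes from `ovn-nbctl ls-list` (the neutron-* names);
    `neutron_network_ids` from the ID column of `openstack network list`.
    """
    prefix = "neutron-"
    live = set(neutron_network_ids)
    return sorted(
        name for name in ovn_switch_names
        if name.startswith(prefix) and name[len(prefix):] not in live
    )
```

For example, with the network list and `ovn-nbctl show` output from the original description, this would flag `neutron-db369b75-e0ef-470b-a190-40580d6dc3df` and the other switches whose networks were deleted.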
I think this is a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1560892
How reproducible is it?
(In reply to Daniel Alvarez Sanchez from comment #6)
> I think this is a side effect of
> https://bugzilla.redhat.com/show_bug.cgi?id=1560892
> How reproducible is it?

I saw it twice. I will try to reproduce it again on a fresh deployment.
Could not reproduce this issue. You can close this one; I will reopen it if I see it again.