Description of problem:
We have 3 controllers and 2 compute nodes; no instances were created. Tunnels between the controllers and compute nodes are not removed after enabling l2population.
http://pastebin.test.redhat.com/338768

Version-Release number of selected component (if applicable):
# rhos-release 8-director -p 2015-12-03.1
# rhos-release 8 -p 2015-12-22.2
[root@overcloud-controller-0 ~]# rpm -qa | grep neutron
python-neutronclient-3.1.0-1.el7ost.noarch
python-neutron-7.0.1-2.el7ost.noarch
openstack-neutron-common-7.0.1-2.el7ost.noarch
openstack-neutron-7.0.1-2.el7ost.noarch
openstack-neutron-ml2-7.0.1-2.el7ost.noarch
python-neutron-lbaas-7.0.0-2.el7ost.noarch
openstack-neutron-lbaas-7.0.0-2.el7ost.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
openstack-neutron-metering-agent-7.0.1-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Enable l2population.
2. Check whether tunnels still exist between the compute nodes and controllers.

Actual results:
The tunnels still exist.

Expected results:
The tunnels between the compute nodes and controllers should be removed.

Additional info:
http://pastebin.test.redhat.com/338767
The environment was installed with Liberty OSPD; l2population was enabled MANUALLY.
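One way to perform step 2 above (assuming the OVS agent's default tunnel bridge name br-tun and its usual tunnel-port naming convention, vxlan-<hex-ip> or gre-<hex-ip>; adjust for your deployment):

```shell
# On each compute node and controller, list ports on the tunnel bridge
# and keep only tunnel ports. With l2pop working, tunnels to peers that
# host no relevant VMs should be gone.
ovs-vsctl list-ports br-tun | grep -E '^(vxlan|gre)-' || echo "no tunnel ports"
```

If the bug is present, tunnel ports to every peer remain listed even after l2population is enabled.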
That's how the code works right now: the OVS agent checks whether a tunnel should be deleted only when it receives a message from the controller that a VM was removed. It does not check whether it should clean up tunnels when it starts. The only reason it would need to is to handle the situation outlined in this bug report, where the admin enables l2pop after the agent has already started. Still, this is a case we might want to handle.
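A minimal sketch of the behavior described above. All names here (Agent, fdb_add, fdb_remove) are illustrative, not neutron's actual API; the point is that tunnel deletion is driven solely by "VM removed" messages, so tunnels that predate enabling l2pop are never audited at startup:

```python
class Agent:
    """Toy model of the OVS agent's l2pop-driven tunnel lifecycle."""

    def __init__(self):
        # remote host IP -> number of VMs we still need a tunnel for
        self.vm_count = {}
        self.tunnels = set()

    def fdb_add(self, remote_ip):
        """Controller reports a VM appeared on remote_ip: ensure a tunnel."""
        self.vm_count[remote_ip] = self.vm_count.get(remote_ip, 0) + 1
        self.tunnels.add(remote_ip)

    def fdb_remove(self, remote_ip):
        """Controller reports a VM left remote_ip: drop the tunnel only
        if that was the last VM on that host."""
        self.vm_count[remote_ip] -= 1
        if self.vm_count[remote_ip] == 0:
            del self.vm_count[remote_ip]
            self.tunnels.discard(remote_ip)

    def start(self):
        """On startup the agent does NOT audit existing tunnels, so a
        tunnel created before l2pop was enabled stays up until a remove
        message for its last VM happens to arrive."""
        pass
```

In this model, a pre-existing tunnel with no fdb bookkeeping behind it has no code path that removes it, which matches the reported symptom.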