Bug 1048053
| Summary: | How to remove un-used tunnel ports on OVS | | |
|---|---|---|---|
| Product: | [Community] RDO | Reporter: | chen.li |
| Component: | openstack-neutron | Assignee: | RHOS Maint <rhos-maint> |
| Status: | CLOSED DUPLICATE | QA Contact: | Ofer Blaut <oblaut> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | unspecified | CC: | amuller, chen.li, chrisw, kchamart, nyechiel, sputhenp, yeylon |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-06-29 11:02:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description chen.li 2014-01-03 02:45:58 UTC
I just tried to reproduce this with the ML2 plugin (which uses OpenvSwitch); the below is what I see. Chen, can you confirm that step 5 below demonstrates what you describe?

That's my setup: a 2-node RDO setup (using Fedora 20, and IceHouse packages from Fedora Rawhide as of this writing, 14-MAY-2014) with Nova, Keystone, Glance, and Neutron with ML2+GRE+OVS, and RabbitMQ for AMQP messaging. This is a virtualized setup, i.e. Nova instances are KVM nested guests.

Versions
--------
openstack-neutron-openvswitch-2014.1-11.fc21.noarch
openvswitch-2.0.1-1.fc20.x86_64

Test
----
1. Run `ovs-vsctl show` before modifying the 'local_ip' value in /etc/neutron/plugin.ini (that's a symlink to /etc/neutron/plugins/ml2/ml2_conf.ini) and /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

2. Some diagnostics info (before I change the 'local_ip'):

---------------
$ ip a | grep 192.169.142.97
    inet 192.169.142.97/24 brd 192.169.142.255 scope global br-ex

$ ovs-vsctl show
c993ff93-7d03-42e2-8566-331d10442686
    Bridge br-int
        Port "qr-f0ea1594-3f"
            tag: 1
            Interface "qr-f0ea1594-3f"
                type: internal
        Port "tapa8818ee8-f9"
            tag: 1
            Interface "tapa8818ee8-f9"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tapc5a1f7b4-dc"
            tag: 2
            Interface "tapc5a1f7b4-dc"
                type: internal
    Bridge br-tun
        Port "gre-c0a98ea8"
            Interface "gre-c0a98ea8"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.97", out_key=flow, remote_ip="192.169.142.168"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "qg-fb9ff0ad-56"
            Interface "qg-fb9ff0ad-56"
                type: internal
        Port "ens2"
            Interface "ens2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.0.1"
---------------

3. Check the current 'local_ip' (it's 192.169.142.97) on the Controller node (which is running the Neutron server, OpenvSwitch agent, DHCP agent, and L3 agent):

---------------
$ cd /etc/neutron/plugins
$ ls
ml2  openvswitch
$ grep -r 192.169.142.97 *
ml2/ml2_conf.ini:local_ip = 192.169.142.97
openvswitch/ovs_neutron_plugin.ini:local_ip = 192.169.142.97
---------------

4. Change the value of 'local_ip' from 192.169.142.97 to the non-existent 192.169.142.98 (in both ml2_conf.ini and ovs_neutron_plugin.ini) on the Controller node, then restart the Neutron OVS agent:

---------------
$ systemctl restart neutron-openvswitch-agent
---------------

5. Run `ovs-vsctl show` again and grep for the new IP (192.169.142.98). I think the below demonstrates what you're saying:

---------------
$ ovs-vsctl show | grep 192.169.142.98 -A3 -B3
        Port "gre-c0a98e61"
            Interface "gre-c0a98e61"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.98", out_key=flow, remote_ip="192.169.142.97"}
        Port "gre-c1a98e62"
            Interface "gre-c1a98e62"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.98", out_key=flow, remote_ip="193.169.142.98"}
        Port "gre-c0a98ea8"
            Interface "gre-c0a98ea8"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.98", out_key=flow, remote_ip="192.169.142.168"}
        Port patch-int
            Interface patch-int
                type: patch
---------------

6. Change the 'local_ip' back to 192.169.142.97 on the Controller node, restart neutron-openvswitch-agent, and grep for both IPs (192.169.142.97 and 192.169.142.98) in `ovs-vsctl show`:

---------------
$ ovs-vsctl show | egrep -i '192.169.142.97|192.169.142.98' -A2 -B2
            Interface "gre-c0a98e62"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.97", out_key=flow, remote_ip="192.169.142.98"}
        Port "gre-c0a98ea8"
            Interface "gre-c0a98ea8"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.97", out_key=flow, remote_ip="192.169.142.168"}
        Port "gre-c1a98e62"
            Interface "gre-c1a98e62"
                type: gre
                options: {in_key=flow, local_ip="192.169.142.97", out_key=flow, remote_ip="193.169.142.98"}
    Bridge br-int
        Port "qr-f0ea1594-3f"
---------------
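For anyone hitting the same lingering ports: an orphaned GRE port such as "gre-c0a98e62" above can be deleted from br-tun by hand. A minimal cleanup sketch, assuming the port name from the step-6 output (substitute whatever stale gre-* port `ovs-vsctl show` reports on your node); note that the agent may simply recreate the port as long as the dead endpoint is still registered in the Neutron database:

---------------
$ ovs-vsctl list-ports br-tun            # confirm the stale GRE port is present
$ ovs-vsctl --if-exists del-port br-tun gre-c0a98e62
$ ovs-vsctl show | grep gre-c0a98e62     # should now print nothing
---------------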
Yes, I think this is what I described!

*** This bug has been marked as a duplicate of bug 1108790 ***

I can't reach bug 1108790; it says "You are not authorized to access bug #1108790". Any solutions for this?

The only option now is to remove that IP from the ovs_tunnel_endpoints table of the neutron/ovs_neutron database. If you need help with the exact command for your system, please open a case with Red Hat support.
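A minimal sketch of that database cleanup, assuming a MySQL-backed, Icehouse-era deployment in which ovs_tunnel_endpoints has an ip_address column (the database and column names are assumptions; verify them on your release, and back up the database before deleting anything):

---------------
$ mysql -u root -p ovs_neutron
mysql> SELECT * FROM ovs_tunnel_endpoints;    -- inspect the registered endpoints first
mysql> DELETE FROM ovs_tunnel_endpoints WHERE ip_address = '192.169.142.98';
mysql> quit
$ systemctl restart neutron-openvswitch-agent
---------------

Once the dead endpoint row is gone, restarting the agents should let them rebuild the tunnel mesh from the remaining endpoints, so the manual `ovs-vsctl del-port` cleanup shown after step 6 then stays effective.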