Bug 1327880

Summary: OVS-firewall: no connectivity to instance after updating admin_state_up from False to True
Product: Red Hat OpenStack
Reporter: Eran Kuris <ekuris>
Component: openstack-neutron
Assignee: Assaf Muller <amuller>
Status: CLOSED WORKSFORME
QA Contact: GenadiC <gcheresh>
Severity: high
Docs Contact:
Priority: low
Version: 9.0 (Mitaka)
CC: amuller, chrisw, ekuris, gcheresh, jlibosva, nyechiel, srevivo, tfreger
Target Milestone: ---
Keywords: ZStream
Target Release: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-03 13:55:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Eran Kuris 2016-04-17 12:33:12 UTC
Description of problem:
On a setup with OVS 2.5 (OVS-firewall environment) I ran Tempest, and TestNetworkBasicOps:test_update_instance_port_admin_state failed.

I validated it manually:
When the neutron port's admin_state_up is True and the security group allows ICMP/SSH, there is connectivity.
After updating the port to admin_state_up=False, connectivity fails as expected. The problem is that connectivity still fails after setting it back to True.
Version-Release number of selected component (if applicable):
[root@puma15 ~]# rpm -qa |grep -i neutron 
openstack-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-ml2-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-openvswitch-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-common-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutronclient-4.1.2-0.20160304195803.5d28651.el7.centos.noarch
python-neutron-lib-0.0.3-0.20160227020344.999828a.el7.centos.noarch
openstack-neutron-metering-agent-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
[root@puma15 ~]# rpm -qa |grep -i openvswitch 
python-openvswitch-2.5.0-2.el7.noarch
openstack-neutron-openvswitch-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openvswitch-2.5.0-2.el7.x86_64
[root@puma15 ~]# 


How reproducible:
Always

Steps to Reproduce:
1. Create a VM on an OVS-firewall setup.
2. Assign a security group that allows SSH and ICMP, and check connectivity (should succeed).
3. Set the VM port's admin state to False and check connectivity (should fail).
4. Set the VM port's admin state back to True and check connectivity (should succeed) --> connectivity fails.

The workaround is to restart neutron-ovs-agent.
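For reference, the steps above can be sketched as shell commands against a live deployment. This is a minimal sketch, not a definitive test: the port UUID, DHCP namespace, and guest IP are placeholders taken from the transcript in comment 7 and must be substituted with values from your own environment.

```shell
# Placeholders -- substitute values from your environment.
PORT=21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
NETNS=qdhcp-faefcb03-dab7-465e-80f4-f4ad449a3899
GUEST_IP=192.168.0.3

# Baseline: port is up, ping from the DHCP namespace should succeed.
ip netns exec "$NETNS" ping -c 2 "$GUEST_IP"

# Disable the port; ping should now fail.
neutron port-update --admin-state-up=False "$PORT"
ip netns exec "$NETNS" ping -c 2 "$GUEST_IP"

# Re-enable the port; ping should succeed again.
# This is the step where connectivity stays broken in this bug.
neutron port-update --admin-state-up=True "$PORT"
ip netns exec "$NETNS" ping -c 2 "$GUEST_IP"
```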
Actual results:
After setting admin_state_up back to True, connectivity to the instance is still broken until neutron-ovs-agent is restarted.

Expected results:
Connectivity is restored once admin_state_up is set back to True.

Additional info:

Comment 5 Assaf Muller 2016-04-17 17:55:04 UTC
Jakub, can you look into this? Thank you.

Comment 6 Eran Kuris 2016-04-18 06:09:11 UTC
 rhos-9.0

Comment 7 Jakub Libosvar 2016-05-02 12:15:06 UTC
Works for me:

[root@centos7-rdo versions(keystone_admin)]# neutron port-show -c status 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
+--------+--------+
| Field  | Value  |
+--------+--------+
| status | ACTIVE |
+--------+--------+
[root@centos7-rdo versions(keystone_admin)]# neutron port-show -c status,id 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756

[root@centos7-rdo versions(keystone_admin)]# neutron port-show -c status -c id 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
+--------+--------------------------------------+
| Field  | Value                                |
+--------+--------------------------------------+
| id     | 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756 |
| status | ACTIVE                               |
+--------+--------------------------------------+
[root@centos7-rdo versions(keystone_admin)]# ovs-vsctl get Port tap21ccfb0e-a9 tag
1
[root@centos7-rdo versions(keystone_admin)]# ip net e qdhcp-faefcb03-dab7-465e-80f4-f4ad449a3899 ping 192.168.0.3 -c 2
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=1.41 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.593 ms

--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.593/1.004/1.415/0.411 ms
[root@centos7-rdo versions(keystone_admin)]# neutron port-update --admin-state-up=False 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
Updated port: 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
[root@centos7-rdo versions(keystone_admin)]# neutron port-show -c status -c id 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
+--------+--------------------------------------+
| Field  | Value                                |
+--------+--------------------------------------+
| id     | 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756 |
| status | DOWN                                 |
+--------+--------------------------------------+
[root@centos7-rdo versions(keystone_admin)]# ip net e qdhcp-faefcb03-dab7-465e-80f4-f4ad449a3899 ping 192.168.0.3 -c 2
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.

--- 192.168.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[root@centos7-rdo versions(keystone_admin)]# ovs-vsctl get Port tap21ccfb0e-a9 tag
4095
[root@centos7-rdo versions(keystone_admin)]# neutron port-update --admin-state-up=True 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
Updated port: 21ccfb0e-a98b-4b3f-8f27-ad180ebf6756
[root@centos7-rdo versions(keystone_admin)]# ovs-vsctl get Port tap21ccfb0e-a9 tag
1
[root@centos7-rdo versions(keystone_admin)]# ip net e qdhcp-faefcb03-dab7-465e-80f4-f4ad449a3899 ping 192.168.0.3 -c 2
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=1.87 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.500 ms

--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.500/1.188/1.877/0.689 ms


As per the output above, admin-state-up=False puts the port into the dead VLAN (tag 4095), and setting it back to True restores the original tag.

Eran, can you please verify that the VLAN tags are correct before you set admin_state_up to False, after you set it to False, and after you set it back to True?
Can you also track where the packets are dropped?
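A sketch of the checks requested above, using the same tooling as the transcript. The tap device name is a placeholder, and the integration bridge is assumed to be named br-int (the default for the OVS agent); adjust both for your environment.

```shell
TAP=tap21ccfb0e-a9   # tap device of the port under test (placeholder)

# Expected: the local VLAN tag (e.g. 1) when admin_state_up=True,
# and 4095 (the dead VLAN) when admin_state_up=False.
ovs-vsctl get Port "$TAP" tag

# Assuming the integration bridge is br-int, dump the OpenFlow rules;
# re-running this while pinging shows which rules' n_packets counters grow.
ovs-ofctl dump-flows br-int

# Trace how a hypothetical ICMP packet entering from the tap port
# would traverse the flow tables, to see where it is dropped.
IN_PORT=$(ovs-vsctl get Interface "$TAP" ofport)
ovs-appctl ofproto/trace br-int "in_port=$IN_PORT,icmp"
```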

Comment 9 GenadiC 2016-05-03 13:55:06 UTC
Tried it on CentOS Linux release 7.2.1511 with kernel 4.5.2-1.el7.elrepo.x86_64, and everything works as expected.