Bug 1726196
| Summary: | [OVN][no-DVR] No connectivity between internal and external networks | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Roman Safronov <rsafrono> |
| Component: | python-networking-ovn | Assignee: | Assaf Muller <amuller> |
| Status: | CLOSED DUPLICATE | QA Contact: | Eran Kuris <ekuris> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 15.0 (Stein) | CC: | apevec, lhh, majopela, scohen |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-07-02 11:19:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Roman Safronov
2019-07-02 10:19:21 UTC
```
[heat-admin@controller-0 ~]$ sudo podman exec -it neutron_api rpm -qa | grep ovn
puppet-ovn-14.4.1-0.20190531130403.69b5479.el8ost.noarch
python3-networking-ovn-6.0.1-0.20190615010402.1a83cb6.el8ost.noarch

[heat-admin@controller-0 ~]$ ovn-nbctl show
switch 7156b4c8-cc12-47f8-aad9-c9176b830bbb (neutron-663a46e8-c73a-4df0-8f5e-1f09031e893b) (aka admin_int_net_1)
    port f43650b5-5902-453d-8ce5-13dce1173d48 (aka rhel76_admin_vm1_net1_10.0.0.237)
        addresses: ["fa:16:3e:f2:fc:97 10.0.1.251"]
    port 10cf9c7b-919d-46f3-bba5-9048247bd2eb
        type: localport
        addresses: ["fa:16:3e:b5:4e:05 10.0.1.2", "unknown"]
    port 491bddf9-64f0-48ef-a399-fa5fd88c2667
        type: router
        router-port: lrp-491bddf9-64f0-48ef-a399-fa5fd88c2667
switch d66b51aa-6ea7-42df-8c71-ea152af7d769 (neutron-b8881506-4980-4210-96be-14c30bc0ebbe) (aka admin_int_net_2)
    port e67dbfd2-ec2e-45d5-95ae-49bd244e0112
        type: localport
        addresses: ["fa:16:3e:b2:1c:1b 10.0.2.2", "unknown"]
    port 5b6369a0-e85c-46eb-b252-d1624eff9790 (aka rhel76_admin_vm1_net2_10.0.0.242)
        addresses: ["fa:16:3e:b0:0a:5d 10.0.2.187"]
    port 75a4854a-ea94-4861-8a62-6c1e9c7ffa7a
        type: router
        router-port: lrp-75a4854a-ea94-4861-8a62-6c1e9c7ffa7a
switch b3bc4607-f10b-4438-803d-4333d071f115 (neutron-6d086495-02cd-4892-95ac-b97ef340737d) (aka public)
    port 36be2590-4be5-4d07-a06f-c34591ceda74
        type: router
        router-port: lrp-36be2590-4be5-4d07-a06f-c34591ceda74
    port provnet-6d086495-02cd-4892-95ac-b97ef340737d
        type: localnet
        addresses: ["unknown"]
router d000f3ae-82a3-4e86-ad46-299f5cf5bc26 (neutron-4524f6b1-26ff-472c-ad72-66523340ee5d) (aka admin_Router_eNet)
    port lrp-491bddf9-64f0-48ef-a399-fa5fd88c2667
        mac: "fa:16:3e:78:e5:9a"
        networks: ["10.0.1.1/24"]
    port lrp-36be2590-4be5-4d07-a06f-c34591ceda74
        mac: "fa:16:3e:4d:b9:07"
        networks: ["10.0.0.222/24"]
        gateway chassis: [b878e4c5-b387-4ca0-807e-a6852e2d9668 a4bad0ad-1ef1-4bfc-aa3f-475f4eab1d57 ae4eeea0-5b3c-4fa5-aea5-1a96bbc2c705 b055d014-d44d-4d75-9d6a-3411b7ef171e 2ace55e7-9fb0-4533-a43a-4e5b942dec55]
    port lrp-75a4854a-ea94-4861-8a62-6c1e9c7ffa7a
        mac: "fa:16:3e:05:1b:b5"
        networks: ["10.0.2.1/24"]
    nat 524d1cce-7cad-4a1f-8717-540eb03ff71c
        external ip: "10.0.0.222"
        logical ip: "10.0.2.0/24"
        type: "snat"
    nat 5507e68c-760c-45ad-9ae6-1850760727b3
        external ip: "10.0.0.222"
        logical ip: "10.0.1.0/24"
        type: "snat"
    nat 82fbea19-a418-42e6-a3f1-0438a86461d0
        external ip: "10.0.0.237"
        logical ip: "10.0.1.251"
        type: "dnat_and_snat"
    nat fb23bbbd-c1d1-440e-9cbb-670e24540686
        external ip: "10.0.0.242"
        logical ip: "10.0.2.187"
        type: "dnat_and_snat"

[root@rhel76-admin-vm1-net2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:b0:0a:5d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.187/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 36207sec preferred_lft 36207sec
    inet6 fe80::f816:3eff:feb0:a5d/64 scope link
       valid_lft forever preferred_lft forever

[root@rhel76-admin-vm1-net2 ~]# ping 10.0.0.222
PING 10.0.0.222 (10.0.0.222) 56(84) bytes of data.
64 bytes from 10.0.0.222: icmp_seq=1 ttl=254 time=0.960 ms
64 bytes from 10.0.0.222: icmp_seq=2 ttl=254 time=0.883 ms
64 bytes from 10.0.0.222: icmp_seq=3 ttl=254 time=0.800 ms
^C
--- 10.0.0.222 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.800/0.881/0.960/0.065 ms

[root@rhel76-admin-vm1-net2 ~]# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
^C
--- 10.0.0.1 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 8999ms

[root@rhel76-admin-vm1-net2 ~]# ping 10.0.1.251
PING 10.0.1.251 (10.0.1.251) 56(84) bytes of data.
64 bytes from 10.0.1.251: icmp_seq=1 ttl=63 time=2.97 ms
64 bytes from 10.0.1.251: icmp_seq=2 ttl=63 time=1.99 ms
^C
--- 10.0.1.251 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.993/2.482/2.971/0.489 ms

[root@rhel76-admin-vm1-net2 ~]# ping 10.0.0.242
PING 10.0.0.242 (10.0.0.242) 56(84) bytes of data.
64 bytes from 10.0.0.242: icmp_seq=1 ttl=62 time=3.39 ms
64 bytes from 10.0.0.242: icmp_seq=2 ttl=62 time=1.95 ms
^C
--- 10.0.0.242 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.952/2.672/3.393/0.722 ms

[root@rhel76-admin-vm1-net2 ~]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms

[root@rhel76-admin-vm1-net2 ~]# ping 10.0.0.237
PING 10.0.0.237 (10.0.0.237) 56(84) bytes of data.
64 bytes from 10.0.0.237: icmp_seq=1 ttl=62 time=3.37 ms
64 bytes from 10.0.0.237: icmp_seq=2 ttl=62 time=2.13 ms
64 bytes from 10.0.0.237: icmp_seq=3 ttl=62 time=1.55 ms
^C
--- 10.0.0.237 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms

(overcloud) [stack@undercloud-0 my]$ openstack router show admin_Router_eNet
| Field                   | Value |
| admin_state_up          | UP |
| availability_zone_hints | None |
| availability_zones      | None |
| created_at              | 2019-07-02T08:28:39Z |
| description             | created by admin |
| external_gateway_info   | {"network_id": "6d086495-02cd-4892-95ac-b97ef340737d", "external_fixed_ips": [{"subnet_id": "3cb80043-e084-4afd-afd8-c106640d0ba4", "ip_address": "10.0.0.222"}], "enable_snat": true} |
| flavor_id               | None |
| id                      | 4524f6b1-26ff-472c-ad72-66523340ee5d |
| interfaces_info         | [{"port_id": "491bddf9-64f0-48ef-a399-fa5fd88c2667", "ip_address": "10.0.1.1", "subnet_id": "da5c421b-c471-418b-8978-c7add46bfdc4"}, {"port_id": "75a4854a-ea94-4861-8a62-6c1e9c7ffa7a", "ip_address": "10.0.2.1", "subnet_id": "8842d6ff-c846-424a-be8b-e845b5bc392f"}] |
| location                | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '8139f39965694cde8c626e1512313525', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name                    | admin_Router_eNet |
| project_id              | 8139f39965694cde8c626e1512313525 |
| revision_number         | 4 |
| routes                  | |
| status                  | ACTIVE |
| tags                    | |
| updated_at              | 2019-07-02T08:29:31Z |
```

*** This bug has been marked as a duplicate of bug 1716341 ***
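The symptom in the logs above is that SNAT to the router's own gateway IP (10.0.0.222) and floating-IP traffic both work, but traffic past the gateway (10.0.0.1, 8.8.8.8) is dropped. A minimal sketch of follow-up diagnostics, assuming shell access to a controller with the OVN northbound/southbound databases (the router name below is the `neutron-<uuid>` logical router from the `ovn-nbctl show` output above):

```shell
# List the NAT rules on the affected logical router; both snat entries
# should map the internal subnets to the gateway IP 10.0.0.222.
ovn-nbctl lr-nat-list neutron-4524f6b1-26ff-472c-ad72-66523340ee5d

# With DVR disabled, all north-south traffic is centralized on the
# gateway chassis; check which chassis holds the gateway port binding.
ovn-sbctl show

# Verify the static route toward the external network on the router.
ovn-nbctl lr-route-list neutron-4524f6b1-26ff-472c-ad72-66523340ee5d
```

If the NAT and route entries look correct, the next step would be tracing the packet on the gateway chassis (e.g. with `ovn-trace` or a tcpdump on the provider bridge) to see where the SNATed traffic is dropped.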