Bug 1734301
| Summary: | [OVN][LB] IPv6 member is not reachable in Load_Balancer with IPv4 listener | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Maciej Józefczyk <mjozefcz> |
| Component: | python-networking-ovn | Assignee: | Maciej Józefczyk <mjozefcz> |
| Status: | CLOSED ERRATA | QA Contact: | Eran Kuris <ekuris> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 16.0 (Train) | CC: | apevec, gregraka, joflynn, lhh, lmartins, majopela, njohnston, scohen |
| Target Milestone: | beta | Keywords: | Triaged |
| Target Release: | 16.0 (Train on RHEL 8.1) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | python-networking-ovn-7.0.0-0.20191021193239.4f64542.el8 | Doc Type: | Known Issue |
| Doc Text: | Currently, the OVN load balancer does not open new connections when fetching data from members. It only modifies the destination address and destination port of the request packets and forwards them to the members. As a result, it is not possible to define an IPv6 member while using an IPv4 load balancer address, and vice versa. There is currently no workaround for this issue. (See the inspection sketch directly after this table.) | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1764025 (view as bug list) | Environment: | |
| Last Closed: | 2020-02-06 14:41:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1764025 | | |
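As the Doc Text above describes, the OVN load balancer never terminates the client connection and opens a new one toward a member; it only DNATs the VIP address and port, so it cannot translate between address families. A minimal inspection sketch follows, using only the `ovn-nbctl` listing options that also appear later in this report; interpreting the output for family mismatches is left to the reader.

```sh
# Sketch: print only the vips column of every OVN Load_Balancer row.
# Each entry maps "VIP:port" to a comma-separated member list; any IPv6
# member behind an IPv4 VIP (or the reverse) hits this known issue.
ovn-nbctl --bare --columns=vips list load_balancer
```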
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283
Description of problem:
When mixing IPv6 and IPv4 members in the same pool, an HTTP request hangs while reaching the IPv6 member (SYN sent without response) and then falls back to the IPv4 member.

Version-Release number of selected component (if applicable):
devstack master + master OVN/OVS

How reproducible:
1. Set up a Load_Balancer with two members, one IPv4 and one IPv6, and add both to the same Load_Balancer.
2. Create a FIP that points to the load balancer VIP.
3. Try to reach the FIP. Some of the requests hang and then fall back to the IPv4 member.

I have already checked whether the IPv6 member responds at all - yes, it works from a different host within the same network.

This scenario is covered by:
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_mixed_ipv4_ipv6_members_traffic

Created Load_Balancer:

```
stack@ovn-octavia-gate:~/devstack$ ovn-nbctl list load_balancer
_uuid               : 27fbcb55-ab47-404f-8b83-f41b7238bcd0
external_ids        : {enabled=True, "listener_a82a9c06-8ab1-44b0-9210-275f734b86f0"="80:pool_16a881a1-a4f1-4789-9329-40ecd9d649c1", lr_ref="neutron-2e7f04f8-fc52-4dc5-aacf-1b027f47b559", ls_refs="{\"neutron-32e5bf63-aff8-4a9e-aeb9-84482bb0017a\": 1, \"neutron-61255b5b-d513-43d5-9ea4-064960e01049\": 1, \"neutron-9e7a6391-e47a-4b89-9739-d4d8fcf02de6\": 1}", "neutron:vip"="10.1.1.6", "neutron:vip_fip"="172.24.4.249", "neutron:vip_port_id"="5ccc504c-f4bc-4f0e-8ef3-cbaf59c90d04", "pool_16a881a1-a4f1-4789-9329-40ecd9d649c1"="member_9e16bc68-a49a-4d32-96e7-c974104b2090_10.2.1.104:80,member_4b3878a8-8c4a-40af-8512-baf34f9d7b9e_fd77:1457:4cf0:26a8::3d0:80"}
name                : "e6aaf4f3-fdcc-4325-a9f4-6fe3a03c7bc9"
protocol            : tcp
vips                : {"10.1.1.6:80"="10.2.1.104:80,fd77:1457:4cf0:26a8::3d0:80", "172.24.4.249:80"="10.2.1.104:80,fd77:1457:4cf0:26a8::3d0:80"}
stack@ovn-octavia-gate:~/devstack$
```

Actual results:
IPv6 member is not reachable via the LB.

Expected results:
IPv6 member responds.

Additional info:
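For reference, a reproduction sketch using the Octavia CLI is shown below. The member addresses, floating IP, and VIP port ID come from the output above; the load balancer, listener, pool, and subnet names, the `--provider ovn` flag, and the choice of SOURCE_IP_PORT as the algorithm are illustrative assumptions for an OVN-provider deployment, not commands taken from the report.

```sh
# Reproduction sketch (resource and subnet names are assumed; addresses are
# taken from the ovn-nbctl output above).
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet --provider ovn
openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --listener listener1 \
    --protocol TCP --lb-algorithm SOURCE_IP_PORT

# One IPv4 and one IPv6 member in the same pool.
openstack loadbalancer member create --address 10.2.1.104 --protocol-port 80 pool1
openstack loadbalancer member create --address fd77:1457:4cf0:26a8::3d0 --protocol-port 80 pool1

# Point the floating IP at the VIP port and poll it. Requests balanced to the
# IPv6 member hang (SYN sent, no response) and then fall back to the IPv4
# member, as described above.
openstack floating ip set --port 5ccc504c-f4bc-4f0e-8ef3-cbaf59c90d04 172.24.4.249
curl --max-time 5 http://172.24.4.249/
```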