Bug 1734301 - [OVN][LB] IPv6 member is not reachable in Load_Balancer with IPv4 listener
Summary: [OVN][LB] IPv6 member is not reachable in Load_Balancer with IPv4 listener
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Maciej Józefczyk
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks: 1764025
 
Reported: 2019-07-30 08:11 UTC by Maciej Józefczyk
Modified: 2020-02-06 17:45 UTC
CC: 8 users

Fixed In Version: python-networking-ovn-7.0.0-0.20191021193239.4f64542.el8
Doc Type: Known Issue
Doc Text:
Currently, the OVN load balancer does not open new connections when fetching data from members. The load balancer modifies the destination address and destination port and sends request packets to the members. As a result, it is not possible to define an IPv6 member while using an IPv4 load balancer address and vice versa. There is currently no workaround for this issue.
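
The first linked gerrit change (674255, "Don't allow mixing IPv4/IPv6 configuration") resolves this by rejecting mixed-family configurations up front. As a rough illustration only, not the actual driver code, a version check of this kind can be written with Python's ipaddress module (the function name and error message here are hypothetical):

import ipaddress

def validate_member_ip_version(vip_address, member_address):
    # Hypothetical helper: refuse members whose IP family differs from
    # the VIP, since OVN only rewrites the destination address/port and
    # cannot translate between IPv4 and IPv6.
    vip = ipaddress.ip_address(vip_address)
    member = ipaddress.ip_address(member_address)
    if vip.version != member.version:
        raise ValueError("IPv%d member %s cannot be attached to an IPv%d VIP"
                         % (member.version, member_address, vip.version))

# Addresses taken from the load balancer shown in the description:
validate_member_ip_version("10.1.1.6", "10.2.1.104")                # accepted
validate_member_ip_version("10.1.1.6", "fd77:1457:4cf0:26a8::3d0")  # raises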
Clone Of:
Clones: 1764025 (view as bug list)
Environment:
Last Closed: 2020-02-06 14:41:56 UTC
Target Upstream Version:
Embargoed:




Links
System ID Status Summary Last Updated
OpenStack gerrit 674255 MERGED Don't allow mixing IPv4/IPv6 configuration 2020-02-06 08:34:16 UTC
OpenStack gerrit 687220 MERGED Disable ip_version checking while updating member without new ip 2020-02-06 08:34:17 UTC
Red Hat Product Errata RHEA-2020:0283 2020-02-06 14:42:30 UTC

Description Maciej Józefczyk 2019-07-30 08:11:44 UTC
Description of problem:

With mixed IPv6 and IPv4 members in the same pool, an HTTP request through the load balancer hangs while trying to reach the IPv6 member (a SYN is sent but never answered) and then falls back to the IPv4 member.


Version-Release number of selected component (if applicable): 
devstack master + master OVN/OVS


How reproducible:
Set up a Load_Balancer with two members: one IPv4, the other IPv6.
Add both to the same Load_Balancer. Create a FIP that points to the load balancer VIP. Try to reach the FIP. Some of the requests hang and then fall back to the IPv4 member.

I've already checked whether the IPv6 member responds at all: yes, it works from a different host within the same network.
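
The hang can also be observed from the client side with a simple probe; a minimal sketch, assuming the FIP from the listing below and an arbitrary 5-second timeout:

import socket
import time

FIP = "172.24.4.249"  # floating IP pointing at the LB VIP

for attempt in range(10):
    start = time.monotonic()
    try:
        # When OVN hashes the connection to the IPv6 member, the SYN is
        # never answered and this times out instead of connecting.
        with socket.create_connection((FIP, 80), timeout=5) as sock:
            sock.sendall(b"GET / HTTP/1.0\r\nHost: " + FIP.encode() + b"\r\n\r\n")
            data = sock.recv(4096)
        print("attempt %d: answered in %.2fs: %r"
              % (attempt, time.monotonic() - start, data[:40]))
    except socket.timeout:
        print("attempt %d: no response after %.2fs"
              % (attempt, time.monotonic() - start))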


This scenario is covered by:
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_mixed_ipv4_ipv6_members_traffic


Created Load_Balancer:
stack@ovn-octavia-gate:~/devstack$ ovn-nbctl list load_balancer
_uuid               : 27fbcb55-ab47-404f-8b83-f41b7238bcd0
external_ids        : {enabled=True, "listener_a82a9c06-8ab1-44b0-9210-275f734b86f0"="80:pool_16a881a1-a4f1-4789-9329-40ecd9d649c1", lr_ref="neutron-2e7f04f8-fc52-4dc5-aacf-1b027f47b559", ls_refs="{\"neutron-32e5bf63-aff8-4a9e-aeb9-84482bb0017a\": 1, \"neutron-61255b5b-d513-43d5-9ea4-064960e01049\": 1, \"neutron-9e7a6391-e47a-4b89-9739-d4d8fcf02de6\": 1}", "neutron:vip"="10.1.1.6", "neutron:vip_fip"="172.24.4.249", "neutron:vip_port_id"="5ccc504c-f4bc-4f0e-8ef3-cbaf59c90d04", "pool_16a881a1-a4f1-4789-9329-40ecd9d649c1"="member_9e16bc68-a49a-4d32-96e7-c974104b2090_10.2.1.104:80,member_4b3878a8-8c4a-40af-8512-baf34f9d7b9e_fd77:1457:4cf0:26a8::3d0:80"}
name                : "e6aaf4f3-fdcc-4325-a9f4-6fe3a03c7bc9"
protocol            : tcp
vips                : {"10.1.1.6:80"="10.2.1.104:80,fd77:1457:4cf0:26a8::3d0:80", "172.24.4.249:80"="10.2.1.104:80,fd77:1457:4cf0:26a8::3d0:80"}
stack@ovn-octavia-gate:~/devstack$ 



Actual results:
The IPv6 member is not reachable via the LB.

Expected results:
The IPv6 member responds.


Additional info:

Comment 7 errata-xmlrpc 2020-02-06 14:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283

