Bug 2003091 - IPv6 virtual ports are not associated to chassis
Summary: IPv6 virtual ports are not associated to chassis
Keywords:
Status: MODIFIED
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: OVN
Version: FDP 21.K
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Mohammad Heib
QA Contact: Jianlin Shi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-09-10 11:17 UTC by Luis Tomas Bolivar
Modified: 2021-11-10 10:36 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments


Links:
Red Hat Issue Tracker FD-1531 (last updated 2021-09-10 11:18:45 UTC)

Description Luis Tomas Bolivar 2021-09-10 11:17:23 UTC
Description of problem:
When creating an OpenStack Octavia Amphora load balancer on a Neutron provider network with an IPv6 VIP, there is no connectivity to the VIP.

The same setup works for IPv4. The difference is that the IPv4 VIP (virtual) port is up and bound to a chassis, while the IPv6 one is down and not associated with the chassis where the load balancer's Amphora VM is located.
For example: 
IPv4 SB DB port binding (working):
_uuid               : 314a8103-4cdb-4c1c-bbbf-608df7e9676c
chassis             : 5c262ce5-1505-4af5-a829-8b287a8e2c69
datapath            : fd056a7c-a911-4050-bc59-c62637fea169
encap               : []
external_ids        : {name=octavia-lb-b785ea25-dc97-4a5a-b258-dc1238b37dfc, "neutron:cidrs"="172.24.100.25/24 2001:db8::f816:3eff:feaa:d859/64", "neutron:device_id"=lb-b785ea25-dc97-4a5a-b258-dc1238b37dfc, "neutron:device_owner"=Octavia, "neutron:network_name"=neutron-dc32ff2a-758f-4f51-93fb-03e9aee23b0d, "neutron:port_name"=octavia-lb-b785ea25-dc97-4a5a-b258-dc1238b37dfc, "neutron:project_id"="01c910303f864dbe83afffc5d7dad975", "neutron:revision_number"="2", "neutron:security_group_ids"="1b26d73f-409e-43ae-b56a-e203741b4214"}                                                                                                      
gateway_chassis     : []
ha_chassis_group    : []
logical_port        : "124f2240-3156-42c1-9a44-91256c0938f8"
mac                 : ["fa:16:3e:aa:d8:59 172.24.100.25 2001:db8::f816:3eff:feaa:d859"]
nat_addresses       : []
options             : {mcast_flood_reports="true", requested-chassis="", virtual-ip="172.24.100.25", virtual-parents="f8808210-a7ec-4b83-9d9c-726a6bceb9f1"}                                                      
parent_port         : []
tag                 : []
tunnel_key          : 6
type                : virtual
up                  : true


IPv6 SB DB port binding (not working):
_uuid               : 677c40b8-a54b-4dc2-a624-9b1d0f791888
chassis             : []
datapath            : fd056a7c-a911-4050-bc59-c62637fea169
encap               : []
external_ids        : {name=octavia-lb-ca702149-a714-4730-b6ab-47ec1581cb69, "neutron:cidrs"="2001:db8::f816:3eff:fe9b:61d1/64", "neutron:device_id"=lb-ca702149-a714-4730-b6ab-47ec1581cb69, "neutron:device_owner"=Octavia, "neutron:network_name"=neutron-dc32ff2a-758f-4f51-93fb-03e9aee23b0d, "neutron:port_name"=octavia-lb-ca702149-a714-4730-b6ab-47ec1581cb69, "neutron:project_id"="01c910303f864dbe83afffc5d7dad975", "neutron:revision_number"="2", "neutron:security_group_ids"="da6ed914-2017-423e-b5bd-2575dec9e789"}
gateway_chassis     : []
ha_chassis_group    : []
logical_port        : "6e214af2-1c7d-4167-b635-ae187ec5102b"
mac                 : ["fa:16:3e:9b:61:d1 2001:db8::f816:3eff:fe9b:61d1"]
nat_addresses       : []
options             : {mcast_flood_reports="true", requested-chassis="", virtual-ip="2001:db8::f816:3eff:fe9b:61d1", virtual-parents="3b2c8f5a-73ba-412d-a83c-1abf817c7b99"}
parent_port         : []
tag                 : []
tunnel_key          : 8
type                : virtual
up                  : false
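For reference, dumps like the two above can be pulled straight from the southbound DB; a hedged sketch, assuming `ovn-sbctl` is available on a node with access to the SB DB (the UUIDs are the logical ports from the dumps above):

```shell
# Retrieve the two Port_Binding records shown above by logical port UUID.
ovn-sbctl list Port_Binding 124f2240-3156-42c1-9a44-91256c0938f8   # IPv4 VIP port (working)
ovn-sbctl list Port_Binding 6e214af2-1c7d-4167-b635-ae187ec5102b   # IPv6 VIP port (broken)

# Or show the chassis binding and up state of every virtual port at once:
ovn-sbctl --columns=logical_port,chassis,up find Port_Binding type=virtual
```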


How reproducible:
100%. Just create an Octavia Amphora load balancer on a provider network with an IPv6 VIP.
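The same state should be reproducible directly against OVN, without OpenStack. A minimal sketch, assuming a running OVN deployment; the switch, port, and address names (sw0, vm1, vip6, the 2001:db8:: addresses) are made up for illustration:

```shell
# Parent port that will eventually claim the VIP.
ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 vm1
ovn-nbctl lsp-set-addresses vm1 "fa:16:3e:00:00:01 2001:db8::10"

# Virtual port with an IPv6 virtual-ip, mirroring what Neutron creates
# for the Octavia VIP.
ovn-nbctl lsp-add sw0 vip6
ovn-nbctl lsp-set-type vip6 virtual
ovn-nbctl lsp-set-options vip6 virtual-ip="2001:db8::100" virtual-parents=vm1

# Once vm1 claims the VIP, the vip6 Port_Binding should gain a chassis
# and report up=true; with this bug it stays chassis=[] / up=false.
ovn-sbctl --columns=logical_port,chassis,up find Port_Binding logical_port=vip6
```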


Actual results:
The virtual IPv6 port is not wired up: it stays down and is not bound to any chassis.


Expected results:
The virtual IPv6 port comes up and is associated with the chassis where the Amphora VM is running.

Comment 1 Luis Tomas Bolivar 2021-09-10 12:06:35 UTC
Obviously, the same problem happens with Amphora load balancers created on tenant networks. The VIP port (OVN virtual port) does not get attached to any chassis and its status remains "up: false". Raising the severity, as this effectively means there is no support for Amphora load balancers with IPv6 VIPs.

Comment 2 Luis Tomas Bolivar 2021-09-10 14:53:46 UTC
Testing IPv6 VIPs on tenant networks, I confirmed the status is the same (virtual port with "up: false" and no chassis), but connectivity from other ports in that tenant network to the IPv6 VIP does work.

Comment 3 Mohammad Heib 2021-09-22 18:25:08 UTC
Fix posted upstream for review:
http://patchwork.ozlabs.org/project/ovn/patch/20210922175755.822094-1-mheib@redhat.com/

Comment 4 Mohammad Heib 2021-11-10 10:36:40 UTC
The fix is now available in upstream OVN.

