Bug 2108212 - [OSP17] During OVN migration workload VMs became unreachable via IPv6 [NEEDINFO]
Summary: [OSP17] During OVN migration workload VMs became unreachable via IPv6
Keywords:
Status: ON_DEV
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 17.0 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: z2
Target Release: 17.1
Assignee: Miro Tomaska
QA Contact: Eran Kuris
Docs Contact: James Smith
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-18 15:29 UTC by Roman Safronov
Modified: 2023-08-11 16:45 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
If you use IPv6 to connect to instances during migration to the OVN mechanism driver, the connection to the instances might be disrupted for up to several minutes when the ML2/OVS services are stopped.

The router advertisement daemon `radvd` for IPv6 is stopped during migration to the OVN mechanism driver. While `radvd` is stopped, router advertisements are no longer broadcast, and instances lose connectivity over IPv6. IPv6 communication is restored automatically once the new ML2/OVN services start.

To avoid the potential disruption, use IPv4 instead.
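One way to observe this behavior while the migration runs (a minimal sketch, not part of the documented procedure; the interface name eth0 is a placeholder, and radvd is assumed to run as processes spawned by the neutron-l3-agent on the controllers):

# On a controller node: confirm that the radvd processes spawned for the
# tenant routers disappear once the ML2/OVS services are stopped.
ps -ef | grep [r]advd

# On the external test host: capture router advertisements (ICMPv6 type 134,
# byte 40 of the IPv6 packet); they stop arriving while radvd is down.
tcpdump -n -i eth0 'icmp6 and ip6[40] == 134'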
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:
mtomaska: needinfo? (jamsmith)




Links:
Red Hat Issue Tracker OSP-17675 (last updated 2022-07-18 15:54:23 UTC)

Description Roman Safronov 2022-07-18 15:29:19 UTC
Description of problem:
The workload VM stops responding on its IPv6 address during OVN migration.

Version-Release number of selected component (if applicable):
RHOS-17.0-RHEL-9-20220714.n.1

How reproducible:
Happened on most OSP17 ovs2ovn downstream jobs (4 out of 5); the dvr2dvr job was the exception.

Steps to Reproduce:
1. Deploy an HA environment (3 controllers + 2 compute nodes) with the ML2/OVS backend.
2. Create a workload VM, an internal network, a router, and security groups allowing ping/ping6 and SSH. Connect the internal network to the external network through the router.
3. Run an infinite IPv6 ping to the VM's IPv6 address and an IPv4 ping to the VM's IPv4 address (see the sketch after this list).
4. Start the ML2/OVS to ML2/OVN migration procedure according to the official documentation:
  https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/migrating_the_networking_service_to_the_ml2ovn_mechanism_driver/migrating-ml2ovs-to-ovn
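A minimal sketch of the monitoring in step 3, run from a host on the external network; the floating IPv4 address below is a placeholder, and the IPv6 address is the one from the ping statistics in this report:

#!/bin/bash
# Placeholder addresses: substitute the VM's floating IPv4 address and its
# global IPv6 address as shown by `openstack server show <vm>`.
VM_FIP=192.0.2.10
VM_V6=2001:db8:cafe:1:f816:3eff:fe91:7627

# Start both pings in the background and log the results; stop them with
# Ctrl-C / kill after the migration finishes to get the loss statistics.
ping -4 -i 1 "$VM_FIP" > ping_ipv4.log 2>&1 &
ping -6 -i 1 "$VM_V6"  > ping_ipv6.log 2>&1 &
wait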

Actual results:
The VM IPv4 address (FIP) responds to ping from the external network - this is CORRECT
The VM IPv6 address stops responding to ping6 from the external network; very high packet loss is reported (see below) - BAD

--- 2001:db8:cafe:1:f816:3eff:fe91:7627 ping statistics ---
2356 packets transmitted, 186 received, 92.1053% packet loss, time 2407621ms



Expected results:
The VM IPv4 address (FIP) responds to ping from the external network
The VM IPv6 address responds to ping6 from the external network


Additional info:

Comment 20 Eran Kuris 2023-06-18 11:41:54 UTC
Raising the blocker flag as it's an automation blocker.

