Bug 1955127 - [ovn][migration] pre-migration step to arrange VNIs when the target overlay is VXLAN
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: OSP Team
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-29 13:47 UTC by Daniel Alvarez Sanchez
Modified: 2022-03-24 13:58 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-3481 0 None None None 2021-11-18 15:05:47 UTC

Description Daniel Alvarez Sanchez 2021-04-29 13:47:21 UTC
The VXLAN implementation for ML2/OVN is limited to 4K networks and 4K ports per network. If the user selects VXLAN as the target overlay for the ML2/OVN migration, the migration process should validate that the cloud fits within these limits before attempting the migration.
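The validation step could be sketched as follows. This is an illustration only; the limits are taken from the description above, while the helper name and its input shape (a mapping of network ID to port count, however it is gathered from Neutron) are assumptions, not the actual tool's API:

```python
# Hypothetical pre-migration check: can this cloud be represented
# within the ML2/OVN VXLAN limits (4K networks, 4K ports per network)?

MAX_NETWORKS = 4096           # network identifier space
MAX_PORTS_PER_NETWORK = 4096  # per-network port space

def cloud_fits_vxlan_limits(networks):
    """networks: mapping of network id -> number of ports on that network.

    Returns True when both the network count and every per-network
    port count fall below the VXLAN limits.
    """
    if len(networks) >= MAX_NETWORKS:
        return False
    return all(ports < MAX_PORTS_PER_NETWORK for ports in networks.values())
```

A cloud with a handful of small networks passes; a single network carrying 5000 ports, or more than 4095 networks, would fail the check and block the migration.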

If the migration qualifies, the tool needs to arrange the VNIs in such a way that all of them are < 4K, since there could be VNIs beyond that number:

Example:

The user creates networks 1 to 5000, so the VNIs are 1...5000.
The user then deletes the first 1K networks, leaving VNIs 1001...5000.

While the migration to ML2/OVN qualifies (#networks < 4096), the VNI numbers are beyond the limit and must be re-arranged to fit within the range as a pre-migration step.
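The re-arrangement could be sketched as follows. `rearrange_vnis` is a hypothetical helper, not part of the migration tool; it maps every out-of-range VNI onto a free slot below the limit, assuming the cloud already passed validation (fewer networks than available VNIs):

```python
def rearrange_vnis(used_vnis, limit=4096):
    """Map out-of-range VNIs onto free slots below `limit`.

    used_vnis: set of VNIs currently in use.
    Returns {old_vni: new_vni} for every VNI that must change.
    Assumes len(used_vnis) < limit, i.e. the cloud already qualified.
    """
    in_range = {v for v in used_vnis if v < limit}
    # Generator over unused in-range VNIs, lowest first.
    free = (v for v in range(1, limit) if v not in in_range)
    # Only VNIs at or above the limit need to move.
    return {v: next(free) for v in sorted(used_vnis) if v >= limit}
```

For the example above (VNIs 1001...5000), only VNIs 4096...5000 would be remapped, each into one of the free slots 1...1000, while VNIs 1001...4095 stay untouched.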

The control plane should also be fenced during this change to avoid a race between the validation and the actual migration.


Please note that changing VNIs can disrupt the dataplane: all nodes hosting workloads on a given network must apply the change before communication across them can resume.

