Bug 1869308
| Summary: | [16.1][OVN] ML2OVS->ML2OVN migration with default compute bridge mapping settings fails on no-DVR environment | ||
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Roman Safronov <rsafrono> |
| Component: | python-networking-ovn | Assignee: | Assaf Muller <amuller> |
| Status: | CLOSED NOTABUG | QA Contact: | Eran Kuris <ekuris> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | ||
| Version: | 16.1 (Train) | CC: | apevec, dsneddon, jlibosva, lhh, majopela, scohen |
| Target Milestone: | z2 | Keywords: | Triaged |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-03-02 14:08:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
(In reply to Roman Safronov from comment #0)

> Description of problem:
>
> ML2OVS -> ML2OVN migration on non-DVR environment (3 controllers + 2 computes) is failing with default (empty) bridge mapping settings on compute nodes.

I think this is a bit misleading; the DVR option doesn't influence this bug. IIRC the reason was that the ml2/ovs environment had a custom provider bridge, br-isolated, that was configured for the vlan tenant network and at the same time used for control-plane traffic (IIRC this kind of setup has been highly discouraged by RH since OSP 11). The br-isolated bridge mapping was configured for the compute nodes in ml2/ovs but was missing in the environment files used by the migration. Note that br-isolated is not the default in the ml2/ovs case, so the migration went from a custom environment to the default.

There are several ways we can improve to avoid this situation:

1) During the migration, TripleO can manage the bridges it created during the deployment and reset their controllers to none (they were configured for ryu/os-ken).

2) The migration can preemptively reset the controllers on all bridges before starting ovn-controller.

Both actions above will lead to a weird state where provider networks from ml2/ovs won't be accessible to the instances, because the environment was not configured as such.

3) Emphasise in our docs that custom network modifications made for the deployment must remain the same for the migration.

Closing this BZ because we already added instructions on how to configure the bridge mappings to our ml2ovs->ml2ovn documentation: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/networking_with_open_virtual_network/index#ml2-ovs-to-ovn-migration-migrate
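For what option (2) above could look like, here is an illustrative sketch only (not part of the migration tooling): clearing any stale OpenFlow controller left by the ml2/ovs agent from every OVS bridge before ovn-controller starts. The bridge set is environment-specific, and whether this is safe depends on the deployment, as noted above.

```shell
# Hypothetical sketch of option (2): remove leftover ryu/os-ken
# OpenFlow controllers from all OVS bridges on the node.
# Run on each node before starting ovn-controller.
for br in $(ovs-vsctl list-br); do
    ovs-vsctl del-controller "$br"
done
```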
Description of problem:

ML2OVS -> ML2OVN migration on a non-DVR environment (3 controllers + 2 computes) is failing with default (empty) bridge mapping settings on compute nodes.

File /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml contains the following:

ComputeParameters:
  NeutronBridgeMappings: ""

As a result, after the overcloud update that happens during the migration, all existing VMs are not accessible.

The workaround is to set the following in /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml before starting the migration:

ComputeParameters:
  NeutronBridgeMappings: "tenant:br-isolated"

Feel free to change the component to 'documentation' in case this behavior is expected from the dev point of view and is not going to be fixed in the networking-ovn code. In that case the documentation needs to mention that on a non-DVR topology, proper bridge mappings for compute nodes must be specified in the neutron-ovn-ha.yaml environment file before starting the migration.

Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20200813.n.0
python3-networking-ovn-migration-tool-7.2.1-0.20200611133439.15f2281.el8ost.noarch

How reproducible:
100%

Steps to Reproduce:
Run the migration from ml2ovs to ml2ovn on a non-DVR environment according to the official documentation.

Actual results:
Migration fails with default bridge mappings on compute nodes.

Expected results:
Migration succeeds.

Additional info:
On environments with DVR enabled the issue does not occur, because bridge mappings on compute nodes are not empty (they are the same as on the controller nodes).
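As a pre-migration sanity check (an illustrative sketch, not from this report; the config path assumes an OSP 16 containerized compute node), the bridge mappings actually in effect can be compared before and after the migration:

```shell
# On an ml2/ovs compute node: mappings used by neutron-openvswitch-agent
grep bridge_mappings \
  /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/openvswitch_agent.ini

# After the migration to ml2/ovn: mappings consumed by ovn-controller
ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings
```

If the first command shows a mapping (e.g. tenant:br-isolated) that the second does not, the NeutronBridgeMappings parameter in the migration environment files needs to be set accordingly.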