The binding-host-id value passed in REST Port POST/PUT requests should be used to populate the LSP requested-chassis. This is a follow-up change to: https://bugzilla.redhat.com/1532674

+++ This bug was initially created as a clone of Bug #1532674 +++

Description of problem:
The engine should notify neutron that the binding host has changed after a VM is migrated on a neutron network. During the testing done to certify OSPd12 with RHV, I found that when migrating a VM that has a neutron network, the engine does not update neutron that the binding host has changed, and the port's state is reported as DOWN in neutron. The engine should do what nova does and update neutron with the new binding host after VM migration. Note that despite this bug, the VM remains operational and I can ping to and from it.

Version-Release number of selected component (if applicable):
openstack-neutron-11.0.1-7.el7ost.noarch
Red Hat OpenStack Platform release 12.0 Beta (Pike)
rhv-4.2.1-0.2.el7
vdsm-4.20.11-1.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Add OSPd12 to RHV 4.2 - see BZ 1518370 for the flow.
2. Add 2 RHV nodes with neutron roles and neutron agents configured, and create 2 VMs.
3. After the setup is up and you have neutron networks in RHV, start VM1 on node1 with a neutron network and check the port's state. Ping VM2 (on the same node).
4. Migrate VM1 to node2 and ping VM2.

Actual results:
3.
Port is ACTIVE and ping is OK. Port state before migration:

(overcloud) [root@overcloud-controller-0 ~]# openstack port show f4f2a451-45c2-4204-b6f6-682b19cc1894
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | UP |
| allowed_address_pairs | |
| binding_host_id       | overcloud-rhv-1.localdomain |
| binding_profile       | |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True' |
| binding_vif_type      | ovs |
| binding_vnic_type     | normal |
| created_at            | 2018-01-08T14:43:41Z |
| data_plane_status     | None |
| description           | |
| device_id             | ab7d9b77-3f4b-4edd-b43d-5a3fa63578ef |
| device_owner          | oVirt |
| dns_assignment        | None |
| dns_name              | None |
| extra_dhcp_opts       | |
| fixed_ips             | ip_address='12.0.0.14', subnet_id='b854cabd-4682-4d4e-8f09-e34b8b3200bf' |
| id                    | f4f2a451-45c2-4204-b6f6-682b19cc1894 |
| ip_address            | None |
| mac_address           | 00:00:00:00:00:21 |
| name                  | nic1 |
| network_id            | 1194373f-6e53-474e-96bf-581ea5c1617f |
| option_name           | None |
| option_value          | None |
| port_security_enabled | True |
| project_id            | 0446aa0ba1dd4df289c2590ec8a1a382 |
| qos_policy_id         | None |
| revision_number       | 23 |
| security_group_ids    | a352570d-f77f-4bb7-95f3-81d6d77e6c6c |
| status                | ACTIVE |
| subnet_id             | None |
| tags                  | |
| trunk_details         | None |
| updated_at            | 2018-01-09T13:56:26Z |
+-----------------------+--------------------------------------------------------------------------+

4.
Port state is DOWN and the binding host was not changed, but ping is still OK. Port state after migration:

(overcloud) [root@overcloud-controller-0 ~]# openstack port show f4f2a451-45c2-4204-b6f6-682b19cc1894
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | UP |
| allowed_address_pairs | |
| binding_host_id       | overcloud-rhv-1.localdomain |
| binding_profile       | |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True' |
| binding_vif_type      | ovs |
| binding_vnic_type     | normal |
| created_at            | 2018-01-08T14:43:41Z |
| data_plane_status     | None |
| description           | |
| device_id             | ab7d9b77-3f4b-4edd-b43d-5a3fa63578ef |
| device_owner          | oVirt |
| dns_assignment        | None |
| dns_name              | None |
| extra_dhcp_opts       | |
| fixed_ips             | ip_address='12.0.0.14', subnet_id='b854cabd-4682-4d4e-8f09-e34b8b3200bf' |
| id                    | f4f2a451-45c2-4204-b6f6-682b19cc1894 |
| ip_address            | None |
| mac_address           | 00:00:00:00:00:21 |
| name                  | nic1 |
| network_id            | 1194373f-6e53-474e-96bf-581ea5c1617f |
| option_name           | None |
| option_value          | None |
| port_security_enabled | True |
| project_id            | 0446aa0ba1dd4df289c2590ec8a1a382 |
| qos_policy_id         | None |
| revision_number       | 24 |
| security_group_ids    | a352570d-f77f-4bb7-95f3-81d6d77e6c6c |
| status                | DOWN |
| subnet_id             | None |
| tags                  | |
| trunk_details         | None |
| updated_at            | 2018-01-09T14:36:28Z |
+-----------------------+--------------------------------------------------------------------------+

Expected results:
The engine should update neutron after VM migration that the binding host has changed to the new host.

Additional info:
See also - https://bugzilla.redhat.com/show_bug.cgi?id=1518370

--- Additional comment from Michael Burman on 2018-01-09 10:02 EST ---
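The expected behavior above amounts to the engine re-sending the port binding after migration. As a minimal sketch of the kind of update involved, the snippet below builds the request body that moves a port's `binding:host_id` to the migration target host. The endpoint URL and function name are illustrative assumptions, not the engine's actual code; only the `binding:host_id` attribute name comes from the Neutron port API.

```python
import json

# Assumed Neutron endpoint for illustration only; not taken from the bug report.
NEUTRON_URL = "http://controller:9696/v2.0"

def build_port_binding_update(new_host):
    """Build the PUT body that rebinds a port to the migration target host."""
    return {"port": {"binding:host_id": new_host}}

# After migrating the VM from overcloud-rhv-1 to overcloud-rhv-2, the engine
# would PUT this body to {NEUTRON_URL}/ports/<port-id>:
body = build_port_binding_update("overcloud-rhv-2.localdomain")
print(json.dumps(body))
```

A successful update would be visible in `openstack port show` as a changed binding_host_id, a bumped revision_number, and the status returning to ACTIVE.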
OVS build coming 18-06
(In reply to Marcin Mirecki from comment #1)
> OVS build coming 18-06

If we need to wait for the 18-06 build, please mark this as a blocker, since after next week's build only blockers will be accepted.
Proposing as blocker as per assignee request
The issue is fixed in the coming Jun 18th OVS build. Until that build is available, the fix can be tested with one of the available OVS builds, for example: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=706535
Yes, this can get into 4.2.5. We would add a configuration option that keeps this disabled by default until ovs-2.9 is ubiquitous (currently it is missing on Fedora and CentOS).
The feature will only be enabled when the following config key is set to true:

[PROVIDER]
ovs-version-2.9=true
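A small sketch of how a provider could gate the feature on this key, using Python's configparser. The parsing shown here is illustrative, not the actual ovirt-provider-ovn code; only the section and key names come from the comment above.

```python
import configparser

# Sample config text matching the key described in the comment above.
CONF_TEXT = """
[PROVIDER]
ovs-version-2.9=true
"""

def requested_chassis_enabled(conf_text):
    """Return True only when the ovs-version-2.9 key is set to true."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    # Missing section or key falls back to disabled.
    return parser.getboolean("PROVIDER", "ovs-version-2.9", fallback=False)

print(requested_chassis_enabled(CONF_TEXT))  # prints True
```

With the key absent or set to false, the provider would keep the old behavior, which matches the plan to leave the feature disabled by default until ovs-2.9 is ubiquitous.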
(In reply to Marcin Mirecki from comment #7)
> The feature will only be enabled when the following config key is set to
> true:
>
> [PROVIDER]
> ovs-version-2.9=true

Thanks Marcin. Can you please give the test steps here? I'm not sure how to test this exactly.
This bug intends to improve the downtime during migration. To test this, use ovs-2.9 throughout the cluster, set ovs-version-2.9=true, and check that live migration still works and that the downtime has improved. In the past, migrating a VM back and forth between two hosts could lead to a 5-second downtime.
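One way to quantify the downtime mentioned above is to timestamp continuous pings during the migration and take the largest gap between consecutive replies. A small sketch of that calculation; the sample timestamps are made up for illustration, not measured values from this bug.

```python
def max_gap(reply_times):
    """Return the largest interval between consecutive ping replies, in seconds."""
    return max(b - a for a, b in zip(reply_times, reply_times[1:]))

# Replies arriving once per second, with a pause between t=3 and t=8
# while the VM migrates:
samples = [0.0, 1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
print(max_gap(samples))  # prints 5.0 -> the observed migration downtime
```

Running the same measurement before and after enabling ovs-version-2.9 gives a concrete number to compare against the historical 5-second gap.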
The build fixing this bug can be taken from https://errata.devel.redhat.com/advisory/34970/builds
Verified on - ovirt-provider-ovn-1.2.13-1.el7ev.noarch with 4.2.5.2_SNAPSHOT-79.gffafd93.0.scratch.master.el7ev
openvswitch-2.9.0-47.el7fdp.3.x86_64
openvswitch-ovn-common-2.9.0-47.el7fdp.3.x86_64
This bugzilla is included in the oVirt 4.2.5 release, published on July 30th 2018. Since the problem described in this bug report should be resolved in the oVirt 4.2.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.