Please verify that our external network provider certification plan passes with OSP12.
Testing flow:
- Use director to configure 'networker' roles for the RHV nodes
- Add the neutron provider to RHV (Controller)
  * Run 'iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited' on the RHV nodes
- Add the RHV nodes to RHV 4.2.1
- Import networks from the neutron provider, or create networks on the neutron provider from RHV (when the 'read-only' property isn't set on the edit-provider dialog)
- Use the neutron networks for the VMs
- Run network/neutron sanity testing

Versions:
openstack-neutron-11.0.1-7.el7ost.noarch
Red Hat OpenStack Platform release 12.0 Beta (Pike)
rhv-4.2.1-0.2.el7
vdsm-4.20.11-1.el7ev

Tested:
1) Add neutron provider to RHV 4.2.1 - PASS
2) Add 2 RHV nodes that were configured with neutron roles using director - PASS
3) Create a neutron network on the provider via RHV - PASS
4) Import a neutron network from the provider to RHV - PASS
5) Run VMs on both RHV nodes with neutron networks - PASS
6) Migrate VMs between the RHV nodes - PASS
7) Get IPs from neutron-dhcp-agent - PASS
8) Test connectivity between VMs on the same RHV node and on different nodes
9) Hotplug/unplug - PASS

* NOTE - I have found a bug on the RHV side and will report it right away. When migrating a VM from one RHV node to the other, the port state in neutron changes to DOWN. Neutron does not update the binding host to the new host, which is why the port is reported as DOWN (RHV did not update neutron about the change the way nova does). However, even though the port is DOWN and the binding host was not updated after migration, the VM remains operational and connectivity to and from it keeps working as expected. That is why we decided to verify this despite the bug: VMs remain operational after migration and functionality works.
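As a quick sanity check for the port states after the tests above, the JSON form of the CLI output (`openstack port list -f json`) can be scanned programmatically. The helper below is a minimal sketch; the function name and the sample data are ours, not part of any RHV or OSP tooling:

```python
import json

def inactive_ports(port_list_json):
    """Return (name, status) for every port that is not ACTIVE.

    Expects the JSON emitted by `openstack port list -f json`, where each
    entry carries at least "Name" and "Status" keys.
    """
    ports = json.loads(port_list_json)
    return [(p["Name"], p["Status"]) for p in ports if p["Status"] != "ACTIVE"]

# Illustrative captured output: one port reported DOWN after migration.
sample = json.dumps([
    {"ID": "f4f2a451-45c2-4204-b6f6-682b19cc1894", "Name": "nic1", "Status": "DOWN"},
    {"ID": "aaaaaaaa-0000-0000-0000-000000000000", "Name": "nic2", "Status": "ACTIVE"},
])
print(inactive_ports(sample))  # [('nic1', 'DOWN')]
```

An empty result from a run against real output would mean every port passed the status check.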
Port state before migration:

(overcloud) [root@overcloud-controller-0 ~]# openstack port show f4f2a451-45c2-4204-b6f6-682b19cc1894
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | UP                                                                       |
| allowed_address_pairs |                                                                          |
| binding_host_id       | overcloud-rhv-1.localdomain                                              |
| binding_profile       |                                                                          |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True'      |
| binding_vif_type      | ovs                                                                      |
| binding_vnic_type     | normal                                                                   |
| created_at            | 2018-01-08T14:43:41Z                                                     |
| data_plane_status     | None                                                                     |
| description           |                                                                          |
| device_id             | ab7d9b77-3f4b-4edd-b43d-5a3fa63578ef                                     |
| device_owner          | oVirt                                                                    |
| dns_assignment        | None                                                                     |
| dns_name              | None                                                                     |
| extra_dhcp_opts       |                                                                          |
| fixed_ips             | ip_address='12.0.0.14', subnet_id='b854cabd-4682-4d4e-8f09-e34b8b3200bf' |
| id                    | f4f2a451-45c2-4204-b6f6-682b19cc1894                                     |
| ip_address            | None                                                                     |
| mac_address           | 00:00:00:00:00:21                                                        |
| name                  | nic1                                                                     |
| network_id            | 1194373f-6e53-474e-96bf-581ea5c1617f                                     |
| option_name           | None                                                                     |
| option_value          | None                                                                     |
| port_security_enabled | True                                                                     |
| project_id            | 0446aa0ba1dd4df289c2590ec8a1a382                                         |
| qos_policy_id         | None                                                                     |
| revision_number       | 23                                                                       |
| security_group_ids    | a352570d-f77f-4bb7-95f3-81d6d77e6c6c                                     |
| status                | ACTIVE                                                                   |
| subnet_id             | None                                                                     |
| tags                  |                                                                          |
| trunk_details         | None                                                                     |
| updated_at            | 2018-01-09T13:56:26Z                                                     |
+-----------------------+--------------------------------------------------------------------------+

Port state after migration:

(overcloud) [root@overcloud-controller-0 ~]# openstack port show f4f2a451-45c2-4204-b6f6-682b19cc1894
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | UP                                                                       |
| allowed_address_pairs |                                                                          |
| binding_host_id       | overcloud-rhv-1.localdomain                                              |
| binding_profile       |                                                                          |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True'      |
| binding_vif_type      | ovs                                                                      |
| binding_vnic_type     | normal                                                                   |
| created_at            | 2018-01-08T14:43:41Z                                                     |
| data_plane_status     | None                                                                     |
| description           |                                                                          |
| device_id             | ab7d9b77-3f4b-4edd-b43d-5a3fa63578ef                                     |
| device_owner          | oVirt                                                                    |
| dns_assignment        | None                                                                     |
| dns_name              | None                                                                     |
| extra_dhcp_opts       |                                                                          |
| fixed_ips             | ip_address='12.0.0.14', subnet_id='b854cabd-4682-4d4e-8f09-e34b8b3200bf' |
| id                    | f4f2a451-45c2-4204-b6f6-682b19cc1894                                     |
| ip_address            | None                                                                     |
| mac_address           | 00:00:00:00:00:21                                                        |
| name                  | nic1                                                                     |
| network_id            | 1194373f-6e53-474e-96bf-581ea5c1617f                                     |
| option_name           | None                                                                     |
| option_value          | None                                                                     |
| port_security_enabled | True                                                                     |
| project_id            | 0446aa0ba1dd4df289c2590ec8a1a382                                         |
| qos_policy_id         | None                                                                     |
| revision_number       | 24                                                                       |
| security_group_ids    | a352570d-f77f-4bb7-95f3-81d6d77e6c6c                                     |
| status                | DOWN                                                                     |
| subnet_id             | None                                                                     |
| tags                  |                                                                          |
| trunk_details         | None                                                                     |
| updated_at            | 2018-01-09T14:36:28Z                                                     |
+-----------------------+--------------------------------------------------------------------------+

As you can see, the binding host wasn't updated and the port state is DOWN.
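The difference between the two dumps can also be isolated mechanically by diffing the JSON forms (`openstack port show -f json <port>` before and after migration). A minimal sketch, seeded with only the fields of interest from the tables above (the helper name is ours):

```python
import json

def changed_fields(before_json, after_json):
    """Map field -> (old, new) for every field whose value differs."""
    before, after = json.loads(before_json), json.loads(after_json)
    return {k: (before[k], after.get(k)) for k in before if before[k] != after.get(k)}

before = json.dumps({"binding_host_id": "overcloud-rhv-1.localdomain",
                     "revision_number": 23, "status": "ACTIVE",
                     "updated_at": "2018-01-09T13:56:26Z"})
after = json.dumps({"binding_host_id": "overcloud-rhv-1.localdomain",
                    "revision_number": 24, "status": "DOWN",
                    "updated_at": "2018-01-09T14:36:28Z"})

diff = changed_fields(before, after)
print(diff)
# status flipped to DOWN and the revision advanced, but binding_host_id is
# absent from the diff: neutron was never told about the new host.
```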
This is the new bug: https://bugzilla.redhat.com/show_bug.cgi?id=1532674 - the engine should update neutron that the binding host has changed after VM migration (with a neutron network).
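The fix tracked in that bug amounts to the engine issuing a port update with the new binding host after migration; over the Networking API this is a PUT to /v2.0/ports/{port_id} carrying the binding:host_id attribute. A sketch of building the request body only (nothing is sent here; the target host name is illustrative, and the helper is ours, not engine code):

```python
import json

def port_binding_update(new_host):
    """Build the Networking API body that moves a port's binding to new_host."""
    return json.dumps({"port": {"binding:host_id": new_host}})

body = port_binding_update("overcloud-rhv-2.localdomain")
print(body)  # {"port": {"binding:host_id": "overcloud-rhv-2.localdomain"}}
```

The equivalent manual workaround from the CLI, on clients that support setting the binding host, would be along the lines of `openstack port set --host <new-host> <port-id>` (admin credentials required).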
This bugzilla is included in oVirt 4.2.1 release, published on Feb 12th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.1 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.