Description of problem:
The hosts_for_migration [ovn-dbs] host group is populated wrongly: it doesn't consider neutron_dhcp containers running on nodes other than those hosting neutron_server.

Version-Release number of selected component (if applicable):
python3-networking-ovn-migration-tool-7.3.1-1.20200902233413.el8ost.noarch (RHOS 16.1.3)

How reproducible:
always

Steps to Reproduce:
1. Install an overcloud with a split control plane, with neutron_dhcp and neutron_api running on distinct nodes

Actual results:

[ovn-dbs]
ctrl-net-c-01 ansible_host=10.0.37.45 ovn_central=true ansible_ssh_user=stack ansible_become=true
ctrl-net-c-02 ansible_host=10.0.37.21 ansible_ssh_user=stack ansible_become=true
ctrl-net-c-03 ansible_host=10.0.36.66 ansible_ssh_user=stack ansible_become=true
ctrl-net-c-04 ansible_host=10.0.37.139 ansible_ssh_user=stack ansible_become=true

[ovn-controllers]
compute-net-c-001 ansible_host=10.0.36.225 ansible_ssh_user=stack ansible_become=true ovn_controller=true
compute-net-c-002 ansible_host=10.0.36.152 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-01 ansible_host=10.0.37.104 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-02 ansible_host=10.0.37.15 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-03 ansible_host=10.0.37.8 ansible_ssh_user=stack ansible_become=true ovn_controller=true

Expected results:

[ovn-dbs]
compute-net-c-001 ansible_host=10.0.36.225 ansible_ssh_user=stack ansible_become=true
compute-net-c-002 ansible_host=10.0.36.152 ansible_ssh_user=stack ansible_become=true

[ovn-controllers]
compute-net-c-001 ansible_host=10.0.36.225 ansible_ssh_user=stack ansible_become=true ovn_controller=true
compute-net-c-002 ansible_host=10.0.36.152 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-01 ansible_host=10.0.37.104 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-02 ansible_host=10.0.37.15 ansible_ssh_user=stack ansible_become=true ovn_controller=true
ctrl-nova-c-03 ansible_host=10.0.37.8 ansible_ssh_user=stack ansible_become=true ovn_controller=true

Additional info:
Reproducible in PSI
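To illustrate the expected behavior, here is a minimal sketch (not the actual migration tool code) of how the inventory groups should be derived. The host-to-services map and service names below are assumptions for illustration only; the key point is that [ovn-dbs] membership should be keyed on where neutron_dhcp runs, not on where neutron_api runs.

```python
# Hypothetical sketch: derive hosts_for_migration groups from a
# per-host map of running containers (names assumed for illustration).
hosts = {
    "compute-net-c-001": {"neutron_dhcp", "neutron_ovs_agent"},
    "compute-net-c-002": {"neutron_dhcp", "neutron_ovs_agent"},
    "ctrl-net-c-01":     {"neutron_api"},
    "ctrl-nova-c-01":    {"neutron_ovs_agent", "nova_compute"},
}

# The fix: select [ovn-dbs] members by neutron_dhcp presence,
# rather than assuming neutron_api nodes also run the DHCP agent.
ovn_dbs = sorted(h for h, svcs in hosts.items() if "neutron_dhcp" in svcs)

# [ovn-controllers] covers every host running the OVS agent.
ovn_controllers = sorted(
    h for h, svcs in hosts.items() if "neutron_ovs_agent" in svcs
)

print("[ovn-dbs]")
print("\n".join(ovn_dbs))
print("\n[ovn-controllers]")
print("\n".join(ovn_controllers))
```

With the sample map above, only the compute-net nodes (which run neutron_dhcp) land in [ovn-dbs], matching the expected results, while the controller running only neutron_api is excluded.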
I assume this is also fixed in 16.2? (I thought there'd be a clone BZ.)
(In reply to Yaniv Kaul from comment #4)
> I assume this is also fixed in 16.2? (I thought there'd be a clone BZ.)

It is fixed in 16.2; the fix arrived there via the upstream stable Train release import. There is no BZ for 16.2 because the fix came from upstream.
Verified on RHOS-16.1-RHEL-8-20210804.n.0 with python3-networking-ovn-7.3.1-1.20210714143305.4e24f4c.el8ost.noarch and python3-networking-ovn-migration-tool-7.3.1-1.20210714143305.4e24f4c.el8ost.noarch. Tested on a composable-roles environment where neutron_dhcp and neutron_api ran on distinct nodes (networkers and controllers, respectively). Confirmed that the ovn_migration.sh script detects neutron_dhcp agents when they run on a different node than neutron_api, and that the Ansible inventory file is created properly.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.7 (Train) bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3762