Description of problem:
The nodes were able to pass introspection. The ironic dnsmasq is in use at introspection time, and there is no evidence that neutron tried to launch its own dnsmasq. The networks/subnets/ports are created at the API level and they have the PXE attributes. The overcloud deployment times out, and the deployer log is not very intuitive:

TASK [Run container-puppet tasks (generate config) during step 1] **************
Thursday 29 August 2019 15:50:49 +0000 (0:00:01.592) 0:16:11.056 *******
ok: [ceph-2] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

Overcloud configuration failed.

ok: [ceph-1] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-2] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-1] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

Ansible timed out at 5228 seconds.

Version-Release number of selected component (if applicable):
RHOS_TRUNK-16.0-RHEL-8-20190828.n.2

How reproducible:
always/unknown

Additional info:
The issue may not be related to a change in the networking code; something outside it may have changed, but in the end it is a networking issue.
Looks like this is the osp16 version of https://bugzilla.redhat.com/show_bug.cgi?id=1722033
Indeed, it is the same. From the openstack-neutron-openvswitch-agent log:

+ /usr/bin/python3 -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent
/usr/bin/python3: No module named neutron.cmd.destroy_patch_ports
+ sudo -E kolla_set_configs
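The error above is python3 reporting that neutron.cmd.destroy_patch_ports cannot be resolved in the container's installed neutron package. A quick way to confirm whether that module is importable on an affected node is the standard-library importlib check below (a diagnostic sketch for triage, not part of the product code):

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package (e.g. no `neutron` installed at all)
        # also counts as unavailable rather than propagating the error.
        return False


# On an affected node this prints False for
# "neutron.cmd.destroy_patch_ports", matching the container log.
print(module_available("neutron.cmd.destroy_patch_ports"))
```

Running the same check for a module that is present (e.g. `module_available("json")`) returns True, so the helper distinguishes a genuinely missing module from a broken Python environment.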
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:0283