Bug 1747426
| Summary: | Failed to provision overcloud nodes | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Attila Fazekas <afazekas> |
| Component: | openstack-neutron | Assignee: | Bernard Cafarelli <bcafarel> |
| Status: | CLOSED ERRATA | QA Contact: | Eran Kuris <ekuris> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 16.0 (Train) | CC: | amuller, bcafarel, chrisw, scohen |
| Target Milestone: | Upstream M3 | Keywords: | Triaged |
| Target Release: | 16.0 (Train on RHEL 8.1) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-neutron-14.1.0-0.20190830044821.bd99780.el8ost | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-02-06 14:42:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Looks like this is the osp16 version of https://bugzilla.redhat.com/show_bug.cgi?id=1722033

Indeed, it is the same, from the openstack-neutron-openvswitch-agent log:

```
+ /usr/bin/python3 -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent
/usr/bin/python3: No module named neutron.cmd.destroy_patch_ports
+ sudo -E kolla_set_configs
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283
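For anyone hitting the same startup failure on a different build, a quick way to confirm whether the installed neutron package actually ships the module the agent container tries to run is to probe for it with the container's interpreter. This is a minimal sketch, assuming it is executed with the same /usr/bin/python3 the neutron-openvswitch-agent container uses:

```python
# Minimal sketch: check whether neutron.cmd.destroy_patch_ports is importable
# from the neutron package installed in this environment (run with the same
# python3 the container start script invokes).
import importlib.util

try:
    spec = importlib.util.find_spec("neutron.cmd.destroy_patch_ports")
except ModuleNotFoundError:
    # the neutron package (or neutron.cmd) is not importable at all
    spec = None

print("present:" if spec else "missing:", "neutron.cmd.destroy_patch_ports")
```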
Description of problem:

The nodes were able to pass introspection. The ironic dnsmasq is in use at introspection time, and there is no evidence that neutron tried to launch its own dnsmasq. The networks/subnets/ports are created at the API level and they carry the PXE attributes (see the sketch after this description).

Overcloud deployment times out. The deployer log is not very informative:

```
TASK [Run container-puppet tasks (generate config) during step 1] **************
Thursday 29 August 2019 15:50:49 +0000 (0:00:01.592) 0:16:11.056 *******
ok: [ceph-2] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

Overcloud configuration failed.

ok: [ceph-1] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-2] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-1] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

Ansible timed out at 5228 seconds.
```

Version-Release number of selected component (if applicable):
RHOS_TRUNK-16.0-RHEL-8-20190828.n.2

How reproducible:
always/unknown

Additional info:
The issue may not be caused by a change in the networking code itself; something outside it may have changed, but in the end it is a networking issue.
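As a way to double-check that the provisioning ports really carry the PXE attributes mentioned above, one can list the ports that have DHCP extra options set. A minimal sketch, assuming openstacksdk is available on the undercloud and clouds.yaml defines an "undercloud" cloud entry (the cloud name is an assumption):

```python
# Minimal sketch: print the Neutron ports that carry PXE/DHCP extra options,
# to confirm the "pxe attributes" actually reached the API.
# Assumes openstacksdk and an "undercloud" entry in clouds.yaml.
import openstack

conn = openstack.connect(cloud="undercloud")

for port in conn.network.ports():
    if port.extra_dhcp_opts:
        print(port.id, port.name, port.extra_dhcp_opts)
```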