Previously, OpenDaylight (ODL) configuration files were not recreated during controller replacement, which caused the subsequent stack update to fail. This fix unmounts /opt/opendaylight/data from the host so that the configuration files are recreated during redeployment.
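The behavior described above can be illustrated with a small probe. The helper below is a hypothetical sketch (the function name and structure are invented, not part of the fix); only the /opt/opendaylight/data path comes from the fix description:

```shell
#!/bin/sh
# Hypothetical helper (name assumed): report whether a host-side ODL
# data directory still holds files. If it does, a redeploy would reuse
# that stale configuration instead of recreating it.
odl_data_is_stale() {
    dir="$1"
    # Directory exists and is non-empty -> stale data would be reused.
    [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

# Example: check the path the fix unmounts.
if odl_data_is_stale /opt/opendaylight/data; then
    echo "stale ODL data present on the host"
else
    echo "no host-side ODL data; config will be recreated on redeploy"
fi
```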
Description of problem:
Replacing a controller following the documentation: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/sect-scaling_the_overcloud#sect-Replacing_Controller_Nodes
results in a failure:
Stack overcloud UPDATE_FAILED
overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.1:
resource_type: OS::Heat::StructuredDeployment
physical_resource_id: 6b430a88-2f6b-4f8b-bfb6-7da3b40ece22
status: UPDATE_FAILED
status_reason: |
Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
deploy_stdout: |
...
"Error: curl -k -o /dev/null --fail --silent --head -u odladmin:redhat http://172.17.1.18:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1 returned 22 instead of one of [0]",
"Error: /Stage[main]/Neutron::Plugins::Ovs::Opendaylight/Exec[Wait for NetVirt OVSDB to come up]/returns: change from notrun to 0 failed: curl -k -o /dev/null --fail --silent --head -u odladmin:redhat http://172.17.1.18:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1 returned 22 instead of one of [0]",
"Warning: /Stage[main]/Neutron::Plugins::Ovs::Opendaylight/Exec[Set OVS Manager to OpenDaylight]: Skipping because of failed dependencies"
]
}
to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/1561c766-d74b-46e7-9ea9-7f1391a31911_playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1
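For context, the failing check in the log above is Puppet waiting for the NetVirt topology endpoint to respond; curl exits with code 22 when --fail sees an HTTP error status (here, the netvirt:1 topology was never recreated, so ODL answers with an error). Below is a hedged sketch of such a probe; the function name, retry loop, and timing parameters are invented, while the curl command is taken verbatim from the error message:

```shell
#!/bin/sh
# Sketch of a readiness probe like the one in the failing Exec resource.
# The retry/backoff parameters are assumptions; the curl invocation is
# the one from the error output above.
wait_for_netvirt() {
    url="$1"; attempts="${2:-10}"; delay="${3:-6}"
    i=1
    while [ "$i" -le "$attempts" ]; do
        if curl -k -o /dev/null --fail --silent --head \
                -u odladmin:redhat "$url"; then
            echo "NetVirt OVSDB topology is up"
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "NetVirt OVSDB did not come up after $attempts attempts" >&2
    return 22
}
```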
The error occurs during the last step of the procedure, after manually editing the cluster configuration and running overcloud deploy for the final time.
Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-8.0.2-36.el7ost.noarch
How reproducible:
100 %
The root cause appears to be similar to https://bugzilla.redhat.com/show_bug.cgi?id=1623123; karaf logs are needed to confirm. Meanwhile, Tomas, please try the controller replacement again with this patch: https://review.openstack.org/#/c/612663/. If it still fails, please share the karaf logs.
What you need to do:
1. Delete the deployed stack
2. Apply this patch
3. Deploy the stack
4. Perform the controller replacement procedure
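The steps above can be sketched as a shell transcript. This is not a verbatim procedure from this bug: the deploy arguments, environment file, and patch file name are placeholders for whatever your original deployment used.

```shell
# 1. Delete the deployed stack (run as the stack user on the undercloud).
openstack stack delete --yes --wait overcloud

# 2. Apply the patch to the installed templates (default
#    tripleo-heat-templates location; adjust if you deploy from a copy).
cd /usr/share/openstack-tripleo-heat-templates
patch -p1 < ~/612663.patch   # patch file name assumed

# 3. Redeploy with the same arguments as the original deployment.
openstack overcloud deploy --templates \
    -e ~/templates/my-environment.yaml   # placeholder environment file

# 4. Then follow the controller replacement procedure from the linked
#    documentation.
```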
Created attachment 1513756
Evidence of the verification and guide to reproduce it
I've verified that this bug is fixed by applying the following two patches, which are included in the RPM stated in the 'fixed in' field:
- https://review.openstack.org/612663
- https://review.openstack.org/620053
Complete evidence of the verification is attached; the attached document also serves as a guide for reproducing the controller replacement steps.
Next steps are for QE to verify.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0068