Description of problem:
While deploying controller or compute nodes, puppet gets stuck checking memcached status via "/tmp/ha-all-in-one-util.bash all_members_include memcached" and eventually fails. Further investigation reveals that crm_node has changed its output format, which breaks the regex used in "ha-all-in-one-util.erb" to get the members, i.e.:
members=$(/usr/sbin/crm_node -l | perl -p -e 's/^.*\s+(\S+)$/$1/g')

Version-Release number of selected component (if applicable): 6.0

How reproducible:

Steps to Reproduce:
1. Install rhel-osp-installer on RHEL 7.2 and create a deployment.
2. Discover nodes and assign deployment roles to the discovered nodes.
3. Each node installs the operating system and runs puppet.
4. Puppet gets stuck in "/tmp/ha-all-in-one-util.bash all_members_include memcached".

Actual results: Failed to get service status from members.

Expected results: It should be able to get the status of the nodes.

Additional info: I got past this error by using awk to get the crm members in ha-all-in-one-util.bash:
members=$(/usr/sbin/crm_node -l | awk '{print $2}')
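To illustrate the failure mode: newer crm_node appends a state column to each line of "crm_node -l" output, so the perl regex, which captures the last whitespace-separated field, now returns the state ("member") instead of the node name, while awk's $2 still picks the node name. A minimal sketch against simulated output (the node names below are hypothetical examples, not from this deployment):

```shell
#!/bin/sh
# Simulated `crm_node -l` output in the newer "ID NAME STATE" format
# described in this bug (sample node names are made up).
crm_output="1 controller-0 member
2 controller-1 member"

# Original extraction from ha-all-in-one-util.erb: the regex captures the
# LAST field on each line, which is now the state, not the node name.
old_members=$(printf '%s\n' "$crm_output" | perl -p -e 's/^.*\s+(\S+)$/$1/g')
echo "old: $old_members"

# Proposed workaround: take the second field, which is the node name.
new_members=$(printf '%s\n' "$crm_output" | awk '{print $2}')
echo "new: $new_members"
```

Running this prints "member" twice for the old extraction and the two node names for the awk version, which is why the all_members_include check never matches a real node.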
I'm going to piggyback on this bug. This problem is causing deployments to fail for me as well, following the FlexPod OpenStack CVD. Adding additional members to this bug for awareness. If one patches /etc/puppet/environments/production/modules/quickstack/templates/ha-all-in-one-util.erb, commenting out this line, which now returns "member":
members=$(/usr/sbin/crm_node -l | perl -p -e 's/^.*\s+(\S+)$/$1/g')
and using this instead:
members=$(/usr/sbin/crm_node -l | awk '{print $2}')
then the deployment proceeds.
David, Is there any way you can provide a pull request to the kilo branch of astapor with the update you mention in comment 2?
FYI, there is no kilo branch for astapor; it is just master [1], as kilo was the last (OSP 7) release we did with astapor. That said, since you are using OSP 6/Juno, it might make more sense for such a patch (including the one for the other BZ) to go directly to the juno branch, and we can later discuss what of that is relevant to kilo/OSP 7 if needed. [1] https://github.com/redhat-openstack/astapor/branches
Pull request to fix this bug here: https://github.com/redhat-openstack/astapor/pull/568/
David, if we are able to produce an updated RPM, would you be able to help verify?
Hi Mike, sure I can, and I would be happy to kick off a new build to verify, assuming bug 1292555 is patched too. That bug and this one prevent deployments from occurring.
Merged
*** Bug 1299987 has been marked as a duplicate of this bug. ***
*** Bug 1299812 has been marked as a duplicate of this bug. ***
I was able to test this with the assistance of a colleague and can confirm that builds with the rhel-osp-installer now succeed; they no longer fail. We even did a clean install of the database, as I seem to remember that being required when updating the openstack-foreman-installer package. Marking the bug VERIFIED. Hope that's the right state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0284.html
*** Bug 1294539 has been marked as a duplicate of this bug. ***