Created attachment 1053921 [details]
Failed resource after scale up
Description of problem:
I tried to scale up from 1 compute node to 3 (on bare metals, with puddle 2015-07-13) and the stack failed on the "ComputePuppetDeployment" resource of the first compute node (the one that already existed could not be updated). The failure reason is not very informative: "Error: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6".
Further debugging on the failed compute node shows an attempt to run "virsh secret-set-value" with a secret UUID that does not exist; the libvirt secret list is in fact empty. It appears the fsid was regenerated during the scale-up when it should have been preserved.
Additional info is attached to the bug; it shows the error from "heat deployment-show".
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy 3 controllers, 1 compute node and 1 ceph node. I deployed on bare metals, without network isolation, using tuskar.
2. Run the deployment command again to scale up to 3 compute nodes.

Actual results:
The scale-up fails.
Believed to be fixed by: https://review.gerrithub.io/#/c/239994/
Brad, this is a different BZ; we need to make sure the params at  are not re-created when updating an existing deployment.
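The requirement above, that a generated parameter such as the Ceph fsid is created once and then reused on every stack update, can be sketched as a toy. This is illustrative Python only, not the actual tripleo-heat-templates or oscplugin code; the `get_or_create_param` helper and the in-memory store are hypothetical:

```python
import uuid

# Hypothetical parameter store standing in for the deployment plan's
# saved parameters. In the buggy behavior, a new UUID was generated on
# every update, so existing compute nodes ended up referencing a Ceph
# fsid / libvirt secret UUID that no longer matched anything.
store = {}

def get_or_create_param(name):
    """Idempotent lookup: generate a value only on first use, then reuse it."""
    if name not in store:
        store[name] = str(uuid.uuid4())
    return store[name]

first = get_or_create_param("ceph_fsid")   # initial deployment
second = get_or_create_param("ceph_fsid")  # later scale-up / update
assert first == second  # existing nodes keep a matching fsid
```

The point is only that updates must read back the stored value instead of regenerating it; the real fix lives in how the deployment plan handles its generated parameters.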
The previous fix addresses it only partially. Here is the remainder:
*** Bug 1246023 has been marked as a duplicate of this bug. ***
Verified in: python-rdomanager-oscplugin-0.0.8-43.el7ost.noarch
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.