Bug 1255759
| Summary: | heat race condition causing deployments to get stuck at various places | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Mike Burns <mburns> |
| Component: | openstack-heat | Assignee: | Steve Baker <sbaker> |
| Status: | CLOSED ERRATA | QA Contact: | Amit Ugol <augol> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | Director | CC: | augol, ddomingo, djuran, fhubik, gfidente, ggillies, jliberma, jschluet, jstransk, kprabhak, lnatapov, mburns, mcornea, nbarcet, ohochman, opavlenk, rhel-osp-director-maint, rhos-maint, rlandy, sasha, sbaker, shardy, yeylon, zbitter |
| Target Milestone: | z2 | Keywords: | Triaged, ZStream |
| Target Release: | 7.0 (Kilo) | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-heat-2015.1.1-1.el7ost | Doc Type: | Bug Fix |
| Doc Text: | Previously, deployment signalling could cause other deployments to remain in an IN_PROGRESS state until the stack timed out. This prevented deployment data from reaching overcloud nodes, causing the overcloud deployment to time out and enter a FAILED state. The issue was caused by legacy code paths that updated the metadata of every stack resource after any signal. These paths are now disabled, so metadata is updated only through the resource-level locking mechanism, which correctly handles concurrent metadata updates (see the illustrative sketch below the table). | | |
| Story Points: | --- | | |
| Clone Of: | 1249628 | Environment: | |
| Last Closed: | 2015-10-08 12:20:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1258497 | | |
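For context, the following is a minimal, illustrative sketch of the resource-level, compare-and-swap style of metadata update described in the Doc Text. This is not Heat's actual code: the `MetadataStore` class and its methods are hypothetical stand-ins for Heat's database layer, shown only to make the concurrency behavior concrete. Each writer merges its change into the latest copy of one resource's metadata and retries if another writer got there first, instead of rewriting metadata for every resource in the stack after each signal.

```python
# Illustrative sketch only (not Heat's actual code): resource-level
# metadata updates guarded by an atomic compare-and-swap, so concurrent
# signals against the same resource cannot silently overwrite each other.
import threading


class MetadataStore:
    """Hypothetical per-resource store with an atomic compare-and-swap."""

    def __init__(self):
        self._lock = threading.Lock()   # stands in for the DB's row-level atomicity
        self._data = {}                 # resource_id -> (version, metadata dict)

    def read(self, resource_id):
        with self._lock:
            return self._data.get(resource_id, (0, {}))

    def compare_and_store(self, resource_id, expected_version, metadata):
        """Store metadata only if no other writer updated it in between."""
        with self._lock:
            current_version, _ = self._data.get(resource_id, (0, {}))
            if current_version != expected_version:
                return False            # lost the race; the caller retries
            self._data[resource_id] = (current_version + 1, metadata)
            return True


def update_metadata(store, resource_id, key, value):
    """Retry loop: always merge the change into the *latest* metadata."""
    while True:
        version, metadata = store.read(resource_id)
        merged = dict(metadata, **{key: value})
        if store.compare_and_store(resource_id, version, merged):
            return merged


if __name__ == "__main__":
    store = MetadataStore()
    # Two concurrent signals updating the same resource's metadata;
    # with compare-and-swap plus retry, neither update is lost.
    t1 = threading.Thread(target=update_metadata,
                          args=(store, "deployment-1", "signal_a", "done"))
    t2 = threading.Thread(target=update_metadata,
                          args=(store, "deployment-1", "signal_b", "done"))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(store.read("deployment-1"))   # both keys present
```

The point of the retry loop is that the lock covers only the atomic swap, not the whole read-modify-write: a writer that reads stale metadata fails its swap and tries again against the new version, which is how concurrent updates are handled correctly without one signal's metadata clobbering another's.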
Description
Mike Burns
2015-08-21 13:31:37 UTC
Been testing the proposed patch by sbaker for multiple days now and can't reproduce the issue at all, so I consider the issue completely fixed by this patch.

Regards,
Graeme

I have already closed one of these too soon. I would like to keep this on QA for a while longer (not blocking ATM). I will gather results from CI, virt setups, and HA bare-metal setups, and will verify it when I have more data. This might take up to a week. So far so good; I am clicking on Save Changes with a shaky finger...

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:1865