Bug 1469598 - [Docs][RFE][Upgrades] Revive failed upgrade where it is possible
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: RHOS Documentation Team
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On: 1430914
Blocks:
 
Reported: 2017-07-11 14:19 UTC by Dan Macpherson
Modified: 2018-09-10 04:56 UTC (History)
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-10 04:56:59 UTC
Target Upstream Version:
Embargoed:



Description Dan Macpherson 2017-07-11 14:19:38 UTC
== DESCRIPTION ==
As a cloud operator, in the case of a failed upgrade I want to revive the upgrade whenever possible and continue the process (or restart it from the beginning) without needing to roll back my environment to a previously working state.

== DOCS IMPACT ==
Requires documentation in the Troubleshooting section of the Upgrade Guide

Comment 1 Sofer Athlan-Guyot 2017-07-25 08:21:16 UTC
Hi Dan,

This won't make it into OSP12 and is reported against OSP13, so I guess this can be closed.

Thanks,

Comment 2 Dan Macpherson 2017-07-25 12:53:56 UTC
Moving to OSP13 as per Sofer's comment

Comment 3 Dan Macpherson 2018-09-07 03:59:40 UTC
Sofer and Carlos,

Do we still need an action on this item? My understanding is that the new "openstack overcloud upgrade" command can essentially be rerun against specific node roles or hostnames (thanks to Ansible). Is that the case? Do we need any further documentation on this item?

Comment 4 Sofer Athlan-Guyot 2018-09-07 11:19:56 UTC
Hi Dan,

(In reply to Dan Macpherson from comment #3)
> Sofer and Carlos,
> 
> Do we still need an action on this item? My understanding is that the new
> "openstack overcloud upgrade" command can essentially be rerun against
> specific node roles or hostnames (thanks to Ansible). Is that the case? Do we
> need any further documentation on this item?

No, I think we're good with the current workflow and this bz can be closed.

One thing, though, is that we don't test for idempotency of the tasks run during the upgrade. So even though we can now rerun any role/host, a rerun may fail because we haven't verified that all tasks are idempotent. But that's another story.
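As a sketch of the rerun workflow discussed above: the role and node names below are hypothetical, and the flags assume the OSP13 `openstack overcloud upgrade run` syntax, so check the release's upgrade guide before relying on them.

```shell
# Rerun the upgrade for an entire role after fixing the cause of a failure.
# The Ansible-driven workflow is intended to be safe to repeat, modulo the
# idempotency caveat mentioned above.
openstack overcloud upgrade run --roles Controller

# Or target specific nodes by hostname (names here are illustrative):
openstack overcloud upgrade run --nodes overcloud-compute-0,overcloud-compute-1
```

These commands require a deployed undercloud and overcloud, so they are shown only to illustrate the "rerun rather than roll back" workflow the comments describe.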

