Bug 1861854 - [RFE] Provide method to update central DCN stack with minimal service interruption
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: tripleo-ansible
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: John Fulton
QA Contact: Mike Abrams
URL:
Whiteboard:
Depends On: 1859692
Blocks: 1802774
 
Reported: 2020-07-29 17:59 UTC by John Fulton
Modified: 2022-08-24 10:04 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-08 14:38:04 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-2711 0 None None None 2022-08-24 10:04:52 UTC

Description John Fulton 2020-07-29 17:59:04 UTC
When using DCN with storage, after additional Ceph clusters are added at edge sites, the central heat stack must be updated [1] so that the cephx keys of the new Ceph clusters are available to the Ceph clients, and so that the central Glance configuration has entries for the new Glance backends (which are the new Ceph clusters).
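For context, the Glance backend entries in question use Glance's multi-store support. A minimal sketch of what the central glance-api.conf might gain for one new edge Ceph cluster follows; the backend name `dcn1` and the file paths are hypothetical, not taken from this bug:

```ini
[DEFAULT]
# The new edge cluster is added as an additional rbd backend
enabled_backends = central:rbd, dcn1:rbd

[glance_store]
default_backend = central

[dcn1]
# Points at the new edge Ceph cluster's conf and cephx user
rbd_store_ceph_conf = /etc/ceph/dcn1.conf
rbd_store_user = openstack
rbd_store_pool = images
```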

All of this is possible today, but the update of the central stack could be less disruptive. For example, in theory we should be able to reload (HUP), not restart, the Glance containers, and no other services would need to be restarted.
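The current procedure and the less-disruptive goal can be sketched roughly as follows; the environment file names and the container name are assumptions for illustration, not taken from this bug:

```shell
# Today: re-run the deploy for the central stack so the new edge cluster's
# cephx key and Glance backend land in the central configuration
# (environment file names here are hypothetical):
openstack overcloud deploy \
  --stack central \
  --templates \
  -e central-site.yaml \
  -e glance-dcn-stores.yaml

# What this RFE asks for: once the config is regenerated, reload Glance
# (e.g. via SIGHUP) rather than restarting it or any other service:
sudo podman kill --signal=HUP glance_api
```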

This RFE tracks giving TripleO the ability to do the above.


[1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_multibackend_storage.html#update-central-site-to-use-additional-ceph-clusters-as-glance-stores

Comment 2 John Fulton 2020-08-12 20:28:24 UTC
On resolution, undo the result of doc bug 1868487.

Comment 5 John Fulton 2021-09-08 14:38:04 UTC
This work would be non-trivial and is not a high enough priority to pursue at this time (as per a conversation with PM and Engineering).

