Bug 1306716 - [RFE] Deploy ceph and openstack separately or Integrate existing ceph cluster into overcloud.
Summary: [RFE] Deploy ceph and openstack separately or Integrate existing ceph cluster...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Angus Thomas
QA Contact: Shai Revivo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-11 16:21 UTC by Jeremy
Modified: 2019-10-10 11:10 UTC (History)
9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-15 06:55:58 UTC
Target Upstream Version:
Embargoed:



Description Jeremy 2016-02-11 16:21:06 UTC
Description of problem:
Could Director deploy OpenStack and Ceph separately? This would allow redeploying OpenStack without having to tear down the Ceph cluster every time. The idea is to be able to re-deploy while keeping the existing Ceph cluster.

Additional info:

If the Ceph cluster cannot be deployed separately from the overcloud, a similar solution would be to add an existing Ceph cluster into the director deployment.

Comment 1 Jeremy 2016-02-11 20:49:15 UTC
The customer suggests deploying ceph in a separate heat stack from the overcloud.

Comment 3 Mike Burns 2016-02-12 16:14:39 UTC
We don't have the ability today to essentially adopt either a ceph or openstack deployment with director.  It's a pretty difficult thing to do, really, since we'd need to essentially dynamically generate the heat templates needed to deploy the ceph cluster.

The concept of having separate heat stacks that can be managed separately is one I've heard mentioned, but I don't know if it's a roadmap item or something that was rejected upstream.

We do have the ability to connect to an existing external ceph cluster, but that cluster is managed completely independently of director.  We won't deploy ceph mons or interact with the ceph OSDs at all other than to utilize them.  Upgrades, etc, are all done independently for the external ceph cluster.
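(For reference, the external-cluster integration described above is driven by a Heat environment file passed to the overcloud deploy. A minimal sketch is below; the FSID, key, and monitor addresses are placeholder values that must be taken from the existing external Ceph cluster, and the template path may vary by OSP release.)

```yaml
# ceph-external-params.yaml -- hypothetical example values.
parameter_defaults:
  # From "ceph fsid" on the external cluster:
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # From the external cluster's client keyring:
  CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # Comma-separated list of the external monitor hosts:
  CephExternalMonHost: '172.16.1.7,172.16.1.8'
```

This would then be included alongside the stock external-Ceph environment file, e.g.:

openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml -e ~/ceph-external-params.yaml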

Comment 4 Jeremy 2016-02-22 15:46:55 UTC
So is this something that might be a future feature?

Comment 5 Ramon Acedo 2016-03-03 20:46:19 UTC
The following article covers the second half of this request already: https://access.redhat.com/articles/1994713

Comment 6 Mike Burns 2016-04-07 21:07:13 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 8 Jeff Brown 2016-09-13 18:09:12 UTC
This is an RFE that is going to be moved to OSP11 when confirmed by Federico.

Comment 9 Federico Lucifredi 2016-09-15 06:55:58 UTC
I am considering WONTFIX for this one. If anyone wants it to be done, please speak up and explain why this is a good idea, or forever hold your peace.

Deploying Ceph separately is possible today with the RHCS product. Importing that cluster into OSP is also possible. So what is left here is doing all of this with dynamically generated Heat templates from OSPd and I do not see any value.

