Bug 1127852 - [RFE] Need ceph-deploy to be able to use pre-generated ceph.conf and keyring file
Summary: [RFE] Need ceph-deploy to be able to use pre-generated ceph.conf and keyring file
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z1
Target Release: 5.0 (RHEL 6)
Assignee: Alfredo Deza
QA Contact: Warren
URL: https://trello.com/c/RSYLWwMd
Whiteboard: MVP
Depends On:
Blocks: 1108193
 
Reported: 2014-08-07 17:01 UTC by arkady kanevsky
Modified: 2016-04-27 03:17 UTC
CC: 16 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Clones: 1127854
Environment:
Last Closed: 2014-09-24 14:01:45 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Ceph Project Bug Tracker 9136

Description arkady kanevsky 2014-08-07 17:01:02 UTC
Description of problem:
As part of an OpenStack deployment with a Ceph backend for block storage, we need the ceph-deploy command to be able to generate the ceph.conf and keyring files that will be used by clients and Ceph cluster nodes, including the Cinder and Glance configurations, without performing any other deployment steps, such as pushing the created files to the nodes.

Version-Release number of selected component (if applicable):
RHEL OSP5  Installer

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Alfredo Deza 2014-08-07 17:49:42 UTC
Arkady, this should really be assigned to me.

ceph-deploy is already able to create a ceph.conf and a keyring when you do `ceph-deploy new {nodes}`.
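
For illustration (the hostnames here are placeholders), running:

ceph-deploy new mon1 mon2 mon3

writes an initial ceph.conf and a ceph.mon.keyring into the current working directory.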

Not sure if you are asking to also generate Cinder and Glance configurations, but that seems a bit out of scope for what ceph-deploy does.

Can you explain a bit more how ceph-deploy is meant to help here? Is it just creating those configs? Where would it push those files? Again, this seems like it is not a good fit for what ceph-deploy does.

Comment 3 arkady kanevsky 2014-08-11 14:58:28 UTC
Alfredo,
glad that you are stepping in.
Neil was not 100% sure whether such a capability already exists, so this bug was created.
This is one of the bugs in the sequence on joint OSP and Ceph deployment.
This bug is only about generating the two files.
There is https://bugzilla.redhat.com/show_bug.cgi?id=1127854 which will copy them to the OSP Installer VM so the OSP Installer can push them to all nodes.

Please either document how this is supposed to be done, or provide a pointer to documentation on how to create these files when the nodes do not exist yet.

Comment 4 Alfredo Deza 2014-08-11 15:13:09 UTC
This functionality does not exist, but it should not be difficult to teach ceph-deploy to do it.

Will work on this today.

Comment 5 Alfredo Deza 2014-08-11 17:51:08 UTC
If by "no other action" we also mean "do not try to resolve initial monitors" then the ceph.conf file generated will be incomplete (it will not have a mon_initial_members) and it will be required to edit the ceph.conf file manually.

ceph-deploy does this so that monitors are aware of that mapping and can form quorum initially.

That would be the only "action" taken by `ceph-deploy new` that involves something other than creating the admin keyring and ceph.conf files.
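
For illustration, the manual edit would amount to adding a line like the following (hostnames are placeholders) to the [global] section of the generated ceph.conf:

mon_initial_members = mon1, mon2, mon3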

Comment 6 Neil Levine 2014-08-11 21:17:43 UTC
If I remember correctly, the goal of this ticket was to allow the RHEL-OSP Installer to distribute the ceph.conf and keyring *before* the actual Ceph cluster was running. 

Unfortunately, this is not possible because ceph-deploy does not actually generate the keys itself (as I have just discovered). It actually gathers keys from the running MONs - the MONs generate the default keys when they are installed. ceph-deploy merely 'gathers' them to the admin node so they can be distributed.

We probably need to have another call to review the process given this info.

Comment 7 Neil Levine 2014-08-11 23:57:48 UTC
So after chatting with Josh, there is a more elegant solution.

We can create a small python script which would allow the RHEL-OSP Installer to generate the client key itself. It could then distribute it to the various nodes, along with a preseeded ceph.conf file, and then place that key on the ICE admin node. ceph-deploy would then add that key to the Ceph cluster as part of the server-side installation process.

In this way, the RHEL-OSP Installer could do everything on the OpenStack side autonomously without there needing to be a Ceph cluster running. Once finished on the OSP side, the user would then go to the ICE Installer to do the storage setup and voila, it should all work after that.
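
A minimal sketch of such a script, assuming the standard cephx secret layout (a little-endian header of u16 type, u32 creation seconds, u32 nanoseconds, and u16 key length, followed by 16 random key bytes, with the whole thing base64-encoded):

import base64
import os
import struct
import time

def generate_cephx_key():
    # 16 random bytes form the actual secret.
    key = os.urandom(16)
    # Assumed header layout: type (1), created-at seconds,
    # nanoseconds, and the key length.
    header = struct.pack('<hiih', 1, int(time.time()), 0, len(key))
    return base64.b64encode(header + key)

print(generate_cephx_key().decode('ascii'))

The output is the base64 secret that would go into the client keyring file.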

Comment 8 arkady kanevsky 2014-08-12 03:30:04 UTC
Sounds good. Looking forward to the details and to what input parameters the script needs.

Comment 9 arkady kanevsky 2014-08-12 03:35:23 UTC
It should handle the default HA case of three MON services on the controller nodes, and the degenerate case of a single MON for non-HA. It should also handle any number of OSD nodes, from 3 to "many".
If we want to generate different keyrings for Cinder and Glance, which run on the same controller nodes, the script needs to handle that.

And the ceph.conf should handle three networks: provisioning, storage (client), and cluster.

Comment 10 Neil Levine 2014-08-12 16:12:40 UTC
Ceph upstream ticket: http://tracker.ceph.com/issues/9083

The script should be able to generate an arbitrary number of client keys that can be used for images, volumes, and backups.

The ceph.conf issue is separate. We should be able to handle the generation of a ceph.conf file within Puppet currently. We just need to document the necessary key/value pairs.

Comment 11 Neil Levine 2014-08-14 18:01:24 UTC
So a Ceph key can be generated using the ceph-authtool command, which is part of the ceph-common package. This package is already in the RHEL-OSP channel.
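
For example, a keyring containing a freshly generated key for the client.volumes user could be created with:

ceph-authtool --create-keyring ceph.client.volumes.keyring --gen-key -n client.volumes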

If the RHEL-OSP Installer can create the key and pass it to a predefined directory on the ICE Installer node, we will modify ceph-deploy to automatically add it to the cluster during the install process. (http://tracker.ceph.com/issues/9118)

Comment 12 Crag Wolfe 2014-08-14 20:55:52 UTC
First, I will draw attention to the fact that Foreman runs on RHEL 6; the rest of my comment assumes that ceph-authtool is available there and produces valid output for use on the ICE Installer node.

So, when Foreman is installed, it can run ceph-authtool and create the keyring secrets that will be used for the Glance images and Cinder volumes pools. It will place them on the filesystem of the Foreman server in a location where a human can retrieve them before going through the ICE Installer process, at least for A1.

Does that align with everybody's expectations?

Comment 13 Neil Levine 2014-08-14 22:25:17 UTC
We have a BZ to create the ceph client packages for RHEL6 but it's stuck: https://bugzilla.redhat.com/show_bug.cgi?id=1125406

I agree with your workflow comments, though in the future it would be nice to have Puppet place the pre-generated key on the ICE admin node for us to use.

Additionally, are we going to get Foreman to generate the ceph.conf file? It can be predefined fairly easily with (as per Sage's email suggestion) a round-robin DNS name for the MON value, which can then be set to resolve to whatever the controller node IPs are.
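
For illustration (the DNS name is hypothetical), the templated MON value could then be as simple as:

mon_host = ceph-mon.example.com

with ceph-mon.example.com resolving round-robin to the controller node IPs.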

Comment 14 Crag Wolfe 2014-08-14 23:02:58 UTC
Acknowledged on the "nice to have"; I will keep that in mind when working with the Staypuft folks over the next sprint.

I agree it would be ideal for Foreman to generate the ceph.conf since it is now already generating keyrings. Is there a command-line tool to do this, or should we just fill in a template? If we take the template route, how do we choose an fsid? Does mon_initial_members need to be DNS names, or can they just be IPs? The mon_host configuration parameter looks straightforward -- it will just be the controller IPs. I think for simplicity at this stage we should assume both the Cinder and Glance pools will be defined by the ICE Installer. Am I missing anything else?

Pasting a proposed /etc/ceph/ceph.conf (the first 3 config values would be templated):

[global]
fsid = db43b3be-35f5-433f-b045-2b8fa84e1ffa
mon_initial_members = d1a1, d1a2, d1a3
mon_host = 192.168.7.151,192.168.7.36,192.168.7.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring
 
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

Comment 15 Neil Levine 2014-08-15 17:08:34 UTC
The fsid is just generated via uuidgen, so it can be arbitrarily defined. The rest of the template looks fine, though I need to confirm how we handle public/private network definitions in ceph.conf.
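
For illustration (the template file and @FSID@ placeholder are hypothetical), filling in the fsid could be as simple as:

sed "s/@FSID@/$(uuidgen)/" ceph.conf.template > /etc/ceph/ceph.conf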

Will chat with Alfredo and get back to you.

Comment 16 Neil Levine 2014-08-15 18:32:52 UTC
Upstream tickets to allow ceph-deploy to support a pre-defined ceph.conf and add a pre-generated key are:

http://tracker.ceph.com/issues/9136
http://tracker.ceph.com/issues/9118

Comment 17 Neil Levine 2014-08-21 18:33:44 UTC
This workflow has been superseded.

Comment 18 Mike Burns 2014-09-24 14:01:45 UTC
Since RHEL-OSP doesn't ship ceph-deploy and comment 17 indicates that the design has changed, closing this bug.

Comment 19 arkady kanevsky 2014-09-24 14:51:13 UTC
So what is the replacement for this bug?
How do we deploy Ceph automatically and configure OSP for it?

