Red Hat Bugzilla – Bug 1267283
[Docs] [Director] Need explanation on how to set up Ceph Storage node journal partitions using GPT labels
Last modified: 2016-06-09 21:58:02 EDT
Customer is trying to set up an enhanced deployment using Ceph Storage nodes as explained in the Red Hat documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Advanced-Scenario_3_Using_the_CLI_to_Create_an_Advanced_Overcloud_with_Ceph_Nodes.html#sect-Advanced-Configuring_Ceph_Storage
There is important information regarding partitioning of the journal disk.
---- from the docs ---
The director does not create partitions on the journal disk. You must manually create these journal partitions before the Director can deploy the Ceph Storage nodes.
The Ceph Storage OSDs and journals partitions require GPT disk labels, which you also configure prior to customization. For example, use the following command on the potential Ceph Storage host to create a GPT disk label for a disk or partition:
# parted [device] mklabel gpt
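(For completeness, and not from the docs: on a system where parted is available, labeling the journal disk and creating the journal partitions would look roughly like the following. The device name /dev/sdb and the partition sizes are only assumptions.)

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart journal1 1MiB 5GiB
# parted -s /dev/sdb mkpart journal2 5GiB 10GiB
# parted -s /dev/sdb print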
How is one supposed to use parted to create GPT labels on the Ceph nodes before deployment, when there is no OS installed on them yet?
Can you please provide additional info on how to create such partitions ahead of a deployment?
Assigning to Dan for review.
Couple of things on this BZ:
* This is pretty simple to do by using a RHEL Live CD and partitioning the drives through the Live OS.
* I also recommend a tool called GParted, which you can run as a live CD and which contains all the tools needed to partition drives.
* You can also run the partitioning commands as extra config through a firstboot template (see the sketch after this list). This method partitions the disks automatically as part of the Overcloud deployment process. This is probably the method I'll end up documenting, unless...
* I've heard version 7.1 of the director can automatically partition these disks. I'll have to get in touch with a few people on this, but if so then that saves a lot of time for the end user.
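For the firstboot approach mentioned above, here is a minimal sketch of what such a template might look like. This is only an illustration, not the final documented method: the journal disk (/dev/sdb), the partition sizes, the hostname match, and the file paths are all assumptions.

A firstboot template (for example /home/stack/templates/firstboot/ceph-journals.yaml) that runs a partitioning script on first boot:

  heat_template_version: 2014-10-16

  description: >
    Create a GPT label and journal partitions on the Ceph journal disk
    during first boot. Device name and sizes are placeholders.

  resources:
    userdata:
      type: OS::Heat::MultipartMime
      properties:
        parts:
        - config: {get_resource: create_journals}

    create_journals:
      type: OS::Heat::SoftwareConfig
      properties:
        config: |
          #!/bin/bash
          # Only act on Ceph Storage nodes; adjust the match to your naming scheme.
          if [[ $(hostname) != *ceph* ]]; then
            exit 0
          fi
          # Label the journal disk with GPT and create two journal partitions.
          parted -s /dev/sdb mklabel gpt
          parted -s /dev/sdb mkpart journal1 1MiB 5GiB
          parted -s /dev/sdb mkpart journal2 5GiB 10GiB

  outputs:
    OS::stack_id:
      value: {get_resource: userdata}

The template is wired in through an environment file that maps OS::TripleO::NodeUserData to it:

  resource_registry:
    OS::TripleO::NodeUserData: /home/stack/templates/firstboot/ceph-journals.yaml

and that environment file is then passed to the overcloud deploy command with -e.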
This bug is going to be targeted for a new Ceph Storage Configuration Guide for the director. This will be a whole new guide that aims to show how to configure the director and your Ceph nodes (both external and director-deployed).
Aiming to develop this guide during the Jan-Feb period.
Any updates on this? Or are there any other OSP 7 Ceph-related documents that can be shared?
Not yet. I'm aiming to put together some more comprehensive documentation for Ceph/OSPd over the next month. I'll be targeting OSP8, but we might release a similar version for OSP7.
Sent the Ceph Storage guide out for technical review. Switching Ceph bugs to POST.
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see email@example.com with any questions
So the Ceph Storage Guide was released for OSP 8, and it includes the following section on formatting disks with GPT labels:
Is this what you had in mind? Let me know if you had any suggestions for changes.
No response in over a month. Closing this BZ. However, if further changes are required, please feel free to reopen this BZ.