Bug 1267690

Summary: [director] document how to partition disks manually
Product: Red Hat OpenStack
Reporter: Mike Burns <mburns>
Component: documentation
Assignee: Dan Macpherson <dmacpher>
Status: CLOSED CURRENTRELEASE
QA Contact: RHOS Documentation Team <rhos-docs>
Severity: unspecified
Docs Contact:
Priority: high
Version: 7.0 (Kilo)
CC: dmacpher, dtantsur, gfidente, hbrock, jdonohue, jliberma, mburns, mcornea, morazi, racedoro, rhel-osp-director-maint, rnishtal, srevivo
Target Milestone: y1
Keywords: Documentation, FutureFeature, ZStream
Target Release: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 1252260
Environment:
Last Closed: 2016-06-16 04:40:56 UTC
Type: Bug
Bug Depends On: 1252260, 1256103    
Bug Blocks: 1190166, 1243520    

Comment 1 Mike Burns 2015-09-30 16:48:06 UTC
Giulio, can you provide the required information here?

Comment 2 Giulio Fidente 2015-10-02 10:52:04 UTC
Mike, I think the existing docs cover much of the issue, see:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Advanced-Scenario_3_Using_the_CLI_to_Create_an_Advanced_Overcloud_with_Ceph_Nodes.html#sect-Advanced-Configuring_Ceph_Storage

(if the link does not bring you to the relevant section on the first load, try reloading the page)


Regardless, the existing docs already say:

 The director does not create partitions on the journal disk. You must manually create these journal partitions before the Director can deploy the Ceph Storage nodes.

 The Ceph Storage OSDs and journals partitions require GPT disk labels, which you also configure prior to customization. For example, use the following command on the potential Ceph Storage host to create a GPT disk label for a disk or partition:

  # parted [device] mklabel gpt


This remains valid; I think we just need to add a command that creates the actual journal partition.

 The following will create a primary partition on [device] that spans the entire disk. For more options, refer to parted(8).

  # parted -a optimal [device] mkpart primary 0% 100%
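
For instance, assuming a dedicated journal disk at /dev/sdd shared by two OSDs (the device name and the 50/50 split are purely illustrative), the full sequence might look like:

  # parted /dev/sdd mklabel gpt
  # parted -a optimal /dev/sdd mkpart primary 0% 50%
  # parted -a optimal /dev/sdd mkpart primary 50% 100%

This would leave /dev/sdd1 and /dev/sdd2 available as journal partitions, one per OSD data disk.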

Comment 3 Giulio Fidente 2015-10-02 11:01:33 UTC
Maybe it is useful to state clearly that users are only required to partition disks manually if they want to use dedicated disks as journal devices.

In that case, only the journal disks need to be partitioned manually, using a GPT partition table. The OSD disk mapping config parameter should look something like:

ceph::profile::params::osds: {/dev/sdb: {journal: /dev/sdd1}, /dev/sdc: {journal: /dev/sdd2}}

where /dev/sdb and /dev/sdc are data disks and /dev/sdc{1,2} are the partitions to be used for the respective journals.

Comment 4 Giulio Fidente 2015-10-02 11:03:51 UTC
Correction: in the example in c#3, /dev/sdd{1,2} are the partitions to be used for the respective journals.
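
For reference, the same mapping expanded into multi-line YAML would look something like this (where exactly the parameter belongs, for example under ExtraConfig in a Heat environment file, depends on the deployment, so treat that placement as an assumption):

  ceph::profile::params::osds:
    /dev/sdb:
      journal: /dev/sdd1
    /dev/sdc:
      journal: /dev/sdd2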

Comment 5 Dan Macpherson 2016-05-13 01:24:37 UTC
I think I got this covered in the Ceph Guide:

https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/red-hat-ceph-storage-for-the-overcloud/#Formatting_Ceph_Storage_Nodes_Disks_to_GPT

Mike, Giulio -- Anything else I should add/modify for this section?

Comment 8 Dan Macpherson 2016-05-23 06:25:22 UTC
Removed admonition:
https://gitlab.cee.redhat.com/rhci-documentation/docs-Red_Hat_Enterprise_Linux_OpenStack_Platform/commit/e2876354b3364a4d7030069649ffe344a8c4e905

Will be published on the next cycle.

How does that sound, Giulio? Is it okay to close this BZ?

Comment 9 Giulio Fidente 2016-05-23 08:57:00 UTC
Yes thanks, I think we can close the BZ!

Comment 10 Dan Macpherson 2016-05-23 09:50:44 UTC
Closing BZ as verified. The final change will be published on the next cycle (which should be this week).

Comment 11 Dan Macpherson 2016-06-16 04:40:56 UTC
Changes now live on the customer portal.