Bug 1290269
Summary: | [Director] Sample ceph.yaml syntax for journals on dedicated disks seems wrong | ||
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Giulio Fidente <gfidente> |
Component: | documentation | Assignee: | Dan Macpherson <dmacpher> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | RHOS Documentation Team <rhos-docs> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | 7.0 (Kilo) | CC: | dmacpher, gfidente, jliberma, yeylon |
Target Milestone: | --- | Keywords: | Documentation |
Target Release: | 8.0 (Liberty) | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-03-01 01:33:56 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Giulio Fidente
2015-12-10 00:19:30 UTC
Hi Giulio,

Thanks for reporting, though I'm somewhat confused. jliberma reported that the syntax had changed due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1269329

You also made a comment in that bug that seems like the syntax is [disk]: [journal]: https://bugzilla.redhat.com/show_bug.cgi?id=1269329#c2

I'm confused because it looks like there's one syntax reported in BZ#1269329, and in this bug the reported syntax is the exact opposite. Can you clarify the distinction in syntax between this bug and BZ#1269329?

- Dan

---

Hi Dan, indeed it can get confusing, but the two bugs are about different things.

In bz #1269329 Jacob is asking to change the docs which describe how to use journals *colocated* on the data disks (the default). The correct syntax for this is:

```yaml
ceph::profile::params::osds:
  '/dev/sdb': {}
  '/dev/sdc': {}
  '/dev/sdd': {}
```

and it seems to be what we have in the docs, so I think that bz can be closed.

In this bz I am asking to change the docs which describe how to use *dedicated* journal disks. The correct syntax for that is:

```yaml
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb1'
  '/dev/sdd':
    journal: '/dev/sdb2'
```

I've updated the BZ title as well to explain this better.

---

Ah, gotcha. Thanks for the clarification. I'll update the docs accordingly.

This is now live on the portal: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Advanced-Scenario_3_Using_the_CLI_to_Create_an_Advanced_Overcloud_with_Ceph_Nodes.html#sect-Advanced-Configuring_Ceph_Storage

Giulio, how do these examples read now? Are there any further updates required?
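For context, a sketch of how the dedicated-journal mapping discussed above might be passed to an overcloud deployment, assuming the director's ExtraConfig hieradata hook is used (the file name and device names here are illustrative, not taken from this bug):

```yaml
# Hypothetical environment file, e.g. ~/templates/ceph-journals.yaml.
# Assumes the director's ExtraConfig hook injects this hieradata into
# the Ceph Storage nodes; device names are examples only.
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdc':
        journal: '/dev/sdb1'
      '/dev/sdd':
        journal: '/dev/sdb2'
```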
Hi Dan, the new samples are good and you can actually stop reading here :)

OR, given we can now create the journal partitions automatically too, we might change these samples from:

```yaml
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb1'
  '/dev/sdd':
    journal: '/dev/sdb2'
```

into:

```yaml
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb'
  '/dev/sdd':
    journal: '/dev/sdb'
```

and mention that when using the same device (/dev/sdb, as in the example) for multiple journals, multiple journal partitions will be created on it, one per data disk associated with it.

In doing so we should also update the final comment where we say:

"""
The director does not create partitions on the journal disk. You must manually create these journal partitions before the director can deploy the Ceph Storage nodes.
"""

as this can be removed now, and update the other sentence with:

"""
The Ceph Storage journal disks must have a pre-existing GPT disk label. Use the following command on the potential Ceph Storage host to create a GPT disk label on a disk:

# parted [device] mklabel gpt
"""

Hopefully we'll soon be able to remove this last limitation :(

---

Thanks, Giulio. I might close this one because Ben opened another bug for that same issue: https://bugzilla.redhat.com/show_bug.cgi?id=1299612

---

It's worth noting that we recommend SSD drives for dedicated OSD journal disks, at a ratio of no more than 5 journal partitions per SSD drive. For example, a Ceph node with 10 OSD disks should have 2 SSD journal disks with 5 partitions each (12 drives total).

We recommend colocated Ceph journals only for Ceph nodes that do not have SSD drives, at a ratio of 1 journal partition per OSD disk. We do not recommend using spinning disks as dedicated journal disks with multiple journal partitions, due to the high average seek time for random access.
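The "one partition per associated data disk" behaviour described above can be sketched as follows. This is an illustration of the resulting layout only, not the actual puppet-ceph implementation; the function name is hypothetical:

```python
# Sketch: when several OSD data disks name the same journal *device*
# (rather than a partition), one journal partition is created on that
# device per data disk. This models the resulting partition assignment.
def assign_journal_partitions(osds):
    """osds maps data disk -> journal device, e.g. '/dev/sdc' -> '/dev/sdb'."""
    next_part = {}   # journal device -> next free partition number
    layout = {}      # data disk -> journal partition it ends up using
    for data_disk in sorted(osds):
        journal_dev = osds[data_disk]
        n = next_part.get(journal_dev, 1)
        layout[data_disk] = f"{journal_dev}{n}"
        next_part[journal_dev] = n + 1
    return layout

print(assign_journal_partitions({'/dev/sdc': '/dev/sdb',
                                 '/dev/sdd': '/dev/sdb'}))
# -> {'/dev/sdc': '/dev/sdb1', '/dev/sdd': '/dev/sdb2'}
```

This matches the example in the comment: both /dev/sdc and /dev/sdd point at /dev/sdb, so /dev/sdb ends up with two journal partitions.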
Dedicated SSD journal disks provide a substantial performance improvement over journals colocated on spinning disks, so dedicated SSD disks should be used for journaling whenever possible. At some point Giulio asked me to document this upstream, but I never got around to it.
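The sizing rule stated above (at most 5 journal partitions per SSD) can be sketched as a small calculation; the function name is illustrative:

```python
import math

# Sketch of the recommended sizing rule: no more than 5 journal
# partitions per dedicated SSD journal disk, so a node with N spinning
# OSD disks needs ceil(N / 5) SSD journal disks.
def ssd_journals_needed(osd_disks, partitions_per_ssd=5):
    return math.ceil(osd_disks / partitions_per_ssd)

# The example from the comment: 10 OSD disks -> 2 SSD journal disks
# (12 drives in the node in total).
print(ssd_journals_needed(10))  # -> 2
```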