Red Hat Bugzilla – Bug 1290269
[Director] Sample ceph.yaml syntax for journals on dedicated disks seems wrong
Last modified: 2016-03-06 20:47:13 EST
Description of problem:
The sample syntax to use in ceph.yaml to decouple the ceph data disks and the ceph journal disks seems wrong. In section 6.3.6 at:
For this example, puppet/hieradata/ceph.yaml would contain the following:
but the correct syntax is:
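As a hedged sketch of the dedicated-journal layout being described, assuming puppet-ceph's `ceph::profile::params::osds` hieradata key and illustrative device names (not taken from the bug):

```yaml
# Each data disk maps to the device holding its journal ([disk]: [journal]).
ceph::profile::params::osds:
  '/dev/sdb':
    journal: '/dev/sdd'
  '/dev/sdc':
    journal: '/dev/sdd'
```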
Thanks for reporting, though I'm somewhat confused. jliberma reported that the syntax had changed due to this bug:
You also made a comment in that bug suggesting that the syntax is [disk]: [journal].

I'm confused because it looks like there's one syntax reported in BZ#1269329 and in this bug the syntax reported is the exact opposite.
Can you clarify the distinction in syntax between this bug and BZ#1269329?
Hi Dan, indeed it might get confusing, but the two bugs are about different things.
In bz #1269329 Jacob is asking to change the docs which describe how to use journals *colocated* on the data disks (the default). The correct syntax for this is:
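For illustration, the colocated-journal syntax probably looks like the following, assuming puppet-ceph's `ceph::profile::params::osds` hieradata key (device names are illustrative):

```yaml
# Journals colocated on the data disks (the default): each data disk
# maps to an empty hash, so its journal lives on the same device.
ceph::profile::params::osds:
  '/dev/sdb': {}
  '/dev/sdc': {}
```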
and it seems to be what we have in the docs, so I think that bz can be closed.
In this bz I am asking to change the docs which describe how to use *dedicated* disks. The correct syntax for that is:
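A hedged sketch of the dedicated-disk syntax in question, again assuming the puppet-ceph `ceph::profile::params::osds` key with illustrative device names:

```yaml
# Dedicated journals: each data disk maps to its journal device,
# i.e. [disk]: [journal] rather than the colocated [disk]: {} form.
ceph::profile::params::osds:
  '/dev/sdb':
    journal: '/dev/sdd'
  '/dev/sdc':
    journal: '/dev/sdd'
```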
I've updated the BZ title as well to explain this better.
Ah gotcha. Thanks for the clarification. I'll update the docs accordingly.
This is now live on the portal:
Giulio, how do these examples read now? Are there any further updates required?
the new samples are good and you can actually stop reading here :)
Given that we can now create the journal partitions automatically too, we might change these samples from:
and mention that when the same device (/dev/sdb, as in the example) is used for multiple journals, multiple journal partitions will be created on it, one for each data disk associated with it.
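A sketch of the multi-journal case being described, using the same assumed `ceph::profile::params::osds` hieradata key (device names illustrative):

```yaml
# /dev/sdb serves as the journal device for two data disks, so two
# journal partitions would be created on it, one per data disk.
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb'
  '/dev/sdd':
    journal: '/dev/sdb'
```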
In doing so we should also update the final comment where we say:
The director does not create partitions on the journal disk. You must manually create these journal partitions before the Director can deploy the Ceph Storage nodes.
as this can be removed now, and update the other sentence with:
The Ceph Storage journal disks must have a pre-existing GPT label. Use the following command on the potential Ceph Storage host to create a GPT disk label on a disk:
# parted [device] mklabel gpt
Hopefully we'll soon be able to remove this last limitation :(
Thanks, Giulio. I might close this one because Ben opened another bug for that same issue:
It's worth noting that we recommend SSD drives for dedicated OSD journal disks at a ratio of no more than 5 journal partitions per SSD drive. A Ceph node with 10 OSD disks should have 2 SSD journal disks with 5 partitions each (12 drives total).
We recommend colocated Ceph journals only for Ceph nodes that do not have SSD drives at a ratio of 1 journal partition per OSD disk. We do not recommend using spinning disks as dedicated journal disks with multiple journal partitions due to the high average seek time for random access.
There is a substantial performance improvement associated with using dedicated SSD disks for journaling over colocated spinning disks, so dedicated SSD disks should be used for journaling whenever possible.
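The recommended 5:1 layout above could be expressed in hieradata roughly as follows (the `ceph::profile::params::osds` key and all device names are assumptions for illustration):

```yaml
# 10 OSD data disks (sdc..sdl) journaled across 2 SSDs (sda, sdb),
# 5 journal partitions per SSD.
ceph::profile::params::osds:
  '/dev/sdc': { journal: '/dev/sda' }
  '/dev/sdd': { journal: '/dev/sda' }
  '/dev/sde': { journal: '/dev/sda' }
  '/dev/sdf': { journal: '/dev/sda' }
  '/dev/sdg': { journal: '/dev/sda' }
  '/dev/sdh': { journal: '/dev/sdb' }
  '/dev/sdi': { journal: '/dev/sdb' }
  '/dev/sdj': { journal: '/dev/sdb' }
  '/dev/sdk': { journal: '/dev/sdb' }
  '/dev/sdl': { journal: '/dev/sdb' }
```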
At some point Giulio asked me to document this upstream but I never got around to it.