Bug 1290269 - [Director] Sample ceph.yaml syntax for journals on dedicated disks seems wrong
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assigned To: Dan Macpherson
QA Contact: RHOS Documentation Team
Keywords: Documentation
Depends On:
Reported: 2015-12-09 19:19 EST by Giulio Fidente
Modified: 2016-03-06 20:47 EST
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-02-29 20:33:56 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Giulio Fidente 2015-12-09 19:19:30 EST
Description of problem:
The sample syntax used in ceph.yaml to decouple the Ceph data disks from the Ceph journal disks seems wrong. In section 6.3.6 at:


we say:

  For this example, puppet/hieradata/ceph.yaml would contain the following:

      '/dev/sdc': '/dev/sdb1'
      '/dev/sdd': '/dev/sdb2'

but the correct syntax is:

      '/dev/sdc':
        journal: '/dev/sdb1'
      '/dev/sdd':
        journal: '/dev/sdb2'
Comment 1 Dan Macpherson 2015-12-09 22:42:00 EST
Hi Giulio,

Thanks for reporting, though I'm somewhat confused. jliberma reported that the syntax had changed due to this bug:


You also made a comment in that bug that suggests the syntax is [disk]: [journal]


I'm confused because the syntax reported in this bug looks like the exact opposite of the one reported in BZ#1269329.

Can you clarify the distinction in syntax between this bug and BZ#1269329?

- Dan
Comment 2 Giulio Fidente 2015-12-10 04:48:24 EST
Hi Dan, indeed it might get confusing, but the two bugs are about different things.

In bz #1269329 Jacob is asking to change the docs which describe how to use journals *colocated* on the data disks (the default). The correct syntax for this is:

    '/dev/sdb': {}
    '/dev/sdc': {}
    '/dev/sdd': {}

and it seems to be what we have in the docs, so I think that bz can be closed.

In this bz I am asking to change the docs which describe how to use *dedicated* disks. The correct syntax for that is:

      '/dev/sdc':
        journal: '/dev/sdb1'
      '/dev/sdd':
        journal: '/dev/sdb2'

I've updated the BZ title as well to explain this better.
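
For context, the dedicated-journal layout above can be sketched as a fuller ceph.yaml fragment. This assumes the ceph::profile::params::osds hiera key used by puppet-ceph; the key name is an assumption here, not quoted from this bug:

```yaml
# Hypothetical ceph.yaml fragment: one dedicated journal disk (/dev/sdb)
# shared by two data disks, each mapped to a pre-created journal partition.
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb1'
  '/dev/sdd':
    journal: '/dev/sdb2'
```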
Comment 3 Dan Macpherson 2015-12-10 12:31:36 EST
Ah gotcha. Thanks for the clarification. I'll update the docs accordingly.
Comment 6 Giulio Fidente 2016-02-15 06:08:24 EST
hi Dan,

the new samples are good and you can actually stop reading here :)


Given that we can now create the journal partitions automatically too, we might change these samples from:

          '/dev/sdc':
            journal: '/dev/sdb1'
          '/dev/sdd':
            journal: '/dev/sdb2'

to:

          '/dev/sdc':
            journal: '/dev/sdb'
          '/dev/sdd':
            journal: '/dev/sdb'

and mention that when the same device (/dev/sdb, as in the example) is used for multiple journals, multiple journal partitions will be created on it, one per data disk associated with it.
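
In context, the whole-device variant would read something like the following (again assuming the ceph::profile::params::osds hiera key from puppet-ceph):

```yaml
# Hypothetical fragment: both OSDs journal on /dev/sdb; one journal
# partition is created on /dev/sdb automatically for each data disk.
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb'
  '/dev/sdd':
    journal: '/dev/sdb'
```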

In doing so we should also update the final comment where we say:

 The director does not create partitions on the journal disk. You must manually create these journal partitions before the Director can deploy the Ceph Storage nodes.

as this can be removed now, and update the other sentence with:

 The Ceph Storage journal disks must have a pre-existing GPT label. Use the following command on the potential Ceph Storage host to create a GPT disk label on a disk:

# parted [device] mklabel gpt

Hopefully we'll soon be able to remove this last limitation :(
Comment 7 Dan Macpherson 2016-02-15 07:15:27 EST
Thanks, Giulio. I might close this one because Ben opened another bug for that same issue: 

Comment 9 jliberma@redhat.com 2016-03-06 20:47:13 EST
It's worth noting that we recommend SSD drives for dedicated OSD journal disks, at a ratio of no more than 5 journal partitions per SSD drive. A Ceph node with 10 OSD disks should have 2 SSD journal disks with 5 partitions each (12 drives total).

We recommend colocated Ceph journals only for Ceph nodes that do not have SSD drives at a ratio of 1 journal partition per OSD disk. We do not recommend using spinning disks as dedicated journal disks with multiple journal partitions due to the high average seek time for random access.
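
The sizing rule of thumb above can be sketched as a small helper. This is a hypothetical illustration of the arithmetic, not tooling shipped with the product:

```python
import math

# Recommended ceiling from the comment above: at most 5 journal
# partitions per dedicated SSD journal disk.
JOURNALS_PER_SSD = 5

def journal_ssds_needed(osd_disks: int) -> int:
    """Dedicated journal SSDs needed for a given OSD data-disk count."""
    return math.ceil(osd_disks / JOURNALS_PER_SSD)

# Example from the comment: 10 OSD data disks -> 2 journal SSDs,
# so 12 drives in the node overall.
ssds = journal_ssds_needed(10)
print(ssds, 10 + ssds)  # 2 12
```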

There is a substantial performance improvement associated with using dedicated SSD disks for journaling over colocated spinning disks, so dedicated SSD disks should be used for journaling whenever possible. 

At some point Giulio asked me to document this upstream but I never got around to it.
