Bug 1655168 - Support multiple NVMe installation according to best practices
Summary: Support multiple NVMe installation according to best practices
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: z2
Target Release: 3.3
Assignee: Alfredo Deza
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks: 1641792 1656135 1656454
 
Reported: 2018-11-30 20:18 UTC by Douglas Fuller
Modified: 2019-09-30 19:28 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1656135
Environment:
Last Closed: 2019-09-30 19:28:55 UTC
Embargoed:



Description Douglas Fuller 2018-11-30 20:18:01 UTC
As of RHCS 3.1, the recommended configuration for installation with multiple NVMe devices is to place RGW bucket index OSDs and all journals on NVMe devices with the remaining OSDs on spinning disk.

With BlueStore, the equivalent configuration places the bucket index OSDs' db partitions on the NVMe devices, with all data partitions on spinning disk.
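
For illustration, a hedged per-OSD sketch of the two layouts as ceph-disk invocations; the device names are assumptions, not taken from this report:

# Filestore (RHCS 3.1 recommendation): data on spinning disk, journal on NVMe
ceph-disk prepare --filestore /dev/sda /dev/nvme0n1
# BlueStore equivalent: data on spinning disk, block.db on NVMe
ceph-disk prepare --bluestore --block.db /dev/nvme0n1 /dev/sda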

The only way to deploy this configuration currently is to use ceph-disk with ceph-ansible's 'non-collocated' scenario.
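
For reference, a minimal sketch of what driving this through ceph-ansible's 'non-collocated' scenario could look like; the file path, device names, and OSD count are illustrative assumptions:

# group_vars/osds.yml (hypothetical example values)
cat > group_vars/osds.yml <<'EOF'
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:            # data devices (spinning disks)
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:  # one entry per data device; holds that OSD's block.db
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme1n1
EOF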

The strategy used by ceph-volume lvm batch places all NVMe devices in a single volume group and carves the db LVs from it. Current best practices call for each bucket index db to be located entirely on a single device, with no sharing of devices.
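
As context, a hedged illustration of the batch behaviour described above; device names are assumptions, and --report only prints the proposed layout without creating anything:

ceph-volume lvm batch --bluestore --report \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/nvme0n1 /dev/nvme1n1
# With mixed devices, the data LVs land on the spinning disks while the
# block.db LVs are carved from one VG built across both NVMe devices,
# rather than each db being pinned to a single NVMe device.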

We need to determine whether we will:
* Support this type of installation with ceph-disk for RHCS 3.2
* Enhance ceph-volume to automatically map particular OSD db partitions to individual NVMe devices
* Enhance ceph-ansible to enable users to create LVs manually and drive ceph-volume to assign LVs for db partitions (like osd_scenario: non-collocated); a sketch of this approach follows below

Any of these approaches (or some other approach) is acceptable, but one is needed.
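
To make the third option concrete, a hedged sketch of pre-creating one db LV per NVMe device and handing the LVs to ceph-volume; the VG/LV names, sizes, and devices are illustrative assumptions:

# One VG per NVMe device, so a db LV never spans devices
pvcreate /dev/nvme0n1 /dev/nvme1n1
vgcreate ceph-db-0 /dev/nvme0n1
vgcreate ceph-db-1 /dev/nvme1n1
lvcreate -n db-sda -L 60G ceph-db-0
lvcreate -n db-sdb -L 60G ceph-db-1

# Each OSD's data stays on a spinning disk; its block.db lives entirely on one NVMe
ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-0/db-sda
ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-1/db-sdb

In ceph-ansible terms this would roughly map to the 'lvm' scenario's lvm_volumes entries (data/data_vg/db/db_vg), which is essentially what the enhancement above asks for.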

Comment 4 Christina Meno 2018-11-30 22:46:58 UTC
Alfredo, would you please comment on what our current options are to align with best practices?

Comment 5 Douglas Fuller 2018-12-03 16:33:35 UTC
The conclusion from discussions with Alfredo and Andrew was that ceph-disk non-collocated will be the supported installation method for RHCS 3.2.

Going forward, the performance impact of grouping all bucket index db partitions in a single VG composed of all NVMe devices will be examined and the results will drive recommendations for RHCS 4.0.

Comment 6 Douglas Fuller 2018-12-03 16:35:04 UTC
Are there doc implications for this recommendation?

Comment 9 Alfredo Deza 2019-09-26 14:34:53 UTC
This has already been addressed in ceph-ansible; I don't see anything that ceph-volume could do here to handle it correctly. I think we can close this.

Comment 10 Douglas Fuller 2019-09-30 19:28:55 UTC
Agreed.

