Bug 1602919 - Deploying with osd_scenario=lvm for optimal hardware usage on OSD nodes
Summary: Deploying with osd_scenario=lvm for optimal hardware usage on OSD nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.1
Assignee: John Brier
QA Contact: Tiffany Nguyen
URL:
Whiteboard:
Depends On: 1589154 1593868
Blocks: 1581350 1591074 1593418
Reported: 2018-07-18 19:55 UTC by John Harrigan
Modified: 2018-09-27 13:40 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-27 13:40:29 UTC
Embargoed:



Description John Harrigan 2018-07-18 19:55:21 UTC
Description of problem:
Support is being added to the installer/ceph-ansible to allow users to
more effectively utilize NVMe devices on their OSD nodes. Documentation
additions in the "Ceph Object Gateway for Production" and "Administration"
guides will be required.

Additional info:
The installer needs to support placement of performance-sensitive
Ceph OSD elements, such as Filestore journals and RGW pools
(for example, the bucket index), on NVMe devices, with OSD data
on HDDs. Document the rationale for deploying Ceph in this
way and the actual installation procedure for several hardware
configurations.
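As a sketch of the kind of deployment this enables, a group_vars/osds.yml fragment for ceph-ansible using osd_scenario=lvm might look like the following. The volume group and logical volume names are hypothetical, and the exact variables should be checked against the group_vars examples shipped with the installed ceph-ansible version:

```yaml
# Hypothetical osds.yml fragment for osd_scenario=lvm (names illustrative).
# The VGs/LVs (vg_hdd1, vg_nvme, ...) must already exist on the OSD node,
# created beforehand with vgcreate/lvcreate.
osd_scenario: lvm
osd_objectstore: filestore
lvm_volumes:
  # OSD data on HDD-backed LVs, Filestore journals on NVMe-backed LVs
  - data: data_lv1
    data_vg: vg_hdd1
    journal: journal_lv1
    journal_vg: vg_nvme
  - data: data_lv2
    data_vg: vg_hdd2
    journal: journal_lv2
    journal_vg: vg_nvme
```

Placing only the journals (and latency-sensitive RGW pools such as the bucket index) on NVMe lets one NVMe device serve several HDD-backed OSDs, which is the hardware-usage pattern this bug asks to be documented.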

Comment 3 John Harrigan 2018-07-30 17:36:28 UTC
I noted a few places in the "Ceph Object Gateway for Production" guide where this change will require updates:

Section 3.6 Bucket Indexes - currently recommends using one SSD for BI and one for journals. That needs to be updated.

Section 5.1 CRUSH - currently has the following recommendations
  * do not create partitions for BI on SSDs used for OSD journals
  * do not use the same SSD drive to store journals, bucket indexes and data

Comment 4 John Harrigan 2018-08-01 14:41:04 UTC
The procedure needs to cover both usage of this playbook and the teardown procedure, which will be manual and NOT covered by the playbook.

Comment 9 Tiffany Nguyen 2018-08-31 19:33:01 UTC
Please provide the documentation link for review.

Comment 14 John Brier 2018-09-27 13:40:29 UTC
Closing as these changes were published along with the Ceph 3.1 bits yesterday for GA:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_object_gateway_for_production/#using-nvme-with-lvm-optimally

