Bug 1602919

Summary: Deploying with osd_scenario=lvm for optimal hardware usage on OSD nodes
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: John Harrigan <jharriga>
Component: Documentation
Assignee: John Brier <jbrier>
Status: CLOSED CURRENTRELEASE
QA Contact: Tiffany Nguyen <tunguyen>
Severity: high
Docs Contact:
Priority: high
Version: 3.1
CC: agunn, amaredia, asriram, ceph-qe-bugs, hnallurv, jbrier, kdreyer, nojha, tunguyen
Target Milestone: rc
Target Release: 3.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-27 13:40:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1589154, 1593868
Bug Blocks: 1581350, 1591074, 1593418

Description John Harrigan 2018-07-18 19:55:21 UTC
Description of problem:
Support is being added to the installer/ceph-ansible to allow users to
utilize NVMe devices on their OSD nodes more effectively. Documentation
additions in the "Ceph Object Gateway for Production" and "Administrative"
guides will be required.

Additional info:
The installer needs to support placement of performance-sensitive
Ceph OSD elements, such as FileStore journals and RGW pools
(for example, the bucket index pool), on NVMe devices along with
OSD data on HDDs. Document the rationale for deploying Ceph in this
way and the actual installation procedure for several hardware
configurations.
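As a rough illustration of the layout described above, a ceph-ansible group_vars sketch using `osd_scenario: lvm` might look like the following. This is a minimal sketch only; the device paths and partition layout are hypothetical and must match the actual hardware on each OSD node.

```yaml
# group_vars/osds.yml -- illustrative sketch; device paths are hypothetical
osd_scenario: lvm
osd_objectstore: filestore

# OSD data on HDDs, with FileStore journals placed on
# partitions of a shared NVMe device
lvm_volumes:
  - data: /dev/sda
    journal: /dev/nvme0n1p1
  - data: /dev/sdb
    journal: /dev/nvme0n1p2
```

With this layout, the RGW bucket index pool can additionally be mapped to NVMe-backed OSDs via CRUSH, which is the kind of placement the requested documentation needs to explain.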

Comment 3 John Harrigan 2018-07-30 17:36:28 UTC
I noted a few places in the "Ceph Object Gateway for Production" guide where this change will require updates:

Section 3.6 Bucket Indexes - currently recommends using one SSD for bucket indexes and one for journals. That needs to be updated.

Section 5.1 CRUSH - currently has the following recommendations
  * do not create partitions for BI on SSDs used for OSD journals
  * do not use the same SSD drive to store journals, bucket indexes and data

Comment 4 John Harrigan 2018-08-01 14:41:04 UTC
The procedure needs to cover both usage of this playbook and the teardown procedure, which is manual and NOT covered by the playbook.

Comment 9 Tiffany Nguyen 2018-08-31 19:33:01 UTC
Please provide a link to the documentation for review.

Comment 14 John Brier 2018-09-27 13:40:29 UTC
Closing as these changes were published along with the Ceph 3.1 bits yesterday for GA:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_object_gateway_for_production/#using-nvme-with-lvm-optimally