Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1637523

Summary: [Doc RFE] Document how to partition NVMe-SSDs to optimize performance
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Anjana Suparna Sriram <asriram>
Component: Documentation
Assignee: John Brier <jbrier>
Status: CLOSED CURRENTRELEASE
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact:
Priority: high
Version: 3.2
CC: ceph-qe-bugs, hnallurv, jbrier, kdreyer, pasik, vashastr
Target Milestone: rc
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-01-23 09:59:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1652475
Bug Blocks: 1629585

Description Anjana Suparna Sriram 2018-10-09 12:11:40 UTC
User Story: As a storage admin who has Ceph running on NVMe-SSD OSDs, I need a convenient way to partition the NVMe SSDs so that I can optimize performance.

Content Plan Reference: https://docs.google.com/document/d/1Nxnh6XxpTiDO2TANEw5pvXZ0nYUwf36zTaqxCm0014w/edit#
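For illustration, a minimal sketch of the kind of configuration this could document, using ceph-ansible and ceph-volume lvm batch. The device paths and the osds_per_device value are assumptions for illustration, not values taken from the content plan:

    # group_vars/osds.yml (illustrative values)
    osd_objectstore: bluestore
    osd_scenario: lvm
    osds_per_device: 4            # split each NVMe into four OSDs; the count is an assumption
    devices:
      - /dev/nvme0n1              # hypothetical device paths
      - /dev/nvme1n1

Roughly, this drives a command along the lines of:

    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1 /dev/nvme1n1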

Comment 6 John Brier 2018-11-02 22:53:42 UTC
Note: Pantheon is updated now, so you don't have to use the Jenkins build links.

Comment 7 Vasishta 2018-11-05 13:34:56 UTC
Hi John,

Based on https://bugzilla.redhat.com/show_bug.cgi?id=1541415#c50 and the comments that follow, it seems that support for this feature has also been provided for the containerized scenario.

Can you please add the new section to the container guide as well?

Regards,
Vasishta Shastry
QE, Ceph

Comment 12 Vasishta 2018-11-30 12:06:18 UTC
Hi John,

Thanks for the changes.

Here are a couple of other small changes that are required.

1) Ref - https://bugzilla.redhat.com/show_bug.cgi?id=1652475#c6

In the part of the doc where we mention ceph-volume lvm batch, we could also mention block_db_size, which lets users reserve a specific amount of space for each block.db. That keeps adding OSDs later an option, instead of the initial OSDs being allocated all of the available space. (A sketch follows the reference link below.)

Refer - https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.sample#L365
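A minimal sketch of what that could look like; the size value below is purely an assumption for illustration, not a recommendation:

    # group_vars/all.yml (illustrative value; the default of -1 lets ceph-volume size the DB itself)
    block_db_size: 21474836480    # 20 GiB per block.db, in bytes

    # roughly the flag this maps to on the ceph-volume side:
    ceph-volume lvm batch --block-db-size 21474836480 <devices>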


2) ----- Optional, but I think it would be better if we added this -----

We could either rename the section to something like "Configuring OSD Ansible settings for all NVMe storage", or add a sentence after step 3 asking users to go back to the previous section (step 6), from which they probably arrived at this section.


Regards,
Vasishta Shastry
QE, Ceph

Comment 13 Vasishta 2018-12-03 14:27:17 UTC
Hi John, 

I'm sorry

(In reply to Vasishta from comment #12)
> Hi John,
> 
> Thanks for the changes.
> 
> Here are another other small changes required.
> 
> 1) Ref - https://bugzilla.redhat.com/show_bug.cgi?id=1652475#c6
> 
> In doc where we mention about ceph-volume lvm batch - We can mention about
> block_db_size which would allow users to reserve specific amount of memory
> for blockdb so that adding OSDs can be an option preventing maximum
> allocation for initial OSDs
> 
> Refer -
> https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.sample#L365
>

The above point doesn't apply here, where there are only NVMe devices; it applies when there is a mixture of NVMe devices and HDDs.

Please ignore it and consider only point 2.
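For that mixed case, a minimal sketch of the kind of setup where block_db_size becomes relevant; the device paths and the size are assumptions for illustration only:

    # group_vars/osds.yml (illustrative)
    osd_objectstore: bluestore
    osd_scenario: lvm
    devices:
      - /dev/sda                  # hypothetical HDDs holding the OSD data
      - /dev/sdb
      - /dev/nvme0n1              # hypothetical NVMe that ceph-volume lvm batch uses for block.db

    # group_vars/all.yml (illustrative)
    block_db_size: 21474836480    # cap each block.db at 20 GiB so NVMe space is left for OSDs added later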

Regards,
Vasishta Shastry
QE, Ceph

Comment 17 Vasishta 2018-12-14 06:25:11 UTC
Hi John,

Thanks for the update.
Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 18 Anjana Suparna Sriram 2019-01-23 09:59:36 UTC
Published on the Customer Portal as part of the RHCS 3.2 GA on 3 January 2019.