Bug 1637523 - [Doc RFE] Document how to partition NVMe-SSDs to optimize performance
Summary: [Doc RFE] Document how to partition NVMe-SSDs to optimize performance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.2
Assignee: John Brier
QA Contact: Vasishta
URL:
Whiteboard:
Depends On: 1652475
Blocks: 1629585
 
Reported: 2018-10-09 12:11 UTC by Anjana Suparna Sriram
Modified: 2019-01-23 09:59 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-23 09:59:36 UTC
Embargoed:



Description Anjana Suparna Sriram 2018-10-09 12:11:40 UTC
User Story: As a storage admin who has Ceph running on NVMe-SSD OSDs, I need a convenient way to partition the NVMe-SSDs so I can optimize performance.

Content Plan Reference: https://docs.google.com/document/d/1Nxnh6XxpTiDO2TANEw5pvXZ0nYUwf36zTaqxCm0014w/edit#
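
For illustration, a minimal sketch of the kind of workflow this documentation should cover, assuming the ceph-volume lvm batch interface and ceph-ansible; the device names and the count of four OSDs per device are examples only:

  # Create multiple OSDs per NVMe device with ceph-volume (example devices)
  ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1 /dev/nvme1n1

  # Roughly equivalent ceph-ansible settings in group_vars/osds.yml (sketch)
  osd_scenario: lvm
  osds_per_device: 4
  devices:
    - /dev/nvme0n1
    - /dev/nvme1n1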

Comment 6 John Brier 2018-11-02 22:53:42 UTC
Note: Pantheon is updated now, so you don't have to use the Jenkins build links.

Comment 7 Vasishta 2018-11-05 13:34:56 UTC
Hi John,

Based on https://bugzilla.redhat.com/show_bug.cgi?id=1541415#c50 and further comments, it seems that support for this feature has also been provided for the containerized scenario.

Can you please add the new section to the Container Guide as well?

Regards,
Vasishta Shastry
QE, Ceph

Comment 12 Vasishta 2018-11-30 12:06:18 UTC
Hi John,

Thanks for the changes.

Here are a couple of other small changes required.

1) Ref - https://bugzilla.redhat.com/show_bug.cgi?id=1652475#c6

In the doc where we mention ceph-volume lvm batch, we can also mention block_db_size, which would allow users to reserve a specific amount of space for block.db so that adding OSDs later remains an option, preventing the initial OSDs from taking the maximum allocation (a sketch follows point 2 below).

Refer - https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.sample#L365


2) Optional, but I think it would be better if we add the following:

Either we can change the name of the section to something like "Configuring OSD Ansible settings on all NVMe storage", or add a sentence after Step 3 asking users to go back to the previous section (Step 6) from which they might have come to this section.
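
Regarding point 1, a minimal sketch of what that could look like, assuming an example size; in the ceph-ansible sample file, block_db_size is given in bytes:

  # group_vars/all.yml (sketch; the size value is an example only)
  # Reserve a fixed block.db size per OSD instead of letting
  # ceph-volume lvm batch give the initial OSDs the maximum allocation,
  # leaving room to add OSDs later.
  block_db_size: 64424509440   # 60 GB in bytes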


Regards,
Vasishta Shastry
QE, Ceph

Comment 13 Vasishta 2018-12-03 14:27:17 UTC
Hi John, 

I'm sorry

(In reply to Vasishta from comment #12)
> Hi John,
> 
> Thanks for the changes.
> 
> Here are another other small changes required.
> 
> 1) Ref - https://bugzilla.redhat.com/show_bug.cgi?id=1652475#c6
> 
> In doc where we mention about ceph-volume lvm batch - We can mention about
> block_db_size which would allow users to reserve specific amount of memory
> for blockdb so that adding OSDs can be an option preventing maximum
> allocation for initial OSDs
> 
> Refer -
> https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.
> sample#L365
>

The above point doesn't apply here, where there are only NVMe devices, but it does apply when there is a mixture of NVMe devices and HDDs.

Please ignore and consider point 2 only.

Regards,
Vasishta Shastry
QE, Ceph

Comment 17 Vasishta 2018-12-14 06:25:11 UTC
Hi John,

Thanks for the update.
Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 18 Anjana Suparna Sriram 2019-01-23 09:59:36 UTC
Published on the Customer Portal as part of the RHCS 3.2 GA on 3rd January 2019.

