Bug 1663663 - [RFE][HPE-OSP16] Generic RH Ceph sizing tool
Summary: [RFE][HPE-OSP16] Generic RH Ceph sizing tool
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph
Version: 1.0 (Essex)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: ---
Assignee: Giulio Fidente
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks: 1595325 hpeosp16rfe
 
Reported: 2019-01-06 07:47 UTC by Vinayak
Modified: 2020-09-16 14:19 UTC
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-16 14:19:33 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Vinayak 2019-01-06 07:47:04 UTC
Description of problem:
Can Red Hat provide a baseline sizing tool for RH Ceph on OpenStack? It could include a set of baseline recommendations and guidelines on how performance varies as different parameters change. The tool need not cater to every possible configuration in the field, but a baseline set of values would definitely help guide the field when responding to opportunities.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 John Fulton 2019-01-09 17:02:26 UTC
Perhaps PM can provide some guidance here (setting a needinfo). This doesn't really seem like a bug in the product, so I'm not sure Bugzilla is the right way to track it.

I'd say that this information is available in our existing documentation [1], in addition to reference architectures that the Ceph Solution Architects and the Performance Engineering group have already written [2], but PM or our SAs would be a better contact to help with this.

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/

[2] For example:
https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-selection-guide

Comment 5 Vinayak 2019-02-20 07:20:59 UTC
Hi,

I could not access the second URL: https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds.

Are there any published performance results (FileStore and BlueStore) for capacity-optimized, IOPS-optimized, and throughput-optimized requirements across varying read/write access patterns? Also, are there any scale-test results? For example, how did IOPS change when additional OSDs were added to the Ceph nodes, or when more Ceph nodes were added to the cluster? How did the results vary when the storage or storage-network bandwidth changed?

These inputs could help us derive a very rough Ceph sizing tool: given an input IOPS or capacity requirement, what would the size of the Ceph cluster be, i.e. the number of Ceph nodes, OSDs per node (HDD- or SSD-based), storage and storage-management networking requirements, suggested replication factor, etc.?
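The kind of rough sizing logic being requested could be sketched as follows. This is purely illustrative: the per-OSD capacity, per-OSD IOPS, OSDs-per-node, and replication defaults are placeholder assumptions, not Red Hat recommendations, and a real tool would need measured baselines for each media type and access pattern.

```python
import math

def size_ceph_cluster(required_capacity_tb, required_iops,
                      osd_capacity_tb=4.0,   # assumed usable capacity per OSD (TB)
                      osd_iops=150,          # assumed sustained IOPS per HDD OSD
                      osds_per_node=12,      # assumed OSDs per Ceph node
                      replication=3):        # replicated pool size
    """Very rough Ceph cluster sizing sketch with placeholder assumptions."""
    # Replication multiplies the raw capacity needed for a given usable capacity.
    raw_capacity_tb = required_capacity_tb * replication
    osds_for_capacity = math.ceil(raw_capacity_tb / osd_capacity_tb)
    # Simplification: treat every client I/O as hitting all replicas, so the
    # cluster must absorb required_iops * replication at the OSD level.
    osds_for_iops = math.ceil(required_iops * replication / osd_iops)
    osds = max(osds_for_capacity, osds_for_iops)
    # At least 3 nodes so each replica can land on a distinct host.
    nodes = max(3, math.ceil(osds / osds_per_node))
    return {"osds": osds, "nodes": nodes, "raw_capacity_tb": raw_capacity_tb}

# Example: 100 TB usable at 5,000 IOPS with the placeholder defaults.
print(size_ceph_cluster(100, 5000))
```

With the defaults above, 100 TB usable at 5,000 IOPS works out to 300 TB raw, 100 OSDs (the IOPS requirement dominates), and 9 nodes. The point is the shape of the calculation, sizing by the stricter of the capacity and IOPS constraints, rather than the particular numbers.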

Regards,
Vinayak

Comment 6 Antonios Dakopoulos 2020-07-08 13:00:56 UTC
Hello HPE team,
Do we still need to track this Ceph Sizing tool BZ?

Any feedback/confirmation would be appreciated.

Thank you,
Antonios

Comment 7 Antonios Dakopoulos 2020-09-16 13:01:41 UTC
Hello HPE team,
Please let me know whether we still need to track this BZ, and whether we require a follow-up with Ceph Eng/PM.

Thank you,
Antonios

Comment 8 Vinayak 2020-09-16 13:34:48 UTC
Hi Antonios,

Yes, you may close this BZ. We can track the need to discuss with Ceph Eng/PM externally.

Regards,
Vinayak

Comment 9 Antonios Dakopoulos 2020-09-16 14:19:33 UTC
HPE confirmed that this RFE/BZ can be CLOSED.

