
Bug 1663663

Summary: [RFE][HPE-OSP16] Generic RH Ceph sizing tool
Product: Red Hat OpenStack
Reporter: Vinayak <vinayak.ram>
Component: ceph
Assignee: Giulio Fidente <gfidente>
Status: CLOSED NOTABUG
QA Contact: Yogev Rabl <yrabl>
Severity: unspecified
Docs Contact:
Priority: low
Version: 1.0 (Essex)
CC: adakopou, brault, jdurgin, johfulto, lhh, mushtaq.ahmed, vinayak.ram
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-09-16 14:19:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1595325, 1663045

Description Vinayak 2019-01-06 07:47:04 UTC
Description of problem:
Can Red Hat provide a baseline sizing tool for Red Hat Ceph on OpenStack? This could include a set of baseline recommendations and guidelines on how performance would vary as different parameters are changed. The tool does not need to cater to every possible configuration in the field, but a baseline set of values would definitely help guide the field when responding to opportunities.

Comment 3 John Fulton 2019-01-09 17:02:26 UTC
Perhaps PM can provide some guidance here (setting a needinfo). This doesn't really seem like a bug in the product, so I'm not sure Bugzilla is the right way to track this.

I'd say this information is available in our existing documentation [1], in addition to the reference architectures that the Ceph Solution Architects and the Performance Engineering group have already written [2], but PM or our SAs would be better contacts to help with this.

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/

[2] For example:
https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-selection-guide

Comment 5 Vinayak 2019-02-20 07:20:59 UTC
Hi,

I could not access the second URL - https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds.

Are there any published performance results (FileStore and BlueStore) for capacity-optimized, IOPS-optimized, and throughput-optimized configurations under varying read/write access patterns? Also, are there any scale test results? For example, how did IOPS change when additional OSDs were added to the Ceph nodes, or when more Ceph nodes were added to the cluster? How did the results vary when the storage or storage-networking bandwidth changed?

These inputs could help us derive a very rough Ceph sizing tool: for a given input IOPS or capacity requirement, what would be the size of the Ceph cluster - number of Ceph nodes, OSDs per node (HDD- or SSD-based OSDs), storage and storage-management networking requirements, suggested replication factor, etc.
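
For illustration, a minimal sketch (in Python) of the kind of back-of-the-envelope calculation such a sizing tool might start from. Every constant below - replica count, capacity per OSD, per-OSD IOPS, OSDs per node, fill ratio - is a placeholder assumption for the sake of the example, not Red Hat guidance or a published figure.

import math

def estimate_cluster(usable_tb, required_iops,
                     replication=3,      # assumed replica count
                     osd_size_tb=4.0,    # assumed capacity per HDD OSD
                     iops_per_osd=150,   # assumed sustained IOPS per HDD OSD
                     osds_per_node=12,   # assumed OSD density per node
                     fill_ratio=0.7):    # assume the cluster stays below ~70% full
    # Raw capacity needed to deliver the usable capacity at this replica count.
    raw_tb = usable_tb * replication / fill_ratio
    osds_for_capacity = math.ceil(raw_tb / osd_size_tb)
    # Every client write lands on each replica, so scale required IOPS by replication.
    osds_for_iops = math.ceil(required_iops * replication / iops_per_osd)
    osds = max(osds_for_capacity, osds_for_iops)
    nodes = math.ceil(osds / osds_per_node)
    return {"raw_tb": round(raw_tb, 1), "osds": osds, "nodes": nodes}

# Example: 100 TB usable capacity and 5000 write IOPS.
print(estimate_cluster(usable_tb=100, required_iops=5000))
# e.g. {'raw_tb': 428.6, 'osds': 108, 'nodes': 9}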

Regards,
Vinayak

Comment 6 Antonios Dakopoulos 2020-07-08 13:00:56 UTC
Hello HPE team,
Do we still need to track this Ceph Sizing tool BZ?

Any feedback/confirmation would be appreciated.

Thank you,
Antonios

Comment 7 Antonios Dakopoulos 2020-09-16 13:01:41 UTC
Hello HPE team,
Please let me know whether we still need to track this BZ, and whether we require a follow-up with Ceph Eng/PM.

Thank you,
Antonios

Comment 8 Vinayak 2020-09-16 13:34:48 UTC
Hi Antonios,

Yes, you may close this BZ. We can track the need for a discussion with Ceph Eng/PM externally.

Regards,
Vinayak

Comment 9 Antonios Dakopoulos 2020-09-16 14:19:33 UTC
HPE confirmed that this RFE/BZ can be CLOSED.