Bug 1663663
| Summary: | [RFE][HPE-OSP16] Generic RH Ceph sizing tool | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Vinayak <vinayak.ram> |
| Component: | ceph | Assignee: | Giulio Fidente <gfidente> |
| Status: | CLOSED NOTABUG | QA Contact: | Yogev Rabl <yrabl> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | low | | |
| Version: | 1.0 (Essex) | CC: | adakopou, brault, jdurgin, johfulto, lhh, mushtaq.ahmed, vinayak.ram |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-09-16 14:19:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1595325, 1663045 | | |
Description
Vinayak
2019-01-06 07:47:04 UTC
Perhaps PM can provide some guidance here (setting a needinfo). This doesn't really seem like a bug in the product, so I'm not sure Bugzilla is the right way to track it. I'd say that this information is available in our existing documentation [1], in addition to the reference architectures that the Ceph Solution Architects and the Performance Engineering group have already written [2], but PM or our SAs would be a better contact to help with this.

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/
[2] For example:
https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-selection-guide

Hi,

Could not access the second URL - https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds.

Are there any published performance results (FileStore and BlueStore) for capacity-optimized, IOPS-optimized, and throughput-optimized configurations under varying read/write access patterns? Are there also any scale test results? For example, how did IOPS change when additional OSDs were added to the Ceph nodes, or when more Ceph nodes were added to the cluster? How did the results vary when the storage or storage-networking bandwidth changed?

These inputs could help us derive a very rough Ceph sizing tool: given an input IOPS or capacity requirement, what would be the size of the Ceph cluster - number of Ceph nodes, OSDs per node (HDD- or SSD-based OSDs), storage and storage-management networking requirements, suggested replication factor, and so on (see the rough sizing sketch at the end of this report).

Regards,
Vinayak

Hello HPE team,

Do we still need to track this Ceph sizing tool BZ? Any feedback/confirmation would be appreciated.

Thank you,
Antonios

Hello HPE team,

Please let me know whether we still need to track this BZ, and whether we require a follow-up with Ceph Eng/PM.

Thank you,
Antonios

Hi Antonios,

Yes, you may close this BZ. We can track the need to discuss with Ceph Eng/PM externally.

Regards,
Vinayak

HPE confirmed that this RFE/BZ can be CLOSED.
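For reference, the kind of back-of-the-envelope calculation requested above could look roughly like the sketch below. It is only an illustration: every constant (OSD drive size, target fill ratio, per-OSD IOPS, OSDs per node, replication factor defaults) is a hypothetical placeholder, not Red Hat sizing guidance, and real sizing should come from the hardware selection guide and measured benchmarks.

```python
#!/usr/bin/env python3
"""Very rough Ceph cluster sizing sketch.

All per-OSD figures and the fill-ratio target below are hypothetical
placeholders, not Red Hat or Ceph guidance; actual sizing should be based
on the hardware selection guide and measured benchmark results.
"""

import math


def size_for_capacity(usable_tb, replication=3, osd_drive_tb=4.0,
                      target_fill=0.70, osds_per_node=12):
    """Estimate OSD and node counts for a usable-capacity requirement."""
    raw_tb = usable_tb * replication / target_fill   # raw space incl. headroom
    osds = math.ceil(raw_tb / osd_drive_tb)          # drives needed
    nodes = math.ceil(osds / osds_per_node)          # hosts at N OSDs each
    return {"raw_tb": round(raw_tb, 1), "osds": osds, "nodes": nodes}


def size_for_iops(required_iops, replication=3, iops_per_osd=150,
                  osds_per_node=12):
    """Estimate OSD and node counts for a client write-IOPS requirement.

    iops_per_osd is a placeholder for a *measured* per-OSD figure (HDD vs
    SSD/NVMe differ by orders of magnitude). Client writes are multiplied
    by the replication factor because each write lands on every replica.
    """
    backend_iops = required_iops * replication
    osds = math.ceil(backend_iops / iops_per_osd)
    nodes = math.ceil(osds / osds_per_node)
    return {"backend_iops": backend_iops, "osds": osds, "nodes": nodes}


if __name__ == "__main__":
    # Example: 100 TB usable at 3x replication on 4 TB HDD OSDs.
    print(size_for_capacity(100))
    # Example: 20,000 client write IOPS on HDD-class OSDs.
    print(size_for_iops(20000))
```

Running the examples prints rough OSD and node counts for a 100 TB usable-capacity target and a 20,000 write-IOPS target. The sketch ignores throughput-optimized profiles, erasure coding, network bandwidth, and failure-domain placement, all of which the requested tool would also need to account for.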