Description of problem:
Can RH provide a baseline sizing tool for RH Ceph on OpenStack? This could include a set of baseline recommendations and guidelines on how performance varies when different parameters are changed. The tool need not cater to every possible configuration in the field, but a baseline set of values would definitely help guide the field when responding to opportunities.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Perhaps PM can provide some guidance here (setting a needinfo). This doesn't really seem like a bug in the product, so I'm not sure Bugzilla is the right way to track it. I'd say this information is already available in our existing documentation [1], in addition to the reference architectures that the Ceph Solution Architects and the Performance Engineering group have written [2], but PM or our SAs would be a better contact to help with this.

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/
[2] For example:
https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-selection-guide
Hi,

I could not access the second URL: https://www.redhat.com/en/resources/performance-intensive-workloads-are-now-possible-red-hat-ceph-storage-and-samsung-nvme-ssds.

Are there any published performance results (Filestore and BlueStore) for capacity-optimized, IOPS-optimized, and throughput-optimized configurations under varying read/write access patterns? Are there any scale test results as well? For example, what were the IOPS when additional OSDs were added to the Ceph nodes, or when more Ceph nodes were added to the cluster? How did the results vary when the storage or storage-networking bandwidth changed?

These inputs could help us derive a very rough Ceph sizing tool: for a given IOPS or capacity requirement, what would the size of the Ceph cluster be, in terms of number of Ceph nodes, OSDs per node (HDD- or SSD-based), storage and storage-management networking requirements, suggested replication factor, etc.?

Regards,
Vinayak
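To make the ask concrete, the kind of "very rough sizing tool" described above could be sketched as a small calculation like the one below. All per-OSD IOPS and capacity figures are hypothetical placeholders, not Red Hat guidance; real values depend on hardware, Filestore vs. BlueStore, access pattern, and the replication or erasure-coding choice.

```python
import math

def size_cluster(required_iops, required_capacity_tb,
                 iops_per_osd=500,        # assumed sustained IOPS per HDD OSD (placeholder)
                 capacity_per_osd_tb=4.0, # assumed drive size per OSD (placeholder)
                 replication=3,           # common default replica count
                 osds_per_node=12):       # assumed drive bays per Ceph node (placeholder)
    # Client writes are amplified by the replica count, so the backend
    # must absorb replication x the client IOPS target.
    osds_for_iops = math.ceil(required_iops * replication / iops_per_osd)
    # Raw capacity must hold every replica of the usable data.
    osds_for_capacity = math.ceil(required_capacity_tb * replication
                                  / capacity_per_osd_tb)
    # Take whichever dimension (IOPS or capacity) demands more OSDs.
    osds = max(osds_for_iops, osds_for_capacity)
    nodes = math.ceil(osds / osds_per_node)
    return {"osds": osds, "nodes": nodes}

# Example: 10,000 client IOPS and 100 TB usable capacity.
print(size_cluster(required_iops=10000, required_capacity_tb=100))
```

A real tool would also have to account for read/write mix, network bandwidth per node, and failure-domain headroom, which this sketch deliberately omits.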
Hello HPE team,

Do we still need to track this Ceph sizing tool BZ? Any feedback/confirmation would be appreciated.

Thank you,
Antonios
Hello HPE team,

Please let me know whether we still need to track this BZ, and whether we require a follow-up with Ceph Eng/PM.

Thank you,
Antonios
Hi Antonios,

Yes, you may close this BZ. We can track the need to discuss with Ceph Eng/PM externally.

Regards,
Vinayak
HPE confirmed that this RFE/BZ can be CLOSED.