In flash-based Ceph clusters, a single librbd client would struggle to exceed 20K 4KiB IOPS, while a krbd client could deliver nearly 4x the single-client throughput. Improvements to the librbd IO path have increased small-IO throughput and reduced latency, vastly narrowing the gap between krbd and librbd on such clusters.
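For anyone who wants to sanity-check the small-IO behaviour on their own cluster, below is a minimal sketch using the python-rbd bindings to time queue-depth-1 4 KiB writes through librbd. The pool name, image name, and iteration count are placeholders, and the Python bindings add their own overhead, so treat the numbers as a rough latency indication rather than a true librbd throughput measurement (fio with its rbd ioengine is the better tool for that).

import time
import rados
import rbd

POOL = "rbd"           # placeholder pool name
IMAGE = "bench-image"  # placeholder image name (must already exist)
IO_SIZE = 4096         # 4 KiB writes, matching the workload discussed above
COUNT = 10000

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        image = rbd.Image(ioctx, IMAGE)
        try:
            data = b"\0" * IO_SIZE
            start = time.monotonic()
            for i in range(COUNT):
                # Synchronous 4 KiB writes through the librbd user-space path
                # (queue depth 1, so this measures per-IO latency, not peak IOPS).
                image.write(data, i * IO_SIZE)
            elapsed = time.monotonic() - start
            print("%.0f IOPS, %.3f ms avg latency"
                  % (COUNT / elapsed, elapsed / COUNT * 1000))
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()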
Hey! How close to krbd performance will this improvement bring us? It looks like using krbd instead of librbd gives us at least 2x the performance. Will it be like this [1]? [1] https://access.redhat.com/solutions/5514611
I think you'll need to test it against specific clusters and workloads to know for sure; there isn't a one-size-fits-all answer, unfortunately. The 20K IOPS wall was a known issue in librbd when the cache was enabled, so I'd expect the results to be much closer if retested with this change.
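If you want to rule the cache out when retesting, one way (a sketch, not an official test procedure) is to override rbd_cache for the test client before connecting, e.g. via the python-rados conf_set() call shown below; the same effect can be had with "rbd cache = false" in the [client] section of ceph.conf.

import rados

# Sketch: override the librbd cache setting for this client only, so a
# retest is not skewed by the old cache-related bottleneck.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.conf_set("rbd_cache", "false")  # equivalent to 'rbd cache = false' in [client]
cluster.connect()
print("rbd_cache =", cluster.conf_get("rbd_cache"))
cluster.shutdown()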
QA verified. Completed testing with the Teuthology librbd test suite.
(In reply to Harish Munjulur from comment #9)
> QA verified.
>
> Completed testing with the Teuthology librbd test suite.

That's great functionality-wise, but this BZ is about performance - we need to ensure we deliver better throughput and lower latency here.
*** Bug 1906857 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294