This bug is the same one you pointed out as a Rook issue, and it was solved (see https://tracker.ceph.com/issues/63992). The fix was implemented in Ceph 18.2.2 and is available by default starting with Rook v1.13.7. In theory, ODF 4.16 should therefore ship a Rook version above v1.13.7 and a Ceph version at or above 18.2.2. Prasad, would you mind verifying which Ceph/Rook versions ODF 4.16 uses? Are those the versions used in your test system?
Juan, we are using Ceph 18.2.1 in ODF downstream, which corresponds to Ceph 7.1 downstream, not 18.2.2. Do you know which downstream Ceph release corresponds to 18.2.2, since that is not what we are using?
I will have to confirm which Rook version is used for ODF 4.16. AFAIK Rook v1.14.0 is the basis for ODF 4.16. @Travis, could you please confirm the Rook version we use for ODF 4.16? Hi Juan, the cluster is running Ceph version 18.2.1-188.el9cp.
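To make the version question above concrete, here is a minimal sketch (plain Python, not part of any ODF or Ceph tooling) that compares the cluster's reported Ceph version against the upstream release carrying the fix. The fix release 18.2.2 and the reported version 18.2.1-188.el9cp come from the comments in this thread; the parsing logic and function names are illustrative assumptions.

```python
# Sketch: compare a downstream Ceph version string against the upstream
# release that carries the fix (18.2.2, per https://tracker.ceph.com/issues/63992).
# Assumption (illustrative): only the upstream "X.Y.Z" prefix is compared;
# the downstream "-188.el9cp" build suffix is ignored.

def upstream_tuple(version: str) -> tuple:
    """Extract the upstream X.Y.Z part of a version like '18.2.1-188.el9cp'."""
    core = version.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def has_fix(cluster_version: str, fix_version: str = "18.2.2") -> bool:
    """True if the cluster's upstream version is at or past the fix release."""
    return upstream_tuple(cluster_version) >= upstream_tuple(fix_version)

print(has_fix("18.2.1-188.el9cp"))  # False: upstream 18.2.1 predates 18.2.2
print(has_fix("18.2.2"))            # True
```

Note that downstream builds often backport individual fixes without bumping the upstream version, so this prefix comparison is only a first approximation; checking whether the specific patch was backported into the downstream build, as the thread goes on to do, is the authoritative answer.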
(In reply to Prasad Desala from comment #24) > I will have to confirm which Rook version is used for ODF 4.16. > AFAIK Rook v1.14.0 is the basis for ODF 4.16. > @Travis, could you please confirm the Rook version we use for ODF > 4.16? Correct, Rook v1.14 is the basis for ODF 4.16, but the key question for this issue is whether the fix made it into the downstream version used by ODF.
Simone, that I cannot answer. I'll set the needinfo for your comment (c#39) on the engineer assigned to the ticket.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.1 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:5547