Created attachment 1990993 [details]
rook-ceph-prepare pod for missing OSD

Description of problem (please be detailed as possible and provide log snippets):
Created an RDR test env with OCP 4.14, ACM 2.9, and ODF 4.14. When ODF was initially installed, bluestore-rdr was NOT enabled. After creating the first DRPolicy for RDR, Ceph mirroring was enabled, which caused bluestore-rdr to be enabled for the ODF storagecluster. The first OSD attempted to recreate with bluestore-rdr. The prepare pod for the first (deleted) OSD does not reach the Completed state; it is stuck in Running.

Rook operator log before first DRPolicy created and mirroring enabled: http://pastebin.test.redhat.com/1110031
Rook operator log after first DRPolicy created and mirroring enabled: http://pastebin.test.redhat.com/1110030

Version of all relevant components (if applicable):
ODF - 4.14.0-139.stable
OCP - 4.14.0-rc.2
ACM - 2.9.0-165

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
3

Can this issue be reproduced?
Yes; it happened in 2 independent OCP clusters.

Steps to Reproduce:
1. Create RDR test env with ACM 2.9 (install MCO)
2. Create first DRPolicy

Actual results:
One of the three OSD pods is deleted and the associated prepare pod is stuck in Running (log attached). The CephCluster has bluestore-rdr enabled.

Expected results:
All three OSD pods are Running and bluestore-rdr is not enabled in the CephCluster.

Additional info:
Santosh, there is an OCS operator BZ for disabling brownfield RDR in 4.14, right? Assuming that BZ lands, we won't need to fix this issue with replacing OSDs in brownfield yet and can move this BZ to 4.15.
(In reply to Travis Nielsen from comment #2)
> Santosh, there is an OCS operator BZ for disabling brownfield RDR in 4.14,
> right? Assuming that BZ lands, we won't need to fix this issue with replacing
> OSDs in brownfield yet and can move this BZ to 4.15.

Yes, we have a BZ (https://bugzilla.redhat.com/show_bug.cgi?id=2234735) to revert these changes. Since QE does not have the resources to test it and the current BZ might be blocking the testing, I will open a PR to revert the changes today.

Annette, can you please share the logs of the OSD prepare pod that is stuck?
Proposing to move this BZ to 4.15 since we do not support bluestore-rdr in this release.
@sapillai I attached the log for the OSD prepare pod that is stuck (attached when I created the BZ).
This should be retested with 4.15. We now use `ceph-volume lvm zap` to clean up the resources. I've tested with some data and I'm not seeing the OSD prepare pod get stuck for a long time while cleaning up the data on the OSD.

Also, there is a change in the flow now. In 4.14, OSD migration would start as soon as the user enabled mirroring on a cluster with OSDs on bluestore. In 4.15, we have changed this flow: migration no longer happens while mirroring is being enabled. The user first has to migrate the OSDs (by adding the annotation) and only after the migration completes can they enable mirroring. There is an OCS operator PR (https://github.com/red-hat-storage/ocs-operator/pull/2247) to support this new flow.
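From the admin side, the 4.15 migrate-then-mirror flow described above might look roughly like the sketch below. This is an illustration only: the annotation key, the StorageCluster name, and the label selector are assumptions for this sketch and are not taken from this BZ; check the ODF 4.15 docs or the ocs-operator PR for the actual values.

```shell
# 1. Trigger the OSD migration to bluestore-rdr by annotating the
#    StorageCluster. NOTE: the annotation key here is hypothetical;
#    the real key is defined by the ocs-operator in 4.15.
oc annotate storagecluster ocs-storagecluster -n openshift-storage \
    example.ocs.openshift.io/migrate-osds="true"

# 2. Wait until every OSD has been recreated and each osd-prepare pod
#    reaches Completed (the failure mode in this BZ was a prepare pod
#    stuck in Running).
oc get pods -n openshift-storage -l app=rook-ceph-osd-prepare

# 3. Only after the migration finishes, enable mirroring, e.g. by
#    creating the first DRPolicy, which turns on Ceph mirroring.
```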
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:1383