Created attachment 2005103 [details]
screenshot_1

Description of problem (please be detailed as possible and provide log snippets):
Attached external StorageCluster shows an error state after ODF installation. All the pods in the openshift-storage namespace are running. Error seen on the UI:
"Error while reconciling: cephcluster status does not have OSD store information"

➜ clust1 oc get pods -n openshift-storage
NAME                                               READY   STATUS    RESTARTS        AGE
csi-addons-controller-manager-68845fc459-nwjpc     2/2     Running   13 (112m ago)   21h
csi-cephfsplugin-7spg6                             2/2     Running   0               20h
csi-cephfsplugin-82dzg                             2/2     Running   1 (20h ago)     20h
csi-cephfsplugin-provisioner-5f67df5ffb-bmbx8      6/6     Running   1 (20h ago)     20h
csi-cephfsplugin-provisioner-5f67df5ffb-m7zbs      6/6     Running   0               20h
csi-cephfsplugin-zf6nd                             2/2     Running   0               20h
csi-rbdplugin-4dvfh                                3/3     Running   0               20h
csi-rbdplugin-provisioner-5b77cf784f-9jbjm         6/6     Running   2 (20h ago)     20h
csi-rbdplugin-provisioner-5b77cf784f-cjhl5         6/6     Running   0               20h
csi-rbdplugin-vjs74                                3/3     Running   1 (20h ago)     20h
csi-rbdplugin-zhf9b                                3/3     Running   0               20h
noobaa-core-0                                      1/1     Running   0               20h
noobaa-db-pg-0                                     1/1     Running   0               20h
noobaa-endpoint-5d5467d955-7p5nw                   1/1     Running   0               20h
noobaa-operator-f765b5f84-9llgt                    2/2     Running   5 (7h8m ago)    22h
ocs-operator-7d7d5c7f84-h5j65                      1/1     Running   18 (88m ago)    22h
odf-console-856d547ff7-mszsw                       1/1     Running   0               21h
odf-operator-controller-manager-5d7b667545-52jnl   2/2     Running   12 (124m ago)   21h
rook-ceph-operator-d7fcb5d5c-9sbv6                 1/1     Running   0               21h

Please find the attached screenshot for the error messages.

Version of all relevant components (if applicable):
OCP- 4.15
ODF- 4.15.0-89
ACM- 2.9
RHCS- 6.1

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
- yes

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
- yes

Can this issue be reproduced from the UI?
- yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Create an OCP 4.15 cluster.
2. Install the ODF 4.15 operator manually.
3. Create an external storage system by attaching the JSON.

Actual results:
ocs-external-storagecluster shows Error state.

Expected results:
ocs-external-storagecluster should be in Available/Ready state.

Additional info:
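For reference, the same error is visible from the CLI without the UI. A minimal check, assuming the default openshift-storage namespace and the ocs-external-storagecluster name shown above:

# Phase should be "Ready"; on an affected cluster it reports "Error"
oc get storagecluster ocs-external-storagecluster -n openshift-storage \
  -o jsonpath='{.status.phase}{"\n"}'

# The reconcile error message surfaces in the status conditions
oc get storagecluster ocs-external-storagecluster -n openshift-storage \
  -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.message}{"\n"}{end}'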
Hi Umanga,

The new getIsDROptimized method is failing for external clusters. CephCluster does not capture the OSD store status for an external cluster, since the OSDs are not running on the cluster itself. What could be the alternatives here?
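A minimal sketch of the gap, assuming the OSD store information is surfaced under status.storage.osd of the CephCluster CR (the exact field path is an assumption based on the error message):

# On an internal-mode cluster this returns the OSD store information;
# on an external cluster the field is empty, which trips getIsDROptimized
oc get cephcluster -n openshift-storage \
  -o jsonpath='{.items[0].status.storage.osd}{"\n"}'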
No issues found after attaching the external cluster to the managed cluster. ocs-external-storagecluster is in Ready state.

Verified the bug with the following product versions:
OCP- 4.15.0
ODF- 4.15.0-98
Ceph- 6.1z3
ACM- 2.9.1

Attached a screenshot showing ocs-external-storagecluster in Ready state. Hence marking this bug as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383