Tested with ODF 4.15.0-139; NooBaa still does not deploy with this build. There is no 'storageDeviceSets:' section in the StorageCluster YAML.

$ oc get csv -A
NAMESPACE                              NAME                                         DISPLAY                       VERSION             REPLACES   PHASE
openshift-operator-lifecycle-manager   packageserver                                Package Server                0.0.1-snapshot                 Succeeded
openshift-storage                      mcg-operator.v4.15.0-139.stable              NooBaa Operator               4.15.0-139.stable              Succeeded
openshift-storage                      ocs-operator.v4.15.0-139.stable              OpenShift Container Storage   4.15.0-139.stable              Succeeded
openshift-storage                      odf-csi-addons-operator.v4.15.0-139.stable   CSI Addons                    4.15.0-139.stable              Succeeded
openshift-storage                      odf-operator.v4.15.0-139.stable              OpenShift Data Foundation     4.15.0-139.stable              Succeeded

$ oc get pod
NAME                                                              READY   STATUS    RESTARTS      AGE
csi-addons-controller-manager-555bcf9c9d-n5v8g                    2/2     Running   0             27m
csi-cephfsplugin-dszbm                                            2/2     Running   0             25m
csi-cephfsplugin-provisioner-5b5575d8d5-2j4kx                     6/6     Running   0             25m
csi-cephfsplugin-provisioner-5b5575d8d5-pjdtn                     6/6     Running   0             25m
csi-cephfsplugin-sfmss                                            2/2     Running   0             25m
csi-cephfsplugin-v2q6l                                            2/2     Running   1 (24m ago)   25m
csi-rbdplugin-64k5k                                               3/3     Running   0             25m
csi-rbdplugin-fscs6                                               3/3     Running   0             25m
csi-rbdplugin-k7w7n                                               3/3     Running   1 (24m ago)   25m
csi-rbdplugin-provisioner-df8895f7b-qxmcb                         6/6     Running   0             25m
csi-rbdplugin-provisioner-df8895f7b-sc6f5                         6/6     Running   4 (23m ago)   25m
noobaa-operator-7cccc64c59-mf6nf                                  2/2     Running   0             27m
ocs-operator-5bc895b594-p6mgh                                     1/1     Running   0             27m
odf-console-7c7d845fb-qwc66                                       1/1     Running   0             27m
odf-operator-controller-manager-5ccc94dd7b-skswv                  2/2     Running   0             27m
rook-ceph-crashcollector-compute-0-755b9c4cf4-dqhgf               1/1     Running   0             22m
rook-ceph-crashcollector-compute-1-5698884fdc-f8z7s               1/1     Running   0             22m
rook-ceph-crashcollector-compute-2-6dc4dd7b4-xqhhw                1/1     Running   0             22m
rook-ceph-exporter-compute-0-5f6768cb7b-dk2zz                     1/1     Running   0             22m
rook-ceph-exporter-compute-1-85676cbdd5-dm5xf                     1/1     Running   0             22m
rook-ceph-exporter-compute-2-7cd9d965d-rjhwp                      1/1     Running   0             22m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-657ff949lnpns   2/2     Running   0             22m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-54f46f58nsl76   2/2     Running   0             22m
rook-ceph-mgr-a-7777dc7c77-7z4tb                                  3/3     Running   0             22m
rook-ceph-mgr-b-5d4685d7f8-p46nt                                  3/3     Running   0             22m
rook-ceph-mon-a-8584b5768-mfw55                                   2/2     Running   0             23m
rook-ceph-mon-b-5b5f99bcbd-r6vrc                                  2/2     Running   0             23m
rook-ceph-mon-c-6d8bfdc8b4-wk8jc                                  2/2     Running   0             22m
rook-ceph-operator-94b6546d-72hrq                                 1/1     Running   0             25m
ux-backend-server-687cddc8b7-ldf72                                2/2     Running   0             27m

$ oc get storagecluster -o yaml
apiVersion: v1
items:
- apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    annotations:
      uninstall.ocs.openshift.io/cleanup-policy: delete
      uninstall.ocs.openshift.io/mode: graceful
    creationTimestamp: "2024-02-12T23:12:13Z"
    finalizers:
    - storagecluster.ocs.openshift.io
    generation: 3
    name: ocs-storagecluster
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: odf.openshift.io/v1alpha1
      kind: StorageSystem
      name: ocs-storagecluster-storagesystem
      uid: b34322f9-cf0e-4158-b6a1-f500279b5caf
    resourceVersion: "96513"
    uid: 88d089e6-1dde-4f31-bac8-d2748509d02c
  spec:
    arbiter: {}
    encryption:
      kms: {}
    externalStorage: {}
    managedResources:
      cephBlockPools: {}
      cephCluster: {}
      cephConfig: {}
      cephDashboard: {}
      cephFilesystems: {}
      cephNonResilientPools:
        count: 1
      cephObjectStoreUsers: {}
      cephObjectStores: {}
      cephRBDMirror:
        daemonCount: 1
      cephToolbox: {}
    mirroring: {}
    multiCloudGateway:
      externalPgConfig:
        pgSecretName: noobaa-external-pg
    resourceProfile: balanced
  status:
    conditions:
    - lastHeartbeatTime: "2024-02-12T23:12:14Z"
      lastTransitionTime: "2024-02-12T23:12:14Z"
      message: Version check successful
      reason: VersionMatched
      status: "False"
      type: VersionMismatch
    - lastHeartbeatTime: "2024-02-12T23:40:46Z"
      lastTransitionTime: "2024-02-12T23:12:15Z"
      message: 'Error while reconciling: some StorageClasses were skipped while waiting
        for pre-requisites to be met: [ocs-storagecluster-cephfs,ocs-storagecluster-ceph-rbd]'
      reason: ReconcileFailed
      status: "False"
      type: ReconcileComplete
    - lastHeartbeatTime: "2024-02-12T23:12:14Z"
      lastTransitionTime: "2024-02-12T23:12:14Z"
      message: Initializing StorageCluster
      reason: Init
      status: "False"
      type: Available
    - lastHeartbeatTime: "2024-02-12T23:12:14Z"
      lastTransitionTime: "2024-02-12T23:12:14Z"
      message: Initializing StorageCluster
      reason: Init
      status: "True"
      type: Progressing
    - lastHeartbeatTime: "2024-02-12T23:12:14Z"
      lastTransitionTime: "2024-02-12T23:12:14Z"
      message: Initializing StorageCluster
      reason: Init
      status: "False"
      type: Degraded
    - lastHeartbeatTime: "2024-02-12T23:12:14Z"
      lastTransitionTime: "2024-02-12T23:12:14Z"
      message: Initializing StorageCluster
      reason: Init
      status: Unknown
      type: Upgradeable
    currentMonCount: 3
    failureDomain: rack
    failureDomainKey: topology.rook.io/rack
    failureDomainValues:
    - rack0
    - rack1
    - rack2
    images:
      ceph:
        actualImage: registry.redhat.io/rhceph/rhceph-6-rhel9@sha256:9dbd051cfcdb334aad33a536cc115ae1954edaea5f8cb5943ad615f1b41b0226
        desiredImage: registry.redhat.io/rhceph/rhceph-6-rhel9@sha256:9dbd051cfcdb334aad33a536cc115ae1954edaea5f8cb5943ad615f1b41b0226
      noobaaCore:
        desiredImage: registry.redhat.io/odf4/mcg-core-rhel9@sha256:1d79a2ac176ca6e69c3198d0e35537aaf29373440d214d324d0d433d1473d9a1
      noobaaDB:
        desiredImage: registry.redhat.io/rhel9/postgresql-15@sha256:10e53e191e567248a514a7344c6d78432640aedbc1fa1f7b0364d3b88f8bde2c
    kmsServerConnection: {}
    nodeTopologies:
      labels:
        kubernetes.io/hostname:
        - compute-0
        - compute-1
        - compute-2
        topology.rook.io/rack:
        - rack0
        - rack1
        - rack2
    phase: Progressing
    relatedObjects:
    - apiVersion: ceph.rook.io/v1
      kind: CephCluster
      name: ocs-storagecluster-cephcluster
      namespace: openshift-storage
      resourceVersion: "96510"
      uid: 191f41bb-f5d5-4a5b-bd95-c780f8089605
    version: 4.15.0
kind: List
metadata:
  resourceVersion: ""

Must-gather logs: http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-2262974/ocs_must_gather_v212/
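For comparison, a healthy internal-mode StorageCluster normally carries a 'storageDeviceSets:' section under spec, which is what is missing above. A minimal sketch of the expected shape follows; the set name, counts, requested size, and backing storageClassName here are illustrative assumptions, not values from the affected cluster:

```yaml
# Sketch only: names, counts, size, and storageClassName are assumptions.
spec:
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1            # number of device sets
    replica: 3          # one OSD per failure domain (rack0/rack1/rack2)
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 512Gi
        storageClassName: gp3-csi   # backing StorageClass (assumption)
        volumeMode: Block
```

When the ocs-operator reconciles correctly, this section drives OSD creation, which in turn unblocks the ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs StorageClasses that the ReconcileFailed condition reports as skipped.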
Verified with build 4.15.0-142: the issue is fixed, and NooBaa deploys with an external PostgreSQL database.
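For reference, the external PostgreSQL connection is consumed through 'multiCloudGateway.externalPgConfig.pgSecretName: noobaa-external-pg' in the StorageCluster spec, which points at a Secret in the openshift-storage namespace. A hedged sketch of such a Secret follows; the 'db_url' key and the placeholder connection string are assumptions, not taken from this cluster:

```yaml
# Sketch only: the key name (db_url) and connection string are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: noobaa-external-pg
  namespace: openshift-storage
stringData:
  db_url: postgres://<user>:<password>@<pg-host>:5432/<database>
```

The Secret must exist before the StorageCluster is created so that the NooBaa operator can wire the core pod to the external database instead of deploying its own noobaa-db pod.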
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383