+++ This bug was initially created as a clone of Bug #2101871 +++

Description of problem:

As discussed in https://chat.google.com/room/AAAASHA9vWs/8BJgltoxETc, if an rbd storageclassclaim is created after an ocs-operator restart, it ends up in the Failed phase.

Version-Release number of selected component (if applicable):
ocs-operator.v4.11.0

How reproducible:
1/1

Steps to Reproduce:
1. Install ODF Managed Service with ODF 4.11.
2. Create a storageclassclaim.
3. Restart ocs-operator.
4. Create an rbd storageclassclaim, for example:

apiVersion: ocs.openshift.io/v1alpha1
kind: StorageClassClaim
metadata:
  name: test-storageclassclaim
spec:
  # type: blockpool or sharedfilesystem
  type: blockpool

Actual results:
StorageClassClaims end up in the Failed state. In my case even the first claim went into the Failed phase. According to https://chat.google.com/room/AAAASHA9vWs/8BJgltoxETc we can follow the steps in the reproducer for more confidence.

Expected results:
StorageClassClaims are Ready.

Additional info:
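One way to check the resulting phase after step 4 (a sketch only; it assumes the claim YAML above is saved as test-storageclassclaim.yaml, that the claim lives in the openshift-storage namespace, and that the phase is reported in status.phase):

# apply the claim and read back its phase
$ oc apply -f test-storageclassclaim.yaml -n openshift-storage
$ oc get storageclassclaim test-storageclassclaim -n openshift-storage -o jsonpath='{.status.phase}'
# full status and events for troubleshooting
$ oc describe storageclassclaim test-storageclassclaim -n openshift-storage

A claim hitting this bug reports Failed, while a healthy claim reports Ready.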
Verified in version:
ODF 4.11.0-13
OCP 4.10.25

$ oc get csv
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
mcg-operator.v4.11.0                      NooBaa Operator               4.11.0            mcg-operator.v4.10.5                      Succeeded
ocs-operator.v4.11.0                      OpenShift Container Storage   4.11.0            ocs-operator.v4.10.5                      Succeeded
ocs-osd-deployer.v2.0.4                   OCS OSD Deployer              2.0.4             ocs-osd-deployer.v2.0.3                   Succeeded
odf-csi-addons-operator.v4.11.0           CSI Addons                    4.11.0            odf-csi-addons-operator.v4.10.5           Succeeded
odf-operator.v4.11.0                      OpenShift Data Foundation     4.11.0            odf-operator.v4.10.4                      Succeeded
ose-prometheus-operator.4.10.0            Prometheus Operator           4.10.0            ose-prometheus-operator.4.8.0             Succeeded
route-monitor-operator.v0.1.422-151be96   Route Monitor Operator        0.1.422-151be96   route-monitor-operator.v0.1.420-b65f47e   Succeeded

Created storageclassclaims (blockpool and sharedfilesystem storage types) before and after a re-spin of the ocs-operator pod. Used the storageclasses to create PVCs after the new ocs-operator pod came up. No issue was observed.
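For reference, a minimal PVC against the storageclass produced by the blockpool claim could look like the sketch below. The PVC name, the requested size, and the assumption that the generated storageclass carries the claim's name are illustrative only; the actual storageclass name should be confirmed with oc get sc before use.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # assumed to match the claim name; confirm with: oc get sc
  storageClassName: test-storageclassclaim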