Description of problem (please be as detailed as possible and provide log snippets):

ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.301854 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:36.373136 I | ceph-cluster-controller: CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384105 E | ceph-cluster-controller: failed to reconcile CephCluster "openshift-storage/ocs-storagecluster15-cephcluster". CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384182 D | ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.384199 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:44.392350 I | ceph-spec: "ceph-fs-subvolumegroup-controller": CephCluster has a destructive cleanup policy, allowing "openshift-storage/cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042" to be deleted

Version of all relevant components (if applicable):

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue reproduce from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Onboard a storage consumer
2. Offboard the consumer
3.

Actual results:
The StorageCluster is stuck in the `deleting` phase because one of its dependent resources (CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]) is not deleted.

Expected results:
Offboarding should complete without issues.

Additional info:
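One way to confirm what is blocking deletion (a diagnostic sketch only; the oc invocations below are standard Kubernetes CLI usage, and the resource name is taken from the log above):

    # List the CephFilesystemSubVolumeGroup the operator reports as a blocking dependent
    oc get cephfilesystemsubvolumegroup -n openshift-storage

    # Inspect the finalizers on the stuck CR; a finalizer that is never removed
    # keeps the CR, and therefore the CephCluster/StorageCluster, in "deleting"
    oc get cephfilesystemsubvolumegroup \
      cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 \
      -n openshift-storage -o jsonpath='{.metadata.finalizers}'

If manual cleanup is acceptable (untested here and a last resort only, since it bypasses the operator's own cleanup logic), the finalizer can be cleared so deletion proceeds:

    # Clear the finalizers so the stuck CR (and the dependent chain) can be deleted
    oc patch cephfilesystemsubvolumegroup \
      cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 \
      -n openshift-storage --type merge -p '{"metadata":{"finalizers":null}}'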
*** Bug 2060098 has been marked as a duplicate of this bug. ***
Verification is blocked by https://bugzilla.redhat.com/show_bug.cgi?id=2061525