Bug 2056790

Summary: Offboarding of storageConsumer is stuck due to pending resources.
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Santosh Pillai <sapillai>
Component: ocs-operator    Assignee: Santosh Pillai <sapillai>
Status: CLOSED CURRENTRELEASE QA Contact: Neha Berry <nberry>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.10    CC: madam, mmuench, muagarwa, nberry, ocs-bugs, odf-bz-bot, rperiyas, sostapov
Target Milestone: ---   
Target Release: ODF 4.10.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: 4.10.0-177 Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-21 09:12:48 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Santosh Pillai 2022-02-22 04:27:00 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.301854 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:36.373136 I | ceph-cluster-controller: CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384105 E | ceph-cluster-controller: failed to reconcile CephCluster "openshift-storage/ocs-storagecluster15-cephcluster". CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384182 D | ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.384199 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:44.392350 I | ceph-spec: "ceph-fs-subvolumegroup-controller": CephCluster has a destructive cleanup policy, allowing "openshift-storage/cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042" to be deleted
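The log above shows the operator refusing to delete the CephCluster while a dependent CephFilesystemSubVolumeGroup still exists, and only proceeding once the destructive cleanup policy allows it. A minimal Python sketch of that deletion gate (illustrative names only; Rook's actual implementation is in Go and differs):

```python
# Hypothetical sketch of the dependent-resource check that blocks
# CephCluster deletion; function and variable names are illustrative.
def blocking_dependents(dependents: dict) -> list:
    """Return a human-readable entry for each dependent kind that still
    has resources; deletion stays blocked while this list is non-empty."""
    return [f"{kind}: {names}" for kind, names in dependents.items() if names]

# Resource name taken from the log snippet above.
deps = {
    "CephFilesystemSubVolumeGroup": [
        "cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042"
    ],
    "CephBlockPool": [],
}

if blocking_dependents(deps):
    print("CephCluster will not be deleted until all dependents are removed:",
          blocking_dependents(deps))
```

In this bug, the subvolume group entry never empties during offboarding, so the gate never opens and the StorageCluster stays in the `deleting` phase.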


Version of all relevant components (if applicable):


Does this issue impact your ability to continue working with the product
(please explain the user impact in detail)?


Is there any workaround available to the best of your knowledge?


Rate the complexity of the scenario that caused this bug from 1 to 5
(1 - very simple, 5 - very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Onboard a storage consumer
2. Offboard the consumer


Actual results: StorageCluster is stuck in the `deleting` phase because one of its dependent resources (CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]) is not deleted.


Expected results: Offboarding should complete without issues.


Additional info:

Comment 4 Neha Berry 2022-03-03 04:37:44 UTC
*** Bug 2060098 has been marked as a duplicate of this bug. ***

Comment 5 Jilju Joy 2022-03-09 12:07:12 UTC
Verification blocked due to https://bugzilla.redhat.com/show_bug.cgi?id=2061525