Bug 2056790 - Offboarding of storageConsumer is stuck due to pending resources.
Summary: Offboarding of storageConsumer is stuck due to pending resources.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Santosh Pillai
QA Contact: Neha Berry
URL:
Whiteboard:
Duplicates: 2060098
Depends On:
Blocks:
 
Reported: 2022-02-22 04:27 UTC by Santosh Pillai
Modified: 2023-08-09 17:00 UTC (History)
8 users

Fixed In Version: 4.10.0-177
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:48 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage ocs-operator pull 1549 0 None open Uninstall: remove CephFilesystemSubVolumeGroup on external mode uninstall 2022-03-01 07:48:12 UTC
Github red-hat-storage ocs-operator pull 1570 0 None open Bug 2056790: [release-4.10] Uninstall: remove CephFilesystemSubVolumeGroup on external mode uninstall 2022-03-03 03:14:02 UTC

Description Santosh Pillai 2022-02-22 04:27:00 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.301854 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:36.373136 I | ceph-cluster-controller: CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384105 E | ceph-cluster-controller: failed to reconcile CephCluster "openshift-storage/ocs-storagecluster15-cephcluster". CephCluster "openshift-storage/ocs-storagecluster15-cephcluster" will not be deleted until all dependents are removed: CephFilesystemSubVolumeGroup: [cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042]
2022-02-22 04:22:36.384182 D | ceph-spec: found 1 ceph clusters in namespace "openshift-storage"
2022-02-22 04:22:36.384199 D | ceph-cluster-controller: update event on CephCluster CR
2022-02-22 04:22:44.392350 I | ceph-spec: "ceph-fs-subvolumegroup-controller": CephCluster has a destructive cleanup policy, allowing "openshift-storage/cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042" to be deleted
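
The log shows the CephCluster deletion blocked on a leftover CephFilesystemSubVolumeGroup. A rough sketch of how the stuck state could be inspected (commands assume `oc` access to the cluster; namespace and CR names are taken from the log above, not independently verified):

```shell
# List any CephFilesystemSubVolumeGroup CRs still blocking CephCluster deletion
oc get cephfilesystemsubvolumegroup -n openshift-storage

# Inspect the finalizers keeping the consumer's CR around
oc get cephfilesystemsubvolumegroup \
    cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 \
    -n openshift-storage -o jsonpath='{.metadata.finalizers}'

# Check which pending dependents the CephCluster itself reports
oc describe cephcluster ocs-storagecluster15-cephcluster -n openshift-storage
```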


Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Onboard a storage consumer
2. Offboard the consumer
3.


Actual results: StorageCluster is stuck in the `deleting` phase because one of its dependent resources (CephFilesystemSubVolumeGroup: cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042) is not deleted.


Expected results: Offboarding should happen without issues.
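
A quick sketch of how a successful offboarding could be verified (commands and namespace are assumptions based on the log above, not a documented verification procedure):

```shell
# After offboarding, both checks should come back clean:
# the StorageCluster should no longer be stuck in the `deleting` phase,
# and no consumer-owned CephFilesystemSubVolumeGroup CRs should remain.
oc get storagecluster -n openshift-storage
oc get cephfilesystemsubvolumegroup -n openshift-storage
```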


Additional info:

Comment 4 Neha Berry 2022-03-03 04:37:44 UTC
*** Bug 2060098 has been marked as a duplicate of this bug. ***

Comment 5 Jilju Joy 2022-03-09 12:07:12 UTC
Verification blocked due to https://bugzilla.redhat.com/show_bug.cgi?id=2061525

