Bug 2092143

Summary: Deleting a CephBlockPool CR does not delete the underlying Ceph pool
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Travis Nielsen <tnielsen>
Component: rook
Assignee: Travis Nielsen <tnielsen>
Status: CLOSED ERRATA
QA Contact: Joy John Pinto <jopinto>
Severity: unspecified
Priority: unspecified
Version: 4.11
CC: ebenahar, hnallurv, madam, muagarwa, ocs-bugs, odf-bz-bot
Target Milestone: ---
Keywords: Regression
Target Release: ODF 4.11.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.11.0-89
Doc Type: No Doc Update
Type: Bug
Last Closed: 2022-08-24 13:54:12 UTC

Description Travis Nielsen 2022-05-31 22:51:07 UTC
Description of problem (please be as detailed as possible and provide log snippets):

When the CephBlockPool CR is deleted, its underlying Ceph pool is not deleted. An upstream user reported this issue in https://github.com/rook/rook/issues/10360.
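
For reference, a minimal CephBlockPool CR of the kind involved here might look like the sketch below. The pool name matches the one used later in verification; the namespace, failure domain, and replica count are illustrative assumptions, not taken from the reporter's environment.

$ cat <<EOF | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool1              # illustrative pool name
  namespace: openshift-storage    # assumed ODF namespace
spec:
  failureDomain: host             # assumed; typical default
  replicated:
    size: 3                       # assumed replica count
EOF

Deleting this CR (for example, "oc delete cephblockpool replicapool1 -n openshift-storage") should cause Rook to remove the corresponding Ceph pool; with this bug, the pool is left behind.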


Version of all relevant components (if applicable):

This is a regression in 4.11 and does not affect previous ODF versions.


Does this issue impact your ability to continue to work with the product (please explain the user impact in detail)?

No

Is there any workaround available to the best of your knowledge?

Delete the pool from the ceph toolbox if needed.
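
A sketch of that workaround, assuming the toolbox runs as the usual rook-ceph-tools deployment in openshift-storage (the deployment, namespace, and pool names are assumptions for illustration). Ceph refuses to delete pools unless mon_allow_pool_delete is enabled, so it is toggled around the removal:

$ oc rsh -n openshift-storage deploy/rook-ceph-tools
sh-4.4$ ceph config set mon mon_allow_pool_delete true    # temporarily allow pool deletion
sh-4.4$ ceph osd pool rm replicapool1 replicapool1 --yes-i-really-really-mean-it
sh-4.4$ ceph config set mon mon_allow_pool_delete false   # turn the safety back on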

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

2

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

Yes

If this is a regression, please provide more details to justify this:

As noted above, the pool was correctly removed in previous ODF versions; the regression was introduced in 4.11.

Steps to Reproduce:
1. Create an ODF 4.11 cluster
2. Create a CephBlockPool CR (from the UI or CLI)
3. Delete the CephBlockPool CR
4. See that the pool still exists in Ceph (through the toolbox, with "ceph osd pool ls"; see the command sketch below)
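
A compact CLI sketch of steps 2-4, assuming the CR shown above is saved as pool.yaml (the file name is illustrative); with the bug present, the pool still shows up after the CR is gone:

$ oc apply -f pool.yaml
cephblockpool.ceph.rook.io/replicapool1 created
$ oc delete -f pool.yaml
cephblockpool.ceph.rook.io "replicapool1" deleted
$ oc rsh -n openshift-storage deploy/rook-ceph-tools   # assumed toolbox deployment
sh-4.4$ ceph osd pool ls | grep replicapool1
replicapool1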

Actual results:

The Ceph pool still exists after deletion.

Expected results:

The Ceph pool should be deleted.

Comment 5 Joy John Pinto 2022-06-28 06:53:46 UTC
Verified on an ODF cluster with the following configuration:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-25-081133   True        False         23h     Cluster version is 4.11.0-0.nightly-2022-06-25-081133
 
$ oc get csv odf-operator.v4.11.0 -o yaml -n openshift-storage| grep full_version
full_version: 4.11.0-103

Verification steps:

1. Created an ODF 4.11 cluster
2. Created a CephBlockPool CR from CLI 
3. Deleted the CephBlockPool CR
4. Verified that the pool got deleted in Ceph (through the toolbox, with "ceph osd pool ls")

[jopinto@jopinto bugs]$ oc apply -f a.yaml 
cephblockpool.ceph.rook.io/replicapool1 created

Ceph toolbox pod output:
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0
replicapool1

[jopinto@jopinto bugs]$ oc delete -f a.yaml 
cephblockpool.ceph.rook.io "replicapool1" deleted

Ceph toolbox pod output:
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0

Comment 7 errata-xmlrpc 2022-08-24 13:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156