Bug 2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
Summary: Deleting a CephBlockPool CR does not delete the underlying Ceph pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Travis Nielsen
QA Contact: Joy John Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-31 22:51 UTC by Travis Nielsen
Modified: 2023-08-09 17:03 UTC
CC List: 6 users

Fixed In Version: 4.11.0-89
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:54:12 UTC
Embargoed:




Links
- GitHub red-hat-storage/rook pull 382 (open): Bug 2092143: Delete ceph pool when blockpool cr is deleted and fix canary tests (last updated 2022-06-03 15:25:49 UTC)
- GitHub rook/rook issue 10360 (open): Operator Deletes CephBlockPool CRD Instance, but Pool is Retained in Ceph (last updated 2022-05-31 22:51:41 UTC)
- GitHub rook/rook pull 10362 (open): pool: Delete ceph pool when blockpool cr is deleted (last updated 2022-05-31 22:51:41 UTC)
- Red Hat Product Errata RHSA-2022:6156 (last updated 2022-08-24 13:54:26 UTC)

Description Travis Nielsen 2022-05-31 22:51:07 UTC
Description of problem (please be as detailed as possible and provide log snippets):

When the CephBlockPool CR is deleted, its underlying Ceph pool is not deleted. An upstream user reported the issue in https://github.com/rook/rook/issues/10360.


Version of all relevant components (if applicable):

This is a regression in 4.11 and does not affect previous ODF versions.


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

No

Is there any workaround available to the best of your knowledge?

Delete the pool manually from the Ceph toolbox pod if needed, as sketched below.
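
A minimal sketch of the manual cleanup, assuming the default rook-ceph-tools toolbox deployment in the openshift-storage namespace and an illustrative pool name of replicapool1 (pool deletion also requires mon_allow_pool_delete to be enabled):

$ oc rsh -n openshift-storage deploy/rook-ceph-tools
sh-4.4$ ceph config set mon mon_allow_pool_delete true
sh-4.4$ ceph osd pool delete replicapool1 replicapool1 --yes-i-really-really-mean-it
sh-4.4$ ceph config set mon mon_allow_pool_delete false

Re-disabling mon_allow_pool_delete afterwards restores the guard against accidental pool deletion.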

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

2

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create an ODF 4.11 cluster
2. Create a CephBlockPool CR (from the UI or CLI; an example manifest is sketched after these steps)
3. Delete the CephBlockPool CR
4. See that the pool still exists in Ceph (check through the toolbox with "ceph osd pool ls")
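
For step 2, a minimal CephBlockPool manifest applied from the CLI is sketched below; the pool name, the replicated size of 3, and the openshift-storage namespace are illustrative assumptions, not taken from this report:

$ cat <<EOF | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: test-blockpool
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3
EOF

Step 3 then corresponds to "oc delete cephblockpool test-blockpool -n openshift-storage", and step 4 to running "ceph osd pool ls" from the toolbox pod.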

Actual results:

The Ceph pool still exists after deletion.

Expected results:

The Ceph pool should be deleted.

Comment 5 Joy John Pinto 2022-06-28 06:53:46 UTC
Verified on an ODF cluster with the following configuration:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-25-081133   True        False         23h     Cluster version is 4.11.0-0.nightly-2022-06-25-081133
 
$ oc get csv odf-operator.v4.11.0 -o yaml -n openshift-storage| grep full_version
full_version: 4.11.0-103

Verification steps:

1. Created an ODF 4.11 cluster
2. Created a CephBlockPool CR from CLI 
3. Deleted the CephBlockPool CR
4. Verified that the pool was deleted in Ceph (checked through the toolbox with "ceph osd pool ls")

[jopinto@jopinto bugs]$ oc apply -f a.yaml 
cephblockpool.ceph.rook.io/replicapool1 created

Ceph toolbox pod output:
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0
replicapool1

[jopinto@jopinto bugs]$ oc delete -f a.yaml 
cephblockpool.ceph.rook.io "replicapool1" deleted

Ceph toolbox pod output:
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0

Comment 7 errata-xmlrpc 2022-08-24 13:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

