Description of problem (please be as detailed as possible and provide log snippets):

When the CephBlockPool CR is deleted, its underlying Ceph pool is not deleted. An upstream user reported the issue in https://github.com/rook/rook/issues/10360.

Version of all relevant components (if applicable):

This is a regression in 4.11 and does not affect previous ODF versions.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

No

Is there any workaround available to the best of your knowledge?

Delete the pool from the Ceph toolbox if needed.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

2

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

Yes

If this is a regression, please provide more details to justify this:

This is a regression introduced in 4.11; in previous ODF versions the underlying Ceph pool is removed when the CephBlockPool CR is deleted.

Steps to Reproduce:
1. Create an ODF 4.11 cluster
2. Create a CephBlockPool CR (from the UI or CLI)
3. Delete the CephBlockPool CR
4. See that the pool still exists in Ceph (through the toolbox with "ceph osd pool ls")

Actual results:
The Ceph pool still exists after deletion.

Expected results:
The Ceph pool should be deleted.
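For reference, a minimal sketch of steps 2-4 from the CLI. The pool name "test-blockpool" and the replicated spec below are illustrative assumptions, not the exact CR used by the reporter, and the toolbox is assumed to be deployed as deploy/rook-ceph-tools in openshift-storage:

$ cat <<EOF | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: test-blockpool          # hypothetical pool name, for illustration only
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3
EOF

$ oc delete cephblockpool test-blockpool -n openshift-storage

# After the CR is deleted, the pool should no longer be listed; on 4.11 it still appears:
$ oc rsh -n openshift-storage deploy/rook-ceph-tools ceph osd pool ls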
Verified on an ODF cluster with the following configuration:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-25-081133   True        False         23h     Cluster version is 4.11.0-0.nightly-2022-06-25-081133

$ oc get csv odf-operator.v4.11.0 -o yaml -n openshift-storage | grep full_version
  full_version: 4.11.0-103

Verification steps:
1. Created an ODF 4.11 cluster
2. Created a CephBlockPool CR from the CLI
3. Deleted the CephBlockPool CR
4. Verified that the pool got deleted in Ceph (through the toolbox with "ceph osd pool ls")

[jopinto@jopinto bugs]$ oc apply -f a.yaml
cephblockpool.ceph.rook.io/replicapool1 created

Ceph toolbox pod output:
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0
replicapool1

[jopinto@jopinto bugs]$ oc delete -f a.yaml
cephblockpool.ceph.rook.io "replicapool1" deleted

Ceph toolbox pod output (replicapool1 is no longer listed):
sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0
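The contents of a.yaml are not attached to this comment; a CR along the following lines (assumed spec, with the name matching the replicapool1 pool seen in the toolbox output) would produce the behavior shown above:

$ cat a.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool1
  namespace: openshift-storage
spec:
  failureDomain: host   # assumed; the original a.yaml spec is not included in the comment
  replicated:
    size: 3             # assumed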
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156