Bug 2092143
| Summary: | Deleting a CephBlockPool CR does not delete the underlying Ceph pool | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Travis Nielsen <tnielsen> |
| Component: | rook | Assignee: | Travis Nielsen <tnielsen> |
| Status: | CLOSED ERRATA | QA Contact: | Joy John Pinto <jopinto> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.11 | CC: | ebenahar, hnallurv, madam, muagarwa, ocs-bugs, odf-bz-bot |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | ODF 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.11.0-89 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-24 13:54:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Travis Nielsen
2022-05-31 22:51:07 UTC
Verified on an ODF cluster with the following configuration:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-25-081133   True        False         23h     Cluster version is 4.11.0-0.nightly-2022-06-25-081133

$ oc get csv odf-operator.v4.11.0 -o yaml -n openshift-storage | grep full_version
  full_version: 4.11.0-103

Verification steps:

1. Created an ODF 4.11 cluster.
2. Created a CephBlockPool CR from the CLI.
3. Deleted the CephBlockPool CR.
4. Verified that the pool was deleted in Ceph (through the toolbox, with "ceph osd pool ls").

[jopinto@jopinto bugs]$ oc apply -f a.yaml
cephblockpool.ceph.rook.io/replicapool1 created

Ceph toolbox pod output (replicapool1 is present after the create):

sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0
replicapool1

[jopinto@jopinto bugs]$ oc delete -f a.yaml
cephblockpool.ceph.rook.io "replicapool1" deleted

Ceph toolbox pod output (replicapool1 is no longer listed after the delete):

sh-4.4$ ceph osd pool ls
ocs-storagecluster-cephblockpool
ocs-storagecluster-cephobjectstore.rgw.log
.rgw.root
ocs-storagecluster-cephobjectstore.rgw.meta
ocs-storagecluster-cephobjectstore.rgw.buckets.index
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
ocs-storagecluster-cephobjectstore.rgw.control
device_health_metrics
ocs-storagecluster-cephfilesystem-metadata
ocs-storagecluster-cephobjectstore.rgw.buckets.data
ocs-storagecluster-cephfilesystem-data0

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156
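The contents of a.yaml are not included in the report above. As a reference point only, a minimal CephBlockPool CR along the following lines would produce the replicapool1 pool seen in the verification output; the pool name and the openshift-storage namespace are taken from the report, while the failureDomain and replicated size values are assumptions for illustration, not the actual file used by QA.

```yaml
# Hypothetical minimal CephBlockPool CR (sketch of what a.yaml might contain).
# The name matches the replicapool1 pool seen in "ceph osd pool ls" above;
# the spec values below are assumed, not taken from the report.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool1
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3
```

Deleting such a CR (oc delete -f a.yaml) is expected to make the Rook operator remove the backing Ceph pool, which is the behavior verified above.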