Description of problem:
A user needed to clear the data in their Ceph cluster and re-initialize it from scratch. Afterwards, they tried to remove a snapshot record left in the Cinder DB, but both the delete and unmanage operations kept failing. The problem appears to be that the RBD driver tries to access the source volume first, and the operation fails when the source volume no longer exists.

Version-Release number of selected component (if applicable):
RHOSP 16.1.6

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and a snapshot
2. Delete the rbd images in the backend Ceph pool
3. Delete/unmanage the snapshot

Actual results:
The snapshot ends up in error_deleting/error_unmanaging status

Expected results:
The snapshot is deleted/unmanaged

Additional info:
We solved the issue by creating an empty dummy image in the backend:

# rbd create --size 1 <pool>/volume-<volume id>

We also observe that deleting a volume gets stuck in deleting status when the actual image doesn't exist in rbd. I'll check some details and open a separate bug if needed.
There's a common root cause here, so there's no need for a separate bug (per comment #2). The RBD driver does have provision for gracefully handling a request to delete a volume when it's already deleted on the backend, see [1]. The driver also has similar code to handle deleting a snapshot when the snapshot no longer exists [2]. However, what's missing is code to handle the error raised when deleting a snapshot whose parent volume doesn't exist. We need to catch failures that occur at [3].

[1] https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L1192
[2] https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L1315
[3] https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L1289

BTW, deleting the volume fails because of the error that occurs when it first tries to delete the volume's snapshots.
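For illustration, here is a minimal sketch of the missing error handling. It does not use the real librbd bindings; `ImageNotFound`, `FakeBackend`, and `delete_snapshot` are stand-ins I've made up to show the pattern of treating a missing parent volume as a successful snapshot deletion, which is what the driver would need to do at [3].

```python
class ImageNotFound(Exception):
    """Stand-in for the rbd.ImageNotFound exception from librbd."""


class FakeBackend:
    """Minimal stand-in for a Ceph pool: maps image names to snapshot lists."""

    def __init__(self, images):
        self.images = images

    def open(self, name):
        # Opening a nonexistent image raises, just like the real bindings.
        if name not in self.images:
            raise ImageNotFound(name)
        return self.images[name]


def delete_snapshot(backend, volume_name, snap_name):
    """Delete a snapshot, treating a missing parent volume as success.

    This mirrors the fix the driver needs: if the source volume can't
    be opened because it's gone, the snapshot is gone too, so the
    operation should succeed rather than leave the snapshot in
    error_deleting status.
    """
    try:
        snaps = backend.open(volume_name)
    except ImageNotFound:
        # Parent volume already deleted on the backend; nothing to do.
        return
    if snap_name in snaps:
        snaps.remove(snap_name)


backend = FakeBackend({"volume-abc": ["snap-1"]})
delete_snapshot(backend, "volume-abc", "snap-1")      # normal path
delete_snapshot(backend, "volume-missing", "snap-1")  # parent gone: no error
```

With this pattern, step 3 of the reproducer would no longer error out, matching how the driver already swallows the already-deleted cases at [1] and [2].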
*** Bug 2117852 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 17.1.3 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:2741