Bug 1399741 - rbd snapshot delete fails if backend is missing file
Summary: rbd snapshot delete fails if backend is missing file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assignee: Eric Harney
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks: 1399733 1399760 1399763
 
Reported: 2016-11-29 15:56 UTC by Eric Harney
Modified: 2020-08-13 08:43 UTC
CC List: 5 users

Fixed In Version: openstack-cinder-7.0.3-2.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, an unhandled case in the Block Storage RBD driver prevented the deletion of snapshots whose backing snapshot no longer existed in the Ceph/RBD back end. This release fixes the issue, allowing snapshot deletion to succeed in such cases.
Clone Of: 1399733
Environment:
Last Closed: 2017-02-01 14:17:24 UTC
Target Upstream Version:
tshefi: automate_bug-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0227 0 normal SHIPPED_LIVE openstack-cinder bug fix advisory 2017-02-01 19:15:38 UTC

Description Eric Harney 2016-11-29 15:56:19 UTC
+++ This bug was initially created as a clone of Bug #1399733 +++

Description of problem:
rbd snapshot delete fails if backend is missing file

Is it possible to backport this patch [1] to RHOSP 5? It is a simple patch that simply passes if the file is missing.

[1] https://bugs.launchpad.net/cinder/+bug/1415905
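
For context, the behaviour the patch adds is essentially "treat a snapshot that is already gone from the back end as successfully deleted". Below is a minimal standalone sketch of that logic using the Ceph Python bindings directly, not the actual Cinder driver code; the function name, log message, and InvalidArgument handling are illustrative assumptions:

import logging

import rbd

LOG = logging.getLogger(__name__)


def delete_rbd_snapshot(ioctx, volume_name, snap_name):
    """Remove an RBD snapshot, tolerating one already missing from the
    back end instead of raising."""
    image = rbd.Image(ioctx, volume_name)
    try:
        try:
            image.unprotect_snap(snap_name)
        except rbd.InvalidArgument:
            # Snapshot exists but is not protected; nothing to unprotect.
            pass
        image.remove_snap(snap_name)
    except rbd.ImageNotFound:
        # The backing snapshot is already gone from the back end; treat the
        # delete as successful so Cinder can drop its database record.
        LOG.warning("Snapshot %s not found in backend, marking as deleted.",
                    snap_name)
    finally:
        image.close()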

Version-Release number of selected component (if applicable):


How reproducible:
Sometimes

Steps to Reproduce:
1. On a system without the patch, delete the snapshot's backing image from the Ceph back end (e.g. with "rbd snap rm").
2. Run "cinder snapshot-delete" on that snapshot.

Actual results:
Fails deleting the snapshot

Expected results:
Succeeds


Additional info:

Comment 2 Tzach Shefi 2016-12-28 16:57:49 UTC
Verified on RHOS 8 with openstack-cinder-7.0.3-2.el7ost.noarch and an external Ceph back end.

I had an older system (openstack-cinder-7.0.3-1.el7ost.noarch) already installed, so I first created a Cinder volume and snapshot there.

Then, using the following rbd commands, I deleted the snapshot's backing image directly on Ceph:

rbd -p volumes ls -l
rbd -p volumes snap unprotect vol...@...
rbd -p volumes snap rm vol...@...
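
(For reference, the same backend cleanup can be done through the Ceph Python bindings; this is just an illustrative sketch, with the pool name matching the setup above and hypothetical placeholder names standing in for the real volume/snapshot names reported by "rbd -p volumes ls -l".)

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')
    try:
        image = rbd.Image(ioctx, 'volume-<volume-id>')          # placeholder name
        try:
            image.unprotect_snap('snapshot-<snapshot-id>')      # rbd snap unprotect
            image.remove_snap('snapshot-<snapshot-id>')         # rbd snap rm
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()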

When I then attempted to delete the snapshot (still on the old version), I got the expected error_deleting status:

[stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------+------+
|                  ID                  |              Volume ID               |     Status     | Name | Size |
+--------------------------------------+--------------------------------------+----------------+------+------+
| 470c312d-881d-4330-99ae-b355893e0b9b | eb02c4e1-9be5-43b1-a187-3b026d640b0c | error_deleting |  -   |  1   |
+--------------------------------------+--------------------------------------+----------------+------+------+


I then updated Cinder to the latest version (7.0.3-2) and restarted the service.
When I attempted to re-delete the above snapshot, it failed (cause for another bug?):

[stack@undercloud-0 ~]$ cinder snapshot-delete 470c312d-881d-4330-99ae-b355893e0b9b
Delete for snapshot 470c312d-881d-4330-99ae-b355893e0b9b failed: Invalid snapshot: Volume Snapshot status must be available or error. (HTTP 400) (Request-ID: req-b21ccd92-3a6e-49c5-9e19-f13a26ec27d0)
ERROR: Unable to delete any of the specified snapshots.


Anyway, to verify this bz I created a new Cinder volume and snapshot,
and again deleted the snapshot's backing file on Ceph.
This time cinder snapshot-delete worked without errors; steps below:

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
| 66af1455-4c00-4404-b0a4-f6ca0bed8e84 | available |        -         | vol2 |  1   |      -      |  false   |    False    |             |
| eb02c4e1-9be5-43b1-a187-3b026d640b0c | available |        -         |  -   |  1   |      -      |  false   |    False    |             |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+


[stack@undercloud-0 ~]$ cinder snapshot-create 66af1455-4c00-4404-b0a4-f6ca0bed8e84


[stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------+------+
|                  ID                  |              Volume ID               |     Status     | Name | Size |
+--------------------------------------+--------------------------------------+----------------+------+------+
| 470c312d-881d-4330-99ae-b355893e0b9b | eb02c4e1-9be5-43b1-a187-3b026d640b0c | error_deleting |  -   |  1   |
| 66b47c35-7fff-4ed6-90fd-9ca9af38e84c | 66af1455-4c00-4404-b0a4-f6ca0bed8e84 |   available    |  -   |  1   |
+--------------------------------------+--------------------------------------+----------------+------+------+

Back on Ceph again, I deleted the backing file of snapshot 66b47...

[stack@undercloud-0 ~]$ cinder snapshot-delete 66b47c35-7fff-4ed6-90fd-9ca9af38e84c

[stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------+------+
|                  ID                  |              Volume ID               |     Status     | Name | Size |
+--------------------------------------+--------------------------------------+----------------+------+------+
| 470c312d-881d-4330-99ae-b355893e0b9b | eb02c4e1-9be5-43b1-a187-3b026d640b0c | error_deleting |  -   |  1   |
+--------------------------------------+--------------------------------------+----------------+------+------+

Only the initial failed snapshot remains, meaning our new/second snapshot was deleted successfully even though its backing file had been removed beforehand. Another bug squashed :)

Comment 6 errata-xmlrpc 2017-02-01 14:17:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0227.html

