Bug 2050163

Summary: [RFE] Cinder doesn't keep track of the volumes from snapshots
Product: Red Hat OpenStack              Reporter: Eduard Barrera <ebarrera>
Component: openstack-cinder             Assignee: Cinder Bugs List <cinder-bugs>
Status: CLOSED DUPLICATE                QA Contact: Tzach Shefi <tshefi>
Severity: unspecified                   Docs Contact: Andy Stillman <astillma>
Priority: unspecified
Version: 16.2 (Train)                   CC: ltoscano, senrique
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Last Closed: 2022-02-03 13:31:52 UTC    Type: Bug

Description Eduard Barrera 2022-02-03 11:55:40 UTC
Description of problem:

If you want to delete a snapshot that has a read-write clone, you can't, and
Cinder doesn't give any information as to why: you have to check the
logs to find out what is going on, because Cinder doesn't expose the
dependency anywhere.

Example:


(overcloud) [stack@undercloud-0 ~]$ cinder list 
                                    
| da53019a-346e-4115-960d-0c791c9d84a7 | available | 03XXXXXX-vol          | 1    | tripleo     | false    |                                      


(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list 

| e499f160-d080-4077-bb8e-4ab41ba338aa | da53019a-346e-4115-960d-0c791c9d84a7 | available | 03XXXXXX-vol-snap | 1


I want to delete 03XXXXXX-vol-snap

$ openstack volume snapshot delete e499f160-d080-4077-bb8e-4ab41ba338aa
$ echo $?
0
$ openstack volume snapshot show  e499f160-d080-4077-bb8e-4ab41ba338aa
+--------------------------------------------+--------------------------------------+
| Field                                      | Value                                |
+--------------------------------------------+--------------------------------------+
| created_at                                 | 2022-02-03T09:18:17.000000           |
| description                                | None                                 |
| id                                         | e499f160-d080-4077-bb8e-4ab41ba338aa |
| name                                       | 03XXXXXX-vol-snap                    |
| os-extended-snapshot-attributes:progress   | 100%                                 |
| os-extended-snapshot-attributes:project_id | d7ad92b9537c4cf5966ad66012005ff7     |
| properties                                 |                                      |
| size                                       | 1                                    |
| status                                     | available                            |
| updated_at                                 | 2022-02-03T11:37:43.000000           |
| volume_id                                  | da53019a-346e-4115-960d-0c791c9d84a7 |
+--------------------------------------------+--------------------------------------+
There is no information about the dependent clones.


We are forced to check the logs:

cinder-volume.log:
2022-02-03 11:37:43.040 59 INFO cinder.volume.drivers.rbd [req-02127f65-498a-405b-b27d-21bb13b9fe0b 92452da4acce4c49b3011ec1ad71d544 d7ad92b9537c4cf5966ad66012005ff7 - default default] Image volumes/volume-7051625d-2134-41fc-a398-c3c862e09a90 is dependent on the snapshot snapshot-e499f160-d080-4077-bb8e-4ab41ba338aa.
2022-02-03 11:37:43.053 59 ERROR cinder.volume.manager [req-02127f65-498a-405b-b27d-21bb13b9fe0b 92452da4acce4c49b3011ec1ad71d544 d7ad92b9537c4cf5966ad66012005ff7 - default default] Delete snapshot failed, due to snapshot busy.: cinder.exception.SnapshotIsBusy: deleting snapshot snapshot-e499f160-d080-4077-bb8e-4ab41ba338aa that has dependent volumes

Or check ceph:

# rbd children volumes/volume-da53019a-346e-4115-960d-0c791c9d84a7@snapshot-e499f160-d080-4077-bb8e-4ab41ba338aa
volumes/volume-7051625d-2134-41fc-a398-c3c862e09a90
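The behaviour behind those log lines can be summarised with a small model: the driver lists the children of the RBD snapshot and raises SnapshotIsBusy if any exist, and flattening a child removes the dependency. A minimal, hypothetical Python sketch (in-memory toy, not the real driver code; all names are illustrative):

```python
class SnapshotIsBusy(Exception):
    """Raised when a snapshot still has dependent clone volumes."""


# Toy model: snapshot name -> names of clone volumes depending on it
# (in the real backend, this mapping lives in Ceph, not in Cinder's DB).
children = {
    "snapshot-e499f160": ["volume-7051625d"],
}


def delete_snapshot(name):
    """Refuse the delete while dependent clones exist, as the RBD driver does."""
    dependents = children.get(name, [])
    if dependents:
        raise SnapshotIsBusy(
            f"deleting snapshot {name} that has dependent volumes: {dependents}")
    children.pop(name, None)
    return True


def flatten(volume, snapshot):
    """Flattening a clone removes its dependency on the parent snapshot."""
    children[snapshot].remove(volume)
```

With this model, deleting `snapshot-e499f160` fails until `volume-7051625d` is flattened, which mirrors the sequence shown in the logs above.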

Suggestions:
- keep track of the volumes from snapshots
- ability to flatten volumes from snapshots so they become independent
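As a manual workaround on the Ceph backend, the dependent clone can be flattened with the standard rbd CLI so it no longer depends on the snapshot (pool, volume, and snapshot names taken from the output above; run against the cluster backing Cinder):

```
# Find the clones that depend on the snapshot (as shown above)
rbd children volumes/volume-da53019a-346e-4115-960d-0c791c9d84a7@snapshot-e499f160-d080-4077-bb8e-4ab41ba338aa

# Flatten the dependent clone so it becomes a fully independent image
rbd flatten volumes/volume-7051625d-2134-41fc-a398-c3c862e09a90

# The snapshot delete should now succeed from Cinder
openstack volume snapshot delete e499f160-d080-4077-bb8e-4ab41ba338aa
```

Note that flattening copies the shared data into the clone, so it can take time and consume extra space on large volumes.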


Version-Release number of selected component (if applicable):


How reproducible:
Always; reproduced on OSP 13 and OSP 16.1

Steps to Reproduce:
1. Create a volume, a snapshot of the volume, and a volume from the snapshot
2. Try to delete the snapshot

Actual results:
There is no way to find the volume created from the snapshot when we want to delete the snapshot.
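For completeness, volumes created from a snapshot do record their origin in a snapshot_id field, so on deployments whose API supports generalized filtering the dependents can in principle be listed from the Cinder side (a sketch; --filters support depends on the cinderclient/API microversion in use):

```
# List volumes created from the snapshot (requires filtering support)
cinder list --filters snapshot_id=e499f160-d080-4077-bb8e-4ab41ba338aa

# Or inspect a single volume for its parent snapshot
cinder show 7051625d-2134-41fc-a398-c3c862e09a90 | grep snapshot_id
```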

Comment 1 Luigi Toscano 2022-02-03 13:18:39 UTC
Isn't this a duplicate of bug 2021562 (and bug 1997715, and related to bug 1989680)? The feature was originally enabled (see bug 1764324) but it had to be reverted due to some unexpected side effects.
Anyway, Cinder does keep track of volumes from snapshots; otherwise the delete would not error out.

Comment 2 Eduard Barrera 2022-02-03 13:31:52 UTC

*** This bug has been marked as a duplicate of bug 1989680 ***

Comment 3 Red Hat Bugzilla 2023-09-15 01:19:22 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 500 days.