Bug 1613038

Summary: [RFE][Cinder] RBD driver support to get manageable snapshots
Product: Red Hat OpenStack
Reporter: Sean Cohen <scohen>
Component: openstack-cinder
Assignee: Eric Harney <eharney>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: medium
Docs Contact: Kim Nylander <knylande>
Priority: medium
Version: 14.0 (Rocky)
CC: gcharot, gregraka, mabrams, srevivo, tshefi
Target Milestone: Upstream M1
Keywords: FutureFeature, TestOnly, Triaged
Target Release: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
URL: https://blueprints.launchpad.net/cinder/+spec/ceph-list-manageable-volumes-and-snapshots
Fixed In Version: openstack-cinder-14.0.1-0.20190420004000.84d3d12.el8ost
Doc Type: Release Note
Doc Text:
The Block Storage service (cinder) command, "snapshot-manageable-list," now lists the snapshots on the back end for Red Hat Ceph RADOS block devices (RBD).
Last Closed: 2019-09-21 11:16:49 UTC
Type: Bug

Description Sean Cohen 2018-08-06 20:14:39 UTC
This feature allows the Cinder RBD driver to list the manageable volumes and snapshots on the backend, making this flow more user-friendly.
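
For reference, the listing operations covered by the blueprint are exposed through the cinder client from API microversion 3.8 onward. A minimal sketch of the two calls (the <host> placeholder stands for a backend host string such as hostgroup@tripleo_ceph in the verification below; it is not taken from this report):

cinder --os-volume-api-version 3.8 manageable-list <host>
cinder --os-volume-api-version 3.8 snapshot-manageable-list <host>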

Comment 8 Tzach Shefi 2019-08-07 09:07:23 UTC
Verified on:
openstack-cinder-14.0.1-0.20190712060430.0996f0a.el8ost.noarch


1. Create a volume (RBD-backed):
(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --image cirros
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-08-07T08:58:48.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 5d7e49c661ae42e498dcbbfa8098a9f8     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2019-08-07T08:58:48.000000           |
| user_id                        | f32508264e86488abd194ef9ded8a4b2     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+


2. Create a snapshot of said volume:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 --name snap1
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2019-08-07T08:59:38.236502           |
| description | None                                 |
| id          | 69deb14a-36f5-48e0-804c-c1b908cf0711 |
| metadata    | {}                                   |
| name        | snap1                                |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 |
+-------------+--------------------------------------+


3. Check snapshot status:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+-------+------+
| ID                                   | Volume ID                            | Status    | Name  | Size |
+--------------------------------------+--------------------------------------+-----------+-------+------+
| 69deb14a-36f5-48e0-804c-c1b908cf0711 | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 | available | snap1 | 1    |
+--------------------------------------+--------------------------------------+-----------+-------+------+

4. Note this is only supported from API microversion 3.8 onward; we can now see that RBD snapshots also show up as managed/manageable.

(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id                            | extra_info |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | False          | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | already managed | 69deb14a-36f5-48e0-804c-c1b908cf0711 | -          |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+

Looks good.

Comment 10 Tzach Shefi 2019-08-08 05:55:55 UTC
Added another test that slipped my mind before.

1. Unmanage the snapshot from before:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-unmanage 69deb14a-36f5-48e0-804c-c1b908cf0711                                                                                                                                           
                                  
OpenStack no longer shows any snapshots:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list                                                                                                                                                                                     
+----+-----------+--------+------+------+                                                                                                                                                                                                    
| ID | Volume ID | Status | Name | Size |                                                                                                                                                                                                    
+----+-----------+--------+------+------+                                                                                                                                                                                                    
+----+-----------+--------+------+------+                                                                                                                                                                                                    

2. Recheck snapshot-manageable-list:

(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+                                     
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id | extra_info |                                     
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+                                     
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |                                     
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+ 

Meaning we are able to see snapshots that still exist on the backend but have been unmanaged and are no longer part of OpenStack.
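
As a follow-up (not executed as part of this verification), such an unmanaged snapshot could presumably be re-adopted into Cinder with snapshot-manage, referencing it by its backend source-name; the --name value here is illustrative:

cinder --os-volume-api-version 3.8 snapshot-manage --name snap1-readopted 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711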


Another test, this time with a snapshot initiated from Ceph.

1. Find the volume
bash-4.4# rbd -p volumes ls
volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6

2. Find the snapshot we created before:
bash-4.4# rbd snap ls volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6
SNAPID NAME                                          SIZE  PROTECTED TIMESTAMP                
     4 snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711 1 GiB           Wed Aug  7 08:59:38 2019 


3. Create a snapshot of said volume, this time from Ceph's side:
rbd snap create volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6@snap2

4. List the snapshots again:
bash-4.4# rbd snap ls volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6
SNAPID NAME                                          SIZE  PROTECTED TIMESTAMP                
     4 snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711 1 GiB           Wed Aug  7 08:59:38 2019 
     5 snap2                                         1 GiB           Thu Aug  8 05:49:29 2019
We see snap2 has been created on Ceph's side.

5. Returning to OpenStack, we should see these two snapshots; as expected, both now appear:
(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id | extra_info |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |
| {'source-name': 'snap2'}                                         | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
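
From here, snap2 could be brought under Cinder's management with snapshot-manage. A hypothetical invocation (not run during this verification; the --name value is illustrative):

cinder --os-volume-api-version 3.8 snapshot-manage --name snap2 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 snap2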

Verified now even better :)

Comment 13 errata-xmlrpc 2019-09-21 11:16:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811