Bug 1613038 - [RFE][Cinder] RBD driver support to get manageable snapshots
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: Upstream M1
Target Release: 15.0 (Stein)
Assignee: Eric Harney
QA Contact: Tzach Shefi
Docs Contact: Kim Nylander
URL: https://blueprints.launchpad.net/cind...
Whiteboard:
Depends On:
Blocks:
Reported: 2018-08-06 20:14 UTC by Sean Cohen
Modified: 2019-09-26 10:45 UTC (History)
CC List: 5 users

Fixed In Version: openstack-cinder-14.0.1-0.20190420004000.84d3d12.el8ost
Doc Type: Release Note
Doc Text:
The Block Storage service (cinder) command, "snapshot-manageable-list," now lists the snapshots on the back end for Red Hat Ceph RADOS block devices (RBD).
Clone Of:
Environment:
Last Closed: 2019-09-21 11:16:49 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 552936 0 None MERGED RBD: support to get manageable snapshots 2020-03-20 20:00:12 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:18:35 UTC

Description Sean Cohen 2018-08-06 20:14:39 UTC
This feature allows the Cinder RBD driver to list the manageable snapshots on the back end, making this flow more user-friendly.
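As a rough illustration (not the actual RBD driver code), the driver-side API behind this feature, get_manageable_snapshots, returns entries shaped like the rows in the verification output further down. The helper name build_entry is hypothetical; the field names follow the manageable-resource listing shown in this bug:

```python
# Hypothetical sketch of one entry in a manageable-snapshots listing.
# Field names mirror the columns of `cinder snapshot-manageable-list`;
# this is NOT the real RBD driver implementation.

def build_entry(snap_name, volume_name, size_gb, cinder_id=None):
    """Build one manageable-snapshot entry.

    A snapshot Cinder already tracks (cinder_id is set) is not safe to
    manage again; a back-end snapshot unknown to Cinder is.
    """
    already_managed = cinder_id is not None
    return {
        'reference': {'source-name': snap_name},
        'size': size_gb,
        'safe_to_manage': not already_managed,
        'reason_not_safe': 'already managed' if already_managed else None,
        'cinder_id': cinder_id,
        'source_reference': {'source-name': volume_name},
        'extra_info': None,
    }

# The two cases seen in the verification below: a Cinder-managed snapshot
# and a back-end-only snapshot.
managed = build_entry(
    'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711',
    'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6',
    1,
    cinder_id='69deb14a-36f5-48e0-804c-c1b908cf0711')
unmanaged = build_entry(
    'snap2', 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6', 1)
print(managed['safe_to_manage'], unmanaged['safe_to_manage'])  # False True
```

This matches the verification below: a snapshot already in the Cinder database is reported with safe_to_manage=False and reason "already managed", while an unmanaged one is safe to adopt.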

Comment 8 Tzach Shefi 2019-08-07 09:07:23 UTC
Verified on:
openstack-cinder-14.0.1-0.20190712060430.0996f0a.el8ost.noarch


1. Create a volume (rbd backed)
(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --image cirros
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-08-07T08:58:48.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 5d7e49c661ae42e498dcbbfa8098a9f8     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2019-08-07T08:58:48.000000           |
| user_id                        | f32508264e86488abd194ef9ded8a4b2     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+


2. Create a snapshot of said volume:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 --name snap1
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2019-08-07T08:59:38.236502           |
| description | None                                 |
| id          | 69deb14a-36f5-48e0-804c-c1b908cf0711 |
| metadata    | {}                                   |
| name        | snap1                                |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 |
+-------------+--------------------------------------+


3. Check snapshot status:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+-------+------+
| ID                                   | Volume ID                            | Status    | Name  | Size |
+--------------------------------------+--------------------------------------+-----------+-------+------+
| 69deb14a-36f5-48e0-804c-c1b908cf0711 | 715a5c7c-b5ae-4a6e-a6b7-514329b97aa6 | available | snap1 | 1    |
+--------------------------------------+--------------------------------------+-----------+-------+------+

4. Note that this is only supported from API microversion 3.8 onward; with it, we see that RBD snapshots now also show up as managed/manageable.

(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id                            | extra_info |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | False          | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | already managed | 69deb14a-36f5-48e0-804c-c1b908cf0711 | -          |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+--------------------------------------+------------+

Looks good.

Comment 10 Tzach Shefi 2019-08-08 05:55:55 UTC
Added another test that I had overlooked before.

1. Unmanage the snapshot from before:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-unmanage 69deb14a-36f5-48e0-804c-c1b908cf0711

OpenStack no longer shows any snapshots:
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+----+-----------+--------+------+------+
| ID | Volume ID | Status | Name | Size |
+----+-----------+--------+------+------+
+----+-----------+--------+------+------+

2. Recheck snapshot-manageable-list

(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id | extra_info |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+

This means we can see snapshots that still exist on the back end but are no longer part of OpenStack: the unmanaged snapshot is now reported as safe_to_manage=True with no cinder_id.


Another test, this time with a snapshot initiated from the Ceph side.

1. Find the volume:
bash-4.4# rbd -p volumes ls
volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6

2. Find the snapshot we created before:
bash-4.4# rbd snap ls volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6
SNAPID NAME                                          SIZE  PROTECTED TIMESTAMP                
     4 snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711 1 GiB           Wed Aug  7 08:59:38 2019 


3. Create a snapshot of the same volume, this time from Ceph's side:
rbd snap create volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6@snap2

4. List the snapshots again:
bash-4.4# rbd snap ls volumes/volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6
SNAPID NAME                                          SIZE  PROTECTED TIMESTAMP                
     4 snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711 1 GiB           Wed Aug  7 08:59:38 2019 
     5 snap2                                         1 GiB           Thu Aug  8 05:49:29 2019
We see snap2 has been created on Ceph's side.

5. Returning to OpenStack, we expect to see both snapshots, and indeed both now appear:
(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 snapshot-manageable-list hostgroup@tripleo_ceph
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| reference                                                        | size | safe_to_manage | source_reference                                               | reason_not_safe | cinder_id | extra_info |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
| {'source-name': 'snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'} | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |
| {'source-name': 'snap2'}                                         | 1    | True           | {'source-name': 'volume-715a5c7c-b5ae-4a6e-a6b7-514329b97aa6'} | -               | -         | -          |
+------------------------------------------------------------------+------+----------------+----------------------------------------------------------------+-----------------+-----------+------------+
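One detail worth noting in the listing above: the Cinder-created snapshot follows the 'snapshot-&lt;uuid&gt;' naming convention, while the Ceph-native snap2 has an arbitrary name. A small sketch of recognizing that convention (the real driver determines "already managed" by looking snapshots up in the Cinder database, not by name; the regex and function name here are illustrative assumptions):

```python
import re

# Illustrative only: match the 'snapshot-<uuid>' names that Cinder gives
# to the RBD snapshots it creates; Ceph-native snapshots (e.g. 'snap2')
# typically do not follow this pattern.
CINDER_SNAP_RE = re.compile(
    r'^snapshot-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}'
    r'-[0-9a-f]{4}-[0-9a-f]{12}$')

def looks_cinder_created(snap_name):
    """Heuristic: does this RBD snapshot name follow Cinder's convention?"""
    return bool(CINDER_SNAP_RE.match(snap_name))

print(looks_cinder_created('snapshot-69deb14a-36f5-48e0-804c-c1b908cf0711'))  # True
print(looks_cinder_created('snap2'))  # False
```

Both kinds of snapshot are listed as manageable once unmanaged or unknown to Cinder, which is exactly what the output above shows.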

Verified now even better :)

Comment 13 errata-xmlrpc 2019-09-21 11:16:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

