Bug 1790752 - [RFE][cinder] Add Ceph incremental backup support
Summary: [RFE][cinder] Add Ceph incremental backup support
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Cinder Bugs List
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard:
Depends On: 1375207 1561862
Blocks: 1463058 1463059 1463060 1463061 1503352 1710946 1761778
 
Reported: 2020-01-14 05:55 UTC by Giulio Fidente
Modified: 2020-03-11 13:50 UTC
CC List: 24 users

Fixed In Version: openstack-cinder-15.0.1-0.20191114132949.b91f514.el8ost
Doc Type: Enhancement
Doc Text:
Previously, when using Red Hat Ceph Storage as a back end for both the Block Storage service (cinder) volumes and backups, any attempt to perform a full backup--after the first full backup--instead resulted in an incremental backup without any warning. In Red Hat OpenStack Platform 16.0.1, the fix for this issue is fully supported.
Clone Of: 1375207
Environment:
Last Closed: 2020-02-20 11:40:21 UTC
Target Upstream Version:
Embargoed:



Comment 12 Lon Hohberger 2020-02-13 11:40:09 UTC
According to our records, this should be resolved by openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.  This build is available now.

Comment 14 Tzach Shefi 2020-02-19 07:47:37 UTC
Verified on:
openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.noarch


Created a full backup of a Ceph-backed volume (containing one file):
(overcloud) [stack@undercloud-0 ~]$ cinder backup-show 42608c38-5143-4681-80b8-c35fef9f90da
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T05:24:05.000000           |
| data_timestamp        | 2020-02-19T05:24:05.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 42608c38-5143-4681-80b8-c35fef9f90da |
| is_incremental        | False                                |  -> expected, since this is the first backup.
| name                  | FullBackupA                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T05:24:08.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+


Restored this backup to a new target volume and compared the two volumes' contents, which were identical.

Added a file to volumeA and created an incremental backup.
Restored it to a new empty target volume.
Again, both volumes' contents were identical.
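The "compare contents" step above can be sketched as a checksum comparison of the two volumes' data. A minimal illustration, assuming the restored volumes are accessible as files or block device paths; the helper names are hypothetical and not part of the actual verification tooling:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def volumes_identical(path_a, path_b):
    """True if the two volume images have identical contents."""
    return sha256_of(path_a) == sha256_of(path_b)
```

Hashing in fixed-size chunks keeps memory use constant regardless of volume size.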

Added a third file to volumeA and this time created a full backup (previously this would have silently produced an incremental backup).
The resulting backup is now full, as requested/expected.
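The behavior being verified here is that the driver now honors the requested backup type: before the fix, the Ceph backup driver produced an incremental backup whenever a base backup existed, regardless of the request. A minimal sketch of the corrected decision, with hypothetical function and parameter names rather than the actual driver code:

```python
def backup_type(incremental_requested, base_backup_exists):
    """Decide which kind of backup to produce (illustrative only)."""
    if incremental_requested:
        if not base_backup_exists:
            # Cinder rejects an incremental request with no prior backup.
            raise ValueError("no existing backup to base an incremental on")
        return "incremental"
    # Full backup requested: honor it even when a base backup exists.
    # (Before the fix, the Ceph driver effectively returned "incremental"
    # here whenever base_backup_exists was True.)
    return "full"
```

This matches the verification above: the first backup and the third (explicit full) backup report is_incremental False, while the second (explicit incremental) reports True.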

(overcloud) [stack@undercloud-0 ~]$ cinder backup-show ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T07:00:01.000000           |
| data_timestamp        | 2020-02-19T07:00:01.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6 |
| is_incremental        | False                                |  --> expected: full, not incremental
| name                  | FullBackup2                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T07:02:00.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+

Restored this backup to a new empty volume and compared contents; they were identical.

Both incremental and full backups on the Ceph backend now work as expected:
a user can request a full or an incremental backup, and the resulting backup is of the requested type.

Automation run also passed Cinder backup tests without errors.

