
Bug 1790752

Summary: [RFE][cinder] Add Ceph incremental backup support
Product: Red Hat OpenStack
Reporter: Giulio Fidente <gfidente>
Component: openstack-cinder
Assignee: Cinder Bugs List <cinder-bugs>
Status: CLOSED CURRENTRELEASE
QA Contact: Tzach Shefi <tshefi>
Severity: high
Docs Contact: Chuck Copello <ccopello>
Priority: high
Version: unspecified
CC: abishop, amcleod, brian.rosmaita, ccopello, cschwede, dcadzow, eharney, gcharot, geguileo, gfidente, jvisser, lmarsh, ltoscano, mabrams, nchandek, pgrist, rhos-docs, sclewis, scohen, senrique, shrjoshi, srevivo, tshefi, tvignaud
Target Milestone: z1
Keywords: FutureFeature, TestOnly, Triaged
Target Release: 16.0 (Train on RHEL 8.1)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-cinder-15.0.1-0.20191114132949.b91f514.el8ost
Doc Type: Enhancement
Doc Text:
Previously, when using Red Hat Ceph Storage as a back end for both the Block Storage service (cinder) volumes and backups, any attempt to perform a full backup after the first full backup instead resulted in an incremental backup, without any warning. In Red Hat OpenStack Platform 16.0.1, the fix for this issue is fully supported. (A sketch of the back-end pairing this describes appears after the metadata below.)
Clone Of: 1375207
Last Closed: 2020-02-20 11:40:21 UTC
Bug Depends On: 1375207, 1561862    
Bug Blocks: 1463058, 1463059, 1463060, 1463061, 1503352, 1710946, 1761778    
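
The Doc Text above concerns deployments where Ceph backs both cinder volumes and cinder backups. A minimal cinder.conf sketch of that pairing follows; the section, pool, and user names are illustrative assumptions, not values taken from this bug:

[DEFAULT]
enabled_backends = ceph
# Ceph backup driver (full class path is required in current releases)
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

[ceph]
# RBD volume back end; pool/user names below are placeholders
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder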

Comment 12 Lon Hohberger 2020-02-13 11:40:09 UTC
According to our records, this should be resolved by openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.  This build is available now.

Comment 14 Tzach Shefi 2020-02-19 07:47:37 UTC
Verified on:
openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.noarch


Created a full backup of a Ceph-backed volume, volumeA, which contained one file.
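The create command itself is not recorded in this comment; it was presumably along these lines (a sketch; the backup name and volume ID are taken from the backup-show output below):

(overcloud) [stack@undercloud-0 ~]$ cinder backup-create --name FullBackupA a8c2ca8c-511b-4b12-9c6f-41423559433c
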
(overcloud) [stack@undercloud-0 ~]$ cinder backup-show 42608c38-5143-4681-80b8-c35fef9f90da
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T05:24:05.000000           |
| data_timestamp        | 2020-02-19T05:24:05.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 42608c38-5143-4681-80b8-c35fef9f90da |
| is_incremental        | False                                |  -> expected, since this is the first backup.
| name                  | FullBackupA                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T05:24:08.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+


Restored this backup to a new target volume and compared both volumes' contents; they were identical. A sketch of the restore invocation follows.
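The restore command is not quoted in the comment; presumably something like this (a sketch; the backup ID is from the output above, and when no --volume argument is given cinder restores into a new volume):

(overcloud) [stack@undercloud-0 ~]$ cinder backup-restore 42608c38-5143-4681-80b8-c35fef9f90da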

Added a file to volumeA and created an incremental backup.
Restored it to a new, empty target volume.
Again, both volumes had identical contents (commands sketched below).
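Plausible commands for this step (a sketch; --incremental is the cinder flag that requests an incremental backup, and "IncBackupA" is a hypothetical backup name):

(overcloud) [stack@undercloud-0 ~]$ cinder backup-create --incremental --name IncBackupA volumeA
(overcloud) [stack@undercloud-0 ~]$ cinder backup-restore IncBackupA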

Added a third file to volumeA and this time requested a full backup (before the fix, this would silently have created an incremental backup).
The resulting backup is now full, as requested and expected; a sketch of the invocation follows.
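Presumably invoked along these lines (a sketch; omitting --incremental requests a full backup, and the backup name and volume ID are taken from the output below):

(overcloud) [stack@undercloud-0 ~]$ cinder backup-create --name FullBackup2 a8c2ca8c-511b-4b12-9c6f-41423559433c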

(overcloud) [stack@undercloud-0 ~]$ cinder backup-show ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T07:00:01.000000           |
| data_timestamp        | 2020-02-19T07:00:01.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6 |
| is_incremental        | False                                | --> expected: full, not incremental
| name                  | FullBackup2                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T07:02:00.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+

Restored this backup to a new, empty volume and compared the contents; they were identical (one possible comparison method is sketched below).
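The comparison step is not shown in the comment. One way it might be done is to attach both volumes to an instance, mount them, and verify checksums (a sketch; the device paths and mount points are hypothetical):

$ sudo mount /dev/vdb1 /mnt/src
$ sudo mount /dev/vdc1 /mnt/dst
$ find /mnt/src -type f -exec md5sum {} + | sed 's|/mnt/src|/mnt/dst|' | md5sum -c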

Both incremental and full backups on the Ceph backend now work as expected.
A user can create full or incremental backups, and they indeed result in full or incremental backups, respectively.

The automation run also passed the Cinder backup tests without errors.