Bug 1375207

Summary: [RFE][cinder] Add Ceph incremental backup support
Product: Red Hat OpenStack
Reporter: Sean Cohen <scohen>
Component: openstack-cinder
Assignee: Sofia Enriquez <senrique>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: high
Docs Contact: RHOS Documentation Team <rhos-docs>
Priority: high
Version: 16.0 (Train)
CC: abishop, astupnik, brian.rosmaita, ccopello, cschwede, eharney, gcharot, geguileo, gfidente, gregraka, lmarsh, mabrams, nchandek, pgrist, sclewis, senrique, srevivo, tshefi, tvignaud
Target Milestone: rc
Keywords: FutureFeature, TechPreview, Triaged
Target Release: 16.0 (Train on RHEL 8.1)
Flags: tshefi: automate_bug+
       scohen: needinfo+
       senrique: needinfo-
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-cinder-15.0.1-0.20191114132949.b91f514.el8ost
Doc Type: Technology Preview
Doc Text:
Previously, when using Red Hat Ceph Storage as a back end for both Block Storage service (cinder) volumes and backups, any attempt to perform a full backup after the first full backup instead resulted in an incremental backup, without any warning. In Red Hat OpenStack Platform 16.0, this issue is addressed as a technology preview.
Story Points: ---
Clone Of:
Clones: 1463058, 1463059, 1463060, 1463061, 1710946, 1790752 (view as bug list)
Environment:
Last Closed: 2020-02-06 14:37:21 UTC
Type: Bug
Bug Depends On: 1561862    
Bug Blocks: 1463058, 1463059, 1463060, 1463061, 1503352, 1710946, 1761778, 1790752    

Description Sean Cohen 2016-09-12 13:22:52 UTC
"When I try to back up a volume (Ceph backend) via "cinder backup" to a second Ceph cluster, cinder creates a full backup each time instead of a differential backup.

mitaka release

cinder-backup 2:8.0.0-0ubuntu1 all Cinder storage service - Scheduler server
cinder-common 2:8.0.0-0ubuntu1 all Cinder storage service - common files
cinder-volume 2:8.0.0-0ubuntu1 all Cinder storage service - Volume server
python-cinder 2:8.0.0-0ubuntu1 all Cinder Python libraries

My steps are:
1. cinder backup-create a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a
2. cinder backup-create --incremental --force a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a

and this is what I have in the Ceph backup cluster:
rbd --cluster bak -p backups du
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.37cddcbf-4a18-4f44-927d-5e925b37755f 1024M 1024M
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.55e5c1a3-8c0c-4912-b98a-1ea7e6396f85 1024M 1024M"


The working hypothesis is that, since the Ceph backup driver code has not changed, this issue was introduced by the Mitaka work to decouple the backup and volume services.

See: https://bugs.launchpad.net/cinder/+bug/1578036
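In the Ceph backup driver, an incremental backup is expected to land as a diff on the base backup image rather than as a new full-size image, so the duplicated full backups can be spotted directly in the `rbd du` listing. A minimal sketch (parsing the output pasted above; on a live system, pipe `rbd --cluster bak -p backups du` in instead) of counting distinct backup images for the volume:

```shell
# Sketch: count distinct backup images for the volume in the rbd du output.
# Two separate full-size images (as pasted above) mean two full backups were
# taken; a correct incremental would not have created a second image.
rbd_du_listing='volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.37cddcbf-4a18-4f44-927d-5e925b37755f 1024M 1024M
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.55e5c1a3-8c0c-4912-b98a-1ea7e6396f85 1024M 1024M'
echo "$rbd_du_listing" | grep -c '^volume-a3bacaf5'
```

A count greater than one after an `--incremental` run is the reported symptom.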

Comment 3 Sean Cohen 2017-08-10 18:52:30 UTC
*** Bug 1461132 has been marked as a duplicate of this bug. ***

Comment 5 Sean Cohen 2017-10-25 13:17:26 UTC
*** Bug 1501637 has been marked as a duplicate of this bug. ***

Comment 9 Tzach Shefi 2018-03-29 05:17:14 UTC
Hi Alan,
Benny spotted an error which might block this RFE; could you, Eric, or another dev give it a look?
https://bugzilla.redhat.com/show_bug.cgi?id=1561862

Comment 10 Gorka Eguileor 2018-04-02 07:59:45 UTC
That seems to be a bug in RBD: http://tracker.ceph.com/issues/18844. Version 10.2.6 or 10.2.7 should have this fixed.

Comment 11 Tzach Shefi 2018-04-10 05:08:44 UTC
Updating status, we are blocked on this RFE! 
See comment #9 BZ 1561862

cinder backup-create <VolID> --incremental --force
on an attached volume doesn't set the incremental flag; it remains false!

The same command on a non-attached volume sets the --incremental flag.

We have reproduced this on separate RBD deployments.
On an LVM-backed system there is no issue.
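Whether the flag stuck can be read back from the backup record. A minimal sketch (the table row below is a hypothetical sample; on a live system, use `cinder backup-show <BackupID> | grep is_incremental`) of extracting the field from the CLI's table output:

```shell
# Sketch: pull the is_incremental value out of a cinder backup-show table row.
# The sample row is hypothetical stand-in data for the real command output.
sample_row='| is_incremental | False |'
echo "$sample_row" | awk -F'|' '{gsub(/ /, "", $3); print $3}'
```

For an attached volume backed by RBD this prints False even when --incremental was passed, which is the bug described above.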

Comment 15 Tzach Shefi 2018-04-24 14:46:08 UTC
As this doesn't work with attached volumes at the moment,
there's not much use for this RFE in its current state; moving back to dev to fix the issue.

Comment 16 Sean Cohen 2018-04-26 14:17:13 UTC
(In reply to Tzach Shefi from comment #15)
> As this doesn't work with attached volumes at the moment.
> Not much use for this RFE at current state, moving back to dev to fix issue.

Thanks Tzach,
Pushing out to OSP14 for proper review and fix.
Sean

Comment 27 Tzach Shefi 2018-12-27 11:29:34 UTC
Note to self: once the test plan is completed,
update the close-loop flags here to "+":
https://bugzilla.redhat.com/show_bug.cgi?id=1501637#c2

Comment 38 Sofia Enriquez 2019-12-04 21:10:36 UTC
*** Bug 1710946 has been marked as a duplicate of this bug. ***

Comment 46 errata-xmlrpc 2020-02-06 14:37:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283