Bug 1375207 - [RFE][cinder] Add Ceph incremental backup support
Summary: [RFE][cinder] Add Ceph incremental backup support
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Sofia Enriquez
QA Contact: Tzach Shefi
Docs Contact: RHOS Documentation Team
URL:
Whiteboard:
Duplicates: 1461132 1501637 1710946
Depends On: 1561862
Blocks: 1463058 1463059 1463060 1463061 1503352 1710946 1761778 1790752
 
Reported: 2016-09-12 13:22 UTC by Sean Cohen
Modified: 2023-09-20 22:28 UTC
CC List: 19 users

Fixed In Version: openstack-cinder-15.0.1-0.20191114132949.b91f514.el8ost
Doc Type: Technology Preview
Doc Text:
Previously, when using Red Hat Ceph Storage as a back end for both Block Storage service (cinder) volumes and backups, any attempt to perform a second or subsequent full backup instead resulted in an incremental backup, with no warning. In Red Hat OpenStack Platform 16.0, this issue is fixed as a technology preview.
Clone Of:
Clones: 1463058 1463059 1463060 1463061 1710946 1790752
Environment:
Last Closed: 2020-02-06 14:37:21 UTC
Target Upstream Version:
Embargoed:
tshefi: automate_bug+
scohen: needinfo+
senrique: needinfo-




Links
System                  ID              Priority  Status  Summary                                                                  Last Updated
Launchpad               1578036         None      None    None                                                                     2016-09-12 13:22:52 UTC
Launchpad               1810270         None      None    None                                                                     2019-01-15 16:09:19 UTC
OpenStack gerrit        511232          None      MERGED  Fix ceph incremental backup fail                                         2021-02-18 15:15:17 UTC
OpenStack gerrit        579606          None      MERGED  Fix RBD incremental backup                                               2021-02-18 15:15:18 UTC
OpenStack gerrit        627941          None      MERGED  Support Incremental Backup Completion In RBD                             2021-02-18 15:15:21 UTC
Red Hat Bugzilla        1501637         high      CLOSED  cinder backup is not working with incremental option with ceph backend  2021-03-11 15:59:21 UTC
Red Hat Issue Tracker   OSP-8516        None      None    None                                                                     2022-08-09 14:19:24 UTC
Red Hat Product Errata  RHEA-2020:0283  None      None    None                                                                     2020-02-06 14:39:51 UTC

Internal Links: 1501637

Description Sean Cohen 2016-09-12 13:22:52 UTC
"When I try to backup volume (Ceph backend) via "cinder backup" to 2nd Ceph cluster cinder create a full backup each time instead diff backup.

mitaka release

cinder-backup 2:8.0.0-0ubuntu1 all Cinder storage service - Scheduler server
cinder-common 2:8.0.0-0ubuntu1 all Cinder storage service - common files
cinder-volume 2:8.0.0-0ubuntu1 all Cinder storage service - Volume server
python-cinder 2:8.0.0-0ubuntu1 all Cinder Python libraries

My steps are:
1. cinder backup-create a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a
2. cinder backup-create --incremental --force a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a

and what I have in Ceph backup cluster:
rbd --cluster bak -p backups du
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.37cddcbf-4a18-4f44-927d-5e925b37755f 1024M 1024M
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.55e5c1a3-8c0c-4912-b98a-1ea7e6396f85 1024M 1024M"


The working hypothesis is that, since the Ceph backup driver code has not changed, this issue was introduced by the work done in Mitaka to decouple the backup and volume services.

See: https://bugs.launchpad.net/cinder/+bug/1578036
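
For context, the Ceph backup driver produces incrementals by transferring only the delta between two RBD snapshots. A minimal manual sketch of the equivalent transfer, assuming the source pool "volumes" and the backup cluster/pool "bak"/"backups" from the report above (the snapshot names backup-1/backup-2 are illustrative, not the driver's actual naming):

  # Snapshot the volume, then send only the changes since the previous
  # backup snapshot to the backup cluster. The destination image must
  # already have a backup-1 snapshot for import-diff to apply.
  rbd snap create volumes/volume-<id>@backup-2
  rbd export-diff --from-snap backup-1 volumes/volume-<id>@backup-2 - \
    | rbd --cluster bak import-diff - backups/volume-<id>.backup.base

When the incremental path works, "rbd --cluster bak -p backups du" shows the second backup consuming far less space than the full volume size, rather than the two full-size images listed above.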

Comment 3 Sean Cohen 2017-08-10 18:52:30 UTC
*** Bug 1461132 has been marked as a duplicate of this bug. ***

Comment 5 Sean Cohen 2017-10-25 13:17:26 UTC
*** Bug 1501637 has been marked as a duplicate of this bug. ***

Comment 9 Tzach Shefi 2018-03-29 05:17:14 UTC
Hi Alan, 
Benny spotted an error that might block this RFE. Could you, Eric, or another dev take a look?
https://bugzilla.redhat.com/show_bug.cgi?id=1561862

Comment 10 Gorka Eguileor 2018-04-02 07:59:45 UTC
That seems to be a bug in RBD: http://tracker.ceph.com/issues/18844 version 10.2.6 or 10.2.7 should have this fixed.
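
To confirm whether an affected build is in play, one quick check is to compare the client and daemon versions against that fix (osd.0 here is just an example daemon id; any daemon can be queried):

  ceph --version           # version of the local client tools/librbd
  ceph tell osd.0 version  # version of a specific cluster daemon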

Comment 11 Tzach Shefi 2018-04-10 05:08:44 UTC
Updating status: we are blocked on this RFE!
See comment #9 and BZ 1561862.

cinder backup-create <VolID> --incremental --force
on an attached volume doesn't set the incremental flag; it remains false!

The same command on a non-attached volume does set the --incremental flag.

We have reproduced this on separate RBD deployments.
On an LVM-backed system there is no issue.
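
A quick way to confirm the flag behavior (a sketch; <backup-id> is the id returned by backup-create):

  cinder backup-create <VolID> --incremental --force
  cinder backup-show <backup-id> | grep is_incremental
  # expected: is_incremental True; observed: False when the volume is attached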

Comment 15 Tzach Shefi 2018-04-24 14:46:08 UTC
As this doesn't work with attached volumes at the moment, this RFE is of little use in its current state; moving back to dev to fix the issue.

Comment 16 Sean Cohen 2018-04-26 14:17:13 UTC
(In reply to Tzach Shefi from comment #15)
> As this doesn't work with attached volumes at the moment, this RFE is of
> little use in its current state; moving back to dev to fix the issue.

Thanks Tzach,
Pushing out to OSP14 for proper review and fix.
Sean

Comment 27 Tzach Shefi 2018-12-27 11:29:34 UTC
Note to self: once the test plan is completed, update the close-loop flags here to "+":
https://bugzilla.redhat.com/show_bug.cgi?id=1501637#c2

Comment 38 Sofia Enriquez 2019-12-04 21:10:36 UTC
*** Bug 1710946 has been marked as a duplicate of this bug. ***

Comment 46 errata-xmlrpc 2020-02-06 14:37:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283

