"When I try to backup volume (Ceph backend) via "cinder backup" to 2nd Ceph cluster cinder create a full backup each time instead diff backup. mitaka release cinder-backup 2:8.0.0-0ubuntu1 all Cinder storage service - Scheduler server cinder-common 2:8.0.0-0ubuntu1 all Cinder storage service - common files cinder-volume 2:8.0.0-0ubuntu1 all Cinder storage service - Volume server python-cinder 2:8.0.0-0ubuntu1 all Cinder Python libraries My steps are: 1. cinder backup-create a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a 2. cinder backup-create --incremental --force a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a and what I have in Ceph backup cluster: rbd --cluster bak -p backups du volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.37cddcbf-4a18-4f44-927d-5e925b37755f 1024M 1024M volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.55e5c1a3-8c0c-4912-b98a-1ea7e6396f85 1024M 1024M" Working hypothesis is that since the ceph backup driver code has not changed, this issue was introduced by the work done in mitaka to decouple backup and volume services. See: https://bugs.launchpad.net/cinder/+bug/1578036
*** Bug 1461132 has been marked as a duplicate of this bug. ***
*** Bug 1501637 has been marked as a duplicate of this bug. ***
Hi Alan, Benny spotted an error which might block this RFE. Could you, Eric, or another dev take a look? https://bugzilla.redhat.com/show_bug.cgi?id=1561862
That seems to be a bug in RBD: http://tracker.ceph.com/issues/18844. Versions 10.2.6 and 10.2.7 should have this fixed.
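A quick way to check whether a backup host is running an affected librbd (a generic version check, not specific to this BZ):

    # Version of the Ceph client tools on the cinder-backup host
    ceph --version
    # Library packages cinder-backup actually links against (Ubuntu)
    dpkg -l librbd1 python-rbd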
Updating status: we are blocked on this RFE! See comment #9 of BZ 1561862. Running cinder backup-create <VolID> --incremental --force on an attached volume doesn't set the incremental flag; it remains false. The same command on a non-attached volume does set the incremental flag. We have reproduced this on separate RBD deployments; on an LVM-backed system there is no issue.
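For the record, the repro boils down to the following, assuming <VolID> is an in-use (attached) volume; the is_incremental field comes from cinder backup-show:

    # Request an incremental backup of an attached volume
    cinder backup-create --incremental --force <VolID>
    # Inspect the resulting backup record
    cinder backup-show <BackupID> | grep is_incremental
    # Observed: is_incremental is False for attached volumes, but True
    # for the same command against a detached volume.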
This doesn't work with attached volumes at the moment, so the RFE is of little use in its current state; moving back to dev to fix the issue.
(In reply to Tzach Shefi from comment #15)
> This doesn't work with attached volumes at the moment, so the RFE is of little use in its current state; moving back to dev to fix the issue.

Thanks Tzach. Pushing out to OSP14 for proper review and fix.

Sean
Note to self: once the test plan is completed, update the close-loop flags here to "+". https://bugzilla.redhat.com/show_bug.cgi?id=1501637#c2
*** Bug 1710946 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:0283