According to our records, this should be resolved by openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost. This build is available now.
Verified on: openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.noarch

Created a full backup of a Ceph-backed volume (contained 1 file):

(overcloud) [stack@undercloud-0 ~]$ cinder backup-show 42608c38-5143-4681-80b8-c35fef9f90da
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T05:24:05.000000           |
| data_timestamp        | 2020-02-19T05:24:05.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 42608c38-5143-4681-80b8-c35fef9f90da |
| is_incremental        | False                                | -> as expected, since it's the first backup
| name                  | FullBackupA                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T05:24:08.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+

Restored this backup to a new target volume and compared: both volumes' contents were identical.
Added a file to volumeA and created an incremental backup, then restored it to a new empty target volume. Again, both volumes had identical contents.
Added a third file to volumeA and this time requested a full backup (in the past this would have created an incremental backup). The resulting backup is now full, as requested/expected.
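The verification steps above can be sketched with the cinder CLI (the volume and backup names come from the log; the restore target volume ID is a hypothetical placeholder):

```shell
# Full backup of the Ceph-backed volume (first backup, so is_incremental is False)
cinder backup-create --name FullBackupA volumeA

# Restore into a new, empty target volume (placeholder ID)
cinder backup-restore FullBackupA --volume <target-volume-id>

# After adding a file to volumeA, create an incremental backup
cinder backup-create --name IncBackupA --incremental volumeA
```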
(overcloud) [stack@undercloud-0 ~]$ cinder backup-show ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-02-19T07:00:01.000000           |
| data_timestamp        | 2020-02-19T07:00:01.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | ef18d1f3-1005-43c4-91a7-56a6e8ee0aa6 |
| is_incremental        | False                                | --> as expected, full rather than incremental
| name                  | FullBackup2                          |
| object_count          | 0                                    |
| size                  | 5                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-02-19T07:02:00.000000           |
| volume_id             | a8c2ca8c-511b-4b12-9c6f-41423559433c |
+-----------------------+--------------------------------------+

Restored this backup to a new empty volume and compared: the contents are identical.

Both incremental and full backups on the Ceph backend now work as expected: a user can create full or incremental backups, and they indeed result in full or incremental backups, respectively. The automation run also passed the Cinder backup tests without errors.
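The behavior under test (a plain backup-create no longer silently becomes incremental when prior backups exist) can be re-checked with a sketch like this (names taken from the log; the grep is illustrative):

```shell
# Request a FULL backup even though earlier backups of volumeA exist;
# before the fix, this would have produced an incremental backup
cinder backup-create --name FullBackup2 volumeA

# Confirm the new backup is full, not incremental
cinder backup-show FullBackup2 | grep is_incremental
# expected to report is_incremental = False
```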