Description of problem:

Delete backup aborted, the backup service currently configured [cinder.backup.drivers.swift] is not the backup service that was used to create this backup [cinder.backup.drivers.swift.SwiftBackupDriver].

Version-Release number of selected component (if applicable):

# rpm -qa | grep cinder
puppet-cinder-12.4.1-2.el7ost.noarch
openstack-cinder-12.0.4-3.el7ost.noarch
python2-cinderclient-3.5.0-1.el7ost.noarch
python-cinder-12.0.4-3.el7ost.noarch

# yumdownloader puppet-cinder
puppet-cinder-12.4.1-2.el7ost.noarch.rpm

How reproducible:
100%

Steps to Reproduce:
1. Create volume backups in RHOSP12 and upgrade to RHOSP13.
2. Try to delete those volume backups in RHOSP13.

Actual results:
The backups cannot be deleted. ERROR: Delete backup aborted, the backup service currently configured [cinder.backup.drivers.swift] is not the backup service that was used to create this backup [cinder.backup.drivers.swift.SwiftBackupDriver]

Expected results:
The volume backups should be deleted.

Additional info:
We are able to create new volume backups and delete them. However, volume backups created before the RHOSP13 upgrade can no longer be deleted; attempting to delete them produces the error above. We suspect the upgrade changed how the backup service is recorded.
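The two bracketed strings in the error hint at the mechanism (this is an inference from the message, not confirmed against the cinder source here): the pre-upgrade release stored the full driver class path (`cinder.backup.drivers.swift.SwiftBackupDriver`) in each backup record, while the post-upgrade release records and compares only the module path (`cinder.backup.drivers.swift`), so a strict string comparison rejects every old record. A minimal sketch of a tolerant comparison, assuming this is the shape of the fix (the function name is hypothetical, not cinder's actual code):

```python
def backup_service_matches(configured, stored):
    """Return True when a stored backup-service string refers to the
    configured backup driver, accepting both the old class-path form
    and the new module-path form."""
    if stored == configured:
        return True
    # Old records kept the driver class, e.g.
    # 'cinder.backup.drivers.swift.SwiftBackupDriver'; strip the final
    # component and compare the remaining module path.
    module = stored.rpartition('.')[0]
    return module == configured

# Pre-upgrade record checked against the post-upgrade configuration:
print(backup_service_matches('cinder.backup.drivers.swift',
                             'cinder.backup.drivers.swift.SwiftBackupDriver'))
# True
```

With a check like this, backups written by either release delete cleanly under the new configuration.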
(overcloud) [stack@director ~]$ openstack volume backup show 8d0844b1-b8b8-494d-b5a4-26e1864897e9
+-----------------------+----------------------------------------------------+
| Field                 | Value                                              |
+-----------------------+----------------------------------------------------+
| availability_zone     | nova                                               |
| container             | volumebackups                                      |
| created_at            | 2019-01-06T16:01:28.000000                         |
| data_timestamp        | 2019-01-06T16:01:28.000000                         |
| description           |                                                    |
| fail_reason           | Delete backup aborted, the backup service          |
|                       | currently configured [cinder.backup.drivers.swift] |
|                       | is not the backup service that was used to create  |
|                       | this backup                                        |
|                       | [cinder.backup.drivers.swift.SwiftBackupDriver].   |
| has_dependent_backups | False                                              |
| id                    | 8d0844b1-b8b8-494d-b5a4-26e1864897e9               |
| is_incremental        | False                                              |
| name                  | backup-1                                           |
| object_count          | 1201                                               |
| size                  | 1200                                               |
| snapshot_id           | None                                               |
| status                | error                                              |
| updated_at            | 2019-03-09T12:35:57.000000                         |
| volume_id             | f9bb27c2-afb6-4b15-8200-9a1342cd4c21               |
+-----------------------+----------------------------------------------------+

Tried deleting the old backup; it entered the error state.
(myenvironment)[test@localhost LAB]$ openstack volume backup show 65dbb187-0770-4f8b-93dd-ca759683ed82
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | volumebackups                        |
| created_at            | 2018-10-29T13:38:05.000000           |
| data_timestamp        | 2018-10-29T13:38:05.000000           |
| description           |                                      |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 65dbb187-0770-4f8b-93dd-ca759683ed82 |
| is_incremental        | False                                |
| name                  | loreg_test_backup                    |
| object_count          | 16                                   |
| size                  | 15                                   |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2019-01-29T10:54:19.000000           |
| volume_id             | 66b64701-5f93-49e2-8bdb-e92ff93e7966 |
+-----------------------+--------------------------------------+

(myenvironment)[test@localhost LAB]$ openstack volume backup delete 65dbb187-0770-4f8b-93dd-ca759683ed82

(myenvironment)[test@localhost LAB]$ openstack volume backup show 65dbb187-0770-4f8b-93dd-ca759683ed82
+-----------------------+----------------------------------------------------+
| Field                 | Value                                              |
+-----------------------+----------------------------------------------------+
| availability_zone     | nova                                               |
| container             | volumebackups                                      |
| created_at            | 2018-10-29T13:38:05.000000                         |
| data_timestamp        | 2018-10-29T13:38:05.000000                         |
| description           |                                                    |
| fail_reason           | Delete backup aborted, the backup service          |
|                       | currently configured [cinder.backup.drivers.swift] |
|                       | is not the backup service that was used to create  |
|                       | this backup                                        |
|                       | [cinder.backup.drivers.swift.SwiftBackupDriver].   |
| has_dependent_backups | False                                              |
| id                    | 65dbb187-0770-4f8b-93dd-ca759683ed82               |
| is_incremental        | False                                              |
| name                  | test_backup                                        |
| object_count          | 16                                                 |
| size                  | 15                                                 |
| snapshot_id           | None                                               |
| status                | error                                              |
| updated_at            | 2019-03-11T15:10:31.000000                         |
| volume_id             | 66b64701-5f93-49e2-8bdb-e92ff93e7966               |
+-----------------------+----------------------------------------------------+
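For backups already stuck in this state, one workaround sometimes suggested (an assumption here, not a procedure from this report; take a database backup first) is to rewrite the stored service string in cinder's `backups` table so it matches the configured driver. The sketch below illustrates the idea against an in-memory SQLite stand-in; the real table lives in cinder's MariaDB database and has many more columns than this reduced schema:

```python
import sqlite3

# Simplified stand-in for cinder's 'backups' table (reduced schema).
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE backups (id TEXT PRIMARY KEY, service TEXT)")
conn.execute("INSERT INTO backups VALUES (?, ?)",
             ('65dbb187-0770-4f8b-93dd-ca759683ed82',
              'cinder.backup.drivers.swift.SwiftBackupDriver'))

# Rewrite the old class-path form to the module-path form that the
# post-upgrade cinder-backup service expects to find.
conn.execute(
    "UPDATE backups SET service = 'cinder.backup.drivers.swift' "
    "WHERE service = 'cinder.backup.drivers.swift.SwiftBackupDriver'")
conn.commit()

print(conn.execute("SELECT service FROM backups").fetchone()[0])
# cinder.backup.drivers.swift
```

After the equivalent UPDATE against the real database, the delete should no longer be rejected by the service-name check; with a fixed openstack-cinder package this manual step is unnecessary.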
Tested on: openstack-cinder-12.0.7-1.el7ost.noarch

Installed an OSP12 system (2019-05-17.1) with openstack-cinder-11.1.0-22.el7ost.noarch.

Created a cinder volume:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| a1d9ac5e-c1a8-48b6-8c20-d376504534b3 | in-use | Pansible_vol | 1    | -           | true     | c9ae4df6-e143-48de-b303-d0dc2301812e |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Created two backups of this volume:

(overcloud) [stack@undercloud-0 ~]$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+---------+------+--------------+---------------+
| ID                                   | Volume ID                            | Status    | Name    | Size | Object Count | Container     |
+--------------------------------------+--------------------------------------+-----------+---------+------+--------------+---------------+
| 4860cc55-17be-4100-b9ec-f5e2d5f76732 | a1d9ac5e-c1a8-48b6-8c20-d376504534b3 | available | backup1 | 1    | 22           | volumebackups |
| bf04171f-0f67-45c9-930f-b5cc9b9648ef | a1d9ac5e-c1a8-48b6-8c20-d376504534b3 | available | backup2 | 1    | 22           | volumebackups |
+--------------------------------------+--------------------------------------+-----------+---------+------+--------------+---------------+

Started upgrading the system to OSP13. This is where I hit a snag: a recently reported upgrade bug, https://bugzilla.redhat.com/show_bug.cgi?id=1720080#c3. Until we figure that bug out I can't upgrade, which puts this BZ at risk of missing the deadline.
BZ 1651136 isn't technically verified as of this moment, and it isn't mine to verify. However, after updating the repos to latest on my system (otherwise left as is), I managed to upgrade the undercloud. I then got stuck on the overcloud controller upgrade:

(undercloud) [stack@undercloud-0 ~]$ openstack overcloud upgrade run --nodes Controller
Started Mistral Workflow tripleo.package_update.v1.update_nodes. Execution ID: 8d7a597e-5d31-4e85-935d-ab5e75064569
Waiting for messages on queue 'tripleo' with no timeout.
Ansible failed, check log at /var/lib/mistral/8d7a597e-5d31-4e85-935d-ab5e75064569/ansible.log.
Update failed with: Ansible failed, check log at /var/lib/mistral/8d7a597e-5d31-4e85-935d-ab5e75064569/ansible.log.

(undercloud) [stack@undercloud-0 ~]$ vi /var/lib/mistral/8d7a597e-5d31-4e85-935d-ab5e75064569/ansible.log

I'll recheck my upgrade steps and try again. Opened a new upgrade bug: 1722738.
I reproduced the same upgrade problem mentioned in comment #30 on a second system. Still can't verify; the upgrade keeps failing before completion.
Still trying to finish this last one off. Retested the upgrade (after the blocking BZ moved to ON_QA) and it failed again: https://bugzilla.redhat.com/show_bug.cgi?id=1722738#c13
Verified on: openstack-cinder-12.0.7-2.el7ost.noarch

Created a cinder volume plus two backups on OSP12, then upgraded the deployment to OSP13 (took a while):

2019-07-03 15:28:14Z [AllNodesDeploySteps]: UPDATE_COMPLETE state changed
2019-07-03 15:28:19Z [overcloud]: UPDATE_COMPLETE Stack UPDATE completed successfully

Stack overcloud UPDATE_COMPLETE

Started Mistral Workflow tripleo.deployment.v1.get_horizon_url. Execution ID: f41ae804-9aed-420c-b5f0-bddcf15593d9
Overcloud Endpoint: http://10.0.0.107:5000/
Overcloud Horizon Dashboard URL: http://10.0.0.107:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
Completed Overcloud Upgrade Converge for stack overcloud

Post upgrade, the volume plus two backups are still there:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ee5e6024-211b-4923-9add-4454c8f46e4e | available | -    | 1    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| ID                                   | Volume ID                            | Status    | Name | Size | Object Count | Container     |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| d7130c85-8a4a-4967-acc7-bdfe1d0ba7ee | ee5e6024-211b-4923-9add-4454c8f46e4e | available | -    | 1    | 22           | volumebackups |
| ede26929-4f79-4d72-b828-1dc550a0ec0a | ee5e6024-211b-4923-9add-4454c8f46e4e | available | -    | 1    | 22           | volumebackups |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+

And for the actual verification, deleting a Cinder backup post upgrade now works:

(overcloud) [stack@undercloud-0 ~]$ cinder backup-delete d7130c85-8a4a-4967-acc7-bdfe1d0ba7ee
Request to delete backup d7130c85-8a4a-4967-acc7-bdfe1d0ba7ee has been accepted.

(overcloud) [stack@undercloud-0 ~]$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| ID                                   | Volume ID                            | Status    | Name | Size | Object Count | Container     |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| ede26929-4f79-4d72-b828-1dc550a0ec0a | ee5e6024-211b-4923-9add-4454c8f46e4e | available | -    | 1    | 22           | volumebackups |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+

The first backup is deleted; now the second one as well:

(overcloud) [stack@undercloud-0 ~]$ cinder backup-delete ede26929-4f79-4d72-b828-1dc550a0ec0a
Request to delete backup ede26929-4f79-4d72-b828-1dc550a0ec0a has been accepted.

(overcloud) [stack@undercloud-0 ~]$ cinder backup-list
+----+-----------+--------+------+------+--------------+-----------+
| ID | Volume ID | Status | Name | Size | Object Count | Container |
+----+-----------+--------+------+------+--------------+-----------+
+----+-----------+--------+------+------+--------------+-----------+

Works as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1732