+++ This bug was initially created as a clone of Bug #1293440 +++

Description
===========
Migrate volumes between different storage backends or different storage clusters while the volume is in use in Cinder. Currently, you can migrate an offline volume between different storage clusters or technologies, such as LVM and Ceph, but the volume must not be in use.

User Stories
============
- As an operator, I want to migrate volumes between storage clusters regardless of whether they are active or idle.

---

This bug/RFE is a special case of Bug #1293440 to track the case where the live migration is performed from a Ceph backend to a non-Ceph backend (more granular use cases may be defined in the future).
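For reference, volume migration in Cinder is driven by the `os-migrate_volume` action on `POST /v3/volumes/{volume_id}/action`. A minimal sketch of building that request body follows; the destination host string is a made-up example, not taken from this bug report, and `lock_volume` semantics are summarized loosely in the comments.

```python
import json

def migrate_volume_body(dest_host, force_host_copy=False, lock_volume=True):
    """Build the os-migrate_volume action body sent to
    POST /v3/volumes/{volume_id}/action (Cinder v3 API)."""
    return {
        "os-migrate_volume": {
            # Destination in host@backend#pool form; example value only.
            "host": dest_host,
            # Force a host-assisted (generic) copy instead of a
            # driver-optimized migration.
            "force_host_copy": force_host_copy,
            # When true, keep the volume in 'maintenance' during the
            # migration so other operations cannot race with it.
            "lock_volume": lock_volume,
        }
    }

body = migrate_volume_body("controller@lvm#lvm")
print(json.dumps(body))
```

With a CLI, the equivalent operation is `openstack volume migrate --host <host> <volume>`; for in-use volumes the driver (or the generic host-copy path) must support live migration, which is the subject of this RFE.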
Docs: we may only need to remove the sentence "you can move in-use RBD volumes only within a Ceph cluster." from https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/storage_guide/index#con_moving-in-use-volumes_osp-storage-guide
Exception flag '+' given.
Verified on: openstack-cinder-15.3.1-5.el8ost.noarch

This works mostly as expected; the critical test cases passed. This time I reversed the backends, testing Ceph to non-Ceph migration. I hit issues with encrypted volume migration; the limitations need to be documented or fixed. Reported one bug: https://bugzilla.redhat.com/show_bug.cgi?id=1926761 There is also this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1794249 There is a third, possibly related, Ceph encryption/driver bug, where the resulting encrypted Ceph disks come out slightly larger and the migration therefore fails.
Adding for reference, Nova limitations on encrypted volume swap/migration: https://bugzilla.redhat.com/show_bug.cgi?id=1926761#c4 Doc bug: https://bugzilla.redhat.com/show_bug.cgi?id=1928458
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0817