Bug 1888674 - [RFE][cinder] Allow in-use volumes to be migrated (live) from a Ceph backend to a non-Ceph backend
Summary: [RFE][cinder] Allow in-use volumes to be migrated (live) from a Ceph backend to a non-Ceph backend
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z4
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Jon Bernard
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard:
Depends On: 905125 1293440 1306562 1306569 1623877 1780119
Blocks: 1888670 1434362 1543156 1601807 1728334 1728337 1888672
 
Reported: 2020-10-15 13:25 UTC by Luigi Toscano
Modified: 2021-03-17 15:39 UTC
CC List: 27 users

Fixed In Version: openstack-cinder-15.3.1-5.el8ost
Doc Type: Enhancement
Doc Text:
Clone Of: 1293440
Environment:
Last Closed: 2021-03-17 15:33:11 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2021:0817, last updated 2021-03-17 15:39:05 UTC

Description Luigi Toscano 2020-10-15 13:25:03 UTC
+++ This bug was initially created as a clone of Bug #1293440 +++

Description
=========

Allow a volume to be migrated between different storage backends or different storage clusters while the volume is in use in Cinder.

Currently, you can migrate a volume between different storage clusters or technologies (for example, LVM and Ceph) only while it is offline; the volume must not be in use.

User Stories
========
- As an operator, I want to migrate volumes between storage clusters regardless of whether they are active or idle.



---


This bug/RFE is a special case of bug 1293440, tracking the case where the live migration is performed from a Ceph backend to a non-Ceph backend (more granular use cases may be defined in the future).
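
For illustration only (not part of the original report): a minimal sketch of how an operator might trigger such an in-use migration with python-cinderclient. The auth values, volume ID, and destination host string are hypothetical placeholders, and the exact calls are an assumption based on the standard cinderclient volume-migrate API.

    # Hedged sketch: migrate an in-use volume from a Ceph backend to a
    # non-Ceph backend host. All names and credentials are placeholders.
    from keystoneauth1 import loading, session
    from cinderclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',      # hypothetical endpoint
        username='admin', password='secret',
        project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    cinder = client.Client('3', session=sess)

    # The volume is attached (in-use) and currently lives on the Ceph backend.
    volume = cinder.volumes.get('VOLUME_ID')

    # Destination is a non-Ceph backend, for example an LVM/iSCSI backend.
    # 'hostgroup@tripleo_iscsi#tripleo_iscsi' is a made-up host string; take
    # the real value from 'cinder get-pools' on your deployment.
    cinder.volumes.migrate_volume(volume,
                                  'hostgroup@tripleo_iscsi#tripleo_iscsi',
                                  force_host_copy=False,
                                  lock_volume=True)

The CLI equivalent, 'cinder migrate <volume> <host>', drives the same API; the Ceph to non-Ceph direction for attached volumes is what this RFE enables.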

Comment 3 ndeevy 2020-12-03 15:22:33 UTC
Docs: maybe we only need to remove "you can move in-use RBD volumes only within a Ceph cluster." from https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/storage_guide/index#con_moving-in-use-volumes_osp-storage-guide

Comment 8 spower 2021-01-11 16:39:24 UTC
Exception flag requested and granted.

Comment 12 Tzach Shefi 2021-02-09 22:30:32 UTC
Verified on:
openstack-cinder-15.3.1-5.el8ost.noarch

This works mostly as expected; the critical test cases passed.
This time I reversed the backends, testing Ceph to non-Ceph migration.

Hit issues with encrypted volume migration; we need to either document the limitations or fix them.
Reported one bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761

There is also this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1794249

There is a third, possibly related Ceph encryption/driver bug, where the resulting encrypted Ceph disks end up slightly larger and therefore fail.
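
For illustration only (not part of the verification notes above): a minimal sketch, assuming the cinderclient session from the earlier sketch, of polling a volume's migration status after starting a Ceph to non-Ceph migration of an attached volume. wait_for_migration is a hypothetical helper, and the admin-visible attribute names and status values checked are assumptions about the standard volume detail fields.

    # Hedged sketch: wait for an in-use volume migration to finish.
    # Assumes 'cinder' is the client object from the earlier sketch.
    import time

    def wait_for_migration(cinder, volume_id, timeout=1800, interval=10):
        """Poll the (admin-visible) migration_status until success or error."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            vol = cinder.volumes.get(volume_id)
            mig = getattr(vol, 'migration_status', None)
            host = getattr(vol, 'os-vol-host-attr:host', None)  # admin only
            print('status=%s migration_status=%s host=%s'
                  % (vol.status, mig, host))
            if mig == 'success':
                return vol
            if mig == 'error':
                raise RuntimeError('volume migration failed')
            time.sleep(interval)
        raise RuntimeError('timed out waiting for volume migration')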

Comment 15 Tzach Shefi 2021-02-14 08:43:49 UTC
Adding for reference,

Nova limitations for encrypted volume swap/migration:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761#c4

Doc bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1928458

Comment 20 errata-xmlrpc 2021-03-17 15:33:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0817


