Bug 1888674

Summary: [RFE][cinder] Allow in-use volumes to be migrated (live) from a Ceph backend to a non-Ceph backend
Product: Red Hat OpenStack
Reporter: Luigi Toscano <ltoscano>
Component: openstack-cinder
Assignee: Jon Bernard <jobernar>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: medium
Docs Contact: Chuck Copello <ccopello>
Priority: medium
Version: 7.0 (Kilo)
CC: aiyengar, bkopilov, cpaquin, dhill, egallen, eharney, flucifre, gcharot, gfidente, gkadam, jamsmith, jobernar, kchamart, lmarsh, lyarwood, marjones, nchandek, nwolf, pablo.iranzo, pgrist, rajini.karthik, rszmigie, scohen, slinaber, spower, srevivo, tshefi
Target Milestone: z4
Keywords: FutureFeature, TestOnly, Triaged
Target Release: 16.1 (Train on RHEL 8.2)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-cinder-15.3.1-5.el8ost
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 1293440
Environment:
Last Closed: 2021-03-17 15:33:11 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 905125, 1293440, 1306562, 1306569, 1623877, 1780119
Bug Blocks: 1434362, 1543156, 1601807, 1728334, 1728337, 1888670, 1888672

Description Luigi Toscano 2020-10-15 13:25:03 UTC
+++ This bug was initially created as a clone of Bug #1293440 +++

Description
=========

Support migrating Cinder volumes between different storage backends or storage clusters while the volume is in use.

Currently, a volume can be migrated between different storage clusters or technologies, such as LVM and Ceph, only while it is offline (not in use).

User Stories
========
- As an operator, I want to migrate volumes between storage clusters without regard for their active or idle status.



---


This bug/RFE is a special case of 1293440 to track the case where the live migration is performed from a Ceph backend to a non-Ceph backend (more granular use cases may be defined in the future).
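
For illustration, here is a minimal sketch of how an operator could trigger such a live migration with python-cinderclient. The auth URL, credentials, volume UUID, and destination backend host below are placeholders, not values taken from this bug:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from cinderclient import client as cinder_client

    # Placeholder credentials; in practice these come from overcloudrc / clouds.yaml.
    auth = v3.Password(
        auth_url='https://overcloud.example.com:13000/v3',
        username='admin', password='REDACTED', project_name='admin',
        user_domain_name='Default', project_domain_name='Default',
    )
    cinder = cinder_client.Client('3', session=session.Session(auth=auth))

    volume = cinder.volumes.get('11111111-2222-3333-4444-555555555555')

    # Migrate the (possibly in-use) volume from its Ceph backend to a
    # non-Ceph backend host; lock_volume blocks other operations on the
    # volume while the migration is running.
    cinder.volumes.migrate_volume(
        volume,
        'hostgroup@tripleo_iscsi#tripleo_iscsi',  # hypothetical destination backend
        force_host_copy=False,
        lock_volume=True,
    )

The equivalent CLI path is "cinder migrate <volume> <host>", or "cinder retype --migration-policy on-demand" when the move is driven by a volume-type change.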

Comment 3 ndeevy 2020-12-03 15:22:33 UTC
Docs: maybe we only need to remove "you can move in-use RBD volumes only within a Ceph cluster." from https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/storage_guide/index#con_moving-in-use-volumes_osp-storage-guide

Comment 8 spower 2021-01-11 16:39:24 UTC
Exception flag + given

Comment 12 Tzach Shefi 2021-02-09 22:30:32 UTC
Verified on:
openstack-cinder-15.3.1-5.el8ost.noarch

This works mostly as expected; the critical test cases passed.
This time I reversed the backends, testing Ceph to non-Ceph migration.

Hit issues with encrypted volume migration; we need to either document the limitations or fix them.
Reported one bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761

There is also this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1794249

There is a third, possibly related bug in the Ceph encryption/driver path, where the resulting encrypted Ceph disks are slightly larger and the migration therefore fails.
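
A migration of this kind can be checked by polling the volume's migration_status field until it reaches a terminal value. A minimal sketch, assuming a python-cinderclient Client authenticated as in the sketch under the description above:

    import time
    from cinderclient.v3.client import Client

    def wait_for_migration(cinder: Client, vol_id: str, poll: int = 10) -> str:
        """Poll a volume until migration_status reaches a terminal value."""
        while True:
            vol = cinder.volumes.get(vol_id)
            # migration_status is only visible to admin users; it ends up as
            # 'success' (or None) when the migration completes, 'error' on failure.
            status = getattr(vol, 'migration_status', None)
            if status in (None, 'success', 'error'):
                return status or 'none'
            time.sleep(poll)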

Comment 15 Tzach Shefi 2021-02-14 08:43:49 UTC
Adding for reference,

Nova limitations for encrypted volume swap/migration:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761#c4

Doc bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1928458

Comment 20 errata-xmlrpc 2021-03-17 15:33:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0817
