Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1888672

Summary: [RFE][cinder] Allow in use volumes to be migrated (live) between a non-Ceph backend to a Ceph backend
Product: Red Hat OpenStack
Reporter: Luigi Toscano <ltoscano>
Component: openstack-cinder
Assignee: Jon Bernard <jobernar>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: medium
Docs Contact: Chuck Copello <ccopello>
Priority: high
Version: 7.0 (Kilo)
CC: aiyengar, amcleod, bkopilov, cpaquin, dhill, egallen, eharney, flucifre, gcharot, gfidente, gkadam, jamsmith, jobernar, kchamart, lmarsh, lyarwood, marjones, nchandek, nwolf, pablo.iranzo, pgrist, rajini.karthik, rszmigie, scohen, slinaber, spower, srevivo, tshefi
Target Milestone: z4
Keywords: FutureFeature, TestOnly, Triaged
Target Release: 16.1 (Train on RHEL 8.2)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-cinder-15.3.1-5.el8ost
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 1293440
Environment:
Last Closed: 2021-03-17 15:33:11 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 905125, 1293440, 1306562, 1306569, 1623877, 1780119, 1888674
Bug Blocks: 1434362, 1543156, 1601807, 1728334, 1728337, 1888670

Description Luigi Toscano 2020-10-15 13:22:11 UTC
+++ This bug was initially created as a clone of Bug #1293440 +++

Description
===========

Allow Cinder to migrate volumes between different storage backends or storage clusters while the volume is in use.

Currently, you can migrate an offline volume between different storage clusters or technologies, such as LVM and Ceph, but the volume must not be in use.
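As a sketch of the currently supported offline path (the volume name and the `host@backend#pool` targets below are hypothetical, not taken from this bug), migrating a detached volume between backends might look like:

```shell
# Sketch only: volume name and backend host strings are hypothetical.
# An offline migration requires the volume to be detached ("available").
openstack volume show my-volume -f value -c status

# Migrate the volume to the target backend, identified by its
# host@backend#pool string as seen in `cinder get-pools`.
openstack volume migrate --host hostgroup@rbd#rbd my-volume

# Admins can poll progress via the volume's migration_status field.
openstack volume show my-volume -c migration_status
```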

User Stories
============
- As an operator, I want to migrate volumes between storage clusters regardless of their active or idle status.



---


This bug/RFE is a special case of bug 1293440, tracking live migration from a non-Ceph backend to a Ceph backend (more granular use cases may be defined in the future).
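With this RFE, the same migration command should work for an attached volume; a hedged sketch (the volume name and backend host string below are hypothetical):

```shell
# Sketch only: volume name and Ceph backend host string are hypothetical.
# With this RFE, the volume may remain attached ("in-use") during migration.
openstack volume show data-vol -f value -c status

# Move the in-use volume from its current (non-Ceph) backend to the
# Ceph/RBD backend; Cinder coordinates the data copy with Nova so the
# instance keeps running while the volume is swapped underneath it.
openstack volume migrate --host controller@ceph#ceph data-vol
```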

Comment 14 Tzach Shefi 2021-02-09 12:59:19 UTC
Verified on:
openstack-cinder-15.3.1-5.el8ost.noarch

This works mostly as expected; the critical test cases passed.

We hit issues with encrypted volume migration; the limitations need to be documented or fixed.
Reported one bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761

There is also this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1794249

There is a third, possibly related, Ceph encryption/driver bug, where the resulting Ceph encrypted disks end up slightly larger and the migration therefore fails.

Comment 17 Tzach Shefi 2021-02-14 08:43:22 UTC
Adding for reference,

Nova limitations for encrypted volume swap/migration:
https://bugzilla.redhat.com/show_bug.cgi?id=1926761#c4

Doc bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1928458

Comment 22 errata-xmlrpc 2021-03-17 15:33:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0817
