Bug 1888674
| Summary: | [RFE][cinder] Allow in-use volumes to be migrated (live) from a Ceph backend to a non-Ceph backend | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Luigi Toscano <ltoscano> |
| Component: | openstack-cinder | Assignee: | Jon Bernard <jobernar> |
| Status: | CLOSED ERRATA | QA Contact: | Tzach Shefi <tshefi> |
| Severity: | medium | Docs Contact: | Chuck Copello <ccopello> |
| Priority: | medium | ||
| Version: | 7.0 (Kilo) | CC: | aiyengar, bkopilov, cpaquin, dhill, egallen, eharney, flucifre, gcharot, gfidente, gkadam, jamsmith, jobernar, kchamart, lmarsh, lyarwood, marjones, nchandek, nwolf, pablo.iranzo, pgrist, rajini.karthik, rszmigie, scohen, slinaber, spower, srevivo, tshefi |
| Target Milestone: | z4 | Keywords: | FutureFeature, TestOnly, Triaged |
| Target Release: | 16.1 (Train on RHEL 8.2) | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | openstack-cinder-15.3.1-5.el8ost | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1293440 | Environment: | |
| Last Closed: | 2021-03-17 15:33:11 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 905125, 1293440, 1306562, 1306569, 1623877, 1780119 | ||
| Bug Blocks: | 1434362, 1543156, 1601807, 1728334, 1728337, 1888670, 1888672 | ||
Description
Luigi Toscano
2020-10-15 13:25:03 UTC
Docs: maybe we only need to remove "you can move in-use RBD volumes only within a Ceph cluster." from https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/storage_guide/index#con_moving-in-use-volumes_osp-storage-guide

Verified on: openstack-cinder-15.3.1-5.el8ost.noarch

This works mostly as expected; the critical test cases passed fine. This time I reversed the backends, testing Ceph to non-Ceph migration. I hit issues with encrypted volume migration; we need to document the limitations or fix them. Reported one bug: https://bugzilla.redhat.com/show_bug.cgi?id=1926761

There is also this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1794249

There is a third, possibly related, Ceph encryption/driver bug, where the resulting Ceph encrypted disks end up slightly larger and thus fail.

Adding for reference, the Nova limitations on encrypted volume swap/migration: https://bugzilla.redhat.com/show_bug.cgi?id=1926761#c4

Doc bug: https://bugzilla.redhat.com/show_bug.cgi?id=1928458

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0817
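For context, a minimal sketch of how the migration exercised above might be driven from python-cinderclient is shown below. This is not the exact test procedure used for verification; the auth endpoint, credentials, volume ID, and destination backend host string are all hypothetical placeholders, and in-use behavior depends on the fix shipped in openstack-cinder-15.3.1-5.el8ost.

```python
# Sketch only: migrate a (possibly in-use) volume off its Ceph backend
# using python-cinderclient. All concrete values below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client

auth = v3.Password(
    auth_url="http://controller:5000/v3",  # placeholder Keystone endpoint
    username="admin",
    password="secret",                     # placeholder credentials
    project_name="admin",
    user_domain_name="Default",
    project_domain_name="Default",
)
cinder = client.Client("3", session=session.Session(auth=auth))

volume_id = "11111111-2222-3333-4444-555555555555"  # placeholder volume ID

# Ask cinder to move the volume to a non-Ceph backend host.
# lock_volume=True prevents the volume from being deleted or
# detached while the migration is in flight.
cinder.volumes.migrate_volume(
    volume_id,
    "hostgroup@tripleo_iscsi#tripleo_iscsi",  # placeholder destination host
    force_host_copy=False,
    lock_volume=True,
)

# Check progress: migration_status is exposed in the admin view.
vol = cinder.volumes.get(volume_id)
print(vol.status, getattr(vol, "migration_status", None))
```

The same operation is available from the CLI as `openstack volume migrate --host <host> <volume>`, which is likely closer to how the verification runs were performed.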