Bug 1122356

Summary: [RFE][cinder]: Generic Volume Migration
Product: Red Hat OpenStack
Reporter: RHOS Integration <rhos-integ>
Component: openstack-cinder
Assignee: Jon Bernard <jobernar>
Status: CLOSED ERRATA
QA Contact: Yogev Rabl <yrabl>
Severity: medium
Priority: urgent
Version: unspecified
CC: ddomingo, eharney, jobernar, jschluet, markmc, mburns, mschuppe, pneedle, rhel-osp-director-maint, scohen, sgotliv, yeylon, yrabl
Target Milestone: beta
Keywords: FutureFeature
Target Release: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
URL: https://blueprints.launchpad.net/cinder/+spec/generic-volume-migration
Whiteboard: upstream_milestone_liberty-3 upstream_definition_approved upstream_status_implemented
Fixed In Version: openstack-cinder-7.0.0-2.el7ost, python-os-brick-0.5.0-1.el7
Doc Type: Enhancement
Last Closed: 2016-04-07 20:59:20 UTC
Bug Depends On: 1315661    
Bug Blocks: 1131335    

Description RHOS Integration 2014-07-23 04:06:43 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/generic-volume-migration.

Description:

When migrating a volume between two backends, the copy_volume_data routine in the source volume's driver is executed to move the blocks from one volume to another.  This routine assumes that both source and destination volumes can be attached locally via iSCSI.  This is technically not necessary for local volumes and also prevents drivers such as Ceph from participating in volume migration operations.

I propose that we abstract the iSCSI volume attachment into a routine that determines the best way to return a file-like Python object.  For remote volumes, this may fall back to attaching the volume over iSCSI.  For drivers that do not use iSCSI, such as Ceph, the RBD interface can be hidden behind a similar file-like object so that copy_volume_data never sees the difference.
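To illustrate the idea (a simplified sketch only, not the actual Cinder or os-brick code; the class names below are hypothetical), the generic copy loop would depend solely on a file-like interface while the transport-specific details stay behind it:

    class LocalDeviceFile(object):
        """Hypothetical wrapper around a locally attached (e.g. iSCSI) block device."""
        def __init__(self, device_path):
            self._f = open(device_path, 'rb+')

        def read(self, length):
            return self._f.read(length)

        def write(self, data):
            self._f.write(data)

        def close(self):
            self._f.close()


    class RBDVolumeFile(object):
        """Hypothetical wrapper exposing an RBD image through the same
        interface, so Ceph volumes never need an iSCSI attach."""
        def __init__(self, rbd_image):
            self._image = rbd_image
            self._offset = 0

        def read(self, length):
            data = self._image.read(self._offset, length)
            self._offset += len(data)
            return data

        def write(self, data):
            self._image.write(data, self._offset)
            self._offset += len(data)

        def close(self):
            self._image.close()


    def copy_volume_data(src, dest, size_bytes, block_size=4 * 1024 * 1024):
        """Generic copy loop: it only sees the file-like interface, never the transport."""
        copied = 0
        while copied < size_bytes:
            chunk = src.read(min(block_size, size_bytes - copied))
            if not chunk:
                break
            dest.write(chunk)
            copied += len(chunk)

With something like this in place, migration between any two backends only requires each driver to hand back an appropriate wrapper.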

Blueprint submission via cinder-specs is in progress.

Specification URL (additional information): None

Comment 1 Sean Cohen 2015-03-18 16:02:25 UTC
*** Bug 1121610 has been marked as a duplicate of this bug. ***

Comment 2 Sean Cohen 2015-03-25 12:20:59 UTC
*** Bug 1205621 has been marked as a duplicate of this bug. ***

Comment 7 Jon Bernard 2016-01-12 02:55:34 UTC
I believe each driver validates the extra specs and can allow the migration to continue if they are acceptable.  If you can give me a specific example, I'd be happy to verify and report back.

'migrate' will move a volume from one host to another as long as the volume type does not change.  'retype' allows you to change the volume type, and with it the backend, if desired.  Without a migration policy, retype will update the metadata (extra specs) but will not move the volume data from its current location.  If the retype cannot be satisfied without moving the data, then a migration is required in which the type also changes.  This is where we connect to both backends, read the data from the source, and write it to the destination.
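For reference, a minimal sketch of both operations through python-cinderclient (the keystone endpoint, credentials, volume ID, host string, and type name are placeholders; exact keyword arguments can differ slightly between client releases):

    # Assumed environment: python-cinderclient v2 API with valid credentials.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'password', 'admin',
                           'http://keystone.example.com:5000/v2.0')

    volume = cinder.volumes.get('VOLUME_ID')

    # Migrate: move the volume's data to another backend while keeping its type.
    # The host string uses cinder's host@backend#pool notation.
    cinder.volumes.migrate_volume(volume, 'otherhost@rbd-backend#pool',
                                  force_host_copy=False)

    # Retype: change the volume type; the 'on-demand' policy permits a data
    # migration when the new type cannot be satisfied on the current backend.
    cinder.volumes.retype(volume, 'new-volume-type', 'on-demand')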

The cinder naming is a little confusing in this regard, let me know if I can clarify any of this.

Comment 9 Jon Bernard 2016-01-13 20:51:28 UTC
I think what's there is accurate.  For retype, we are missing the case of retyping without migration, where the user wants to change the class (or another attribute) of the storage.  Is that documented in another section?  If not, it might be good to include it here.

Also, whether only the admin can issue these commands depends on the configuration; it is possible to allow users to migrate and retype their own volumes.
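For example, this is governed by the policy file on the cinder-api node.  A minimal illustrative snippet follows; the policy target names and rules are recalled from memory and vary between releases, so check them against the installed /etc/cinder/policy.json:

    {
        "volume:migrate_volume": "rule:admin_api",
        "volume:retype": "rule:admin_or_owner"
    }

Relaxing "volume:migrate_volume" to rule:admin_or_owner would let tenants migrate their own volumes.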

Comment 11 Jon Bernard 2016-01-15 17:08:34 UTC
This is better, I like it.  Thanks Don!

Comment 12 Yogev Rabl 2016-03-09 08:01:12 UTC
Verification failed due to Bug 1315661 
tested on 
python-cinder-7.0.1-6.el7ost.noarch
python-os-brick-0.5.0-1.4.el7ost.noarch

Comment 13 Sergey Gotliv 2016-03-09 13:19:41 UTC
Probably blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1315661

Comment 18 Yogev Rabl 2016-03-15 13:03:00 UTC
Verified: migration of a volume from one pool to another on the same Ceph cluster.  In future releases the feature will also support migration between Ceph clusters, as tracked in Bug 1315661.

The verification was on 
openstack-cinder-7.0.1-7.el7ost.noarch

Comment 20 errata-xmlrpc 2016-04-07 20:59:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0603.html