Bug 1623750

Summary: [RFE] Migrating data to new pool online
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: liuwei <wliu>
Component: RBD
Assignee: Jason Dillaman <jdillama>
Status: CLOSED ERRATA
QA Contact: Gopi <gpatta>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: high
Version: 3.0
CC: anharris, ceph-eng-bugs, ceph-qe-bugs, gpatta, jdillama, mkasturi, pasik, tserlin, vumrao
Target Milestone: rc
Keywords: FutureFeature
Target Release: 4.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-14.2.0
Doc Type: Enhancement
Doc Text:
.Moving RBD images between different pools within the same cluster
This version of {product} adds the ability to move RBD images between different pools within the same cluster. For details, see the link:{block-dev-guide}#moving-images-between-pools_block[_Moving images between pools_] section in the _Block Device Guide_ for {product} {release}.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-01-31 12:44:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1730176    

Description liuwei 2018-08-30 06:13:51 UTC
Description of problem:

When migrating data to a new pool, using a rados client is not a good solution, as it means relying on the client. It also requires downtime, and we do not have the means to export huge amounts of data to some other location and then import it back into the new pool.
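For context, a minimal sketch of the offline export/import path being criticized here, assuming placeholder pool and image names (mypool, newpool, myimage):

    # the image must be unused for the whole copy, hence the downtime
    # /tmp/myimage.img also needs enough scratch space to hold the full image
    rbd export mypool/myimage /tmp/myimage.img
    rbd import /tmp/myimage.img newpool/myimage
    rbd rm mypool/myimage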

Other storage systems are able to move data between logical aggregations and groups on the "server side" (controller) without interruption to clients, and I think it would be very valuable to have this in Ceph. It could also lead to features where data is automatically moved around/tiered (not cache-tiered) based on its hotness, e.g. data that is not being used frequently gets moved to a slower pool automatically.

So, given the above, the specific use cases for this functionality right now are:

- moving data from a replicated pool to an EC pool to maximise capacity use (see the EC pool setup sketch after this list)
- moving data from one EC pool to another EC pool to reduce the number of PGs
- moving data from one EC pool to another EC pool to change the k+m values
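As an illustration of the target pools in these use cases, a minimal sketch of preparing an erasure-coded pool that RBD can place image data in; the profile, pool, and image names (rbd_ec_profile, rbd_ec_data, rbd_meta, myimage) and the k/m and PG values are placeholder assumptions, not values from this report:

    # illustrative k/m and PG values only
    ceph osd erasure-code-profile set rbd_ec_profile k=4 m=2
    ceph osd pool create rbd_ec_data 64 64 erasure rbd_ec_profile
    # RBD on an EC pool requires overwrites to be enabled
    ceph osd pool set rbd_ec_data allow_ec_overwrites true
    ceph osd pool application enable rbd_ec_data rbd
    # image metadata stays in a replicated pool (rbd_meta, assumed to exist);
    # image data is placed in the EC pool via --data-pool
    rbd create --size 10G --data-pool rbd_ec_data rbd_meta/myimage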



Version-Release number of selected component (if applicable):

RHCS 2.x

RHCS 3.x 

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Jason Dillaman 2018-08-30 16:14:35 UTC
The Nautilus release of Ceph will include a feature for live image migration. It is still performed client-side, but without requiring client downtime. As for tiering data, that is also in development at the OSD layer.
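As a rough sketch of how that Nautilus-era workflow is driven from the rbd CLI (pool and image names are placeholders):

    # link the source image to a new target image in the destination pool
    rbd migration prepare mypool/myimage newpool/myimage
    # copy the data in the background while clients use the target image
    rbd migration execute newpool/myimage
    # remove the link to the source once the copy is complete
    rbd migration commit newpool/myimage

The client-side aspect mentioned above is that, after the prepare step, clients have to be switched over to the target image before the migration is executed and committed.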

Comment 7 Giridhar Ramaraju 2019-08-05 13:12:02 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 8 Giridhar Ramaraju 2019-08-05 13:12:52 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 10 Yaniv Kaul 2019-08-27 07:00:03 UTC
Can we get a QA ACK please?

Comment 15 errata-xmlrpc 2020-01-31 12:44:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312