Bug 1623750 - [RFE]Migrating data to new pool online
Summary: [RFE]Migrating data to new pool online
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 4.0
Assignee: Jason Dillaman
QA Contact: Gopi
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1730176
Reported: 2018-08-30 06:13 UTC by liuwei
Modified: 2021-12-10 17:20 UTC
CC: 9 users

Fixed In Version: ceph-14.2.0
Doc Type: Enhancement
Doc Text:
.Moving RBD images between different pools within the same cluster
This version of {product} adds the ability to move RBD images between different pools within the same cluster. For details, see the link:{block-dev-guide}#moving-images-between-pools_block[_Moving images between pools_] section in the _Block Device Guide_ for {product} {release}.
Clone Of:
Environment:
Last Closed: 2020-01-31 12:44:52 UTC
Embargoed:




Links
System | ID | Private | Priority | Status | Summary | Last Updated
Red Hat Issue Tracker | RHCEPH-2692 | 0 | None | None | None | 2021-12-10 17:20:26 UTC
Red Hat Knowledge Base (Solution) | 1595733 | 0 | None | None | None | 2019-01-26 19:19:51 UTC
Red Hat Product Errata | RHBA-2020:0312 | 0 | None | None | None | 2020-01-31 12:45:28 UTC

Description liuwei 2018-08-30 06:13:51 UTC
Description of problem:

When migrating data to a new pool, using a rados client is not a good solution, as it means relying on the client. It also requires downtime, and we do not have the means to export huge amounts of data to some other location and then import it back into the new pool (a sketch of that export/import workaround is shown after the use-case list below).

Other storage systems are able to move data between logical aggregations and groups on the "server side" (controller) without interruption to clients, and I think it would be very valuable to have this in Ceph. It could also lead to features where data is automatically moved or tiered (not cache-tiered) based on its hotness, e.g. data that is not used frequently gets moved to a slower pool automatically.

So, given the above, the specific use cases for this functionality right now are:

- moving data from a replicated pool to an erasure-coded (EC) pool to maximise capacity use
- moving data from one EC pool to another to reduce the number of PGs
- moving data from one EC pool to another to change the k+m values
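For reference, the only option today is a client-side export/import cycle, which means stopping every client of the image for the duration of the copy. A rough sketch of that workaround, using hypothetical pool names pool-a and pool-b and image name image1:

    # Stop all clients using the image first (downtime starts here).
    # Stream the image from the source pool straight into the destination pool.
    rbd export pool-a/image1 - | rbd import - pool-b/image1

    # Verify the copy, then remove the original and repoint clients at pool-b/image1.
    rbd info pool-b/image1
    rbd rm pool-a/image1

Every byte has to pass through the client, and the image is unusable until the import finishes, which is exactly the downtime problem described above.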



Version-Release number of selected component (if applicable):

RHCS 2.x

RHCS 3.x 

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Jason Dillaman 2018-08-30 16:14:35 UTC
The Nautilus release of Ceph will include a feature for live image migration. It is still performed client-side, but it no longer requires client downtime. As for tiering data, that is also in development at the OSD layer.
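For illustration, the live migration workflow that landed in Nautilus (ceph-14.2.0) is driven by the rbd migration subcommands. A rough sketch, using hypothetical image specs pool-a/image1 (source) and pool-b/image1 (target):

    # 1. Prepare: link the source image to a new target image. Clients must be
    #    detached from the source before this step; afterwards they reopen the
    #    target image and can use it while data is still being copied.
    rbd migration prepare pool-a/image1 pool-b/image1

    # 2. Execute: copy the data in the background while clients use the target.
    rbd migration execute pool-b/image1

    # 3. Commit: drop the link to the source image and clean it up.
    rbd migration commit pool-b/image1

    # To back out before committing: rbd migration abort pool-b/image1

So only a brief client re-attach is needed at the prepare step, rather than a full outage for the whole copy.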

Comment 7 Giridhar Ramaraju 2019-08-05 13:12:02 UTC
Updating the QA Contact to Hemant. Hemant will reroute the bug to the appropriate QE associate.

Regards,
Giri

Comment 10 Yaniv Kaul 2019-08-27 07:00:03 UTC
Can we get a QA ACK please?

Comment 15 errata-xmlrpc 2020-01-31 12:44:52 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

