Description of problem:
When migrating data to a new pool, using a rados client is not a good solution, as it means relying on the client. It also requires downtime, and we don't have the means to export the huge amounts of data to some location and then import them back into the new pool. Other storage systems can move data between logical aggregations and groups on the "server side" (controller) without interruption to clients, and I think this would be very valuable to have in Ceph. It could also enable features where data is automatically moved/tiered (not cache-tiered) based on its hotness, e.g. data that is not accessed frequently gets moved to a slower pool automatically.

Given the above, the specific use cases for this functionality right now are (a sketch of the client-side approach we want to avoid follows below):
- moving data from a replicated pool to an EC pool to maximise capacity use
- moving data from an EC pool to another EC pool to reduce the number of PGs
- moving data from an EC pool to another EC pool to change the k+m values

Version-Release number of selected component (if applicable):
RHCS 2.x
RHCS 3.x
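For reference, here is a minimal sketch of the client-side copy described above, using the python-rados bindings. The pool names and ceph.conf path are illustrative assumptions. It copies only object data; omap, xattrs, and snapshots are ignored, and there is no consistency for clients writing during the copy, which is exactly why a server-side mechanism is being requested.

# Minimal sketch (assumed pool names and conf path) of the client-side
# copy this RFE wants to avoid. Data-only: omap/xattrs/snapshots are
# not handled, and concurrent client writes are not coordinated.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('rep_pool')      # hypothetical source pool
    dst = cluster.open_ioctx('ec_pool')       # hypothetical destination pool
    try:
        for obj in src.list_objects():
            size, _mtime = src.stat(obj.key)  # object size in bytes
            data = src.read(obj.key, length=size)
            dst.write_full(obj.key, data)     # create/overwrite in dest pool
    finally:
        src.close()
        dst.close()
finally:
    cluster.shutdown()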
The Nautilus release of Ceph will include a feature for live image migration. It is still performed client-side, but it no longer requires client downtime (the workflow is sketched below). As for tiering data, that is also in development at the OSD layer.
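For context, a hedged sketch of driving that live migration from the Nautilus python-rbd bindings. The pool and image names are made up, and the exact binding signatures should be verified against the installed librbd version; the equivalent CLI flow is `rbd migration prepare`, `rbd migration execute`, `rbd migration commit`.

# Hedged sketch (Nautilus+, assumed pool/image names): RBD live image
# migration via the python bindings. The image remains usable by
# clients between prepare and commit.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('rep_pool')    # hypothetical source pool
    dst = cluster.open_ioctx('ec_pool')     # hypothetical destination pool
    try:
        api = rbd.RBD()
        # Link the source image to the target; clients reopen via the target.
        api.migration_prepare(src, 'my_image', dst, 'my_image')
        # Copy blocks in the background while the image stays writable.
        api.migration_execute(dst, 'my_image')
        # Finalize and remove the source once the copy is complete.
        api.migration_commit(dst, 'my_image')
    finally:
        src.close()
        dst.close()
finally:
    cluster.shutdown()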
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate. Regards, Giri
Can we get a QA ACK please?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0312