Bug 1516330 - [RFE][Cinder] Allow to move disk images between "volume types" for Cinder.
Summary: [RFE][Cinder] Allow to move disk images between "volume types" for Cinder.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.6
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Tal Nisan
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks: 1539837
 
Reported: 2017-11-22 13:53 UTC by Konstantin Shalygin
Modified: 2020-04-01 14:49 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-04-01 14:44:25 UTC
oVirt Team: Storage
Embargoed:
ylavi: ovirt-4.3?
ylavi: planning_ack+
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
How it looks on oVirt. (35.24 KB, image/png), 2017-11-22 14:06 UTC, Konstantin Shalygin, no flags

Description Konstantin Shalygin 2017-11-22 13:53:29 UTC
Description of problem:

Currently it is impossible to move an oVirt disk from one Cinder "volume type" to another.
For example: move a disk from fast (NVMe-based) storage to cold (HDD-based) storage.

Version-Release number of selected component (if applicable):
4.1.6

Actual results:
Moving a disk between volume types is not possible.

Expected results:
Moving a disk between Cinder "volume types" can be done from oVirt.
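
On the Cinder side this corresponds to a volume retype with migration; a minimal sketch using the standard cinder CLI (the volume ID and type name below are placeholders):

# Hypothetical example: ask Cinder to retype a volume and migrate its data
# to the backend of the target volume type if needed.
cinder retype --migration-policy on-demand <volume-id> <target-volume-type>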

Additional info:
For now I do the move manually: create a new disk on the target pool, migrate the data via cp/rsync or qemu-img, then delete the old disk.
This is actually the only critical missing feature after a year of oVirt + Ceph usage.
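
A minimal sketch of that manual path, assuming a host that can reach both RBD pools (pool names match the configuration in comment 1; the volume image names are placeholders):

# Copy the data from the old RBD image to a newly created disk on the
# target pool, then delete the old disk in oVirt (placeholder names).
qemu-img convert -p -O raw \
    rbd:replicated_rbd/volume-<old-id>:id=cinder:conf=/etc/ceph/ceph.conf \
    rbd:solid_rbd/volume-<new-id>:id=cinder:conf=/etc/ceph/ceph.conf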

Comment 1 Konstantin Shalygin 2017-11-22 14:06:50 UTC
Created attachment 1357527 [details]
How it looks on oVirt.

Screenshot from oVirt. On the Cinder side it looks like this:

[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true

[ec-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ec-rbd
rbd_pool = ec_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ab3b9537-c7ee-4ffb-af47-5ae3243acf70
report_discard_supported = true

[solid-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = solid-rbd
rbd_pool = solid_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = f420a0d4-1681-463f-ab2a-f85e216ada77
report_discard_supported = true


And on ceph:

[root@ceph-mon0 ceph]# ceph osd pool ls
replicated_rbd
ec_rbd
ec_cache
solid_rbd
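
For context, each of those backends would typically be exposed to oVirt as a Cinder volume type bound to its volume_backend_name; a rough sketch of how such types are usually created (the type names here are assumptions, not taken from this setup):

# Create one volume type per backend and bind it via volume_backend_name.
cinder type-create replicated-rbd
cinder type-key replicated-rbd set volume_backend_name=replicated-rbd
cinder type-create ec-rbd
cinder type-key ec-rbd set volume_backend_name=ec-rbd
cinder type-create solid-rbd
cinder type-key solid-rbd set volume_backend_name=solid-rbd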

Comment 3 Sandro Bonazzola 2019-01-28 09:34:24 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 4 Michal Skrivanek 2020-03-18 15:43:33 UTC
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 5 Michal Skrivanek 2020-03-18 15:46:48 UTC
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 6 Michal Skrivanek 2020-04-01 14:44:25 UTC
OK, closing. Please reopen if this is still relevant or you want to work on it.

Comment 7 Michal Skrivanek 2020-04-01 14:49:27 UTC
OK, closing. Please reopen if this is still relevant or you want to work on it.

