Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1516330

Summary: [RFE][Cinder] Allow to move disk images between "volume types" for Cinder.
Product: [oVirt] ovirt-engine
Reporter: Konstantin Shalygin <shalygin.k>
Component: BLL.Storage
Assignee: Tal Nisan <tnisan>
Status: CLOSED DEFERRED
QA Contact: Avihai <aefrat>
Severity: medium
Docs Contact:
Priority: high
Version: 4.1.6
CC: bugs, ebenahar
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: ylavi: ovirt-4.3?
       ylavi: planning_ack+
       rule-engine: devel_ack?
       rule-engine: testing_ack?
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-04-01 14:44:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1539837

Attachments:
How it looks on oVirt. (flags: none)

Description Konstantin Shalygin 2017-11-22 13:53:29 UTC
Description of problem:

At this time it is impossible to move an oVirt disk from one Cinder "volume type" to another.
For example: moving a disk from fast (NVMe-based) storage to cold (HDD-based) storage.

Version-Release number of selected component (if applicable):
4.1.6

Actual results:
Moving a disk between volume types is not possible.

Expected results:
Moving a disk between Cinder "volume types" can be done from oVirt.
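
For reference, a minimal sketch of the Cinder-side operation that would back such a feature, assuming the Cinder CLI and placeholder volume/type names:

# Hypothetical sketch: retype a volume to another volume type,
# letting Cinder migrate the data between backends if needed.
cinder retype --migration-policy on-demand <volume-id> <target-volume-type>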

Additional info:
For now I do these movements by creating a new disk on the target pool, migrating the data with cp/rsync or qemu-img, and then deleting the old disk (a sketch of this workaround is shown below).
This is actually the only critical missing feature after a year of oVirt + Ceph usage.
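
A minimal sketch of that manual workaround, assuming raw RBD-backed volumes and placeholder pool/image names:

# Hypothetical sketch: copy the image from the fast pool to the cold pool
# (the volume must not be attached), then remove the original.
qemu-img convert -p -f raw -O raw \
    rbd:<fast_pool>/volume-<uuid> \
    rbd:<cold_pool>/volume-<uuid>
rbd rm <fast_pool>/volume-<uuid>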

Comment 1 Konstantin Shalygin 2017-11-22 14:06:50 UTC
Created attachment 1357527 [details]
How it looks on oVirt.

Screenshot from oVirt. On the Cinder side it looks like this:

[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true

[ec-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ec-rbd
rbd_pool = ec_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ab3b9537-c7ee-4ffb-af47-5ae3243acf70
report_discard_supported = true

[solid-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = solid-rbd
rbd_pool = solid_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = f420a0d4-1681-463f-ab2a-f85e216ada77
report_discard_supported = true


And on ceph:

[root@ceph-mon0 ceph]# ceph osd pool ls
replicated_rbd
ec_rbd
ec_cache
solid_rbd
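
For completeness, each backend above is exposed to users as a Cinder volume type; a minimal sketch of how the types would be bound to the backends, assuming an admin-scoped Cinder CLI and type names that mirror the backend names above:

# Hypothetical sketch: one volume type per backend, bound via the
# volume_backend_name extra spec.
cinder type-create replicated-rbd
cinder type-key replicated-rbd set volume_backend_name=replicated-rbd
cinder type-create solid-rbd
cinder type-key solid-rbd set volume_backend_name=solid-rbd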

Comment 3 Sandro Bonazzola 2019-01-28 09:34:24 UTC
This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 4 Michal Skrivanek 2020-03-18 15:43:33 UTC
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 6 Michal Skrivanek 2020-04-01 14:44:25 UTC
OK, closing. Please reopen if this is still relevant or you want to work on it.
