Bug 1516330
| Summary: | [RFE][Cinder] Allow to move disk images between "volume types" for Cinder. | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Konstantin Shalygin <shalygin.k> |
| Component: | BLL.Storage | Assignee: | Tal Nisan <tnisan> |
| Status: | CLOSED DEFERRED | QA Contact: | Avihai <aefrat> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.1.6 | CC: | bugs, ebenahar |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | --- | Flags: | ylavi: ovirt-4.3?, ylavi: planning_ack+, rule-engine: devel_ack?, rule-engine: testing_ack? |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-04-01 14:44:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1539837 | | |
| Attachments: | | | |
Description
Konstantin Shalygin
2017-11-22 13:53:29 UTC
Created attachment 1357527 [details]
How it looks on oVirt.
Screenshot from oVirt. On the Cinder side this looks like this:
[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true
[ec-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ec-rbd
rbd_pool = ec_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ab3b9537-c7ee-4ffb-af47-5ae3243acf70
report_discard_supported = true
[solid-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = solid-rbd
rbd_pool = solid_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = f420a0d4-1681-463f-ab2a-f85e216ada77
report_discard_supported = true
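For context, each of these backends is normally exposed to consumers as a Cinder volume type keyed on volume_backend_name; a minimal sketch of how such types could be defined with the cinder CLI (the type names here are illustrative, not taken from this deployment):
# create one volume type per backend and bind it via volume_backend_name
cinder type-create replicated
cinder type-key replicated set volume_backend_name=replicated-rbd
cinder type-create ec
cinder type-key ec set volume_backend_name=ec-rbd
cinder type-create solid
cinder type-key solid set volume_backend_name=solid-rbd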
And on the Ceph side:
[root@ceph-mon0 ceph]# ceph osd pool ls
replicated_rbd
ec_rbd
ec_cache
solid_rbd
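Moving a disk image between these volume types is Cinder's "retype" operation, which this RFE asks oVirt to drive from the engine. A hedged sketch of the equivalent cinder CLI call (the volume name is hypothetical):
# retype a volume to another type, migrating the data between backends if needed
cinder retype --migration-policy on-demand my-volume solid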
This bug has not been marked as a blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

This bug didn't get any attention for a while and we didn't have the capacity to make any progress on it. If you care deeply about it or want to work on it, please assign/target accordingly.

OK, closing. Please reopen if this is still relevant or you want to work on it.