Description of problem:
Improve integration with and support for the Cinder service as an external provider. This should include:
- Providing Cinder standalone as a service.
- Support FC/ISCSI storage backends.
- Support Ceph.
- Support Cinder volume with multipath support on the hypervisor.
- Support triggering vDisk live snapshots from engine, even if not managing them.
- Support migration from storage domains vDisk to Cinder based volume.
- Support migration between Cinder backends (retyping).
- Support ManageIQ cloud storage provider for oVirt infrastructure provider.
- Support snapshot tree representation and consistency groups management in ManageIQ.
- Support wipe after delete.
- Support downloading/uploading OVAs with Cinder volumes.
In contrast to the existing integration, we should treat Cinder volumes as externally managed objects that require minimal engine integration for ease of use.
Additional requirements to scope:
- Download/upload volumes from Cinder.
- Download/upload OVA with Cinder volumes.
- Using Cinder volumes without needing storage domain and SPM in the DC.
- Online update of a Cinder volume's size on the VM after an external LUN resize via Cinder.
- Display and manage unattached/attached Cinder volumes with snapshots (UI plugin?).
- Host assisted Cinder volume retype (cold/live).
- Multipath for Cinder volumes.
- SSL support for Cinder external provider.
- Allow read only Cinder volumes.
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Please keep in mind that before dropping the current Cinder integration, we need some time and a way to migrate existing Ceph integrations.
There is more than a couple of clusters like this in the whole world. I know of clusters with hundreds of terabytes and many VMs; my cluster is 0.5 PB, with 210 librbd clients (oVirt disks), 110 VMs, and 1 storage provider with 4 auth keys (Cinder volume types, i.e. Ceph pools).
I second what Konstantin said. We have a similarly sized deployment: 530 TB, 290 libvirt clients, over 700 RBDs, and we really need a viable upgrade path from 4.2 with Cinder as an external provider to 4.3 with cinderlib.
I am glad to see that there is movement towards simplifying the use of storage such as Ceph.
As already mentioned, it would be good to work out the migration procedure from the oVirt + Cinder solution.
Regarding the item "Support migration from storage domains vDisk to Cinder based volume": it's great, but it would also be nice to work out creating virtual machines from OpenStack Glance. Currently oVirt can only create a VM from a Glance image on hosted_storage, and does not support external providers.
And I know that this is not part of this thread, but would it be possible to start implementing hosted_engine installation on Ceph RBD/CephFS?
This request is not currently committed to 4.4.z, moving it to 4.5
I looked at the Cinderlib integration oVirt doc provided by Michal Skrivanek.
I don't understand why oVirt hosts need the ceph-common package. ceph-common is a meta package that installs userland utilities such as rbd and rados; hosts actually only need the libvirt-daemon-driver-storage-rbd package for RBD support in QEMU.
I also don't see how to configure the Ceph monitors array. The current OpenStack Cinder provider code gets all disk-related configuration from Cinder; hosts have zero configuration for this...
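For the RBD driver, the monitor list normally comes from the ceph.conf file that the backend points at (via rbd_ceph_conf), not from per-host oVirt configuration. A minimal stdlib-only sketch of reading the mon_host entry from such a file (the file contents below are hypothetical):

```python
import configparser

# Hypothetical ceph.conf contents; on a real host this would be the file
# referenced by rbd_ceph_conf in cinder.conf, e.g. /etc/ceph/ceph.conf.
CEPH_CONF = """
[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
"""

parser = configparser.ConfigParser()
parser.read_string(CEPH_CONF)

# The monitor "array" is just a comma-separated list in the [global] section.
monitors = [m.strip() for m in parser["global"]["mon_host"].split(",")]
print(monitors)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```

In other words, once the host has the ceph.conf and keyring in place, the monitor addresses do not need to be configured anywhere on the oVirt side.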
Here is an example of what a cinder.conf backend section looks like:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true
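For illustration, the backend options above can be parsed with Python's stdlib configparser, the same ini format cinder.conf uses. This is only a sketch: the [replicated-rbd] section header is an assumption added here for completeness, since cinder.conf groups each backend's options under its own section.

```python
import configparser

# The backend options from the example above, wrapped in a section header
# (cinder.conf requires one per backend; the name chosen here simply
# mirrors volume_backend_name and is an assumption for this sketch).
CINDER_CONF = """
[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true
"""

parser = configparser.ConfigParser()
parser.read_string(CINDER_CONF)
backend = parser["replicated-rbd"]

print(backend["rbd_pool"])                             # replicated_rbd
print(backend.getboolean("report_discard_supported"))  # True
print(backend.getint("rados_connect_timeout"))         # -1
```

Everything the host needs to reach the pool (monitors, auth) then resolves through the referenced /etc/ceph/ceph.conf and the rbd_user/rbd_secret_uuid credentials, which is why the host itself carries no disk-related configuration.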
We are past 4.5.0 feature freeze, please re-target.
In 4.5 we concluded the improvements. Not all of them are implemented and the feature has to stay in Tech Preview, but it is more or less stable and working OK. Closing the tracker, as no fundamental changes are planned.
(Slightly) more up-to-date information about usage is available at https://blogs.ovirt.org/2021/07/using-ceph-only-storage-for-ovirt-datacenter/