It would be beneficial to support RBD devices for virtual machine disks with RHEV. While Cinder integration is nice, it adds the additional complexity of deploying an OpenStack environment.
We will be looking to add Ceph iSCSI support, but to add RBD devices we will need an infrastructure to offload storage actions.
Ceph iSCSI is a step in the right direction, but wouldn't using RBD devices provide the highest performance (similar to using libgfapi vs. FUSE)?
Ceph iSCSI would come along with the overhead of having to set up multipath to achieve the best performance. Native RBD devices would be easier in the long run, much like Gluster support is now.
(In reply to Donny Davis from comment #3)
> Ceph iSCSI would come along with the overhead of having to set up multipath
> to achieve the best performance. Native RBD devices would be easier in the
> long run, much like Gluster support is now.

We will be working on an SDS-type solution to allow native RBD, but this will happen only in a later version of RHV 4.x.
(In reply to Yaniv Dary from comment #4)
> (In reply to Donny Davis from comment #3)
> > Ceph iSCSI would come along with the overhead of having to set up multipath
> > to achieve the best performance. Native RBD devices would be easier in the
> > long run, much like Gluster support is now.
>
> We will be working on an SDS-type solution to allow native RBD, but this will
> happen only in a later version of RHV 4.x.

Any idea on an ETA or projected release, Yaniv?

Thanks,
Donny
(In reply to Donny Davis from comment #5)
> (In reply to Yaniv Dary from comment #4)
> > (In reply to Donny Davis from comment #3)
> > > Ceph iSCSI would come along with the overhead of having to set up multipath
> > > to achieve the best performance. Native RBD devices would be easier in the
> > > long run, much like Gluster support is now.
> >
> > We will be working on an SDS-type solution to allow native RBD, but this will
> > happen only in a later version of RHV 4.x.
>
> Any idea on an ETA or projected release, Yaniv?

Not yet.
I have a customer asking about this currently, as they are planning a Ceph installation and would like to use RBD disks instead of local DAS if possible.
For those CCed on this bug, and for future reference: we now support a Ceph iSCSI target with RHV 4.1+, and customers can use this path until this RFE is resolved.
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
How far are we with this?
This is going to be a tech preview in 4.3 as part of the cinderlib integration:
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

There is still work to do, as the needed packages are not yet available in the oVirt channel and cinderlib is not yet packaged as an RPM. With several manual configuration steps, we are able to add/delete Ceph disks and run VMs. The disks are attached as RBD devices via os-brick. Also, not all storage operations will be available.
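For anyone who wants to experiment before the oVirt packaging lands, here is a rough sketch of driving cinderlib directly against a Ceph RBD backend. This only illustrates the mechanism, not the exact code the engine uses; the pool name, Ceph user, and file paths are placeholders for your own cluster, and the cinderlib/os-brick API details may vary between versions.

import cinderlib as cl

# Initialize cinderlib with default persistence settings (assumption: the
# defaults are fine for a quick experiment on a single host).
cl.setup()

# Configure a Ceph RBD backend. Pool, user, and paths below are illustrative.
ceph = cl.Backend(
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    volume_backend_name='ceph',
    rbd_pool='volumes',
    rbd_user='cinder',
    rbd_ceph_conf='/etc/ceph/ceph.conf',
    rbd_keyring_conf='/etc/ceph/ceph.client.cinder.keyring',
)

# Create a 10 GiB volume, attach it to this host (os-brick maps it as an RBD
# device), print the local device path, then clean up.
vol = ceph.create_volume(size=10, name='test-rbd-disk')
attachment = vol.attach()
print('Attached RBD device:', attachment.path)
vol.detach()
vol.delete()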
Delivered in RHV 4.4 as tech preview support (actually even in 4.3, but let's say 4.4 is where it finally became usable).

I would propose to close this bug.
(In reply to Michal Skrivanek from comment #17)
> Delivered in RHV 4.4 as tech preview support (actually even in 4.3, but let's
> say 4.4 is where it finally became usable).
>
> I would propose to close this bug.

I second that.
Can you please be a bit more specific about how exactly this was implemented? Is it via the cinderlib integration, or is there another mechanism?

If I want to try this out, where do I start?
And a second question, to Tal, about this KCS: https://access.redhat.com/solutions/2428911. I would like to update it, and I would appreciate your help with the wording; it would be based on your answer to my previous question.
(In reply to Marina Kalinin from comment #19)
> Can you please be a bit more specific about how exactly this was implemented?
> Is it via the cinderlib integration, or is there another mechanism?
>
> If I want to try this out, where do I start?

Yes, we are referring to the cinderlib integration.
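As a concrete starting point (a sketch only, not an official procedure): with Managed Block Storage available, a domain backed by the cinder RBD driver can be added through the REST API or the Python SDK. In the example below the engine URL, credentials, domain name, pool, user, and keyring/conf paths are all placeholders, and the exact set of driver options you need depends on your Ceph cluster and the cinder/cinderlib versions shipped with your release.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()

# Add a Managed Block Storage domain whose driver options point cinderlib at
# the cinder RBD driver. Pool, user, and key paths are illustrative only.
# Attaching the new domain to a data center is a separate step, not shown here.
sds_service.add(
    types.StorageDomain(
        name='ceph-managed-block',
        type=types.StorageDomainType.MANAGED_BLOCK_STORAGE,
        storage=types.HostStorage(
            type=types.StorageType.MANAGED_BLOCK_STORAGE,
            driver_options=[
                types.Property(name='volume_driver',
                               value='cinder.volume.drivers.rbd.RBDDriver'),
                types.Property(name='rbd_pool', value='volumes'),
                types.Property(name='rbd_user', value='cinder'),
                types.Property(name='rbd_ceph_conf', value='/etc/ceph/ceph.conf'),
                types.Property(name='rbd_keyring_conf',
                               value='/etc/ceph/ceph.client.cinder.keyring'),
            ],
        ),
    )
)

connection.close()

The same driver options can also be entered in the Administration Portal when creating a storage domain of type Managed Block Storage; disks created on such a domain are attached to VMs as RBD devices via os-brick.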
Is there a way for this knowledge base article to be made available to users without a RHEL subscription? I'm finding a complete lack of instructions on the newer cinderlib/Ceph integration.
Tal,
I am still confused about this bug.
Is it about the cinderlib integration, or about iSCSI gateway testing of Ceph remote storage?
If it is about cinderlib, please adjust the title accordingly.
If not, let's close it as WONTFIX, since the Cinder OpenStack integration is deprecated in RHV.
(In reply to Marina Kalinin from comment #27)
> Tal,
> I am still confused about this bug.
> Is it about the cinderlib integration, or about iSCSI gateway testing of
> Ceph remote storage?
> If it is about cinderlib, please adjust the title accordingly.
> If not, let's close it as WONTFIX, since the Cinder OpenStack integration is
> deprecated in RHV.

Eyal, can you help please?
(In reply to Marina Kalinin from comment #28)
> (In reply to Marina Kalinin from comment #27)
> > Tal,
> > I am still confused about this bug.
> > Is it about the cinderlib integration, or about iSCSI gateway testing of
> > Ceph remote storage?
> > If it is about cinderlib, please adjust the title accordingly.
> > If not, let's close it as WONTFIX, since the Cinder OpenStack integration
> > is deprecated in RHV.
>
> Eyal, can you help please?

Please see comment #24; yes, it is about the cinderlib integration (Managed Block Storage).

From my point of view, we can close it.
Managed Block Storage has been GA since 4.4.
According to the current Release Notes for 4.4, this is still a tech preview for cinderlib, and thus for Ceph RBD as well [1]. Either this is a documentation bug, or we still don't support Ceph RBD. It's not clear either way, since the storage domains documentation [2] doesn't reflect any Ceph integration.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/release_notes/index#Additional_Packages_from_Red_Hat_Network
[2] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/chap-storage
(In reply to Eyal Shenitzky from comment #29)
> Please see comment #24; yes, it is about the cinderlib integration (Managed
> Block Storage).
>
> From my point of view, we can close it.
> Managed Block Storage has been GA since 4.4.

+1
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.