Bug 1343676 - [RFE][MBS] Add support for using Ceph RBD devices for virtual machine disks
Summary: [RFE][MBS] Add support for using Ceph RBD devices for virtual machine disks
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.10-2
Assignee: Stephen Gordon
QA Contact: Avihai
URL:
Whiteboard:
Depends On: 912761
Blocks: 1539837
 
Reported: 2016-06-07 16:12 UTC by Tony James
Modified: 2024-12-20 18:41 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-16 13:19:54 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2428911 0 None None None 2020-12-08 15:37:10 UTC
Red Hat Knowledge Base (Solution) 3610111 0 None None None 2020-12-08 15:37:10 UTC
Red Hat Knowledge Base (Solution) 5605141 0 None None None 2020-12-09 22:16:29 UTC

Description Tony James 2016-06-07 16:12:21 UTC
It would be beneficial to support RBD devices for virtual machine disks with RHEV.  While Cinder integration is nice, it adds the additional complexity of deploying an OpenStack environment.

Comment 1 Yaniv Lavi 2016-06-26 13:00:59 UTC
We will be looking to add Ceph iSCSI support, but to add RBD devices we will need an infrastructure to offload storage actions.

Comment 2 Tony James 2016-10-06 12:22:03 UTC
Ceph iSCSI is a step in the right direction but wouldn't using RBD devices provide the highest performance (similar to using libgfapi vs fuse)?

Comment 3 Donny Davis 2016-12-06 15:47:44 UTC
Ceph iSCSI would come along with the overhead of having to set up multipath to achieve the best performance. Native RBD devices would be easier in the long run, much like Gluster support is now.

Comment 4 Yaniv Lavi 2016-12-11 15:01:17 UTC
(In reply to Donny Davis from comment #3)
> Ceph iSCSI would come along with the overhead of having to set up multipath
> to achieve the best performance. Native RBD devices would be easier in the
> long run, much like Gluster support is now.

We will be working on an SDS-type solution to allow native RBD, but this will happen only in a later version of RHV 4.x.

Comment 5 Donny Davis 2016-12-11 18:26:09 UTC
(In reply to Yaniv Dary from comment #4)
> (In reply to Donny Davis from comment #3)
> > Ceph iSCSI would come along with the overhead of having to set up multipath
> > to achieve the best performance. Native RBD devices would be easier in the
> > long run, much like Gluster support is now.
> 
> We will be working on an SDS-type solution to allow native RBD, but this will
> happen only in a later version of RHV 4.x.

Any idea on an ETA or projected release, Yaniv? 

Thanks 
Donny

Comment 6 Yaniv Lavi 2016-12-28 13:53:05 UTC
(In reply to Donny Davis from comment #5)
> (In reply to Yaniv Dary from comment #4)
> > (In reply to Donny Davis from comment #3)
> > > Ceph iSCSI would come along with the overhead of having to set up multipath
> > > to achieve the best performance. Native RBD devices would be easier in the
> > > long run, much like Gluster support is now.
> > 
> > We will be working on an SDS-type solution to allow native RBD, but this
> > will happen only in a later version of RHV 4.x.
> 
> Any idea on an ETA or projected release, Yaniv? 

Not yet.

Comment 7 John Apple II 2017-08-10 02:51:34 UTC
I have a customer asking about this currently, as they are planning a Ceph installation and would like to use RBD disks instead of local DAS if possible.

Comment 8 Yaniv Lavi 2017-11-26 13:40:45 UTC
For those CCed on this bug and for future reference: we now support a Ceph iSCSI target with RHV 4.1+; customers can use this path until this RFE is resolved.
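
As a rough sketch of that interim path, assuming the standard ovirt-engine-sdk-python iSCSI storage domain API (the engine URL, credentials, host name, gateway address, target IQN, and LUN ID below are all placeholders), a LUN exported by the Ceph iSCSI gateway can be added as an ordinary iSCSI data domain:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (all connection details are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Create a data domain backed by a LUN exported by the Ceph iSCSI gateway.
sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='ceph_iscsi_data',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),
        storage=types.HostStorage(
            type=types.StorageType.ISCSI,
            logical_units=[
                types.LogicalUnit(
                    id='36001405f00ad652f5e14e95b8cabc123',   # placeholder LUN WWID
                    address='ceph-igw1.example.com',          # placeholder gateway address
                    port=3260,
                    target='iqn.2003-01.com.redhat.iscsi-gw:ceph-igw',  # placeholder IQN
                ),
            ],
        ),
    ),
)

connection.close()

From the engine's point of view this is just another iSCSI LUN, which is why no Ceph-specific support is needed for this path.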

Comment 9 Sandro Bonazzola 2019-01-28 09:40:37 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 11 Yaniv Kaul 2019-01-28 10:27:47 UTC
How far are we with this?

Comment 12 Fred Rolland 2019-01-28 10:45:39 UTC
This is going to be a tech preview in 4.3 as part of the CinderLib integration.

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

There is still work to do, as the needed packages are not yet available in the oVirt channel and cinderlib is not yet packaged as an RPM.

With several manual configuration steps, we are able to add/delete Ceph disks and run VMs. The disks are attached with os-brick as RBD devices.

Also, not all storage operations will be available.
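
For anyone trying the manual steps, a rough, untested sketch of creating a Managed Block Storage domain for Ceph through the Python SDK could look like the following. It assumes the 4.3+ API exposes the managed_block_storage domain type and driver_options; the option names are the cinder RBD driver options, and every path, pool name, and credential is a placeholder:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Create a Managed Block Storage domain whose volumes are provisioned by the
# cinder RBD driver (all values below are placeholders).
sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='ceph-mbs',
        type=types.StorageDomainType.MANAGED_BLOCK_STORAGE,
        storage=types.HostStorage(
            type=types.StorageType.MANAGED_BLOCK_STORAGE,
            driver_options=[
                types.Property(name='volume_driver',
                               value='cinder.volume.drivers.rbd.RBDDriver'),
                types.Property(name='rbd_ceph_conf', value='/etc/ceph/ceph.conf'),
                types.Property(name='rbd_pool', value='rhv-volumes'),
                types.Property(name='rbd_user', value='admin'),
                types.Property(name='rbd_keyring_conf',
                               value='/etc/ceph/ceph.client.admin.keyring'),
                types.Property(name='use_multipath_for_image_xfer', value='true'),
            ],
        ),
    ),
)

connection.close()

Disks created on such a domain are then attached to hosts by os-brick as RBD devices, as described above.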

Comment 17 Michal Skrivanek 2020-12-09 15:59:23 UTC
Delivered in RHV 4.4 as tech preview support (actually even in 4.3, but let's say 4.4 is where it's finally usable).

I would propose to close this bug

Comment 18 Tal Nisan 2020-12-09 16:28:38 UTC
(In reply to Michal Skrivanek from comment #17)
> Delivered in RHV 4.4 as tech preview support (actually even in 4.3, but
> let's say 4.4 is where it's finally usable).
> 
> I would propose to close this bug

I second that

Comment 19 Marina Kalinin 2020-12-09 19:45:22 UTC
Can you guys please be a bit more specific about how exactly it was implemented?
Is it via the cinderlib integration, or is there another mechanism?

If I want to try this out, where do I start?

Comment 20 Marina Kalinin 2020-12-09 19:51:59 UTC
And a second question to Tal is about this KCS: https://access.redhat.com/solutions/2428911.
I would like to update it and would appreciate your help with the wording; it would be based on your answer to my previous question.

Comment 24 Tal Nisan 2020-12-10 13:20:16 UTC
(In reply to Marina Kalinin from comment #19)
> Can you guys please be a bit more specific about how exactly it was
> implemented?
> Is it via the cinderlib integration, or is there another mechanism?
> 
> If I want to try this out, where do I start?

Yes, we refer to the Cinderlib integration.

Comment 26 Eric Kerby 2021-01-06 13:30:31 UTC
Is there a way for this knowledge base article to be available to users without a RHEL subscription? I'm finding a complete lack of instructions on the newer cinderlib/ceph integration.

Comment 27 Marina Kalinin 2021-02-19 19:52:58 UTC
Tal,
I am still confused about this bug.
Is it about the cinderlib integration or about iSCSI gateway testing of Ceph remote storage?
If yes, please adjust the title accordingly.
If not, let's close it as won't fix, since the Cinder OpenStack integration is deprecated in RHV.

Comment 28 Marina Kalinin 2021-03-12 20:08:05 UTC
(In reply to Marina Kalinin from comment #27)
> Tal,
> I am still confused about this bug.
> Is it about the cinderlib integration or about iSCSI gateway testing of Ceph
> remote storage?
> If yes, please adjust the title accordingly.
> If not, let's close it as won't fix, since the Cinder OpenStack integration
> is deprecated in RHV.

Eyal, can you help please?

Comment 29 Eyal Shenitzky 2021-03-14 10:14:58 UTC
(In reply to Marina Kalinin from comment #28)
> (In reply to Marina Kalinin from comment #27)
> > Tal,
> > I am still confused about this bug.
> > Is it about the cinderlib integration or about iSCSI gateway testing of
> > Ceph remote storage?
> > If yes, please adjust the title accordingly.
> > If not, let's close it as won't fix, since the Cinder OpenStack
> > integration is deprecated in RHV.
> 
> Eyal, can you help please?

Please see comment #24; yes, it is about the Cinderlib integration (Managed Block Storage).

From my point of view, we can close it.
Managed Block Storage has been GA since 4.4.

Comment 30 Matthias Muench 2021-03-16 08:46:49 UTC
According to the current Release Notes for 4.4, this is still a tech preview for cinderlib and thus for Ceph RBD as well [1].
Either this is a doc bug or we still don't support Ceph RBD.
It's not clear either way, since the storage domains documentation [2] doesn't reflect any Ceph integration.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/release_notes/index#Additional_Packages_from_Red_Hat_Network 
[2] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/chap-storage

Comment 34 Arik 2022-03-16 13:19:54 UTC
(In reply to Eyal Shenitzky from comment #29)
> Please see comment #24; yes, it is about the Cinderlib integration (Managed
> Block Storage).
> 
> From my point of view, we can close it.
> Managed Block Storage has been GA since 4.4.

+1

Comment 35 Red Hat Bugzilla 2023-09-14 23:59:39 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

