Bug 1154203 - [RFE] Support for cinder as glance default_store
Summary: [RFE] Support for cinder as glance default_store
Keywords:
Status: CLOSED DUPLICATE of bug 1293435
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-glance-store
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 9.0 (Mitaka)
Assignee: Flavio Percoco
QA Contact: nlevinki
URL:
Whiteboard:
Duplicates: 1140269
Depends On:
Blocks:
 
Reported: 2014-10-17 21:17 UTC by Rajini Karthik
Modified: 2016-04-27 05:49 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-10 15:13:32 UTC
Target Upstream Version:
Embargoed:




Links
OpenStack gerrit 183363 (MERGED): Support download from and upload to Cinder volumes (last updated 2020-06-09 21:27:23 UTC)

Description Rajini Karthik 2014-10-17 21:17:21 UTC
Description of problem:
Is cinder as glance default_store implemented?
https://ask.openstack.org/en/question/7322/how-to-use-cinder-as-glance-default_store/
I don't think that was ever really finished or fully baked. Entering this bug to keep track in Kilo.
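
For reference, "cinder as default_store" refers to Glance's store configuration. A minimal sketch of what enabling it could look like, assuming the cinder store were complete (section and option names per the glance_store library; treat the values as illustrative):

  # glance-api.conf
  [glance_store]
  stores = cinder,file,http
  default_store = cinder
  # keystone catalog entry used to locate cinder's endpoint
  cinder_catalog_info = volumev2::publicURL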


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 arkady kanevsky 2014-10-17 21:28:06 UTC
The problem is in OpenStack. Suggest we target it to 6.0.
Start the work upstream in Kilo and then backport to Juno.

Comment 3 Flavio Percoco 2014-10-20 07:14:03 UTC
Cinder's driver for Glance is far from complete, and I understand how this can be confusing. I don't think it's going to be completed during Kilo - at least I haven't heard of anyone interested in doing this. What's the use case?

The current workflow for cinder's driver works like this:

1. Create a volume
2. Create an image with a remote location

Can I have some extra details on what's needed and whether the above works?
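
For concreteness, a rough sketch of that workflow with the CLI (the volume ID is a placeholder, and this assumes the cinder store is enabled in glance-api.conf):

$ cinder create --display-name image-vol 1
$ glance image-create --name image-on-cinder --disk-format raw \
    --container-format bare --location cinder://<volume-id>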

Comment 4 Rajini Karthik 2014-10-20 14:57:01 UTC
Related openstack defect.
https://bugs.launchpad.net/cinder/+bug/1382681

The use case is to use cinder backend storage to store glance images.

Comment 5 Flavio Percoco 2014-10-21 09:22:45 UTC
(In reply to Rajini Ram from comment #4)
> Related openstack defect.
> https://bugs.launchpad.net/cinder/+bug/1382681
> 
> The use case is to use cinder backend storage to store glance images.

Right, but this is not the final use case. What would you like to achieve? Boot from volume? Just volume provisioning?

Booting from volume and creating volumes from images does not require Glance's cinder driver to be enabled, although it would be easier if Glance had full support for cinder.

That said, the work on Glance's cinder driver is currently blocked on Cinder's brick work. We'll be digging more into this topic at the next summit and then decide the fate of this Glance cinder driver based on the result of those discussions.

Is there something you're trying to do that can't be done unless this driver is completed?

Comment 6 arkady kanevsky 2014-10-21 12:53:23 UTC
Flavio,
we are trying to achieve consistency across multiple storage backends.
Booting from a volume image is a primary use case.
There are two overlapping scenarios.
One is to give customers the full ability to NOT use ephemeral storage and to use backend storage instead.
The second is to use multiple different storage backends to provide different performance and differentiation. Currently we support Ceph and EQL, and we will extend this set in the OSP 6 timeframe.
Asking customers to do different things for different backends is a really bad idea and makes a mockery of a common CLI and Horizon UI.

Please add more info on why this has a dependency on the cinder brick work.

Comment 7 Flavio Percoco 2014-10-23 13:25:42 UTC
(In reply to arkady kanevsky from comment #6)
> Flavio,
> we are trying to achieve consistency across multiple storage backends.
> Booting from a volume image is a primary use case.
> There are two overlapping scenarios.
> One is to give customers the full ability to NOT use ephemeral storage and
> to use backend storage instead.

If I understood correctly, the need here is to boot instances on non-ephemeral disks. This is already possible through nova's API. For example:

$ nova boot --flavor 2 --block-device source=image,id=$IMAGE_ID,dest=volume,size=10,shutdown=preserve,bootindex=0 my-non-ephemeral-instance
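
(In that command, source=image,id=...,dest=volume,size=10 tells nova to copy the Glance image into a new 10 GB Cinder volume and boot from it, and shutdown=preserve keeps the volume around after the instance is deleted.)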

> The second is to use multiple different storage backends to provide
> different performance and differentiation. Currently we support Ceph and
> EQL, and we will extend this set in the OSP 6 timeframe.

Yeah, in this case there's some inconsistency in the supported backends, which hopefully this driver will fix. As of now, the only way to create a volume from an image is:

1. Upload the image
2. Create a volume from the image
3. Add the volume's location as a remote location for the cinder driver
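
For concreteness, those three steps might look like this with the CLI (the IDs are placeholders, and location-add requires the v2 image API):

$ glance image-create --name base-image --disk-format qcow2 \
    --container-format bare --file base.qcow2
$ cinder create --image-id <image-id> --display-name base-vol 10
$ glance location-add <new-image-id> --url cinder://<volume-id>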

The above is *far* from the desired behavior and solution.

> Asking customers to do different things for different backends is a really
> bad idea and makes a mockery of a common CLI and Horizon UI.

Yeah, I totally agree here. Unfortunately, the Cinder driver was not completely implemented, and this has created more trouble than it's fixed. FWIW, I'm personally working on clearing up the story of this driver. There's a bit of discussion happening in this thread:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/048933.html

> Please add more info on why this has a dependency on the cinder brick work.

Here[0] Zhi Yan explains why this is needed. The summary is that with the brick library, it would be possible to do the volume creation without going through cinder's API. Without it, Glance would have to do the steps listed above under the hood, which would result in a very inefficient implementation.
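
To illustrate the difference, here is a hypothetical Python sketch of the brick-based path, using the os-brick library that cinder's brick code was later split out into (the connection_info values are placeholders that would really come from cinder's initialize_connection call):

from os_brick.initiator import connector

# Describe this host so cinder could export a volume to it
# (props would be sent to cinder when requesting the export)
props = connector.get_connector_properties(
    root_helper='sudo', my_ip='192.0.2.10',
    multipath=False, enforce_multipath=False)

# Placeholder: in reality this is returned by cinder's initialize_connection
connection_info = {
    'driver_volume_type': 'iscsi',
    'data': {
        'target_portal': '192.0.2.20:3260',
        'target_iqn': 'iqn.2010-10.org.openstack:volume-<id>',
        'target_lun': 1,
    },
}

# Attach the volume locally and read image bytes straight off the block
# device, instead of round-tripping the data through cinder's API
conn = connector.InitiatorConnector.factory('ISCSI', root_helper='sudo')
device = conn.connect_volume(connection_info['data'])
try:
    with open(device['path'], 'rb') as vol:
        chunk = vol.read(65536)
finally:
    conn.disconnect_volume(connection_info['data'], device)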

Since this work will happen in the glance_store library, it should be fairly simple to backport to OSP 6.

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048947.html

Comment 8 Russell Bryant 2014-10-30 15:26:25 UTC
*** Bug 1140269 has been marked as a duplicate of this bug. ***

Comment 9 Flavio Percoco 2015-02-20 13:59:41 UTC
@Rajini

Can we make this issue public? It was opened as a private issue, but I don't see much sensitive information here. That was probably a mistake.

Comment 10 Rajini Karthik 2015-03-16 14:52:54 UTC
Sure, no problem.

Comment 11 Flavio Percoco 2015-10-19 06:50:17 UTC
Updated the external links to point to the current spec targeting this work.

Comment 12 Rajini Karthik 2015-10-19 15:08:21 UTC
[root@r14nova2 ~]# systemctl status openstack-nova-compute -l
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: active (running) since Fri 2015-10-16 04:13:37 UTC; 3 days ago
 Main PID: 17125 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─17125 /usr/bin/python /usr/bin/nova-compute

Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: info = LibvirtDriver._get_rbd_driver().get_pool_info()
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 344, in get_pool_info
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: with RADOSClient(self) as client:
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 88, in __init__
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: self.cluster, self.ioctx = driver._connect_to_rados(pool)
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 112, in _connect_to_rados
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: client.connect()
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/rados.py", line 419, in connect
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: raise make_ex(ret, "error calling connect")
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: TimedOut: error calling connect

Comment 13 Sergey Gotliv 2015-10-21 05:07:34 UTC
Moving to 9.0; hopefully the spec will be approved by then.

Comment 14 Sean Cohen 2016-03-10 15:13:32 UTC

*** This bug has been marked as a duplicate of bug 1293435 ***

