Description of problem:
Is cinder as glance default_store implemented?
https://ask.openstack.org/en/question/7322/how-to-use-cinder-as-glance-default_store/
I don't think that was ever really finished or fully baked. Entering this bug to keep track in Kilo.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
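For context, "cinder as glance default_store" means pointing Glance at the cinder store in glance-api.conf. A minimal sketch, assuming the glance_store option names of the Juno/Kilo era (section and option names should be checked against the installed glance_store release):

  [glance_store]
  stores = cinder,file,http
  default_store = cinder
  # service catalog entry used to locate the Cinder API (assumed default-style value)
  cinder_catalog_info = volume:cinder:publicURL

Even with this configured, the driver of that era could not actually upload image data into a volume, which is the incompleteness the rest of this bug discusses.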
The problem is in OpenStack. Suggest we target it for 6.0: start the work upstream in Kilo and then backport to Juno.
Cinder's driver for Glance is far from complete, and I understand how this can be confusing. I don't think it's going to be completed during Kilo - at least I haven't heard of anyone interested in doing this. What's the use case?

The current workflow for cinder's driver works like this:

1. Create a volume
2. Create an image with a remote location

Can I have some extra details on what's needed and whether the above works?
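To make the two-step workflow above concrete, a rough CLI sketch (the cinder:// URL scheme is the one the cinder store registers; the exact client flags are from the Juno/Kilo-era clients and should be treated as an assumption):

$ cinder create --display-name image-vol 1
$ glance image-create --name image-on-cinder --disk-format raw --container-format bare --location cinder://<volume-id>

Note that this only records the location in Glance; no image bits are copied, which is a large part of why the driver is considered incomplete.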
Related openstack defect:
https://bugs.launchpad.net/cinder/+bug/1382681

The use case is to use cinder backend storage to store glance images.
(In reply to Rajini Ram from comment #4)
> Related openstack defect:
> https://bugs.launchpad.net/cinder/+bug/1382681
>
> The use case is to use cinder backend storage to store glance images.

Right, but this is not the final use case. What would you like to achieve? Boot from volume? Just volume provisioning?

Booting from volume and creating volumes from images does not require Glance's cinder driver to be enabled, although it would be easier if it had full support for cinder.

That said, the work on Glance's cinder driver is currently blocked on Cinder's brick work. We'll be digging more into this topic at the next summit and will then decide the fate of Glance's cinder driver based on the result of those discussions.

Is there something you're trying to do that can't be done unless this driver is completed?
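As an illustration of the point above, booting from a volume created out of an image already works through Cinder's and Nova's own APIs, without the Glance cinder driver. Roughly (flags per the cinder/nova CLIs of that era; treat this as a sketch):

$ cinder create --image-id <image-id> --display-name boot-vol 10
$ nova boot --flavor 2 --boot-volume <volume-id> my-bfv-instance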
Flavio,
we are trying to achieve consistency across multiple storage backends. Booting from a volume image is a primary use case.

There are two overlapping scenarios. One is to provide Customers the full ability to NOT use ephemeral storage and to use backend storage instead. The second is to use multiple different storage backends to provide different performance and differentiation. Currently we support Ceph and EQL, and will extend this set in the OSP6 timeframe.

Asking Customers to do different things for different backends is a really bad idea and makes a mockery of the common CLI and Horizon UI.

Please add more info on why this has a dependency on the cinder brick work.
(In reply to arkady kanevsky from comment #6)
> Flavio,
> we are trying to achieve consistency across multiple storage backends.
> Booting from a volume image is a primary use case.
> There are two overlapping scenarios.
> One is to provide Customers the full ability to NOT use ephemeral storage and
> to use backend storage instead.

If I understood correctly, the need here is to boot instances on non-ephemeral disks. This is already possible through nova's API. For example:

$ nova boot --flavor 2 --block-device source=image,id=$IMAGE_ID,dest=volume,size=10,shutdown=preserve,bootindex=0 my-non-ephemeral-instance

> The second is to use multiple different storage backends to provide different
> performance and differentiation. Currently we support Ceph and EQL, and will
> extend this set in the OSP6 timeframe.

Yeah, in this case there's some inconsistency in the supported backends, which hopefully this driver will fix. As of now, the only way to get an image whose data lives in a cinder volume is:

1. Upload the image
2. Create the volume from the image
3. Add the volume's location as a remote location for the cinder driver

The above is *far* from the desired behavior and solution.

> Asking Customers to do different things for different backends is a really bad
> idea and makes a mockery of the common CLI and Horizon UI.

Yeah, I totally agree here. Unfortunately, the cinder driver was not completely implemented and this has created more trouble than it has fixed. FWIW, I'm personally working on clearing up the story of this driver. There's a bit of discussion happening in this thread:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/048933.html

> Please add more info on why this has a dependency on the cinder brick work.

Here[0] Zhi Yan explains why this is needed. The summary is that with the brick library, it would be possible to do the volume creation without going through cinder's API. Without it, Glance would have to do the steps listed above under the hood, which would result in a very inefficient implementation.

Since this work will happen in the glance_store library, it should be fairly simple to backport to OSP6.

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048947.html
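To spell out the three-step workaround listed above, a rough sketch (the location-add syntax assumes the v2 glanceclient of that period and the cinder:// URL format of the cinder store; the exact flags are an assumption):

$ glance image-create --name base --disk-format qcow2 --container-format bare --file base.qcow2
$ cinder create --image-id <image-id> 10
$ glance --os-image-api-version 2 location-add --url cinder://<volume-id> <image-id>

Each of the first two steps moves the image data through a public API, which is part of the inefficiency the brick-based approach is meant to avoid.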
*** Bug 1140269 has been marked as a duplicate of this bug. ***
@Rajini Can we open up this issue? It was opened as a private issue, but I don't see much sensitive information here. That was probably a mistake.
Sure. No problem
Updated the external links to point to the current spec targeting this work.
[root@r14nova2 ~]# systemctl status openstack-nova-compute -l
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: active (running) since Fri 2015-10-16 04:13:37 UTC; 3 days ago
 Main PID: 17125 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─17125 /usr/bin/python /usr/bin/nova-compute

Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: info = LibvirtDriver._get_rbd_driver().get_pool_info()
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 344, in get_pool_info
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: with RADOSClient(self) as client:
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 88, in __init__
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: self.cluster, self.ioctx = driver._connect_to_rados(pool)
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py", line 112, in _connect_to_rados
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: client.connect()
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: File "/usr/lib/python2.7/site-packages/rados.py", line 419, in connect
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: raise make_ex(ret, "error calling connect")
Oct 16 04:18:37 r14nova2.r14.rcbd.lab nova-compute[17125]: TimedOut: error calling connect
Moving to 9.0; hopefully the spec will be approved by then.
*** This bug has been marked as a duplicate of bug 1293435 ***