1. Proposed title of this feature request

Bypass copying the glance image to the compute node if ceph is the backend for both glance and nova.

3. What is the nature and description of the request?

You have configured ceph as the backend for both glance and nova. When you create a new instance, the glance image is first copied to the compute node and then uploaded to ceph (the nova pool). If the compute node does not have enough space to hold the glance image, spawning the instance fails. So: bypass copying the glance image to the compute node if ceph is the backend for both glance and nova.

Consider the following test cases:

Test case 1:
- Uploaded a qcow2 image to glance.
- Tried creating a new instance using the qcow2 image.

As per my understanding, the flow is as follows:
- nova downloads the image to the compute node under "/var/lib/nova/instances/_base".
- It then converts it to "raw" format:

2015-05-05 19:26:24.083 11351 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): qemu-img convert -O raw /var/lib/nova/instances/_base/02011fbfa322e16ee81589642c16258d9751d8c6.part /var/lib/nova/instances/_base/02011fbfa322e16ee81589642c16258d9751d8c6.converted execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161

- This downloaded disk acts as the parent for all instances that run on this compute node from the same glance image.
- The disk is then imported into the ceph pool:

2015-05-05 19:26:33.335 11351 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/02011fbfa322e16ee81589642c16258d9751d8c6 cc1862e7-8376-45ea-96d6-4dd3e6f65739_disk --new-format --id cinder --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161

_________

Test case 2:
- Uploaded a raw image to glance.
- Tried creating a new instance using the raw image.
2015-05-07 15:58:01.523 11351 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1722143c845c34596270946f06e0431b8a5e7621.part execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161

2015-05-07 15:58:04.712 11351 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1722143c845c34596270946f06e0431b8a5e7621 7fd7eeb1-f53f-4bf6-b619-33d74185982e_disk --new-format --id cinder --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161

- So nova still downloads the image from glance to the compute node. It checks the format of the image; as it is already raw, it does not convert it.
- The raw image is then pushed from the compute node to ceph.

This also indicates that when we upload a qcow2 image, glance simply stores it in ceph without converting it to raw; as observed in "Test case 1", the image is converted from qcow2 --> raw only when we boot an instance. <<<<<<< need to reconfirm if this understanding is correct.

____________________________________

My guess is that the reason for placing the disk on the compute node is to convert it from other formats to "raw", since ceph only accepts raw volumes. Further, whenever we reuse the same glance image, the image is pushed again from the compute node (/var/lib/nova/instances/_base) to ceph.

To conclude: irrespective of the format of the uploaded glance image, when you create an instance the image is first copied to the compute node under "/var/lib/nova/instances/_base" and then uploaded to ceph. We need to bypass copying the image to the compute node.
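Since direct cloning only works for raw images, a workaround consistent with the behavior above is to convert qcow2 images to raw once, before uploading them to glance, rather than letting every compute node convert them at boot. A minimal sketch, where the image file names are placeholders and the `qemu-img` and `glance` CLIs are assumed to be installed and configured:

```shell
# Convert the qcow2 image to raw locally (placeholder file names).
# Ceph stores raw volumes, so uploading raw avoids the per-compute-node
# download-and-convert step described above.
qemu-img convert -O raw myimage.qcow2 myimage.raw

# Upload the raw image to glance, declaring disk_format=raw so nova
# does not need to inspect or convert it at boot time.
glance image-create --name myimage-raw \
    --disk-format raw --container-format bare \
    --file myimage.raw
```

These commands run against a live glance endpoint, so they are shown as an illustration rather than a tested recipe.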
This feature is already available in RHOS 6 (and even in RHOS 5 through bug #1062022). Ensure glance is configured with enable_v2_api=True and show_image_direct_url=True, and that you store raw images in glance.
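For reference, the glance settings named above would look like the following fragment of glance-api.conf. The nova [libvirt] section is included as an assumption for completeness; the pool name, ceph user, and ceph.conf path are taken from the rbd import log lines earlier in this report, and may differ in other deployments:

```
# /etc/glance/glance-api.conf
[DEFAULT]
enable_v2_api = True
# Expose the image's rbd location in the API response so nova can
# clone it copy-on-write instead of downloading it.
show_image_direct_url = True

# /etc/nova/nova.conf (compute node) -- assumed rbd image backend settings;
# values below match the "rbd import --pool vms ... --id cinder" log lines.
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
```

With raw images in glance and these options set, nova clones the rbd image directly in ceph and nothing lands under /var/lib/nova/instances/_base.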