Created attachment 803378 [details]
logs

Description of problem:
I configured Swift as Glance's backend, and image upload fails with an error when we try to create an image from --file. Creating images with --location (http://) works.

Version-Release number of selected component (if applicable):
[root@nott-vdsa ~(keystone_admin)]# rpm -qa |grep glance
python-glance-2013.2-0.10.b3.el6ost.noarch
openstack-glance-2013.2-0.10.b3.el6ost.noarch
python-glanceclient-0.10.0-1.el6ost.noarch
openstack-glance-doc-2013.2-0.10.b3.el6ost.noarch
[root@nott-vdsa ~(keystone_admin)]# rpm -qa |grep swift
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
openstack-swift-proxy-1.8.0-6.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
openstack-swift-1.8.0-6.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Configure Swift as the Glance backend.
2. Create a new image using --location.
3. Create a new image using --file.

Actual results:
We get an error when trying to create an image with --file.

Expected results:
Image creation should succeed.

Additional info:
[root@nott-vdsa ~(keystone_admin)]# glance image-create --name swift2 --disk-format qcow2 --container-format bare --file /tmp/rhel-server-x86_64-kvm-6.4_20130130.0-2-sda.qcow2
Request returned failure status.
500 Internal Server Error
Failed to upload image cca7a691-2346-4bcd-9cc3-9888324c1813 (HTTP 500)

2013-09-26 14:45:50.582 383 ERROR glance.store.swift [42035704-d1d6-467e-980c-e58bc21e8e46 f029a90f77bf4df5898d79c64e176738 b5311177ffd44e9c91af030911f0b9d4] Failed to add object to Swift. Got error from Swift: put_object('glance', 'cca7a691-2346-4bcd-9cc3-9888324c1813', ...) failure and no ability to reset contents for reupload.
2013-09-26 14:45:50.582 383 ERROR glance.api.v1.upload_utils [42035704-d1d6-467e-980c-e58bc21e8e46 f029a90f77bf4df5898d79c64e176738 b5311177ffd44e9c91af030911f0b9d4] Failed to upload image cca7a691-2346-4bcd-9cc3-9888324c1813
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/api/v1/upload_utils.py", line 101, in upload_data_to_store
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils     store)
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/store/__init__.py", line 333, in store_add_to_backend
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/store/swift.py", line 441, in add
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils     raise glance.store.BackendException(msg)
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils BackendException: Failed to add object to Swift.
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils Got error from Swift: put_object('glance', 'cca7a691-2346-4bcd-9cc3-9888324c1813', ...) failure and no ability to reset contents for reupload.
2013-09-26 14:45:50.582 383 TRACE glance.api.v1.upload_utils
2013-09-26 14:45:50.584 383 DEBUG glance.registry.client.v1.api [42035704-d1d6-467e-980c-e58bc21e8e46 f029a90f77bf4df5898d79c64e176738 b5311177ffd44e9c91af030911f0b9d4] Updating image metadata for image cca7a691-2346-4bcd-9cc3-9888324c1813... update_image_metadata /usr/lib/python2.6/site-packages/glance/registry/client/v1/api.py:192
2013-09-26 14:45:50.584 383 DEBUG glance.common.client [42035704-d1d6-467e-980c-e58bc21e8e46 f029a90f77bf4df5898d79c64e176738 b5311177ffd44e9c91af030911f0b9d4] Constructed URL: http://0.0.0.0:9191/images/cca7a691-2346-4bcd-9cc3-9888324c1813 _construct_url /usr/lib/python2.6/site-packages/glance/common/client.py:408
2013-09-26 14:45:50.657 383 DEBUG glance.registry.client.v1.client [42035704-d1d6-467e-980c-e58bc21e8e46 f029a90f77bf4df5898d79c64e176738 b5311177ffd44e9c91af030911f0b9d4] Registry request PUT /images/cca7a691-2346-4bcd-9cc3-9888324c1813 HTTP 200 request id req-52689178-c5bc-47ac-8219-a02f38cc404b do_request /usr/lib/python2.6/site-packages/glance/registry/client/v1/client.py:98
https://bugs.launchpad.net/glance/+bug/1231406
I'm a bit confused between --file and --copy-from, TBH. Does the latter work with the Swift store?
I was too, so I asked around :)

When we use --location, the image is created with the specified location, but the data is not downloaded to the server. When we use --copy-from, the data is downloaded right away. So if I use:

[root@nott-vdsa tmp(keystone_admin)]# glance image-create --name bug1012407 --disk-format qcow2 --container-format bare --copy-from http://XXXXXX
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2013-09-26T15:34:14                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 14cd2933-6669-4f15-b88c-89be4f5e0f04 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bug1012407                           |
| owner            | b5311177ffd44e9c91af030911f0b9d4     |
| protected        | False                                |
| size             | 31357907                             |
| status           | queued                               |
| updated_at       | 2013-09-26T15:34:14                  |
+------------------+--------------------------------------+

you can see that the image appears in the glance container right away:

[root@nott-vdsa ~(keystone_glance)]# swift list glance
14cd2933-6669-4f15-b88c-89be4f5e0f04
4b5cb0ad-cb14-4ed8-9510-313df61f77fe
7ac441dc-c8e3-431b-b356-959dccbd2846
c70ea9b7-3b34-4afa-b167-295a3bc3a8f8
[root@nott-vdsa ~(keystone_glance)]#

But if you use --location, the image is created, yet until we actually use it the data is not downloaded, hence it is not shown in the glance container. --file, however, fails completely. So if --file is not supported for Swift we need to indicate it, although I am not sure why it would not be supported.
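To keep the three upload modes discussed above straight, here is a hedged sketch; the URL and file path are illustrative placeholders, not the ones from this report:

```shell
# --location: only registers the URL; Glance stores no data itself,
#             so nothing appears in the 'glance' Swift container.
glance image-create --name img-loc --disk-format qcow2 \
    --container-format bare --location http://example.com/image.qcow2

# --copy-from: Glance fetches the data from the URL right away and
#              writes it to the configured backend (Swift here).
glance image-create --name img-copy --disk-format qcow2 \
    --container-format bare --copy-from http://example.com/image.qcow2

# --file: the client streams a local file to the Glance API, which
#         writes it to the backend; this is the path that fails here.
glance image-create --name img-file --disk-format qcow2 \
    --container-format bare --file /tmp/image.qcow2
```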
I tried to reproduce this bug but I couldn't. Here's the output:

[root@rh-1012407 ~(keystone_admin)]# rpm -qa |grep glance
python-glance-2013.2-0.10.b3.el6ost.noarch
openstack-glance-2013.2-0.10.b3.el6ost.noarch
python-glanceclient-0.10.0-1.el6ost.noarch
[root@rh-1012407 ~(keystone_admin)]# rpm -qa |grep swift
openstack-swift-1.9.1-2.el6ost.noarch
openstack-swift-container-1.9.1-2.el6ost.noarch
openstack-swift-account-1.9.1-2.el6ost.noarch
openstack-swift-object-1.9.1-2.el6ost.noarch
openstack-swift-proxy-1.9.1-2.el6ost.noarch
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
[root@rh-1012407 ~(keystone_admin)]# glance image-list
+--------------------------------------+------+-------------+------------------+------+--------+
| ID                                   | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+------+-------------+------------------+------+--------+
| 5b5f6b84-0b5e-4cce-87a4-524a4e919566 | test | qcow2       | bare             | 183  | active |
| 7b86d036-3806-4a33-b506-94e4580a6898 | test | qcow2       | bare             | 183  | active |
+--------------------------------------+------+-------------+------------------+------+--------+
[root@rh-1012407 ~(keystone_admin)]# glance image-create --name test --container-format bare --disk-format qcow2 --file somefile
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 84fc658615603947bab6b7ea7909a842     |
| container_format | bare                                 |
| created_at       | 2013-10-02T12:43:50                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | d8808047-16fc-4a7f-8b1a-ad7afeac7033 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | test                                 |
| owner            | 94bc5d630c1841f9926cd3d6b0ea1128     |
| protected        | False                                |
| size             | 183                                  |
| status           | active                               |
| updated_at       | 2013-10-02T12:43:51                  |
+------------------+--------------------------------------+
[root@rh-1012407 ~(keystone_admin)]# swift -U services:glance -K f0a00546e5a54144 list glance
5b5f6b84-0b5e-4cce-87a4-524a4e919566
7b86d036-3806-4a33-b506-94e4580a6898
d8808047-16fc-4a7f-8b1a-ad7afeac7033
[root@rh-1012407 ~(keystone_admin)]#
I'm working on the Havana downstream release:

[root@nott-vdsa images(keystone_admin)]# rpm -qa |grep glance
python-glance-2013.2-0.10.b3.el6ost.noarch
openstack-glance-2013.2-0.10.b3.el6ost.noarch
python-glanceclient-0.10.0-1.el6ost.noarch
openstack-glance-doc-2013.2-0.10.b3.el6ost.noarch
[root@nott-vdsa images(keystone_admin)]# rpm -qa |grep swift
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
openstack-swift-proxy-1.8.0-6.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
openstack-swift-1.8.0-6.el6ost.noarch

Also, do you think you can try with an actual image file? Maybe it's a size thing?
I did try with a proper image as well; it worked.
*** Bug 1023932 has been marked as a duplicate of this bug. ***
Dafna, once you try to reproduce with more space: if it does not reproduce, then I'm guessing there is a packstack bug. If so, please move this to packstack; if not, close it (please keep the needinfo until after you try to reproduce). Thanks.
The problem is that packstack installs the data servers with 1GB of space. When I increased the space, the image upload succeeded. I am moving the bug to packstack: it should either ask the user how much space to use or automatically use 50% of the available free space on the filesystem.
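The "50% of available free space" suggestion above could be sketched roughly as follows; this is a hypothetical helper, not actual packstack code, and the choice of the root filesystem is an assumption:

```shell
# Hypothetical sketch: derive a CONFIG_SWIFT_STORAGE_SIZE value from
# 50% of the free space on the root filesystem (assumed mount point).
avail_kb=$(df -Pk / | awk 'NR==2 {print $4}')  # free space in KB
half_gb=$(( avail_kb / 2 / 1024 / 1024 ))      # half of it, in whole GB
echo "CONFIG_SWIFT_STORAGE_SIZE=${half_gb}G"
```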
Blocking on #1023532's review to close this one: https://review.openstack.org/#/c/55645/
Change 55645 has already been merged; no longer blocking on that.
Adding OtherQA for bugs in MODIFIED
[para@virtual-rhel ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_virtualrhel-lv_root  7.5G  6.3G  778M  90% /
tmpfs                               499M     0  499M   0% /dev/shm
/dev/vda1                           485M   56M  404M  13% /boot
/srv/loopback-device/device1        1.9G   68M  1.8G   4% /srv/node/device1
[para@virtual-rhel ~]$ cat ~/packstack-answers-20131212-112631.txt | grep SWIFT_STORAGE_SIZE
CONFIG_SWIFT_STORAGE_SIZE=2G
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2013-1859.html