| Summary: | packstack: we configure only 1 GB for swift data servers | ||||||
|---|---|---|---|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Dafna Ron <dron> | ||||
| Component: | openstack-packstack | Assignee: | Alvaro Lopez Ortega <aortega> | ||||
| Status: | CLOSED ERRATA | QA Contact: | Martin Magr <mmagr> | ||||
| Severity: | high | Docs Contact: | |||||
| Priority: | unspecified | ||||||
| Version: | 4.0 | CC: | abaron, aortega, ddomingo, derekh, dnavale, dron, eglynn, fpercoco, hateya, mmagr, pbrady, yeylon, zaitcev | ||||
| Target Milestone: | rc | Keywords: | OtherQA, Triaged | ||||
| Target Release: | 4.0 | ||||||
| Hardware: | x86_64 | ||||||
| OS: | Linux | ||||||
| Whiteboard: | storage | ||||||
| Fixed In Version: | openstack-packstack-2013.2.1-0.14.dev919.el6ost | Doc Type: | Bug Fix | ||||
| Doc Text: | Red Hat Enterprise Linux OpenStack Platform introduces a new PackStack parameter, CONFIG_SWIFT_STORAGE_SIZE, which enables the user to set the size of the Object Storage loopback file storage device. | Story Points: | --- | ||||
| Clone Of: | Environment: | ||||||
| Last Closed: | 2013-12-20 00:25:04 UTC | Type: | Bug | ||||
| Regression: | --- | Mount Type: | --- | ||||
| Documentation: | --- | CRM: | |||||
| Verified Versions: | Category: | --- | |||||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |||||
| Cloudforms Team: | --- | Target Upstream Version: | |||||
| Bug Depends On: | 1017714, 1023532 | ||||||
| Bug Blocks: | |||||||
| Attachments: | |
Description (Dafna Ron, 2013-09-26 12:23:03 UTC)
I'm a bit confused between --file and --copy-from, to be honest. Does the latter work with the swift store?

I was too, so I asked around :) When we use --location, the image is created with the specified location but is not downloaded to the server. When we use --copy-from, the image is downloaded right away. So if I use:

```
[root@nott-vdsa tmp(keystone_admin)]# glance image-create --name bug1012407 --disk-format qcow2 --container-format bare --copy-from http://XXXXXX
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2013-09-26T15:34:14                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 14cd2933-6669-4f15-b88c-89be4f5e0f04 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bug1012407                           |
| owner            | b5311177ffd44e9c91af030911f0b9d4     |
| protected        | False                                |
| size             | 31357907                             |
| status           | queued                               |
| updated_at       | 2013-09-26T15:34:14                  |
+------------------+--------------------------------------+
```

you can see that the image appears in the glance container right away:

```
[root@nott-vdsa ~(keystone_glance)]# swift list glance
14cd2933-6669-4f15-b88c-89be4f5e0f04
4b5cb0ad-cb14-4ed8-9510-313df61f77fe
7ac441dc-c8e3-431b-b356-959dccbd2846
c70ea9b7-3b34-4afa-b167-295a3bc3a8f8
[root@nott-vdsa ~(keystone_glance)]#
```

If you use --location, however, the image record is created but nothing is downloaded until the image is used, hence it will not be shown in glance. --file, on the other hand, fails completely. So if --file is not supported for swift we need to indicate that, although I am not sure why it would not be supported.

I tried to reproduce this bug but I couldn't.
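The three upload modes discussed above can be summarized in a small lookup table. This is an illustrative sketch (a plain Python dict, not a glance API), reflecting the behavior described in this comment; note that --file was failing in the reporter's environment, which later comments trace to the undersized 1 GB store rather than the mode itself:

```python
# Illustrative summary of glance image-create upload modes (Havana-era
# behavior as described in this bug report; not a glance API).
UPLOAD_MODES = {
    "--file":      {"source": "local file", "in_swift_immediately": True},
    "--copy-from": {"source": "remote URL", "in_swift_immediately": True},
    "--location":  {"source": "remote URL", "in_swift_immediately": False},
}

# Only --location leaves the backing swift container empty until first use.
deferred = [f for f, m in UPLOAD_MODES.items() if not m["in_swift_immediately"]]
print(deferred)  # ['--location']
```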
Here's the output:

```
[root@rh-1012407 ~(keystone_admin)]# rpm -qa |grep glance
python-glance-2013.2-0.10.b3.el6ost.noarch
openstack-glance-2013.2-0.10.b3.el6ost.noarch
python-glanceclient-0.10.0-1.el6ost.noarch
[root@rh-1012407 ~(keystone_admin)]# rpm -qa |grep swift
openstack-swift-1.9.1-2.el6ost.noarch
openstack-swift-container-1.9.1-2.el6ost.noarch
openstack-swift-account-1.9.1-2.el6ost.noarch
openstack-swift-object-1.9.1-2.el6ost.noarch
openstack-swift-proxy-1.9.1-2.el6ost.noarch
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
[root@rh-1012407 ~(keystone_admin)]# glance image-list
+--------------------------------------+------+-------------+------------------+------+--------+
| ID                                   | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+------+-------------+------------------+------+--------+
| 5b5f6b84-0b5e-4cce-87a4-524a4e919566 | test | qcow2       | bare             | 183  | active |
| 7b86d036-3806-4a33-b506-94e4580a6898 | test | qcow2       | bare             | 183  | active |
+--------------------------------------+------+-------------+------------------+------+--------+
[root@rh-1012407 ~(keystone_admin)]# glance image-create --name test --container-format bare --disk-format qcow2 --file somefile
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 84fc658615603947bab6b7ea7909a842     |
| container_format | bare                                 |
| created_at       | 2013-10-02T12:43:50                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | d8808047-16fc-4a7f-8b1a-ad7afeac7033 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | test                                 |
| owner            | 94bc5d630c1841f9926cd3d6b0ea1128     |
| protected        | False                                |
| size             | 183                                  |
| status           | active                               |
| updated_at       | 2013-10-02T12:43:51                  |
+------------------+--------------------------------------+
[root@rh-1012407 ~(keystone_admin)]# swift -U services:glance -K f0a00546e5a54144 list glance
5b5f6b84-0b5e-4cce-87a4-524a4e919566
7b86d036-3806-4a33-b506-94e4580a6898
d8808047-16fc-4a7f-8b1a-ad7afeac7033
[root@rh-1012407 ~(keystone_admin)]#
```

I'm working on the Havana downstream release:

```
[root@nott-vdsa images(keystone_admin)]# rpm -qa |grep glance
python-glance-2013.2-0.10.b3.el6ost.noarch
openstack-glance-2013.2-0.10.b3.el6ost.noarch
python-glanceclient-0.10.0-1.el6ost.noarch
openstack-glance-doc-2013.2-0.10.b3.el6ost.noarch
[root@nott-vdsa images(keystone_admin)]# rpm -qa |grep swift
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
openstack-swift-proxy-1.8.0-6.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
openstack-swift-1.8.0-6.el6ost.noarch
```

Also, do you think you can try with an actual image file? Maybe it's a size thing?

I did try with a proper image as well; it worked.

*** Bug 1023932 has been marked as a duplicate of this bug. ***

Dafna, once you try to reproduce with more space: if it does not reproduce, then I'm guessing there is a packstack bug. If so, please move this to packstack; if not, close it (please keep the needinfo until after you try to reproduce). Thanks.

The problem is that packstack is installing the data servers with 1 GB of space. When I increased the space, we could upload the image. I am moving the bug to packstack, to either ask the user how much space to use or automatically use 50% of the available free space on the file system.

Blocking on #1023532's review to close this one: https://review.openstack.org/#/c/55645/

55645 has already been merged. No longer blocking on that.
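The sizing idea proposed above, giving the Object Storage loopback device 50% of the available free space, could be sketched as follows. This is a minimal illustration, not packstack code; the mount point `/` and the printed answer-file line are assumptions:

```python
import shutil

# Measure free space on the file system that would back the loopback
# device ("/" is an assumption; packstack's device lives under /srv).
total, used, free = shutil.disk_usage("/")

# Propose half the free space, rounded down to whole gigabytes,
# with a 1 GB floor so the store is never sized to zero.
size_gb = max(1, (free // 2) // 2**30)

print(f"CONFIG_SWIFT_STORAGE_SIZE={size_gb}G")
```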
Adding OtherQA for bugs in MODIFIED:

```
[para@virtual-rhel ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_virtualrhel-lv_root  7.5G  6.3G  778M  90% /
tmpfs                               499M     0  499M   0% /dev/shm
/dev/vda1                           485M   56M  404M  13% /boot
/srv/loopback-device/device1        1.9G   68M  1.8G   4% /srv/node/device1
[para@virtual-rhel ~]$ cat ~/packstack-answers-20131212-112631.txt | grep SWIFT_STORAGE_SIZE
CONFIG_SWIFT_STORAGE_SIZE=2G
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2013-1859.html
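For context, the device that CONFIG_SWIFT_STORAGE_SIZE sizes is a loopback file like the `/srv/loopback-device/device1` seen in the df output above. Creating such a file can be sketched as below; this is an illustration only, not packstack's implementation. The path is hypothetical, and the mkfs/mount steps that would follow need root, so they are omitted:

```python
import os
import tempfile

def make_loopback_file(path, size_bytes):
    """Create a sparse file of the given apparent size.

    Sparse means the file reports size_bytes but consumes almost no
    disk blocks until data is written, which is why a 2G device can
    be created even on a nearly full root file system.
    """
    with open(path, "wb") as f:
        f.truncate(size_bytes)
    return os.path.getsize(path)

# Hypothetical demo path; packstack uses /srv/loopback-device/device1.
demo = os.path.join(tempfile.gettempdir(), "swift-loopback-demo.img")
apparent = make_loopback_file(demo, 2 * 2**30)  # 2G, as in the answer file
print(apparent)  # 2147483648
os.remove(demo)
```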