Created attachment 816228 [details]
log

Description of problem:
I installed swift with 5 data servers (clean - only RHEL installed on them). After the install I can see that the swift device is less than 1 GB, which causes image uploads to fail if swift is set as the glance backend.

Version-Release number of selected component (if applicable):
[root@opens-vdsb tmp(keystone_admin)]# rpm -qa | grep swift
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
openstack-swift-1.9.1-2.el6ost.noarch
openstack-swift-proxy-1.9.1-2.el6ost.noarch
python-swiftclient-1.6.0-1.el6ost.noarch
[root@opens-vdsb tmp(keystone_admin)]# rpm -qa | grep packstack
openstack-packstack-2013.2.1-0.6.dev763.el6ost.noarch
[root@opens-vdsb tmp(keystone_admin)]#

How reproducible:
100%

Steps to Reproduce:
1. Install a setup with 1 AIO + 5 data servers (see swift config below)
2. Run df -h on the data servers

Actual results:
/srv/loopback-device/device1  960M   34M  876M   4% /srv/node/device1

Expected results:
If we create a device of a specific size, we should ask the user what size they would like it to be.
Additional info:
CONFIG_SWIFT_PROXY_HOSTS=10.35.***.**

# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW=*****************

# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device (packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=10.35.***.***,10.35.***.***,10.35.***.***,10.35.***.***,10.35.***.***

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=2

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=2

# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
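When /dev is omitted for a storage host, the loopback path described in the comments above amounts to creating a sparse file of the requested size, formatting it, and mounting it under /srv/node. A minimal sketch of that flow; the size variable and file paths are illustrative assumptions, not Packstack's actual manifest code:

```shell
# Hedged sketch: create a loopback storage file of a user-chosen size,
# roughly what a loopback test setup does when no /dev is given.
SIZE_KB=$((2 * 1024 * 1024))          # 2G, e.g. from CONFIG_SWIFT_STORAGE_SIZE
IMG=/tmp/loopback-device1.img         # illustrative path, not Packstack's

# Sparse file: count=0 with seek sets the size without writing the data
dd if=/dev/zero of="$IMG" bs=1024 count=0 seek="$SIZE_KB" 2>/dev/null

# The remaining steps need root, shown for illustration only:
# mkfs.ext4 -F "$IMG"                 # matches CONFIG_SWIFT_STORAGE_FSTYPE=ext4
# mkdir -p /srv/node/device1
# mount -o loop,noatime "$IMG" /srv/node/device1
```

The key detail is the dd invocation: if the seek offset is hardcoded instead of derived from a user-supplied size, every deployment ends up with the same (too small) device, which is what this bug describes.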
Adding OtherQA for bugs in MODIFIED
[para@virtual-rhel ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_virtualrhel-lv_root  7.5G  6.3G  778M  90% /
tmpfs                               499M     0  499M   0% /dev/shm
/dev/vda1                           485M   56M  404M  13% /boot
/srv/loopback-device/device1        1.9G   68M  1.8G   4% /srv/node/device1
[para@virtual-rhel ~]$ cat ~/packstack-answers-20131212-112631.txt | grep SWIFT_STORAGE_SIZE
CONFIG_SWIFT_STORAGE_SIZE=2G
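The check above can be scripted when verifying on several storage hosts: read CONFIG_SWIFT_STORAGE_SIZE from the answer file and compare it against what df reports. A small sketch of the answer-file half; the sample file path and its single-line contents are stand-ins, not the original answer file:

```shell
# Hedged sketch: extract the configured loopback size from a packstack
# answer file, as in the manual verification above.
ANSWERS=/tmp/packstack-answers-sample.txt           # stand-in path
printf 'CONFIG_SWIFT_STORAGE_SIZE=2G\n' > "$ANSWERS"  # stand-in contents

size=$(grep '^CONFIG_SWIFT_STORAGE_SIZE=' "$ANSWERS" | cut -d= -f2)
echo "configured loopback size: $size"
```

Note that the mounted size df shows (1.9G here) is slightly below the configured 2G because ext4 reserves space for filesystem metadata, which is the discrepancy discussed later in this bug.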
The tests above were done on an all-in-one setup and no external data servers were deployed; moving back to ON_QA.
Checking this out in a multi-node install.
I deployed 5 data servers with 11 GB devices, and the size from the answer file is deployed correctly. I'm worried about the metadata usage difference when deploying different sizes:

[root@vm-161-189 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lv_root        20G  2.1G   17G  12% /
tmpfs                         939M     0  939M   0% /dev/shm
/dev/vda1                     485M   54M  406M  12% /boot
/dev/mapper/vg0-lv_home        15G  164M   14G   2% /home
/srv/loopback-device/device3   11G  156M  9.7G   2% /srv/node/device3
[root@vm-161-189 ~]#

Some more tests still need to be done on this, but since different sizes are deployed correctly I am moving the bug to VERIFIED.

openstack-swift-1.10.0-2.el6ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2013-1859.html