Bug 1023532 - packstack [swift]: swift device on data server is created with less than 1GB
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: rc
Target Release: 4.0
Assigned To: Ivan Chavero
QA Contact: Dafna Ron
Keywords: OtherQA, Triaged
Blocks: 1012407
Reported: 2013-10-25 12:06 EDT by Dafna Ron
Modified: 2016-04-26 11:07 EDT
CC List: 8 users

Fixed In Version: openstack-packstack-2013.2.1-0.14.dev919.el6ost
Doc Type: Bug Fix
Doc Text:
When deploying the Object Storage service, openstack-packstack automatically set the service's loopback file storage device size to 1GB. This size proved insufficient for most use cases. With this update, users can now configure the new CONFIG_SWIFT_STORAGE_SIZE setting. The openstack-packstack utility will use this setting to determine what size to use for the Object Storage service's device.
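For illustration, the new setting is a single line in the packstack answer file; the 10G value below is only an assumed example, not a documented default:

CONFIG_SWIFT_STORAGE_SIZE=10G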
Last Closed: 2013-12-19 19:32:31 EST
Type: Bug

Attachments
log (748 bytes, application/x-xz)
2013-10-25 12:06 EDT, Dafna Ron

External Trackers
Launchpad: 1244715
OpenStack gerrit: 55645

Description Dafna Ron 2013-10-25 12:06:26 EDT
Created attachment 816228: log

Description of problem:

I installed swift with 5 data servers (clean - only RHEL installed on them).
After the installation, the swift device is smaller than 1 GB, which causes image uploads to fail if swift is set as the glance backend.
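For context, a rough sketch of the glance-to-swift wiring that triggers the failure, assuming Havana-era option names in /etc/glance/glance-api.conf (the Keystone host, tenant, and password below are placeholders):

default_store = swift
swift_store_auth_address = http://192.0.2.10:5000/v2.0/
swift_store_user = services:glance
swift_store_key = GLANCE_SWIFT_PASSWORD
swift_store_create_container_on_put = True

With that backend, any image larger than the free space left on the ~1 GB loopback device fails to upload, e.g.:

glance image-create --name test --disk-format qcow2 --container-format bare --file image.qcow2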

Version-Release number of selected component (if applicable):

[root@opens-vdsb tmp(keystone_admin)]# rpm -qa |grep swift 
[root@opens-vdsb tmp(keystone_admin)]# rpm -qa |grep packstack
[root@opens-vdsb tmp(keystone_admin)]# 

How reproducible:


Steps to Reproduce:
1. Install a setup with 1 AIO + 5 data servers (see the swift config below)
2. Run df -h on the data servers

Actual results:

/srv/loopback-device/device1  960M   34M  876M   4% /srv/node/device1

Expected results:

If we create a device of a specific size, we should ask the user what size they would like it to be.

Additional info:


# The password to use for the Swift to authenticate with Keystone

# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device (packstack does not create
# the filesystem, you must do this first), if /dev is omitted
# Packstack will create a loopback device for a test setup

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured

# FileSystem type for storage nodes
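
For illustration, a filled-in version of this excerpt might look as follows; the key names are assumed from the packstack answer-file format of this era, and the password, IP addresses, and counts are placeholders (with /dev omitted, packstack creates the loopback devices this bug is about):

CONFIG_SWIFT_KS_PW=SWIFT_KS_PASSWORD
CONFIG_SWIFT_STORAGE_HOSTS=192.0.2.11,192.0.2.12,192.0.2.13,192.0.2.14,192.0.2.15
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4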
Comment 2 Scott Lewis 2013-12-09 10:30:46 EST
Adding OtherQA for bugs in MODIFIED
Comment 5 Martin Magr 2013-12-12 11:15:20 EST
[para@virtual-rhel ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_virtualrhel-lv_root  7.5G  6.3G  778M  90% /
tmpfs                               499M     0  499M   0% /dev/shm
/dev/vda1                           485M   56M  404M  13% /boot
/srv/loopback-device/device1        1.9G   68M  1.8G   4% /srv/node/device1
[para@virtual-rhel ~]$ cat ~/packstack-answers-20131212-112631.txt | grep SWIFT_STORAGE_SIZE
Comment 6 Dafna Ron 2013-12-12 11:27:25 EST
The tests were done on an all-in-one (AIO) setup and no external data servers were deployed.
Moving back to ON_QA.
Comment 7 Ivan Chavero 2013-12-13 02:47:04 EST
Checking it out in a multinode install.
Comment 8 Dafna Ron 2013-12-13 05:59:52 EST
I deployed 5 data servers with 11 GB devices, and the size from the answer file is deployed correctly.
I'm worried about the metadata usage difference when deploying different sizes:

[root@vm-161-189 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lv_root        20G  2.1G   17G  12% /
tmpfs                         939M     0  939M   0% /dev/shm
/dev/vda1                     485M   54M  406M  12% /boot
/dev/mapper/vg0-lv_home        15G  164M   14G   2% /home
/srv/loopback-device/device3   11G  156M  9.7G   2% /srv/node/device3
[root@vm-161-189 ~]# 

Some more tests need to be done on this, but since different sizes deploy correctly I am moving the bug to VERIFIED.
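One way to inspect that metadata overhead is to compare the loopback file's allocated size with the mounted filesystem's usable size; the paths below assume the layout from the df output above:

ls -lsh /srv/loopback-device/device3
df -h /srv/node/device3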

Comment 10 errata-xmlrpc 2013-12-19 19:32:31 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
