Bug 1282984 - 500 Internal Server Error from running 'glance image-create' on the overcloud
Status: CLOSED DUPLICATE of bug 1284845
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified OS: Unspecified
Priority: high Severity: unspecified
Target Milestone: y2
Target Release: 7.0 (Kilo)
Assigned To: Flavio Percoco
QA Contact: yeylon@redhat.com
Keywords: Automation
Depends On:
Blocks:
Reported: 2015-11-17 19:00 EST by Ronelle Landy
Modified: 2016-04-18 02:52 EDT (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-24 11:09:36 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ronelle Landy 2015-11-17 19:00:34 EST
Description of problem:

Creating an image and uploading that image to Glance on the overcloud results in the following error:

500 Internal Server Error: Failed to upload image 33e1c319-4a2d-4bd9-996b-f8b8979e6041 (HTTP 500)

**************************

Error trace found in: /var/log/glance/api.log on the controller:

2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data     self.conf_read_file(conffile)
2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data     raise make_ex(ret, "error calling conf_read_file")
2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data Error: error calling conf_read_file: errno EINVAL
2015-11-17 17:43:30.442 3349 TRACE glance.api.v2.image_data 
2015-11-17 17:43:30.591 3349 ERROR glance.common.wsgi [req-226f3550-08f2-4ff7-a051-5796aa2accc7 9db2f60dbf074e08b514030302321be2 a4a6ec01150d4d8f86adc317a71ff7dd - - -] Caught error: error calling conf_read_file: errno EINVAL
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi Traceback (most recent call last):
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 881, in __call__
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     request, **action_args)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 909, in dispatch
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     return method(*args, **kwargs)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/common/utils.py", line 508, in wrapped
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     return func(self, req, *args, **kwargs)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/api/v2/image_data.py", line 178, in upload
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     self._restore(image_repo, image)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     six.reraise(self.type_, self.value, self.tb)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/api/v2/image_data.py", line 74, in upload
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     image.set_data(data, size)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 166, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     self.base.set_data(data, size)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/notifier.py", line 429, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     _send_notification(notify_error, 'image.upload', msg)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     six.reraise(self.type_, self.value, self.tb)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/notifier.py", line 378, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     self.repo.set_data(data, size)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/api/policy.py", line 196, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     return self.image.set_data(*args, **kwargs)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/quota/__init__.py", line 296, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     self.image.set_data(data, size=size)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance/location.py", line 377, in set_data
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     context=self.context)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 364, in add_to_backend
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     return store_add_to_backend(image_id, data, size, store, context)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 339, in store_add_to_backend
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     context=context)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance_store/capabilities.py", line 226, in op_checker
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     return store_op_fun(store, *args, **kwargs)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/glance_store/_drivers/rbd.py", line 375, in add
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     with rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     self.conf_read_file(conffile)
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi     raise make_ex(ret, "error calling conf_read_file")
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi Error: error calling conf_read_file: errno EINVAL
2015-11-17 17:43:30.591 3349 TRACE glance.common.wsgi 
2015-11-17 18:02:54.344 3337 ERROR glance.api.v1.upload_utils [req-c6e7243d-8dae-430c-814f-ac35f4923b1d 9db2f60dbf074e08b514030302321be2 a4a6ec01150d4d8f86adc317a71ff7dd - - -] Failed to upload image 33e1c319-4a2d-4bd9-996b-f8b8979e6041
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py", line 113, in upload_data_to_store
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     context=req.context)
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 339, in store_add_to_backend
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     context=context)
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/capabilities.py", line 226, in op_checker
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     return store_op_fun(store, *args, **kwargs)
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/_drivers/rbd.py", line 375, in add
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     with rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     self.conf_read_file(conffile)
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils     raise make_ex(ret, "error calling conf_read_file")
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils Error: error calling conf_read_file: errno EINVAL
2015-11-17 18:02:54.344 3337 TRACE glance.api.v1.upload_utils 
2015-11-17 18:04:54.188 3336 ERROR glance.api.v1.upload_utils [req-e0793f23-3eac-429f-b5c7-a6c66e7fd79c 9db2f60dbf074e08b514030302321be2 a4a6ec01150d4d8f86adc317a71ff7dd - - -] Failed to upload image 76fdfff7-9c87-4420-88d9-dd3ebd3241e5
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py", line 113, in upload_data_to_store
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     context=req.context)
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 339, in store_add_to_backend
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     context=context)
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/capabilities.py", line 226, in op_checker
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     return store_op_fun(store, *args, **kwargs)
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/_drivers/rbd.py", line 375, in add
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     with rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     self.conf_read_file(conffile)
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils     raise make_ex(ret, "error calling conf_read_file")
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils Error: error calling conf_read_file: errno EINVAL
2015-11-17 18:04:54.188 3336 TRACE glance.api.v1.upload_utils 

************************
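The traceback above bottoms out in rados.Rados(conffile=self.conf_file, ...) raising "errno EINVAL" from conf_read_file; librados returns EINVAL when the config file it is handed is missing, unreadable, or not parseable. As a rough illustration of those failure modes (a hypothetical helper, not part of Glance or librados), a pre-flight check could look like:

```python
import configparser
import os


def check_rados_conffile(conffile):
    """Return None if conffile looks usable, else a short reason string.

    Mirrors the failure mode in the traceback: librados' conf_read_file()
    comes back with EINVAL when the file is absent, unreadable, or not
    parseable ini-style content.
    """
    if not conffile:
        return "no conffile configured"
    if not os.path.exists(conffile):
        return "%s does not exist" % conffile
    if not os.access(conffile, os.R_OK):
        return "%s is not readable by this user" % conffile
    try:
        # Ceph configs are ini-style ([global], key = value); a parse
        # failure here is the same class of problem librados rejects.
        parser = configparser.ConfigParser(strict=False, interpolation=None)
        parser.read(conffile)
    except configparser.Error as exc:
        return "%s is not parseable: %s" % (conffile, exc)
    return None
```

On a controller this would be pointed at whatever Ceph config path the glance-api RBD store is configured with (commonly /etc/ceph/ceph.conf).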

Version-Release number of selected component (if applicable):

[heat-admin@overcloud-controller-0 ~]$ rpm -qa | grep glance
python-glanceclient-0.17.0-2.el7ost.noarch
python-glance-store-0.4.0-3.el7ost.noarch
openstack-glance-2015.1.2-1.el7ost.noarch
python-glance-2015.1.2-1.el7ost.noarch


[heat-admin@overcloud-controller-0 ~]$ rpm -qa | grep openstack
openstack-ceilometer-common-2015.1.2-1.el7ost.noarch
openstack-ceilometer-central-2015.1.2-1.el7ost.noarch
openstack-cinder-2015.1.2-1.el7ost.noarch
openstack-neutron-metering-agent-2015.1.2-2.el7ost.noarch
openstack-selinux-0.6.43-1.el7ost.noarch
python-django-openstack-auth-1.2.0-5.el7ost.noarch
openstack-dashboard-2015.1.2-2.el7ost.noarch
openstack-heat-common-2015.1.2-1.el7ost.noarch
openstack-heat-engine-2015.1.2-1.el7ost.noarch
openstack-utils-2014.2-1.el7ost.noarch
openstack-nova-scheduler-2015.1.2-2.el7ost.noarch
openstack-neutron-openvswitch-2015.1.2-2.el7ost.noarch
openstack-swift-2.3.0-2.el7ost.noarch
openstack-ceilometer-alarm-2015.1.2-1.el7ost.noarch
openstack-nova-conductor-2015.1.2-2.el7ost.noarch
openstack-neutron-common-2015.1.2-2.el7ost.noarch
openstack-swift-object-2.3.0-2.el7ost.noarch
openstack-swift-plugin-swift3-1.7-3.el7ost.noarch
openstack-neutron-2015.1.2-2.el7ost.noarch
openstack-ceilometer-collector-2015.1.2-1.el7ost.noarch
openstack-ceilometer-compute-2015.1.2-1.el7ost.noarch
openstack-heat-api-cloudwatch-2015.1.2-1.el7ost.noarch
openstack-nova-novncproxy-2015.1.2-2.el7ost.noarch
openstack-neutron-ml2-2015.1.2-2.el7ost.noarch
openstack-swift-account-2.3.0-2.el7ost.noarch
openstack-puppet-modules-2015.1.8-29.el7ost.noarch
openstack-dashboard-theme-2015.1.2-2.el7ost.noarch
openstack-heat-api-cfn-2015.1.2-1.el7ost.noarch
openstack-nova-api-2015.1.2-2.el7ost.noarch
openstack-keystone-2015.1.2-1.el7ost.noarch
openstack-swift-container-2.3.0-2.el7ost.noarch
python-openstackclient-1.0.3-3.el7ost.noarch
openstack-nova-common-2015.1.2-2.el7ost.noarch
openstack-ceilometer-notification-2015.1.2-1.el7ost.noarch
openstack-ceilometer-api-2015.1.2-1.el7ost.noarch
openstack-nova-console-2015.1.2-2.el7ost.noarch
openstack-glance-2015.1.2-1.el7ost.noarch
openstack-swift-proxy-2.3.0-2.el7ost.noarch
openstack-neutron-bigswitch-lldp-2015.1.38-1.el7ost.noarch
redhat-access-plugin-openstack-7.0.0-0.el7ost.noarch
openstack-nova-compute-2015.1.2-2.el7ost.noarch
openstack-heat-api-2015.1.2-1.el7ost.noarch
openstack-nova-cert-2015.1.2-2.el7ost.noarch
openstack-neutron-lbaas-2015.1.2-1.el7ost.noarch

How reproducible:

Consistently reproducible - picked up when running tempest:

*************

19:28:56 2015-11-17 14:28:44.538 29901 INFO __main__ [-] Creating user 'alt_demo' with tenant 'alt_demo' and password 'secrete'
19:28:56 2015-11-17 14:28:47.114 29901 INFO __main__ [-] Creating flavor 'm1.nano'
19:28:56 2015-11-17 14:28:47.194 29901 DEBUG __main__ [-] Setting [compute] flavor_ref = 51c65e81-26c8-4702-a924-dfd78cc4346b set tools/config_tempest.py:373
19:28:56 2015-11-17 14:28:49.747 29901 INFO __main__ [-] Creating flavor 'm1.micro'
19:28:56 2015-11-17 14:28:49.814 29901 DEBUG __main__ [-] Setting [compute] flavor_ref_alt = 88d706ab-1043-414d-9e94-2b18b6a56b3f set tools/config_tempest.py:373
19:28:56 2015-11-17 14:28:51.094 29901 INFO __main__ [-] Creating image 'cirros-0.3.4-x86_64-disk.img'
19:28:56 2015-11-17 14:28:51.095 29901 INFO __main__ [-] Downloading 'http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img' and saving as 'etc/cirros-0.3.1-x86_64-disk.img'
19:28:56 2015-11-17 14:28:54.269 29901 INFO __main__ [-] Uploading image 'cirros-0.3.4-x86_64-disk.img' from '/home/stack/tempest/etc/cirros-0.3.1-x86_64-disk.img'
19:28:56 2015-11-17 14:28:56.355 29901 CRITICAL tempest [-] ValueError: dictionary update sequence element #0 has length 26; 2 is required
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest Traceback (most recent call last):
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "tools/config_tempest.py", line 742, in <module>
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     main()
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "tools/config_tempest.py", line 149, in main
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     args.image_disk_format)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "tools/config_tempest.py", line 534, in create_tempest_images
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     disk_format=disk_format)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "tools/config_tempest.py", line 568, in find_or_upload_image
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     image = _upload_image(client, image_name, image_dest, disk_format)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "tools/config_tempest.py", line 723, in _upload_image
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     client.store_image(image['id'], data)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "/home/stack/tempest/tempest/services/image/v2/json/image_client.py", line 142, in store_image
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     return service_client.ResponseBody(resp, body)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest   File "/home/stack/tempest/tempest/common/service_client.py", line 51, in __init__
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest     self.update(body_data)
19:28:56 2015-11-17 14:28:56.355 29901 TRACE tempest ValueError: dictionary update sequence element #0 has length 26; 2 is required

**************
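The CRITICAL ValueError at the end of the tempest output is secondary noise rather than the real failure: ResponseBody.__init__ calls dict.update() on the response body, which for this 500 was not the JSON dict it expects. For illustration only (the actual body tempest received is not shown in the log), dict.update() raises exactly that message when handed a sequence whose first element is a 26-character string instead of a key/value pair:

```python
# dict.update() accepts either a mapping or an iterable of key/value pairs.
# Handing it anything else -- here, a list whose only element is a
# 26-character error string rather than a 2-tuple -- raises the same
# ValueError that tempest printed.
body_data = ["500 Internal Server Error!"]  # one 26-character element
assert len(body_data[0]) == 26

try:
    {}.update(body_data)
except ValueError as exc:
    print(exc)  # dictionary update sequence element #0 has length 26; 2 is required
```

So the ValueError is tempest mishandling the error body; the underlying problem is still the HTTP 500 from glance-api.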

Steps to Reproduce:
1. Install OSP director from the latest poodle on bare metal (this may also reproduce on a virt install, but it was found on bare metal)
2. Deploy an overcloud with one controller and one compute node
3. Run minimal tempest tests on the overcloud, or try to create/upload an example image to Glance:

>> wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2
>> glance image-create --name CentOS-7-x86_64-GenericCloud-1503 --disk-format qcow2 --container-format bare < CentOS-7-x86_64-GenericCloud-1503.qcow2

Actual results:

500 Internal Server Error: Failed to upload image 33e1c319-4a2d-4bd9-996b-f8b8979e6041 (HTTP 500)

Expected results:

The image appears ready to use in glance image-list output.

Additional info:

Note that the *same* image uploads correctly with the same version of Glance on the undercloud, which suggests an overcloud configuration issue.
Comment 2 Steve Linabery 2015-11-18 11:11:08 EST
We're also seeing the confusing output from tempest in ci.centos jobs for rdo-manager, e.g.
https://ci.centos.org/view/rdo/job/rdo_manager-periodic-7-rdo-liberty-delorean_mgt-centos-7.0-templates-virthost-minimal_ha-neutron-ml2-vxlan-smoke/173/console

I opened a bug upstream against tempest:
https://bugs.launchpad.net/tempest/+bug/1517536
Comment 3 Flavio Percoco 2015-11-19 12:19:58 EST
Can I have the config files for Glance and Ceph? Judging by the traceback, it would seem that something is wrong in the configs. Is the Ceph config file configured? Is it readable?
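For reference on these questions: the RBD driver in glance_store reads its Ceph config path from the rbd_store_ceph_conf option (default /etc/ceph/ceph.conf) and its rados user from rbd_store_user. A rough sketch of answering "is it configured and readable?" from the controller, assuming a stock glance-api.conf layout (the helper itself is hypothetical, not an existing tool):

```python
import configparser
import os


def describe_rbd_store(glance_api_conf="/etc/glance/glance-api.conf"):
    """Report which Ceph conf file Glance's RBD driver would open and
    whether it is present and readable on this node."""
    parser = configparser.ConfigParser(strict=False, interpolation=None)
    parser.read(glance_api_conf)
    # Kilo-era configs may carry store options in [glance_store] or DEFAULT.
    section = "glance_store" if parser.has_section("glance_store") else "DEFAULT"
    ceph_conf = parser.get(section, "rbd_store_ceph_conf",
                           fallback="/etc/ceph/ceph.conf")
    return {
        "ceph_conf": ceph_conf,
        "rados_id": parser.get(section, "rbd_store_user", fallback=None),
        "exists": os.path.exists(ceph_conf),
        "readable": os.access(ceph_conf, os.R_OK),
    }
```

If "exists" or "readable" comes back False for the glance-api service user, librados would fail conf_read_file much like the traceback shows.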
Comment 4 Flavio Percoco 2015-11-20 13:45:07 EST
I looked into one of the environments and it doesn't have ceph enabled. The logs on that environment don't show the error reported in this BZ.

For this error to happen, ceph must be enabled and used on uploads.
Comment 5 Steve Linabery 2015-11-20 16:37:28 EST
I was able to get another 500 while running config_tempest.py.

1. Log in to the undercloud and su to the stack user
2. source ~/overcloudrc
3. glance image-delete <for all images>
4. Re-run config_tempest.py using the command from the khaleesi output*

Having added some 'print' debugging in tempest/services/image/v2/json/image_client.py, I can see the response from the glance api:
store_image:
v2/images/4b5cb428-6be5-4248-99d1-e53cebdacb59/file 
500

I can't find anything in journalctl or /var/log reflecting this, though. I may be looking in the wrong way or place.

*to wit,
source /home/stack/overcloudrc; cd /home/stack/tempest && tools/config_tempest.py --out etc/tempest.conf --network-id 5996b189-3bac-4506-beaf-1f6fe584571d --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD network.tenant_network_cidr 192.168.0.0/24 object-storage.operator_role swiftoperator orchestration.stack_owner_role heat_stack_owner
Comment 6 Steve Linabery 2015-11-20 16:38:13 EST
(In reply to Steve Linabery from comment #5)
> I was able to get another 500 while running config_tempest.py.
> [...]

I should note this was not on the env where the original bug was produced, but on a virthost-based installation that I ran subsequently.
Comment 7 Jaromir Coufal 2015-11-23 07:29:51 EST
I believe I know the answers, but I want to clarify:
* Is this happening on all deployment configurations or just some specific ones?
* Is this bug affecting basic overcloud functionality (ability to launch VM since you cannot create an image)?

Thanks, Jarda
Comment 8 Steve Linabery 2015-11-23 11:41:52 EST
(In reply to Jaromir Coufal from comment #7)
> I believe I know the answers, but I want to clarify:
> * Is this happening on all deployment configurations or just some specific
> ones?

I can't say for sure because it is intermittent in the places where we've seen it. In other words, it could be more widespread than we have observed (not to be alarmist, but I don't know).

> * Is this bug affecting basic overcloud functionality (ability to launch VM
> since you cannot create an image)?

If you can create an image (see 'intermittent' above), tempest passes.
> 
> Thanks, Jarda
Comment 9 Steve Linabery 2015-11-23 12:38:35 EST
Here's what it looks like in glance/api.log from a ci.centos run where we saw the error output from config_tempest.py

https://ci.centos.org/artifacts/rdo/jenkins-rdo_manager-periodic-7-rdo-liberty-production-centos-7.0-templates-virthost-minimal_ha-neutron-ml2-vxlan-smoke-37/overcloud-controller-1/var/log/glance/api.log.gz
Comment 10 Steve Linabery 2015-11-23 13:02:10 EST
fpercoco just found this
https://bugs.launchpad.net/glance-store/+bug/1213179
Comment 11 Jaromir Coufal 2015-11-23 13:28:17 EST
I am not sure if it is related, since the bug was filed more than 2 years ago and we caught this issue in the CI just now...
Comment 12 Steve Linabery 2015-11-23 14:58:46 EST
We think this is isolated to HA deployments. We cannot find an example of it failing with the tempest error output on a 'minimal' deployment.
Comment 13 Giulio Fidente 2015-11-23 17:47:29 EST
Following up on comment #3 from Flavio, can you paste the entire cmdline used to deploy?
Comment 14 Steve Linabery 2015-11-23 18:05:37 EST
(In reply to Giulio Fidente from comment #13)
> Following up on comment #3 from Flavio, can you paste the entire cmdline
> used to deploy?

openstack overcloud deploy --debug --log-file overcloud_deployment_71.log --templates --libvirt-type=qemu --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server 10.5.26.10 --control-scale 3 --compute-scale 1 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0 --control-flavor baremetal --compute-flavor baremetal --ceph-storage-flavor baremetal --block-storage-flavor baremetal --swift-storage-flavor baremetal -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e ~/network-environment.yaml
Comment 15 Steve Linabery 2015-11-23 18:18:31 EST
We have a baremetal multinode poodle-based install from this afternoon on which I am unable to reproduce the error.
Comment 16 wes hayutin 2015-11-23 18:22:56 EST
(In reply to Steve Linabery from comment #15)
> We have a baremetal multinode poodle-based install from this afternoon on
> which I am unable to reproduce the error.

Steve, can you describe your reproduction steps?
Can you also make sure you try a script that uploads an image to Glance 10-20 times?
Comment 18 Steve Linabery 2015-11-23 19:38:35 EST
(In reply to wes hayutin from comment #17)
> This is also interesting...
> https://rhos-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/osp_director-rhos-
> 7_director-poodle-rhel-7.2-templates-baremetal-dell_pe_r630-minimal_ha-
> bond_with_vlans-neutron-gre/14/testReport/tempest.api.volume.
> test_volumes_get/VolumesV2GetTest/
> test_volume_create_get_update_delete_from_image_id_54a01030_c7fc_447c_86ee_c1
> 182beae638_image_smoke_/

To reproduce on the virthost env, I deleted all glance images and ran config_tempest.py. It fails on approximately every other image upload.

Here's the script I used to exercise the baremetal environment where I cannot reproduce the failure:

#!/bin/bash

counter=0
while true; do
  source ~/overcloudrc
  for n in $(glance image-list | grep cirros | awk '{print $2}'); do
    glance image-delete "$n"
  done
  cd /home/stack/tempest && tools/config_tempest.py --out etc/tempest.conf --network-id 213cda4c-0af5-4895-8179-10e643222de3 --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD network.tenant_network_cidr 192.168.0.0/24 object-storage.operator_role swiftoperator orchestration.stack_owner_role heat_stack_owner
  if [ $? -ne 0 ]; then
    echo "failed"
    break
  fi
  ((counter+=1))
  echo "passed $counter times"
done
Comment 19 Steve Linabery 2015-11-24 09:06:11 EST
https://bugzilla.redhat.com/show_bug.cgi?id=1284845

This looks like the root cause.

config_tempest.py is using the v2 api:
https://github.com/redhat-openstack/tempest/blob/kilo/tempest/services/image/v2/json/image_client.py#L94
Comment 20 Mike Burns 2015-11-24 11:09:36 EST

*** This bug has been marked as a duplicate of bug 1284845 ***
