rubygem-staypuft: After successful Nova deployment, unable to run basic glance commands. openstack-glance-api and openstack-glance-registry services are down.

Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch

Steps to reproduce:
1. Install rhel-osp-installer
2. Successfully deploy nonHA Nova (1 controller + 2 compute)
3. Log in to the Horizon dashboard of the newly deployed setup.
4. Source the keystonerc_admin file.
5. Run:
glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public 1 --copy-from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

Result:
Error finding address for http://192.168.0.16:9292/v1/images: HTTPConnectionPool(host='192.168.0.16', port=9292): Max retries exceeded with url: /v1/images (Caused by <class 'httplib.BadStatusLine'>: '')

Running 'glance image-list' results in:
Error finding address for http://192.168.0.16:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20: HTTPConnectionPool(host='192.168.0.16', port=9292): Max retries exceeded with url: /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 (Caused by <class 'httplib.BadStatusLine'>: '')

Workaround: manually start these services on the controller:
systemctl start openstack-glance-registry
systemctl start openstack-glance-api

After that, the glance commands run successfully.
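To confirm the state before applying the workaround, something like the following should show both units as down (unit names taken from the workaround above; expected output paraphrased, not captured from this deployment):

# both units should report "Active: inactive (dead)" while the bug is present
systemctl status openstack-glance-api openstack-glance-registry

# or, more compactly, prints "inactive" for each unit that is down
systemctl is-active openstack-glance-api openstack-glance-registry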
Reproduced with a nonHA Neutron deployment - same package versions.
*** Bug 1177813 has been marked as a duplicate of this bug. ***
Can you please attach logs? What glance configuration did you have?
I'll attach the logs once this is reproduced. Glance was configured with NFS.
Setting needinfo until requested info is provided.
After discussion with QE, this does not reproduce. Closing. Please reopen if it reproduces.
We are seeing this issue in an OSP6 setup, so I am reopening this bug.
We are using the staypuft installer to bring up OpenStack (OSP6). The provided workaround works. I was unable to get any useful logs; please let me know which logs are required.

[root@mac525413e5abeb ~(openstack_admin)]# glance image-list
Error finding address for http://11.0.0.12:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20: HTTPConnectionPool(host='11.0.0.12', port=9292): Max retries exceeded with url: /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 (Caused by <class 'httplib.BadStatusLine'>: '')

Used the provided workaround:

[root@mac525413e5abeb ~(openstack_admin)]# systemctl start openstack-glance-registry
[root@mac525413e5abeb ~(openstack_admin)]# systemctl start openstack-glance-api
[root@mac525413e5abeb ~(openstack_admin)]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
[root@mac525413e5abeb ~(openstack_admin)]#
The most useful logs would be /var/log/messages*, /var/log/glance/*, and /var/log/pacemaker. That said, it would probably be a good idea to tar up all of /var/log and stash it away, in case a useful error message is lurking somewhere else.
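For example, something along these lines on the controller (archive filenames are just suggestions):

# targeted archive of the most likely suspects
tar czf glance-debug-logs.tar.gz /var/log/messages* /var/log/glance /var/log/pacemaker

# or everything under /var/log, to stash away just in case
tar czf varlog-$(hostname -s).tar.gz /var/log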
Created attachment 991094 [details] sosreport
Created attachment 991102 [details] crm_report
Created attachment 991104 [details] /var/log/glance logs
Created attachment 991105 [details] glance related errors in pacemaker logs
Created attachment 991106 [details] Fail messages related to glance in pacemaker
I have attached the sosreport, crm_report, and /var/log/glance logs. The pacemaker and messages logs are too big to attach, so I have grepped them for glance-related entries and attached those instead.
Created attachment 991107 [details] glance related errors in messages logs
Looking at the pacemaker log, there appear to be issues around fs-varlibglanceimages-clone that might be interfering.

Note that if you don't want pacemaker to attempt to mount an NFS share for glance, the parameter $pcmk_fs_manage in quickstack::pacemaker::glance should be set to false. If you do want it to mount (usually NFS) storage for glance, make sure the other params ($pcmk_fs_device, $pcmk_fs_options and $pcmk_fs_type) are correct; see the sketch below.

If the above seems right to you, please attach the yaml for one of the hosts.
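For reference, a minimal sketch of the relevant section of a host's yaml (parameter names as above; the type, device and options values are made-up examples, not taken from this deployment):

classes:
  quickstack::pacemaker::glance:
    # false = pacemaker leaves the glance filesystem alone
    pcmk_fs_manage: true
    # the following are only consulted when pcmk_fs_manage is true
    pcmk_fs_type: nfs
    pcmk_fs_device: "nfs-server.example.com:/srv/glance"   # example value
    pcmk_fs_options: "nosharecache"                        # example value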
Was glance using a local file backend? If that's the case, then this is probably bug 1183815, which is fixed in A1.
Yes, I am using a single-controller setup with the glance backend as a local file. Also, pcmk_fs_manage=true. Thanks Mike and Crag.
Ok, re-closing this. We can track it in bug 1183815.

*** This bug has been marked as a duplicate of bug 1183815 ***