Bug 1176674

Summary: rubygem-staypuft: After successful Neutron/Nova deployment unable to run basic glance commands. openstack-glance-api and openstack-glance-registry services are down.
Product: Red Hat OpenStack
Component: rubygem-staypuft
Version: unspecified
Reporter: Alexander Chuzhoy <sasha>
Assignee: Crag Wolfe <cwolfe>
QA Contact: Omri Hochman <ohochman>
CC: ajeain, ddhanapa, mburns, sasha, yeylon
Status: CLOSED DUPLICATE
Severity: urgent
Priority: urgent
Keywords: Reopened
Target Milestone: ga
Target Release: Installer
Hardware: x86_64
OS: Linux
Type: Bug
Doc Type: Bug Fix
Last Closed: 2015-02-12 19:43:05 UTC
Bug Blocks: 1177026
Attachments (all unflagged):
  sosreport
  crm_report
  /var/log/glance logs
  glance related errors in pacemaker logs
  Fail messages related to glance in pacemaker
  glance related errors in messages logs

Description Alexander Chuzhoy 2014-12-22 19:12:04 UTC
rubygem-staypuft: After a successful Nova deployment, unable to run basic glance commands. The openstack-glance-api and openstack-glance-registry services are down.

Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch


Steps to reproduce:
1. Install rhel-osp-installer
2. Successfully deploy nonHA Nova (1 controller + 2 compute)
3. Log in to Horizon on the newly deployed setup.
4. Source the keystonerc_admin file.
5. Run: glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public 1 --copy-from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

Result:
Error finding address for http://192.168.0.16:9292/v1/images: HTTPConnectionPool(host='192.168.0.16', port=9292): Max retries exceeded with url: /v1/images (Caused by <class 'httplib.BadStatusLine'>: '')  

Running 'glance image-list' results in:
Error finding address for http://192.168.0.16:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20: HTTPConnectionPool(host='192.168.0.16', port=9292): Max retries exceeded with url: /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 (Caused by <class 'httplib.BadStatusLine'>: '')



Workaround:
Manually start these services on the controller:
systemctl start openstack-glance-registry
systemctl start openstack-glance-api

Then you'll be able to run the glance commands.
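
To confirm the workaround took effect, and to capture why the services were down in the first place, the units can be inspected afterwards. A minimal sketch, assuming standard systemd units (not part of the original workaround):

# Verify both glance services are now active
systemctl status openstack-glance-registry openstack-glance-api
# Review recent log entries for the API service to see why it was down
journalctl -u openstack-glance-api -n 50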

Comment 2 Alexander Chuzhoy 2014-12-23 17:03:10 UTC
Reproduced with a nonHA Neutron deployment - same package versions.

Comment 3 Mike Burns 2015-01-06 13:37:52 UTC
*** Bug 1177813 has been marked as a duplicate of this bug. ***

Comment 4 Mike Burns 2015-01-06 13:45:44 UTC
Can you please attach logs?  What glance configuration did you have?

Comment 5 Alexander Chuzhoy 2015-01-08 18:46:58 UTC
I'll attach the logs once this is reproduced.
Glance was configured with NFS.

Comment 6 Mike Burns 2015-01-09 03:17:54 UTC
Setting needinfo until requested info is provided.

Comment 7 Mike Burns 2015-01-09 15:31:11 UTC
After discussion with QE, this does not reproduce.  Closing.  Please reopen if it reproduces.

Comment 8 Dulanjalie Ganegedara 2015-02-12 05:57:23 UTC
We are seeing this issue in an OSP 6 setup, so I am reopening this.

Comment 9 Dulanjalie Ganegedara 2015-02-12 05:59:20 UTC
We are using the Staypuft installer to bring up OpenStack (OSP 6). The provided workaround works. I was unable to get any useful logs; please let me know which logs are required.

[root@mac525413e5abeb ~(openstack_admin)]# glance image-list
Error finding address for http://11.0.0.12:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20: HTTPConnectionPool(host='11.0.0.12', port=9292): Max retries exceeded with url: /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 (Caused by <class 'httplib.BadStatusLine'>: '')

Used the provided workaround:
[root@mac525413e5abeb ~(openstack_admin)]# systemctl start openstack-glance-registry
[root@mac525413e5abeb ~(openstack_admin)]# systemctl start openstack-glance-api
[root@mac525413e5abeb ~(openstack_admin)]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
[root@mac525413e5abeb ~(openstack_admin)]#

Comment 10 Crag Wolfe 2015-02-12 17:21:22 UTC
The most useful logs would be /var/log/messages*, /var/log/glance/*, and /var/log/pacemaker. That said, it would probably be a good idea to tar up /var/log and stash it away, just in case there is a useful error message lurking somewhere else.
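
A minimal sketch of stashing /var/log away as suggested above (the archive name and path are illustrative):

# Archive all of /var/log in case a useful error is lurking somewhere else
tar czf /tmp/varlog-$(hostname)-$(date +%F).tar.gz /var/log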

Comment 11 Dulanjalie Ganegedara 2015-02-12 17:49:47 UTC
Created attachment 991094 [details]
sosreport

Comment 12 Dulanjalie Ganegedara 2015-02-12 17:50:29 UTC
Created attachment 991102 [details]
crm_report

Comment 13 Dulanjalie Ganegedara 2015-02-12 18:07:40 UTC
Created attachment 991104 [details]
/var/log/glance logs

Comment 14 Dulanjalie Ganegedara 2015-02-12 18:08:05 UTC
Created attachment 991105 [details]
glance related errors in pacemaker logs

Comment 15 Dulanjalie Ganegedara 2015-02-12 18:08:34 UTC
Created attachment 991106 [details]
Fail messages related to glance in pacemaker

Comment 16 Dulanjalie Ganegedara 2015-02-12 18:09:44 UTC
I have attached the sosreport, crm_report, and /var/log/glance logs. The pacemaker and messages logs are too big to attach, so I have grepped them for glance-related entries and attached those herewith.

Comment 17 Dulanjalie Ganegedara 2015-02-12 18:10:19 UTC
Created attachment 991107 [details]
glance related errors in messages logs

Comment 18 Crag Wolfe 2015-02-12 18:42:04 UTC
Looking at the pacemaker log, it looks like there are issues around fs-varlibglanceimages-clone that might be interfering. Note that if you don't want pacemaker to attempt to mount an NFS share for glance, the parameter $pcmk_fs_manage in quickstack::pacemaker::glance should be set to false. If you do want it to mount (usually NFS) storage for glance, make sure the other parameters $pcmk_fs_device, $pcmk_fs_options and $pcmk_fs_type are correct.

If the above seems right to you, please attach the yaml for one of the hosts.
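
For what it's worth, the state of that filesystem resource can be inspected directly on the controller. A minimal sketch, assuming the pcs CLI is available and using the resource name as it appears in the pacemaker log:

# Show overall cluster status, including any failed resource actions
pcs status
# Dump the configuration of the suspect filesystem resource
pcs resource show fs-varlibglanceimages-clone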

Comment 19 Mike Burns 2015-02-12 19:04:30 UTC
Was glance using a local file backend? If so, this is probably bug 1183815, which is fixed in A1.

Comment 20 Dulanjalie Ganegedara 2015-02-12 19:24:52 UTC
Yes, I am using a single-controller setup with the glance backend as a local file. Also, pcmk_fs_manage=true.

Thanks Mike and Crag

Comment 21 Mike Burns 2015-02-12 19:43:05 UTC
OK, re-closing this. We can track it in bug 1183815.

*** This bug has been marked as a duplicate of bug 1183815 ***