RDO tickets are now tracked in Jira https://issues.redhat.com/projects/RDO/issues/
Bug 1049246 - RDO: instance hangs in "scheduling" forever - topic: "conductor", RPC method: "object_class_action" info: "<unknown>"
Status: CLOSED CURRENTRELEASE
Alias: None
Product: RDO
Classification: Community
Component: openstack-nova
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: Icehouse
Assignee: RHOS Maint
QA Contact: Ami Jeain
 
Reported: 2014-01-07 09:37 UTC by Udi Kalifon
Modified: 2014-02-12 14:40 UTC
CC List: 6 users

Last Closed: 2014-02-12 14:40:44 UTC


Attachments
/var/log/nova/compute.log (349.94 KB, text/plain)
2014-01-07 10:00 UTC, Udi Kalifon

Description Udi Kalifon 2014-01-07 09:37:11 UTC
Description of problem:
With an all-in-one setup, launching a new instance hangs in the "scheduling" state forever and the openstack-nova-compute service dies immediately. This is with the Icehouse RDO packages. In compute.log we see:

2014-01-07 10:07:23.120 13843 ERROR nova.openstack.common.threadgroup [-] Timeout while waiting on RPC response - topic: "conductor", RPC method: "object_class_action" info: "<unknown>"
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 117, in wait
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     x.wait()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 49, in wait
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 166, in wait
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 192, in main
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 448, in run_service
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     service.start()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/service.py", line 154, in start
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 801, in init_host
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     context, self.host, expected_attrs=['info_cache'])
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 110, in wrapper
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     args, kwargs)
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py", line 469, in object_class_action
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     objver=objver, args=args, kwargs=kwargs)
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/rpcclient.py", line 85, in call
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return self._invoke(self.proxy.call, ctxt, method, **kwargs)
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/rpcclient.py", line 63, in _invoke
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     return cast_or_call(ctxt, msg, **self.kwargs)
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/proxy.py", line 130, in call
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup     exc.info, real_topic, msg.get('method'))
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup Timeout: Timeout while waiting on RPC response - topic: "conductor", RPC method: "object_class_action" info: "<unknown>"
2014-01-07 10:07:23.120 13843 TRACE nova.openstack.common.threadgroup 
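
For what it's worth, an RPC timeout on "object_class_action" during init_host usually means nova-compute gets no answer from nova-conductor over the message bus, either because the conductor is not consuming from its queue or because the AMQP broker itself is unreachable. A minimal connectivity check against the broker can be scripted with kombu; the sketch below is only illustrative and assumes packstack's default broker URL (guest/guest on localhost), so substitute the rabbit_* values from /etc/nova/nova.conf.

# Sketch: confirm the AMQP broker that nova-compute talks to is reachable.
# BROKER_URL is an assumption (packstack-style default); replace it with the
# rabbit_* settings from /etc/nova/nova.conf.
from kombu import Connection

BROKER_URL = "amqp://guest:guest@localhost:5672//"

try:
    with Connection(BROKER_URL) as conn:
        conn.ensure_connection(max_retries=3)
        print("AMQP broker reachable at %s" % BROKER_URL)
except Exception as exc:  # any failure here makes the message bus the prime suspect
    print("Cannot reach AMQP broker: %s" % exc)

If the broker answers, the next thing to check is whether openstack-nova-conductor is actually running and consuming from the "conductor" queue.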


How reproducible:
100%


Steps to Reproduce:
1. Install from Icehouse repos with packstack --allinone
2. Launch a new instance using a qcow image
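
For reference, step 2 can also be driven from python-novaclient instead of the dashboard. The sketch below is only illustrative: the credentials, auth URL, image name and flavor name are assumptions and have to be replaced with real values from the deployment (older clients may also want "1.1" instead of "2" as the version string).

# Sketch: boot an instance from a qcow2 image via python-novaclient.
# All credential and name values below are placeholders.
from novaclient import client

nova = client.Client("2", "admin", "secret", "admin",
                     auth_url="http://127.0.0.1:5000/v2.0/")

image = nova.images.find(name="my-qcow2-image")   # the qcow image from step 2
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="test-instance", image=image, flavor=flavor)
print("%s %s" % (server.id, server.status))  # with this bug the server stays in BUILD/scheduling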


Actual results:
Instance never gets out of "Scheduling" state


Expected results:
A running instance.

Comment 1 Udi Kalifon 2014-01-07 09:41:23 UTC
Installed packages:
openstack-nova-api-2014.1-0.5.b1.el6.noarch
openstack-nova-compute-2014.1-0.5.b1.el6.noarch
openstack-nova-scheduler-2014.1-0.5.b1.el6.noarch
openstack-neutron-openvswitch-2014.1-0.1.b1.el6.noarch
openstack-dashboard-2014.1-0.1b1.el6.noarch
openstack-swift-account-1.11.0-1.el6.noarch
openstack-swift-proxy-1.11.0-1.el6.noarch
openstack-ceilometer-collector-2014.1-0.2.b1.el6.noarch
openstack-utils-2013.2-2.el6.noarch
openstack-cinder-2014.1-0.2.b1.el6.noarch
openstack-nova-common-2014.1-0.5.b1.el6.noarch
openstack-ceilometer-compute-2014.1-0.2.b1.el6.noarch
openstack-nova-console-2014.1-0.5.b1.el6.noarch
openstack-nova-conductor-2014.1-0.5.b1.el6.noarch
openstack-nova-novncproxy-2014.1-0.5.b1.el6.noarch
openstack-nova-cert-2014.1-0.5.b1.el6.noarch
openstack-swift-1.11.0-1.el6.noarch
openstack-swift-container-1.11.0-1.el6.noarch
openstack-swift-plugin-swift3-1.7-1.el6.noarch
openstack-packstack-2013.2.1-0.27.dev936.el6.noarch
openstack-ceilometer-central-2014.1-0.2.b1.el6.noarch
openstack-ceilometer-alarm-2014.1-0.2.b1.el6.noarch
openstack-ceilometer-api-2014.1-0.2.b1.el6.noarch
openstack-selinux-0.1.3-2.el6ost.noarch
openstack-keystone-2014.1-0.2.b1.el6.noarch
openstack-glance-2014.1-0.1.b1.el6.noarch
openstack-ceilometer-common-2014.1-0.2.b1.el6.noarch
openstack-neutron-2014.1-0.1.b1.el6.noarch
python-django-openstack-auth-1.1.3-1.el6.noarch
openstack-swift-object-1.11.0-1.el6.noarch

In l3-agent.log I also see a timeout error although the service is running:
2014-01-07 11:38:46.081 2726 TRACE neutron.agent.l3_agent Timeout: Timeout while waiting on RPC response - topic: "q-l3-plugin", RPC method: "get_external_network_id" info: "<unknown>"
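
Since the l3-agent hits the same kind of RPC timeout, this looks more like a message-bus or server-side problem than something specific to nova-compute. A cheap sanity check is to confirm that nova and neutron point at the same broker; a rough configparser sketch is below (the flat rabbit_* option names in [DEFAULT] are an assumption based on Icehouse-era defaults).

# Rough sketch: print the rabbit_* settings from nova.conf and neutron.conf so
# the two services' broker settings can be compared. Option names are assumed
# Icehouse-era defaults; adjust for the actual deployment.
import configparser

for path in ("/etc/nova/nova.conf", "/etc/neutron/neutron.conf"):
    cfg = configparser.ConfigParser(strict=False, allow_no_value=True,
                                    interpolation=None)
    cfg.read(path)
    print(path)
    for opt in ("rabbit_host", "rabbit_port", "rabbit_userid", "rabbit_virtual_host"):
        print("  %s = %s" % (opt, cfg["DEFAULT"].get(opt, "<unset>")))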

Comment 2 Udi Kalifon 2014-01-07 10:00:37 UTC
Created attachment 846546 [details]
/var/log/nova/compute.log

Uploading the entire compute.log

Comment 3 Lars Kellogg-Stedman 2014-02-11 15:44:02 UTC
Udi,

Are you able to reproduce this problem with the latest Icehouse packages?  

After installing the RDO Icehouse packages on CentOS 6.5 and running "packstack --allinone", I am able to boot a Cirros instance successfully.

Comment 4 Udi Kalifon 2014-02-12 05:52:15 UTC
This has already been fixed.

