Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1115458

Summary: Packstack fails to configure neutron-details in nova.conf on compute nodes
Product: Red Hat OpenStack
Component: openstack-packstack
Version: 5.0 (RHEL 7)
Target Release: 5.0 (RHEL 7)
Target Milestone: rc
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: urgent
Keywords: Regression
Hardware: Unspecified
OS: Unspecified
Reporter: yfried
Assignee: Martin Magr <mmagr>
QA Contact: yfried
CC: aortega, derekh, lbezdick, lpeer, nyechiel, oblaut, sclewis, yeylon, yfried
Fixed In Version: openstack-packstack-2014.1.1-0.31.dev1208.el7ost
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-07-10 18:00:36 UTC
Attachments:
- nova.conf file from compute node
- nova.conf from old compute node (how it should be)

Description yfried 2014-07-02 12:03:39 UTC
Created attachment 914141 [details]
nova.conf file from compute node

Description of problem:
nova.conf on the compute node has no neutron details (such as neutron_url);
as a result, VMs fail to boot unless the compute node is the same machine as the neutron-server.
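The missing options can be spotted directly on an affected compute node. A minimal sketch of that check, assuming the option names listed in the workaround below (comment 2); a generated sample file stands in here for the real /etc/nova/nova.conf:

```shell
# Check a nova.conf for the neutron options packstack should have written.
# The sample below stands in for /etc/nova/nova.conf on a compute node.
NOVA_CONF=$(mktemp)
cat > "$NOVA_CONF" <<'EOF'
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
EOF

# Collect every expected neutron-related option that is absent.
missing=""
for opt in network_api_class neutron_url neutron_admin_username \
           neutron_admin_password neutron_admin_tenant_name \
           neutron_admin_auth_url security_group_api firewall_driver; do
    grep -q "^${opt}=" "$NOVA_CONF" || missing="$missing $opt"
done
echo "missing options:$missing"
rm -f "$NOVA_CONF"
```

On a broken node all of these options are reported missing; on a correctly configured node the list is empty.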

Version-Release number of selected component (if applicable):
[root@puma45 ~(keystone_admin)]# rpm -qa | grep "packstack\|neutron\|nova"
openstack-neutron-openvswitch-2014.1-35.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-packstack-puppet-2014.1.1-0.30.dev1204.el7ost.noarch
python-neutron-2014.1-35.el7ost.noarch
openstack-packstack-2014.1.1-0.30.dev1204.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-neutron-2014.1-35.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-objectstore-2014.1-7.el7ost.noarch
python-neutronclient-2.3.4-2.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch


How reproducible:
On a cloud where the compute nodes are not on the same machine as the neutron-server.

Steps to Reproduce:
1. boot vms

Actual results:
VMs go into ERROR state, with details:
message: Timed out waiting for a reply to message ID 2c212961b091470eb14d2d9d0f77903a

Details
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 296, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2075, in run_instance
    do_run_instance()
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2074, in do_run_instance
    legacy_bdm_in_spec)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1207, in _run_instance
    notify("error", fault=e)  # notify that build failed
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1191, in _run_instance
    instance, image_meta, legacy_bdm_in_spec)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1355, in _build_instance
    filter_properties, bdms, legacy_bdm_in_spec)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1401, in _reschedule_or_error
    self._log_original_error(exc_info, instance_uuid)
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1396, in _reschedule_or_error
    bdms, requested_networks)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2101, in _shutdown_instance
    network_info = self._get_instance_nw_info(context, instance)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1118, in _get_instance_nw_info
    instance)
  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 94, in wrapped
    return func(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 389, in get_instance_nw_info
    result = self._get_instance_nw_info(context, instance)
  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 405, in _get_instance_nw_info
    nw_info = self.network_rpcapi.get_instance_nw_info(context, **args)
  File "/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 222, in get_instance_nw_info
    host=host, project_id=project_id)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call
    wait_for_reply=True, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
    timeout=timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
    return self._send(target, ctxt, message, wait_for_reply, timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 403, in _send
    result = self._waiter.wait(msg_id, timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 280, in wait
    reply, ending, trylock = self._poll_queue(msg_id, timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 220, in _poll_queue
    message = self.waiters.get(msg_id, timeout)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 126, in get
    'to message ID %s' % msg_id)



Expected results:
VMs boot successfully.

Additional info:

Comment 1 yfried 2014-07-02 12:05:09 UTC
Created attachment 914142 [details]
nova.conf from old compute node (how it should be)

Comment 2 yfried 2014-07-02 12:16:18 UTC
workaround: manually set these options in nova.conf on the compute nodes (copy the values from the controller's nova.conf):

> network_api_class=nova.network.neutronv2.api.API
> neutron_url=http://10.35.160.171:9696
> neutron_url_timeout=30
> neutron_admin_username=neutron
> neutron_admin_password=a40230ddf32f4935
> neutron_admin_tenant_name=services
> neutron_region_name=RegionOne
> neutron_admin_auth_url=http://10.35.160.171:35357/v2.0
> security_group_api=neutron
> firewall_driver=nova.virt.firewall.NoopFirewallDriver

Comment 3 yfried 2014-07-02 12:28:42 UTC
workaround:
apply the edits from comment 2 to nova.conf on the compute nodes (copy the actual values from the controller node; make sure the url isn't localhost), then
reboot the nodes (as nova is probably stuck and cannot simply be restarted)
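The manual steps above can be scripted per compute node. A sketch, assuming the openstack-config tool (from openstack-utils) is available; the controller address and neutron password below are the example values from comment 2 and must be replaced with the real ones from the controller's nova.conf:

```shell
# Apply the workaround on one compute node (run as root).
# CONTROLLER and the admin password are placeholders from comment 2;
# copy the real values from the controller's /etc/nova/nova.conf.
CONTROLLER=10.35.160.171
CONF=/etc/nova/nova.conf

openstack-config --set "$CONF" DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set "$CONF" DEFAULT neutron_url "http://${CONTROLLER}:9696"
openstack-config --set "$CONF" DEFAULT neutron_url_timeout 30
openstack-config --set "$CONF" DEFAULT neutron_admin_username neutron
openstack-config --set "$CONF" DEFAULT neutron_admin_password a40230ddf32f4935
openstack-config --set "$CONF" DEFAULT neutron_admin_tenant_name services
openstack-config --set "$CONF" DEFAULT neutron_region_name RegionOne
openstack-config --set "$CONF" DEFAULT neutron_admin_auth_url "http://${CONTROLLER}:35357/v2.0"
openstack-config --set "$CONF" DEFAULT security_group_api neutron
openstack-config --set "$CONF" DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

# Reboot rather than restart: per comment 3, nova-compute is probably
# stuck waiting on RPC and a service restart may not recover it.
reboot
```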


Comment 5 Ofer Blaut 2014-07-07 04:44:26 UTC
Verified 

openstack-packstack-2014.1.1-0.32.1.dev1209.el7ost.noarch

Comment 7 Scott Lewis 2014-07-10 18:00:36 UTC

Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0846.html