Bug 1329827
Summary: PackStack multi-host deployment fails as compute nodes attempt to connect to AMQP on localhost
Product: Red Hat OpenStack
Component: openstack-packstack
Version: 8.0 (Liberty)
Target Release: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Stephen Gordon <sgordon>
Assignee: Javier Peña <jpena>
QA Contact: Prasanth Anbalagan <panbalag>
CC: aortega, ichavero, jpena, nlevinki, srevivo
Keywords: Triaged, ZStream
Fixed In Version: openstack-packstack-7.0.0-0.17.dev1702.g490e674.el7ost
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-06-29 13:58:30 UTC

Attachments:
Created attachment 1150030 [details]
Answer file
Created attachment 1150031 [details]
packstack log
I think I know where the issue is. In the answer file, I see there is one more change: enabling RHSM to subscribe to the OSP repos. In https://github.com/openstack/packstack/blob/9a32e23e6878af3282517ad3891d238a0b26d039/packstack/plugins/prescript_000.py#L1003-L1004 we can see that, in that case, the RHOSP 5.0 (Icehouse) repos are enabled instead of the Liberty ones. I'll prepare a fix for this.

We need to backport https://review.openstack.org/312167 and make some additional changes to fix it.

https://review.openstack.org/314473 should fix this. In a local test, it works as expected for me.

Verified as follows:

******** VERSION ********

[root@serverA ~]# yum list installed | grep openstack-packstac
openstack-packstack.noarch 1:7.0.0-0.19.dev1702.g490e674.el7ost
openstack-packstack-puppet.noarch 1:7.0.0-0.19.dev1702.g490e674.el7ost
[root@serverA ~]#

********* LOGS *********

[root@serverA ~]# grep "COMPUTE_HOST" packstack-answers-20160615-204618.txt
CONFIG_COMPUTE_HOSTS=a.b.c.d,x.y.z.w
[root@serverA ~]# packstack --answer-file=packstack-answers-20160615-204618.txt

Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160622-185026-jJqXq7/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
root@x.y.z.w's password:
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Setting up CACERT                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                      [ DONE ]
Adding Apache manifest entries                       [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron VPNaaS Agent manifest entries         [ DONE ]
Adding Neutron FWaaS Agent manifest entries          [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Adding Neutron SR-IOV Switch Agent manifest entries  [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding Provisioning Demo manifest entries            [ DONE ]
Adding Provisioning Demo bridge manifest entries     [ DONE ]
Adding Provisioning Glance manifest entries          [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Redis manifest entries                        [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding post install manifest entries                 [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying a.b.c.d_prescript.pp
Applying x.y.z.w_prescript.pp
a.b.c.d_prescript.pp:                                [ DONE ]
x.y.z.w_prescript.pp:                                [ DONE ]
Applying a.b.c.d_amqp.pp
Applying a.b.c.d_mariadb.pp
a.b.c.d_amqp.pp:                                     [ DONE ]
a.b.c.d_mariadb.pp:                                  [ DONE ]
Applying a.b.c.d_apache.pp
a.b.c.d_apache.pp:                                   [ DONE ]
Applying a.b.c.d_keystone.pp
Applying a.b.c.d_glance.pp
Applying a.b.c.d_cinder.pp
a.b.c.d_keystone.pp:                                 [ DONE ]
a.b.c.d_glance.pp:                                   [ DONE ]
a.b.c.d_cinder.pp:                                   [ DONE ]
Applying a.b.c.d_api_nova.pp
a.b.c.d_api_nova.pp:                                 [ DONE ]
Applying a.b.c.d_nova.pp
Applying x.y.z.w_nova.pp
a.b.c.d_nova.pp:                                     [ DONE ]
x.y.z.w_nova.pp:                                     [ DONE ]
Applying a.b.c.d_neutron.pp
Applying x.y.z.w_neutron.pp
a.b.c.d_neutron.pp:                                  [ DONE ]
x.y.z.w_neutron.pp:                                  [ DONE ]
Applying a.b.c.d_osclient.pp
Applying a.b.c.d_horizon.pp
a.b.c.d_osclient.pp:                                 [ DONE ]
a.b.c.d_horizon.pp:                                  [ DONE ]
Applying a.b.c.d_ring_swift.pp
a.b.c.d_ring_swift.pp:                               [ DONE ]
Applying a.b.c.d_swift.pp
Applying a.b.c.d_provision_demo.pp
a.b.c.d_swift.pp:                                    [ DONE ]
a.b.c.d_provision_demo.pp:                           [ DONE ]
Applying a.b.c.d_provision_demo_bridge.pp
a.b.c.d_provision_demo_bridge.pp:                    [ DONE ]
Applying a.b.c.d_provision_glance
a.b.c.d_provision_glance:                            [ DONE ]
Applying a.b.c.d_mongodb.pp
Applying a.b.c.d_redis.pp
a.b.c.d_mongodb.pp:                                  [ DONE ]
a.b.c.d_redis.pp:                                    [ DONE ]
Applying a.b.c.d_ceilometer.pp
a.b.c.d_ceilometer.pp:                               [ DONE ]
Applying a.b.c.d_postscript.pp
Applying x.y.z.w_postscript.pp
a.b.c.d_postscript.pp:                               [ DONE ]
x.y.z.w_postscript.pp:                               [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

**** Installation completed successfully ******

Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* Warning: NetworkManager is active on a.b.c.d, x.y.z.w. OpenStack networking currently does not work on systems that have the Network Manager service enabled.
* File /root/keystonerc_admin has been created on OpenStack client host a.b.c.d. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://a.b.c.d/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host a.b.c.d requires reboot.
* The installation log file is available at: /var/tmp/packstack/20160622-185026-jJqXq7/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20160622-185026-jJqXq7/manifests

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1354
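The diagnosis above comes down to repo selection: when RHSM subscription is enabled in the answer file, prescript_000.py enabled the RHOSP 5.0 (Icehouse) channel regardless of the target release. The class of fix in the referenced reviews can be sketched as a release-aware lookup; the repo IDs and helper name below are illustrative assumptions, not packstack's actual code.

```python
# Hypothetical sketch: choose the RHSM repo from the target OSP release
# instead of hardcoding the 5.0 (Icehouse) channel. Repo IDs here are
# illustrative, not taken from the actual packstack fix.

OSP_REPO_MAP = {
    "5.0": "rhel-7-server-openstack-5.0-rpms",  # Icehouse
    "7.0": "rhel-7-server-openstack-7.0-rpms",  # Kilo
    "8.0": "rhel-7-server-openstack-8-rpms",    # Liberty
}

def repo_for_release(release):
    """Return the RHSM repo ID to enable for a given OSP release string."""
    try:
        return OSP_REPO_MAP[release]
    except KeyError:
        raise ValueError("unsupported OSP release: %s" % release)
```

With a map like this, an 8.0 (Liberty) deployment would enable the Liberty channel, and an unknown release fails loudly instead of silently falling back to an old repo.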
Created attachment 1150029 [details]
/etc/nova/nova.conf from 10.15.24.111

Description of problem:

packstack --answer-file=./answers.txt with all defaults *except* changing the compute host values to:

CONFIG_CONTROLLER_HOST=10.15.24.108
CONFIG_COMPUTE_HOSTS=10.15.24.111,10.15.24.144

Repeatedly fails with:

ERROR : Error appeared during Puppet run: 10.15.24.111_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed because a timeout was exceeded. See "systemctl status openstack-nova-compute.service" and "journalctl -xe" for details.
You will find full trace in log /var/tmp/packstack/20160423-103452-zDmNmV/manifests/10.15.24.111_nova.pp.log
Please check log file /var/tmp/packstack/20160423-103452-zDmNmV/openstack-setup.log for more information

Checking the systemctl information on the compute nodes:

[root@localhost ~]# systemctl status openstack-nova-compute.service -l
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; disabled; vendor preset: disabled)
   Active: activating (start) since Sat 2016-04-23 10:57:55 EDT; 46s ago
 Main PID: 4047 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─4047 /usr/bin/python /usr/bin/nova-compute

Apr 23 10:58:08 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:08.483 4047 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 7 seconds.
Apr 23 10:58:15 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:15.485 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
Apr 23 10:58:15 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:15.486 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
Apr 23 10:58:16 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:16.496 4047 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 9 seconds.
Apr 23 10:58:25 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:25.501 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
Apr 23 10:58:25 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:25.501 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
Apr 23 10:58:37 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:37.520 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
Apr 23 10:58:37 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:37.520 4047 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
Apr 23 10:58:38 localhost.localdomain nova-compute[4047]: 2016-04-23 10:58:38.532 4047 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 13 seconds.

It's not clear to me why the compute nodes are attempting to connect to localhost for AMQP, as rabbit_host does appear to be set in the correct section (attaching nova.conf from one of the compute nodes; both exhibit the same behaviour).
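The "AMQP server on localhost:5672" errors mean nova-compute is falling back to oslo.messaging's default rabbit_host. A quick way to confirm what a compute node will actually use is to read its nova.conf directly; a minimal diagnostic sketch, assuming the Liberty-era layout where rabbit options live in [oslo_messaging_rabbit] with a [DEFAULT] fallback:

```python
# Minimal diagnostic sketch (not part of packstack): report the effective
# rabbit_host from a nova.conf. Section/option names follow the Liberty-era
# oslo.messaging layout; the localhost fallback mirrors oslo.messaging's
# default when the option is unset, i.e. exactly the failure mode here.
from configparser import ConfigParser

def effective_rabbit_host(path):
    """Return rabbit_host from nova.conf, or 'localhost' if unset."""
    cfg = ConfigParser()
    cfg.read(path)
    # Liberty reads rabbit options from [oslo_messaging_rabbit] first,
    # then falls back to [DEFAULT] for older-style configs.
    section = "oslo_messaging_rabbit"
    if cfg.has_section(section) and cfg.has_option(section, "rabbit_host"):
        return cfg.get(section, "rabbit_host")
    if cfg.has_option("DEFAULT", "rabbit_host"):
        return cfg.get("DEFAULT", "rabbit_host")
    return "localhost"
```

If this returns the controller's address while the service still logs localhost, the running process is likely reading a different config file (or was started before the file was written), which narrows the search considerably.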