[stack@instack ~]$ openstack overcloud deploy --templates --ntp-server 10.5.26.10 -e network-environment.yaml --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --control-scale 3 --compute-scale 1
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
/home/stack/.ssh/known_hosts updated.
Original contents retained as /home/stack/.ssh/known_hosts.old
PKI initialization in init-keystone is deprecated and will be removed.
ssh: connect to host 192.168.100.11 port 22: Connection timed out
ERROR: openstack Command '['ssh', '-oStrictHostKeyChecking=no', '-t', '-l', 'heat-admin', u'192.168.100.11', 'sudo', 'keystone-manage', 'pki_setup', '--keystone-user', "$(getent passwd | grep '^keystone' | cut -d: -f1)", '--keystone-group', "$(getent group | grep '^keystone' | cut -d: -f1)"]' returned non-zero exit status 255

Various errors appear in the logs (attached).
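As an aside on the command above: Heat applies `-e` environment files in order, with later files taking precedence, so a custom network-environment.yaml passed first can have its values overridden by the stock environments. A hedged re-ordering sketch of the same command, with the custom file last:

```
openstack overcloud deploy --templates \
  --ntp-server 10.5.26.10 --timeout 90 \
  --control-scale 3 --compute-scale 1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e network-environment.yaml
```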
Environment:
python-django-openstack-auth-1.2.0-5.el7ost.noarch
openstack-dashboard-theme-2015.1.2-2.el7ost.noarch
openstack-heat-common-2015.1.2-2.el7ost.noarch
openstack-ceilometer-notification-2015.1.2-1.el7ost.noarch
openstack-ceilometer-central-2015.1.2-1.el7ost.noarch
openstack-nova-scheduler-2015.1.2-3.el7ost.noarch
openstack-glance-2015.1.2-1.el7ost.noarch
openstack-neutron-2015.1.2-2.el7ost.noarch
openstack-nova-api-2015.1.2-3.el7ost.noarch
openstack-heat-api-2015.1.2-2.el7ost.noarch
openstack-swift-container-2.3.0-2.el7ost.noarch
openstack-neutron-openvswitch-2015.1.2-2.el7ost.noarch
openstack-neutron-common-2015.1.2-2.el7ost.noarch
openstack-neutron-metering-agent-2015.1.2-2.el7ost.noarch
redhat-access-plugin-openstack-7.0.0-0.el7ost.noarch
openstack-swift-2.3.0-2.el7ost.noarch
openstack-nova-common-2015.1.2-3.el7ost.noarch
openstack-ceilometer-alarm-2015.1.2-1.el7ost.noarch
openstack-nova-console-2015.1.2-3.el7ost.noarch
openstack-swift-proxy-2.3.0-2.el7ost.noarch
python-openstackclient-1.0.3-3.el7ost.noarch
openstack-selinux-0.6.43-1.el7ost.noarch
openstack-ceilometer-common-2015.1.2-1.el7ost.noarch
openstack-ceilometer-collector-2015.1.2-1.el7ost.noarch
openstack-nova-cert-2015.1.2-3.el7ost.noarch
openstack-keystone-2015.1.2-2.el7ost.noarch
openstack-neutron-lbaas-2015.1.2-1.el7ost.noarch
openstack-nova-compute-2015.1.2-3.el7ost.noarch
openstack-neutron-ml2-2015.1.2-2.el7ost.noarch
openstack-heat-engine-2015.1.2-2.el7ost.noarch
openstack-utils-2014.2-1.el7ost.noarch
openstack-swift-account-2.3.0-2.el7ost.noarch
openstack-dashboard-2015.1.2-2.el7ost.noarch
openstack-ceilometer-api-2015.1.2-1.el7ost.noarch
openstack-nova-novncproxy-2015.1.2-3.el7ost.noarch
openstack-puppet-modules-2015.1.8-29.el7ost.noarch
openstack-cinder-2015.1.2-1.el7ost.noarch
openstack-nova-conductor-2015.1.2-3.el7ost.noarch
openstack-heat-api-cfn-2015.1.2-2.el7ost.noarch
openstack-neutron-bigswitch-lldp-2015.1.38-1.el7ost.noarch
openstack-swift-plugin-swift3-1.7-3.el7ost.noarch
openstack-ceilometer-compute-2015.1.2-1.el7ost.noarch
openstack-heat-api-cloudwatch-2015.1.2-2.el7ost.noarch
openstack-swift-object-2.3.0-2.el7ost.noarch
instack-undercloud-2.1.2-33.el7ost.noarch

Steps to reproduce:
Attempt to deploy an HA overcloud with network isolation.

Result:
The deployment fails.

Expected result:
The deployment should complete.
Created attachment 1096982 [details] /var/log dir from one controller.
heat reports that the deployment is complete:

heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 06247f0b-ec45-4790-adb4-ca7b640dba51 | overcloud  | CREATE_COMPLETE | 2015-11-20T00:14:27Z |
+--------------------------------------+------------+-----------------+----------------------+
Hi, can you attach the contents of network-environment.yaml? If you pass it as the first environment file, some of its contents might get overridden by the other environment files.

If the environment is still available, can you check whether you can ssh from the undercloud node to each controller AND to the control_virtual_ip address?
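The suggested check could be scripted along these lines; this is only a sketch, and the controller addresses below are placeholders to be replaced with the real ones (from `nova list`) plus the control_virtual_ip:

```shell
#!/bin/sh
# Emit one non-interactive ssh probe per host; pipe the output to `sh`
# to actually run the checks. BatchMode avoids hanging on a password
# prompt, and ConnectTimeout fails fast on unreachable subnets.
gen_ssh_checks() {
    for h in "$@"; do
        echo "ssh -o BatchMode=yes -o ConnectTimeout=5 heat-admin@$h true"
    done
}

# Placeholder addresses -- substitute the real controller IPs and VIP.
gen_ssh_checks 192.168.100.11 192.168.100.12 192.168.100.13
```

Piping the output to `sh` runs the probes; any host on a subnet the undercloud cannot route to fails with a connection timeout, matching the error in the deploy output above.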
It seems this is the issue:

ssh: connect to host 192.168.100.11 port 22: Connection timed out

Is that on your external or internal subnet? Can your undercloud reach it?
parameter_defaults:
  InternalApiNetCidr: 192.168.100.0/24
  StorageNetCidr: 192.168.110.0/24
  StorageMgmtNetCidr: 192.168.120.0/24
  TenantNetCidr: 192.168.150.0/24
  ExternalNetCidr: 192.168.200.0/24

The undercloud can only reach the external network addresses and the provisioning network (obviously).
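Reachability per network could be confirmed from the undercloud with a sketch like the following; the .1 host in each CIDR is an assumption (a gateway may or may not answer there), so any known-live address per network works better. Note also that ICMP can be filtered even where routing is fine, so this is only a coarse check:

```shell
#!/bin/sh
# Print one ping probe per overcloud network (CIDRs from the comment
# above); pipe the output to `sh` to run them. -c 1 sends a single
# packet and -W 2 bounds the wait so unreachable networks fail fast.
gen_net_probes() {
    for net in "$@"; do
        echo "ping -c 1 -W 2 $net"
    done
}

# Assumed first-host addresses in each CIDR -- adjust to real hosts.
gen_net_probes 192.168.100.1 192.168.110.1 192.168.120.1 192.168.150.1 192.168.200.1
```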
Not a bug. This was a misconfiguration in the included yaml file.